“Supercomputing speed is typically boosted by adding more processors, but two new systems funded by the National Science Foundation due to go live next January will take an unconventional approach to speed up calculations and data analysis,” writes Agam Shah for Computerworld.
In 2015 two new supercomputers are expected to be deployed and will have petabytes (PB) of data storage:
- “Wrangler” at the Texas Advanced Computing Center will have 10PB of replicated, secure, high-performance data storage, plus 3,000 processing cores dedicated to data analysis and flash layers for analytics. Its bandwidth will be 1TB/s (terabyte per second) and 275 million IOPS (input/output operations per second).
- “Comet” at the San Diego Supercomputer Center will have 1,024 processor cores, a massive 7PB array of high-performance storage, and 6PB of “durable storage for data reliability.” Each node will have 128GB of memory and 320GB of flash.
These two supercomputers differ from previous models in their high ratio of storage to processors. Among supercomputers of the past, raw performance has been high but throughput has been a bottleneck. Built this new way, scientists believe, Wrangler and Comet will offer better throughput along with in-memory and caching features. According to Mr. Shah’s article, these computers will support research in economics, geosciences, medicine, earthquake engineering, and climate and weather modeling.
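To make the storage-to-compute claim concrete, here is a quick back-of-the-envelope calculation using the figures quoted above (decimal units assumed, i.e. 1PB = 1,000TB; core counts are the article’s, not official specs):

```python
# Rough storage-per-core ratios for the two systems described above.
# Figures come from the article; 1 PB is treated as 1,000 TB.
systems = {
    "Wrangler": {"storage_pb": 10, "cores": 3000},
    "Comet": {"storage_pb": 7, "cores": 1024},
}

for name, spec in systems.items():
    tb_per_core = spec["storage_pb"] * 1000 / spec["cores"]
    print(f"{name}: {tb_per_core:.2f} TB of storage per core")
```

By this crude measure, each system pairs several terabytes of storage with every core, which is the unusually storage-heavy balance the article highlights.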