
IBM Builds 120-Petabyte Storage Array
A research team at IBM's Almaden, California, lab has developed a disk drive array that can store 120 petabytes of data. At that capacity, the system can hold about a trillion average-sized files, providing enough storage for the most demanding supercomputing simulations.
According to a recent article in MIT's Technology Review, the system was developed for an unnamed customer that requires petascale simulations; however, the research could just as well apply to conventional ultra-scale storage systems. In particular, the 120-petabyte array could be a run-of-the-mill storage setup for the cloud computing systems of the future, according to Bruce Hillsberg, director of storage technology at IBM and leader of the petabyte storage project.
The storage array is made up of 200,000 conventional hard disk drives housed in extra-dense, extra-wide storage drawers. As is the case for much of IBM's cutting-edge supercomputing hardware, the components are water-cooled rather than air-cooled.
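To put those figures in perspective, the arithmetic implied by the article's numbers is straightforward: 120 PB spread over about a trillion files works out to roughly 120 KB per file, and spread over 200,000 drives to roughly 600 GB per drive. A minimal back-of-the-envelope sketch, assuming decimal (SI) units of 1 PB = 10^15 bytes, which the article does not state explicitly:

    # Rough figures implied by the article's numbers.
    # Assumption: decimal units (1 PB = 10**15 bytes); the article does not specify.
    capacity_bytes = 120 * 10**15        # 120 petabytes of total capacity
    file_count     = 1 * 10**12          # "about a trillion average-sized files"
    drive_count    = 200_000             # conventional hard disk drives

    avg_file_size  = capacity_bytes / file_count    # ~120,000 bytes (~120 KB)
    per_drive_size = capacity_bytes / drive_count   # ~600 GB per drive

    print(f"implied average file size:  {avg_file_size / 1e3:.0f} KB")
    print(f"implied capacity per drive: {per_drive_size / 1e9:.0f} GB")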
The Technology Review piece points out that the system's capabilities leverage recent enhancements to IBM's General Parallel File System (GPFS), which the company demonstrated in July. In that demonstration, the file system was able to scan 10 billion files in 43 minutes, which according to IBM was 37 times faster than 2007-era GPFS.
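Those GPFS figures translate into a metadata scan rate of nearly four million files per second. A short sketch of the arithmetic, using only the numbers quoted above:

    # Scan rate implied by the July GPFS demonstration.
    files_scanned   = 10 * 10**9     # 10 billion files
    elapsed_seconds = 43 * 60        # 43 minutes

    scan_rate      = files_scanned / elapsed_seconds   # ~3.9 million files/sec
    gpfs_2007_rate = scan_rate / 37                     # "37 times faster than 2007-era GPFS"

    print(f"2011 demo: {scan_rate / 1e6:.1f} million files/sec")
    print(f"2007 GPFS: {gpfs_2007_rate / 1e6:.2f} million files/sec")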
To build exascale systems, power is probably the biggest technical hurdle on the hardware side. In terms of getting to exascale computing, demonstrating the value of supercomputing to funders and the public is a more urgent challenge. Nevertheless, the top roadblock to realizing the potential benefits of exascale is software. Read more...
The RamSan-810
Storage maker Texas Memory Systems has launched the RamSan-810, its first enterprise multi-level cell (MLC) flash-based product, expanding the company's market reach into the tier 1 storage arena. The move comes as more solid state disk vendors are using the technology to challenge disk-based systems on performance-demanding applications. Read more...
NFS has been the standard protocol for NAS systems since the 1980s. But with the explosive growth of Linux clusters running demanding technical computing applications, NFS is no longer sufficient for these big data workloads. After years of development effort, driven by Panasas and others, pNFS is now just around the corner and promises to dramatically improve Linux client I/O performance thanks to its parallel architecture. Watch the on-demand webinar – "pNFS: Are We There Yet?"
