of such packages. This in turn will motivate I/O library developers to make
them as easy to use as compilers are today. Indeed, some of these capabilities
will no doubt find their way into common language compilers.
Container-based I/O libraries like HDF5 and Parallel-NetCDF are also
continuing to find widespread acceptance. The big data phenomenon has shone
a light on these technologies, making them visible in application domains as
varied as oil and gas exploration, healthcare, finance, and movie production.
These successes, coupled with their continuing adoption in the HPC community, make it very likely they will continue to develop and find ways to incorporate more of the features we find in the other libraries.
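To make the "container" model concrete, the minimal sketch below uses the standard serial HDF5 C API to store a small array as a named, self-describing dataset inside a single file; the file name example.h5 and dataset name /temperature are arbitrary choices for illustration, and a real application would typically use the parallel interfaces instead. A program like this is normally built with HDF5's h5cc wrapper.

    #include "hdf5.h"

    int main(void) {
        /* Sample data: a small 2 x 3 array of doubles. */
        double data[2][3] = { {1.0, 2.0, 3.0}, {4.0, 5.0, 6.0} };
        hsize_t dims[2] = {2, 3};

        /* Create (or truncate) a self-describing container file. */
        hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC,
                               H5P_DEFAULT, H5P_DEFAULT);

        /* Describe the shape of the data and create a named dataset
           inside the container. */
        hid_t space = H5Screate_simple(2, dims, NULL);
        hid_t dset  = H5Dcreate2(file, "/temperature", H5T_NATIVE_DOUBLE,
                                 space, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* Write the array; its type and shape travel with the data. */
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

        H5Dclose(dset);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }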
Beyond merely improving how I/O libraries serve current needs, what new
challenges will they have to deal with? One of these challenges will be to play
well with the technologies that accompany the big data phenomenon. There
is already pressure on many of these I/O libraries to support data analytics,
including methods such as MapReduce. We have seen a few experiments in
this direction, and PLFS does support Hadoop. Will this become a common
capability for most I/O libraries?
As for the future, every I/O library aims for scalability, but how well will
these libraries respond to the many different forms of scale that lie ahead?
Future systems are expected to deliver many terabytes of data per second.
Millions of processes will be opening, reading, and writing files. File systems
will contain trillions of files on millions of storage devices. How will our I/O
libraries cope? It's going to be fun to watch.