OrangeFS builds and runs directly on a standard Linux installation and requires no kernel patches or specific kernel versions. It operates on a wide variety of systems, including IA32, IA64, Opteron, PowerPC, Alpha, and MIPS. It integrates easily into local or shared storage configurations and supports high-end networking fabrics such as InfiniBand and Myrinet. OrangeFS scales easily, allowing storage servers to be added as needed to provide both additional disk space and improved I/O performance [7]. Failover provides an invaluable service for large storage systems: some installations want additional redundancy, and smaller installations want redundancy without the high cost of hardware solutions. OrangeFS is developing configurable redundancy and failover mechanisms as part of the file system. These allow different files to have different levels of redundancy, as required by the application, and also allow file system redundancy to be turned off for maximum performance. Currently, files marked immutable can be replicated.
10.3.1 Cluster Shared Scratch
OrangeFS is a highly efficient file system for scratch space, where it essentially provides a working directory in which computations read and write files that are too large for local storage. It is a temporary file system for actively running jobs. When used as shared scratch space, OrangeFS gives all nodes involved in a computation job access to the same input files and a location to store intermediate and output files. The job workflow involves staging data into the shared scratch file system and staging results out to another location for permanent storage. For the many jobs that create a large amount of temporary or intermediate data, a scratch file system can provide higher capacity than archival production file systems. OrangeFS supports a global namespace, which provides a consistent file system view to all compute nodes, allowing multiple compute nodes to read and write the same input and output files.
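The stage-in, compute, stage-out workflow described above can be sketched as a job-script fragment. All paths, the `$JOBID` variable, and the `run_computation` program are placeholders for illustration; they are not part of OrangeFS itself.

```shell
#!/bin/sh
# Hypothetical layout: /scratch/orangefs is the shared OrangeFS mount,
# /archive is permanent storage. Both paths are placeholders.
SCRATCH=/scratch/orangefs/$USER/job_$JOBID
ARCHIVE=/archive/$USER

mkdir -p "$SCRATCH"
cp "$ARCHIVE"/input/* "$SCRATCH"/           # stage data in

cd "$SCRATCH"
./run_computation                           # all nodes read/write here

mkdir -p "$ARCHIVE/results"
cp "$SCRATCH"/output/* "$ARCHIVE/results/"  # stage results out
rm -rf "$SCRATCH"                           # free scratch capacity
```

Deleting the scratch directory at the end keeps capacity available for subsequent jobs, reflecting the temporary nature of scratch space.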
10.3.2 Cluster Node Scratch
Cross-node scratch space is similar to shared scratch space, except that no permanent, system-wide OrangeFS file system is required. Instead, an OrangeFS file system is created only for the duration of a job. Part of the scripting for a cross-node scratch job includes setting up the file system. After nodes are allocated to the job, one or more are identified to run OrangeFS servers, and the server software is installed on them. Storage areas reside on local disks on the server nodes. Client software is installed on the nodes used to access the file system. Typically, scripts perform cleanup functions at the end of a cross-node scratch space job, including copying results from the temporary file system to permanent storage and tearing down the temporary file system.
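The setup and teardown steps above can be sketched as a job-script fragment. The tool names follow the OrangeFS/PVFS2 convention (`pvfs2-genconfig`, `pvfs2-server`), but the exact options vary by version, and the hostnames, port, and paths are placeholders, so treat this as a sketch rather than a working installation script.

```shell
#!/bin/sh
# Per-job OrangeFS setup sketch; node names, port, and paths are
# placeholders, and tool options vary by OrangeFS version.
CONF=/tmp/orangefs-job.conf
SERVERS="node001,node002"      # nodes identified to run OrangeFS servers

# Generate a config file naming the server nodes (pvfs2-genconfig
# prompts interactively by default; options like these are assumed).
pvfs2-genconfig --protocol tcp --ioservers "$SERVERS" \
                --metaservers node001 "$CONF"

# On each server node: create the storage space on local disk,
# then launch the server daemon.
pvfs2-server "$CONF" -f        # format/create local storage areas
pvfs2-server "$CONF"           # start the server

# On each client node: mount the temporary file system.
mount -t pvfs2 tcp://node001:3334/orangefs /mnt/scratch

# ... run the job against /mnt/scratch ...

# Cleanup: stage results out, then tear down the file system.
cp -r /mnt/scratch/results /archive/$USER/
umount /mnt/scratch
killall pvfs2-server
```

In practice the setup steps would be distributed to the right nodes by the batch system (e.g., in a job prolog or via remote shell), and the cleanup steps would run in the job's epilog so the temporary file system never outlives the job.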