FIGURE 5.2: Block diagram representing the hardware configuration of an
OSS pair in the instantiation of the ZFS-based Lustre file system for Sequoia
at LLNL.
the ASC and M&IC programs in the facility. The OSS design consists of: an Appro GreenBlade dual-socket board, Intel Xeon 8-core processors, 64 GB of RAM, QDR Mellanox ConnectX-3 IB (LNET to Lustre), and dual-port QDR ConnectX-2 (to disks).
This hardware is configured into a file system building block consisting of a NetApp E5460 and two OSS nodes: the two OSS nodes, two NetApp controllers, and six RAID-6 sets, each made up of ten 3-TB drives (Figure 5.2).
Eight of these building blocks are integrated into a rack to form a rack scalable storage unit (RSSU), and 48 RSSUs make up the resulting Lustre file system, which provides 55 PB of storage, 850 GB/s of sustained write throughput, and 768 OSSs and 768 OSTs (each OST is 72 TB). Recall that the Sequoia Blue Gene system hardware has 96 racks with 768 I/O nodes and 98,304 compute nodes, for a total of 1,572,864 cores. The comprehensive Sequoia Lustre architecture is represented in Figure 5.3.
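As a quick cross-check of these figures, the following Python sketch (not from the source; it assumes RAID-6 leaves eight of each set's ten drives usable for data, and that each OSS serves one OST built from three of its building block's six RAID-6 sets) reproduces the quoted OST size, server count, and aggregate capacity:

# Sanity check of the capacity arithmetic quoted in the text.
# Assumption: RAID-6 over 10 drives leaves 8 drives' worth of data space,
# and each OSS exports one OST spanning 3 of its block's 6 RAID-6 sets.
TB_PER_DRIVE = 3
DATA_DRIVES_PER_RAID6 = 10 - 2     # two drives' worth of parity per set
RAID6_SETS_PER_OSS = 6 // 2        # six sets shared by two OSS nodes
OSS_PER_BLOCK = 2
BLOCKS_PER_RSSU = 8
RSSUS = 48

ost_tb = RAID6_SETS_PER_OSS * DATA_DRIVES_PER_RAID6 * TB_PER_DRIVE
oss_count = RSSUS * BLOCKS_PER_RSSU * OSS_PER_BLOCK
total_pb = oss_count * ost_tb / 1000

print(ost_tb)      # 72 TB per OST, as stated
print(oss_count)   # 768 OSSs/OSTs
print(total_pb)    # ~55.3 PB of usable storage

Under these assumptions the numbers are mutually consistent: 72-TB OSTs across 768 servers yield the quoted 55 PB.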
Tracing an I/O request through the system: the I/O request originates from an application running on a compute node, which runs a lightweight kernel (CNK). The request is then "function shipped" from the compute node to the I/O node; the torus network includes an eleventh link specifically for shipping I/O from the compute nodes to the I/O nodes.
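The mechanics of function shipping are not detailed here, so the sketch below is purely illustrative: the opcode, message layout, and function names are hypothetical, standing in for whatever the compute-node kernel and the I/O node actually exchange over that eleventh link. It shows the general shape of the technique: the compute side serializes the system call into a message, and the I/O side replays it against its locally mounted Lustre file system.

# Hypothetical illustration of function shipping; none of these names or
# layouts come from the actual CNK/I/O-node protocol.
import os
import struct

SYS_WRITE = 1  # made-up opcode for a shipped write()

def ship_write(fd, data):
    # Compute-node side: serialize the write() call into a message that
    # would travel over the dedicated I/O link to the I/O node.
    return struct.pack("!III", SYS_WRITE, fd, len(data)) + data

def serve(message):
    # I/O-node side: unpack the shipped call and issue the real system
    # call against the locally mounted Lustre client.
    op, fd, length = struct.unpack("!III", message[:12])
    assert op == SYS_WRITE
    return os.write(fd, message[12:12 + length])

# Demonstration: "ship" a write to stdout and replay it locally.
serve(ship_write(1, b"hello from the compute node\n"))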
 