Figure 2.6 (See color insert following page 224.) Each rack of Blue Gene/P hardware in the ALCF system is connected to storage via 16 10-gigabit-per-second links. On the storage side, servers are grouped into units of 4 servers attached to a single rack of disks via InfiniBand. There are 17 of these storage "slices" providing storage services to the ALCF system.
and a peak performance of 13.9 teraflops. BG/P compute nodes perform point-to-point communication via a 3-D torus network, while separate networks for global barriers and collective operations are provided to enable even higher performance for specific common group communication patterns. An additional tree network is used for booting and for offloading system call servicing. This network connects the compute nodes to separate I/O nodes that serve as gateways to the outside world, using 10 gigabit Ethernet for communication.
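The division of labor among these networks can be illustrated with a short MPI sketch (illustrative only, not taken from the text): the nearest-neighbor exchange below would be routed over the 3-D torus, while the global reduction and barrier are the kinds of operations the dedicated collective and barrier networks are designed to accelerate.

```c
/* Illustrative MPI sketch: point-to-point traffic maps onto the 3-D torus,
 * while the reduction and barrier can use the dedicated collective and
 * barrier networks on BG/P. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Point-to-point: nearest-neighbor exchange, routed over the torus. */
    double send = (double)rank, recv = 0.0;
    int right = (rank + 1) % nprocs;
    int left  = (rank + nprocs - 1) % nprocs;
    MPI_Sendrecv(&send, 1, MPI_DOUBLE, right, 0,
                 &recv, 1, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Collective: global sum, a candidate for the collective network. */
    double sum = 0.0;
    MPI_Allreduce(&send, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* Global synchronization, a candidate for the barrier network. */
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks = %.0f\n", sum);

    MPI_Finalize();
    return 0;
}
```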
2.3.1.1 Argonne Blue Gene/P System
The largest Blue Gene/P in operation at the Argonne Leadership Computing Facility (ALCF) is a 40-rack system, providing 80 terabytes of system memory and a peak performance of 556 teraflops (Figure 2.6). The ALCF BG/P is configured with one I/O node for every 64 compute nodes. A Myricom Myri-10G switch complex is used to attach the BG/P to storage and provides redundant connections to each of 128 file servers. Those servers supply the system with an aggregate of 4.3 petabytes of storage using 17 DataDirect Networks (DDN) storage devices with a peak aggregate I/O rate of 78 gigabytes per second.
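As a rough cross-check of these figures, the sketch below recomputes the system totals from per-rack quantities. The per-rack node count (1,024) and per-node memory (2 GB) are assumptions about the standard BG/P configuration rather than values given in the text; the per-rack peak and I/O node ratio come from the surrounding paragraphs.

```c
/* Back-of-the-envelope check of the ALCF BG/P figures quoted above.
 * Assumed (not from the text): 1,024 compute nodes per rack, 2 GB per node. */
#include <stdio.h>

int main(void)
{
    const int racks = 40;
    const int nodes_per_rack = 1024;      /* assumed BG/P rack size        */
    const double gb_per_node = 2.0;       /* assumed memory per node       */
    const double tflops_per_rack = 13.9;  /* per-rack peak from the text   */
    const int compute_per_io_node = 64;   /* I/O node ratio from the text  */

    double memory_tb = racks * nodes_per_rack * gb_per_node / 1024.0;
    double peak_tf = racks * tflops_per_rack;
    int io_nodes_per_rack = nodes_per_rack / compute_per_io_node;

    printf("system memory     : %.0f TB (text: 80 TB)\n", memory_tb);
    printf("peak compute      : %.0f TF (text: 556 TF)\n", peak_tf);
    printf("I/O nodes per rack: %d, each with a 10 GbE link (cf. Figure 2.6)\n",
           io_nodes_per_rack);
    return 0;
}
```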
2.3.1.2 Blue Gene/P I/O Infrastructure
Figure 2.7 shows the software involved in the I/O path on the BG/P. Because there is no direct connection between Blue Gene compute nodes and the outside world, the BG/P system uses I/O forwarding to accomplish I/O