During the burn-in process, approximately 50 drives were replaced prior to
production formatting to ensure maximum performance characteristics for the hardware.
7.2.1 Performance
After completing a thorough burn-in procedure, the final step in a large
file system deployment is to format the file systems and benchmark for end-
user performance and scalability. One of the architectural design goals of Lustre
is to provide good scaling characteristics across multiple OSSs. To quantify
this scaling with Lustre 2.x, Figure 7.3 presents measured performance data
starting with only a single OSS active (6 OSTs) up to a maximum of 58
servers active (348 OSTs), which corresponds to the entirety of Stampede's
SCRATCH file system. In these tests, 48 write clients were used to test each
active OSS, and a single OSS delivered an aggregate bandwidth of over
2.5 GB/s. As additional OSSs were activated, controlled weak-scaling tests
were repeated maintaining the 48:1 client-to-server ratio, and near-linear scaling
was observed across the file system. Comparing the performance between 1
and 58 OSSs, a scaling efficiency of over 95% was observed.
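The quoted figure follows the usual weak-scaling definition of efficiency. In the sketch
below, B_N denotes the aggregate write bandwidth measured with N active OSSs; the
symbol and the implied 58-OSS number are introduced here for illustration and are not
reported values from the original measurements:

\[
  E(N) = \frac{B_N}{N \cdot B_1}, \qquad
  E(58) > 0.95 \ \text{and}\ B_1 > 2.5\ \text{GB/s}
  \;\Rightarrow\; B_{58} > 0.95 \times 58 \times 2.5\ \text{GB/s} \approx 138\ \text{GB/s}.
\]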
The results in Figure 7.3 present performance as a function of the number
of active OSSs. In contrast, the results shown in Figure 7.4 present perfor-
mance for Stampede's SCRATCH file system as a function of the number of
participating clients. These results were obtained by measuring the total time
required for each client to write a fixed payload size of 2 GB per host (using
one MPI task per host) to individual files. The tests range from a single
participating host at the small scale to more than 6,000 hosts writing
simultaneously at the large scale. Note that these tests were carried out on
FIGURE 7.3: Lustre scaling across multiple Object Storage Servers (OSSs),
measured using Lustre (version 2.1.3), plotted against the number of active
Lustre Object Storage Servers.
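The client-scaling measurements behind Figure 7.4 amount to a file-per-process write
test: each participating host runs one MPI task that writes a 2 GB file, and the slowest
writer determines the effective aggregate rate. The listing below is a minimal sketch of
such a test, not the benchmark code used on Stampede; the output path, the 4 MB write
size, and the use of buffered POSIX I/O are illustrative assumptions.

/* Minimal file-per-process write-timing sketch (not the original Stampede
 * benchmark code): every MPI rank writes a fixed 2 GB payload to its own
 * file, and the slowest rank defines the aggregate bandwidth. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAYLOAD (2LL * 1024 * 1024 * 1024)  /* 2 GB per task, as in the text */
#define BLOCK   (4L * 1024 * 1024)          /* 4 MB write size (assumed)     */

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Hypothetical per-rank output file on the scratch file system. */
    char path[256];
    snprintf(path, sizeof(path), "/scratch/iotest/file.%06d", rank);

    char *buf = malloc(BLOCK);
    memset(buf, rank & 0xff, BLOCK);

    MPI_Barrier(MPI_COMM_WORLD);            /* start all writers together    */
    double t0 = MPI_Wtime();

    FILE *fp = fopen(path, "wb");
    if (fp == NULL) { perror("fopen"); MPI_Abort(MPI_COMM_WORLD, 1); }
    for (long long written = 0; written < PAYLOAD; written += BLOCK)
        fwrite(buf, 1, BLOCK, fp);
    fclose(fp);                             /* flushes the stdio buffer      */

    double elapsed = MPI_Wtime() - t0, slowest = 0.0;
    /* The slowest client determines the effective aggregate rate. */
    MPI_Reduce(&elapsed, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d clients: %.1f s, %.2f GB/s aggregate\n",
               nranks, slowest, (double)nranks * PAYLOAD / slowest / 1e9);

    free(buf);
    MPI_Finalize();
    return 0;
}

Launched with one MPI task per host, the printed aggregate rate corresponds to the
quantity plotted against the number of participating clients in Figure 7.4.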