The input or the size of the benchmark is often changed to increase its running time and to avoid perturbation in measurement or domination of the execution time by some factor other than CPU time.
SPEC benchmarks are real programs modified to be portable and to minimize the effect of I/O on performance. The integer benchmarks vary from part of a C compiler to a chess program to a quantum computer simulation. The floating-point benchmarks include structured grid codes for finite element modeling, particle method codes for molecular dynamics, and sparse linear algebra codes for fluid dynamics. The SPEC CPU suite is useful for processor benchmarking for both desktop systems and single-processor servers. We will see data on many of these programs throughout this text. However, note that these programs share little with programming languages and environments and the Google Goggles application that Section 1.1 describes. Seven use C++, eight use C, and nine use Fortran! They are even statically linked, and the applications themselves are dull. It's not clear that SPECINT2006 and SPECFP2006 capture what is exciting about computing in the 21st century.
In Section 1.11, we describe pitfalls that have occurred in developing the SPEC benchmark suite, as well as the challenges in maintaining a useful and predictive benchmark suite.
SPEC CPU2006 is aimed at processor performance, but SPEC offers many other benchmarks.
Server Benchmarks
Just as servers have multiple functions, so are there multiple types of benchmarks. The simplest benchmark is perhaps a processor throughput-oriented benchmark. SPEC CPU2000 uses the SPEC CPU benchmarks to construct a simple throughput benchmark where the processing rate of a multiprocessor can be measured by running multiple copies (usually as many as there are processors) of each SPEC CPU benchmark and converting the CPU time into a rate. This leads to a measurement called the SPECrate, and it is a measure of request-level parallelism from Section 1.2. To measure thread-level parallelism, SPEC offers what they call high-performance computing benchmarks around OpenMP and MPI.
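To make the conversion of CPU time into a rate concrete, the short Python sketch below shows one way such a throughput score can be computed. It is illustrative only, not SPEC's reporting tool: it assumes the common convention that each benchmark's ratio is copies × reference time / measured elapsed time and that the suite score is the geometric mean of those ratios, and all times shown are invented.

    # Hypothetical SPECrate-style calculation (illustrative; all numbers are invented).
    # Assumed convention: ratio = copies * reference_time / elapsed_time per benchmark,
    # with the suite score taken as the geometric mean of the ratios.
    from math import prod

    def rate_score(results):
        """results: list of (copies, reference_seconds, elapsed_seconds) tuples."""
        ratios = [copies * ref / elapsed for copies, ref, elapsed in results]
        return prod(ratios) ** (1.0 / len(ratios))   # geometric mean of the ratios

    # Four copies of each benchmark on a hypothetical 4-processor machine.
    measurements = [
        (4, 9650.0, 1210.0),   # benchmark A
        (4, 8050.0,  980.0),   # benchmark B
        (4, 7020.0,  860.0),   # benchmark C
    ]
    print(f"Rate-style score: {rate_score(measurements):.1f}")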
Other than SPECrate, most server applications and benchmarks have significant I/O activity arising from either disk or network traffic, including benchmarks for file server systems, for Web servers, and for database and transaction-processing systems. SPEC offers both a file server benchmark (SPECSFS) and a Web server benchmark (SPECWeb). SPECSFS is a benchmark for measuring NFS (Network File System) performance using a script of file server requests; it tests the performance of the I/O system (both disk and network I/O) as well as the processor. SPECSFS is a throughput-oriented benchmark but with important response time requirements. (Appendix D discusses some file and I/O system benchmarks in detail.) SPECWeb is a Web server benchmark that simulates multiple clients requesting both static and dynamic pages from a server, as well as clients posting data to the server. SPECjbb measures server performance for Web applications written in Java. The most recent SPEC benchmark is SPECvirt_Sc2010, which evaluates end-to-end performance of virtualized datacenter servers, including hardware, the virtual machine layer, and the virtualized guest operating system. Another recent SPEC benchmark measures power, which we examine in Section 1.10.
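Because benchmarks such as SPECSFS are throughput oriented but carry response time requirements, the Python sketch below illustrates the general shape of such a measurement. It is not the SPEC harness: the stand-in serve_request function, the latency bound, and the 95th-percentile check are assumptions made for illustration, and a real run would drive a file or Web server over the network from many concurrent clients.

    # Minimal sketch of a throughput measurement with a response-time requirement,
    # in the spirit of SPECSFS/SPECWeb-style benchmarks (NOT the SPEC harness).
    import random
    import time

    def serve_request():
        # Stand-in for one request (e.g., an NFS operation or an HTTP GET);
        # a real benchmark would issue this to a server over the network.
        time.sleep(random.uniform(0.001, 0.004))

    def run_load(num_requests=2000, latency_bound_s=0.005):
        latencies = []
        start = time.perf_counter()
        for _ in range(num_requests):
            t0 = time.perf_counter()
            serve_request()
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
        latencies.sort()
        p95 = latencies[int(0.95 * len(latencies)) - 1]   # 95th-percentile latency
        print(f"Throughput: {num_requests / elapsed:.0f} requests/sec")
        print(f"95th-percentile latency: {p95 * 1000:.2f} ms "
              f"({'within' if p95 <= latency_bound_s else 'exceeds'} the bound)")

    run_load()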
Transaction-processing (TP) benchmarks measure the ability of a system to handle transactions that consist of database accesses and updates. Airline reservation systems and bank ATM systems are typical simple examples of TP; more sophisticated TP systems involve complex databases and decision-making. In the mid-1980s, a group of concerned engineers formed the