Table 5.16 Performance evaluation results for memory latency and context switching times using LMBench

                             Memory latency (ns)                      Context switching times (ms)
                                                                      (number of processes/process image size [in bytes])
Operating system             L1 cache   Main memory   Random memory   2p/64 k   8p/64 k   16p/64 k
Linux                        4.99       148.30        1,842.05        11.10     43.50     48.05
Linux + PPC error handler    4.99       150.50        1,912.95        11.50     45.00     49.40
Overhead (%)                 0.00       1.48          3.85            3.60      3.45      2.81
Table 5.17 Performance evaluation results for processing times using LMBench

                             Process times (ms)
Operating system             Null call   Null I/O   Signal install   Signal handling   Fork processing   Execution processing
Linux                        0.41        0.79       1.63             13.80             3,906.50          4,432.00
Linux + PPC error handler    0.48        0.87       1.70             13.95             3,911.00          4,452.50
Overhead (%)                 17.07       10.13      4.29             1.09              0.12              0.46
The PPC error handler checked whether an interrupt or exception was related to the PPC and passed the processing to the appropriate normal service routine in Linux if it was unrelated; otherwise, in the case of a serious access-violation error, the PPC error handler rebooted Linux.
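The dispatch logic just described can be summarized as the sketch below. This is only an illustration of the control flow; the function names (ppc_fault_pending, ppc_fault_is_access_violation, reboot_linux) and the vector-number parameter are assumptions, not the actual handler interface.

```c
/* Illustrative sketch of the PPC error handler dispatch; all names here
 * are hypothetical. */
#include <stdbool.h>

extern bool ppc_fault_pending(void);              /* assumed: did the PPC raise this event? */
extern bool ppc_fault_is_access_violation(void);  /* assumed: serious access violation?     */
extern void linux_service_routine(int vector);    /* normal Linux interrupt/exception path  */
extern void reboot_linux(void);                   /* assumed recovery action                */

void ppc_error_handler(int vector)
{
    if (!ppc_fault_pending()) {
        /* The interrupt or exception is unrelated to the PPC:
         * hand it to the appropriate normal service routine in Linux. */
        linux_service_routine(vector);
        return;
    }

    if (ppc_fault_is_access_violation()) {
        /* A serious access-violation error was detected by the PPC:
         * reboot the Linux domain. */
        reboot_linux();
    }
}
```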
To observe the domain partitioning using the PPC, we injected access-violation errors by configuring the PPC so that applications running on Linux were not allowed to access a small memory area assigned to Linux. When an application wrote data into that area, the PPC rejected the write access so that no data were written to memory, and it generated an access-violation interrupt that initiated the PPC error handler, which rebooted Linux.
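A rough sketch of this fault-injection setup is shown below, assuming a hypothetical ppc_protect_region() configuration interface and arbitrary region addresses; only the sequence of events matches the description above.

```c
/* Hypothetical fault-injection setup: protect a small Linux-owned region so
 * that a write from a Linux application is rejected by the PPC and raises
 * the access-violation interrupt handled by ppc_error_handler(). The
 * interface and addresses are assumptions, not the real platform. */
#include <stdint.h>

#define INJECT_BASE 0x80100000u   /* assumed: small memory area assigned to Linux */
#define INJECT_SIZE 0x1000u       /* assumed: size of the protected window        */

extern void ppc_protect_region(uint32_t base, uint32_t size);  /* hypothetical */

void setup_access_violation_injection(void)
{
    /* Configure the PPC so that Linux applications can no longer access
     * this region; a subsequent write is dropped (no data reach memory)
     * and the PPC signals an access-violation interrupt, which initiates
     * the PPC error handler and leads to a Linux reboot. */
    ppc_protect_region(INJECT_BASE, INJECT_SIZE);
}
```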
Tables 5.16 and 5.17 indicate the overhead of the domain partitioning using the PPC. The average performance penalty was 2.49%, and the overheads were typically less than 5%. In the memory latency cases, the overheads were due only to the additional bus access cycle introduced by the PPC, because the PPC error handler was not initiated during these tests; thus, the overhead was 0.00% for "L1 cache" of LMBench, 1.48% for "main memory," and 3.85% for "random memory." We presumed that the difference in overhead between the main and random memory cases was due to the effect of the CPU core's store buffers. The worst cases of overhead were 17.07% for "null call" and 10.13% for "null I/O." We attributed this overhead to the PPC error handler because "null call" and "null I/O" are system calls that do little more than generate the exceptions that trigger the PPC error handler's execution. Integrating the PPC error handler into the Linux service routine using a paravirtualization approach [18] could reduce this overhead.
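The overhead figures in the tables follow from the usual relative-overhead formula, overhead (%) = (t_with_PPC - t_baseline) / t_baseline * 100. The short check below, using values copied from Tables 5.16 and 5.17, reproduces the reported percentages; the helper function itself is only illustrative.

```c
/* Check a few overhead entries of Tables 5.16 and 5.17 against
 * overhead (%) = (t_with_PPC - t_baseline) / t_baseline * 100. */
#include <stdio.h>

static double overhead_pct(double baseline, double with_ppc)
{
    return (with_ppc - baseline) / baseline * 100.0;
}

int main(void)
{
    printf("main memory   : %.2f%%\n", overhead_pct(148.30, 150.50));    /* 1.48  */
    printf("random memory : %.2f%%\n", overhead_pct(1842.05, 1912.95));  /* 3.85  */
    printf("null call     : %.2f%%\n", overhead_pct(0.41, 0.48));        /* 17.07 */
    printf("null I/O      : %.2f%%\n", overhead_pct(0.79, 0.87));        /* 10.13 */
    return 0;
}
```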
 