throughput keeps growing, and the trend goes up to 15 MB/s, three times better. The average I/O size also improves considerably: it used to drop sharply to about 5 KB, whereas the new behavior is to decline slowly to 40 KB under increasing load. Correspondingly, the disk quickly reaches 100% utilization with legacy readahead; it is effectively overloaded by the storm of seeks resulting from the tiny 1-page I/Os.
CONCLUSION
Sequential prefetching is a standard function of modern operating systems. It tries to discover application I/O access patterns and prefetch data pages for them. Its two major ways of improving I/O performance are increasing I/O size for better throughput and issuing asynchronous I/O to mask I/O latency.
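As a concrete illustration of these two benefits from user space, the following is a minimal sketch (not taken from the text) that hints the kernel's sequential readahead with posix_fadvise(2) and then streams a file in large chunks; the file name and the 1 MB read size are placeholder choices.

/*
 * Minimal user-space sketch: advise sequential access so the kernel
 * can use a larger readahead window, then read in big chunks while
 * asynchronous readahead keeps I/O in flight ahead of the application.
 * "/tmp/data.bin" is a placeholder path.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/data.bin", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Hint sequential access over the whole file. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    char *buf = malloc(1 << 20);            /* 1 MB application reads */
    ssize_t n;
    while ((n = read(fd, buf, 1 << 20)) > 0) {
        /* ... process buf[0..n) while the kernel prefetches ahead ... */
    }

    free(buf);
    close(fd);
    return 0;
}

With the sequential hint in place, the kernel can enlarge its readahead window and keep asynchronous readahead requests outstanding while the application processes the buffer it already has.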
The diversity of application behaviors and system dynamics sets a high standard for the adaptability of prefetching algorithms. Concurrent and interleaved streams also pose significant challenges to their capabilities. These challenges have become more pressing with two trends: the increasing relative cost of disk seeks and the prevalence of multi-core processors and parallel computing.
Based on the experience and lessons gained from Linux readahead practice, we designed a demand readahead algorithm with flexible heuristics that can cover varied sequential access patterns and support interleaved streams. It also achieves great simplicity by handling most abnormal cases implicitly. Its power stems from the relaxed criteria for sequential pattern recognition and the exploitation of page and page-cache states. The new design guidelines appear to work well in practice: since its wide deployment with Linux 2.6.23, we have not received any regression reports.
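To make the flavor of these heuristics concrete, here is a much-simplified C sketch of such a demand readahead decision. It is not the kernel's mm/readahead.c code; the ra_state fields, the on_read() helper, and the window-sizing constants are illustrative assumptions. It recognizes sequential reads with relaxed criteria (continuing or repeating the previous read), uses a readahead marker carried in page state to trigger the next window asynchronously, and restarts with a small window when the pattern is not recognized.

#include <stdbool.h>

/* Per-stream readahead state; names are illustrative, not the kernel's. */
struct ra_state {
    long start;       /* first page of the current readahead window     */
    long size;        /* window size, in pages                          */
    long async_size;  /* remaining pages at which the next window fires */
    long prev_page;   /* page index of the previous read                */
};

/*
 * Decide how many pages to read ahead for a read at 'page'.
 * 'cached' says whether the page is already in the page cache, and
 * 'marked' whether it carries a readahead marker left by a previous
 * readahead, i.e. the page and page-cache state referred to above.
 */
long on_read(struct ra_state *ra, long page, bool cached, bool marked,
             long max_pages)
{
    /* Relaxed sequential detection: continuing or repeating the
     * previous read both count as sequential. */
    bool sequential = (page == ra->prev_page + 1) || (page == ra->prev_page);
    ra->prev_page = page;

    if (marked || (sequential && !cached)) {
        /* Sequential stream confirmed: ramp the window up (double it,
         * capped at max_pages) and submit it ahead of the application. */
        ra->start = page + 1;
        ra->size = ra->size * 2 > max_pages ? max_pages : ra->size * 2;
        ra->async_size = ra->size;
        return ra->size;
    }

    if (!cached) {
        /* Unrecognized pattern (possibly a new or interleaved stream):
         * restart with a small initial window. */
        ra->start = page;
        ra->size = 4;
        ra->async_size = 1;
        return ra->size;
    }

    return 0;  /* cache hit with no marker: nothing to do */
}

The kernel's actual logic is considerably more refined, but it operates on the same kind of per-stream state and page-cache information.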