Figure 1. Disks can be better utilized with larger I/O sizes
Two exciting advances in cache media are the great abundance of dynamic memory and the general availability of flash memory. One might expect them to diminish the relevance of prefetching techniques: when more data can be cached in these two types of memory, there will be fewer disk I/Os and readahead invocations. However, in this era of information explosion, data sets and disk sizes grow rapidly at the same time. Increasingly I/O-intensive applications and growing I/O parallelism demand more flexible, robust and aggressive prefetching. Another consequence of larger memory is that aggressive prefetching becomes a practical consideration for modern desktop systems. A well known example is boot time prefetching for fast system boot and application startup (Esfahbod, 2006).
Flash memory and its caching algorithms fit nicely into one big arena where magnetic disks and their prefetching algorithms perform poorly: small random accesses. Intel Turbo Memory and the hybrid hard drive are two widely recognized ways to use flash memory as a complementary cache for magnetic disks, and the solid-state disk (SSD) is apparently the future for mobile computing. However, the huge capacity gap is not closing any time soon. Hard disks and storage networks will remain the main choice in the foreseeable future to meet the unprecedented storage demand created by the explosion of digital information, and readahead algorithms will continue to play an important role there.
Solid-state disks greatly reduce the costly seek time; however, non-trivial access delays remain. In particular, SSD storage is basically composed of a number of flash chips operating in parallel, and larger prefetching I/O can take advantage of those parallel chips. The optimal I/O size required to get full performance from SSD storage differs from that of spinning media and varies from device to device. So I/O prefetching with larger and tunable sizes is key even on SSDs.
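As a minimal sketch of such tuning on Linux: the per-device readahead default is visible (and tunable by an administrator) in /sys/block/<dev>/queue/read_ahead_kb, and an individual application can hint the kernel to enlarge the readahead window for one file with posix_fadvise(). On Linux, POSIX_FADV_SEQUENTIAL roughly doubles the default readahead window for that file descriptor; the file path below is just a command-line argument used for illustration.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Sketch: declare a sequential access pattern on one file so the
 * kernel uses a larger readahead window for it.  The device-wide
 * default lives in /sys/block/<dev>/queue/read_ahead_kb.
 */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Hint: the whole file will be read sequentially. */
    int err = posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    if (err != 0)
        fprintf(stderr, "posix_fadvise: %s\n", strerror(err));

    /* ... sequential reads on fd now benefit from a larger window ... */
    close(fd);
    return 0;
}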
In summary, wherever there are sequential access patterns, there is room for I/O prefetching, whether on platter-based disks or solid-state disks.
2.2 I/O Optimization and Prefetching
According to Russel (1997), there are four basic I/O optimization strategies:
Avoidance. The best option is to avoid costly disk accesses entirely, or at least to reduce the disk access frequency. This can be achieved by file caching. Prefetching is also good at converting small read requests into large ones, which effectively reduces the number of disk accesses and therefore the costly seeks.
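To illustrate that batching effect, the following Linux-specific sketch issues one explicit readahead() call covering a hypothetical 1 MB window, so that the many small read() calls that follow are served from the page cache instead of each turning into a separate disk access; the chunk and window sizes are illustrative assumptions, not values from the text.

#define _GNU_SOURCE          /* for readahead() */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define CHUNK   (4 * 1024)        /* small application reads: 4 KB   */
#define WINDOW  (1024 * 1024)     /* hypothetical 1 MB prefetch size */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[CHUNK];
    off_t pos = 0;
    ssize_t n;

    /* One large prefetch covers many upcoming small reads. */
    readahead(fd, pos, WINDOW);

    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        pos += n;
        /* Refill the prefetch window when we cross its boundary. */
        if (pos % WINDOW == 0)
            readahead(fd, pos, WINDOW);
        /* ... process buf ... */
    }

    close(fd);
    return 0;
}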
Another concrete example is the well known