handle large numbers of IOPS. Moreover, memory management and synchronization become more difficult to handle due to the more complex memory hierarchies, NUMA levels, and system interconnects. In addition, mixing multiple I/O streams due to the high levels of concurrency creates patterns that lead traditional magnetic hard disks (HDDs) and modern solid-state disks (SSDs) to operate inefficiently. Overall, modern server platforms are becoming larger and more heterogeneous, with a much wider range of memory access and synchronization costs, and they are required to support higher degrees of concurrency at the I/O level.
This chapter will focus briefly on three I/O problems on multicore servers:
NUMA effects. Buffer placement and the affinity of processes and memory buffers can result in large variations in performance and can limit scalability (see the affinity sketch after this list).
Efficiency of I/O caching. I/O caching is an important function of the I/O path in the operating system kernel. Modern multicore servers can improve the efficiency of the I/O cache by trading CPU cycles for effective cache capacity via deduplication (see the deduplication sketch after this list).
I/O scheduling. A single scheduler will not work well for the diverse workloads of modern systems; instead, a system can dynamically choose a scheduler from the existing choices in the operating system kernel by considering all of its I/O requests (see the scheduler-switching sketch after this list).
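To make the affinity problem concrete, the following is a minimal user-level sketch, not taken from the chapter's custom I/O stack, of keeping an I/O thread and its buffer on the same NUMA node using sched_setaffinity and libnuma; the CPU number and buffer size are assumptions chosen only for illustration (link with -lnuma).

/* Minimal sketch: pin an I/O thread to a CPU and allocate its buffer on the
 * NUMA node that owns that CPU, so buffer accesses stay node-local.
 * The CPU number is an assumption; a real system would pick the CPU
 * closest to the storage adapter. */
#define _GNU_SOURCE
#include <sched.h>
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return 1;
    }

    int cpu = 4;                          /* assumed CPU near the storage adapter */
    int node = numa_node_of_cpu(cpu);     /* NUMA node that owns this CPU */

    /* Pin the calling thread so the scheduler does not migrate it. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* Allocate the I/O buffer from memory local to the same node. */
    size_t len = 1 << 20;                 /* 1 MiB buffer, illustrative */
    void *buf = numa_alloc_onnode(len, node);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    printf("thread pinned to CPU %d, buffer allocated on node %d\n", cpu, node);
    /* ... issue reads and writes through buf here ... */

    numa_free(buf, len);
    return 0;
}

Running the same workload with the buffer deliberately allocated on a remote node is a simple way to expose the performance variation described above.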
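To illustrate the CPU-for-capacity trade-off, the next sketch, again an illustration rather than the chapter's mechanism, caches blocks by a fingerprint of their contents so that identical blocks share one copy; the FNV-1a hash, the 4 KiB block size, and the fixed-size table with no eviction are all simplifying assumptions.

/* Minimal sketch of a deduplicating block cache: the CPU spent hashing and
 * comparing block contents buys effective cache capacity, because duplicate
 * blocks are stored once and only reference-counted. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define BLOCK_SIZE 4096
#define TABLE_SIZE 1024                  /* illustrative capacity in unique blocks */

struct cache_entry {
    uint64_t fingerprint;                /* hash of the block contents */
    unsigned char *data;                 /* single shared copy of the block */
    int refs;                            /* logical blocks mapped to this copy */
};

static struct cache_entry table[TABLE_SIZE];

/* FNV-1a over the block contents: the CPU cost traded for cache space. */
static uint64_t fingerprint(const unsigned char *block)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < BLOCK_SIZE; i++)
        h = (h ^ block[i]) * 1099511628211ULL;
    return h;
}

/* Insert a block; a duplicate only increments the reference count. */
static struct cache_entry *cache_insert(const unsigned char *block)
{
    uint64_t fp = fingerprint(block);
    size_t start = fp % TABLE_SIZE;

    for (size_t probe = 0; probe < TABLE_SIZE; probe++) {
        struct cache_entry *e = &table[(start + probe) % TABLE_SIZE];
        if (e->data == NULL) {                         /* free slot: store a new copy */
            e->data = malloc(BLOCK_SIZE);
            if (e->data == NULL)
                return NULL;
            e->fingerprint = fp;
            memcpy(e->data, block, BLOCK_SIZE);
            e->refs = 1;
            return e;
        }
        if (e->fingerprint == fp &&
            memcmp(e->data, block, BLOCK_SIZE) == 0) { /* duplicate contents */
            e->refs++;
            return e;
        }
    }
    return NULL;                                       /* table full; no eviction here */
}

int main(void)
{
    unsigned char a[BLOCK_SIZE] = {0}, b[BLOCK_SIZE] = {0};
    cache_insert(a);
    struct cache_entry *e = cache_insert(b);           /* same contents: shared copy */
    printf("cached copies: 1, references: %d\n", e ? e->refs : 0);
    return 0;
}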
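Finally, as a concrete example of choosing among the schedulers the kernel already offers, the following sketch switches the block I/O scheduler of a Linux device at run time through sysfs; the device name sda and the mq-deadline scheduler are assumptions, and a monitoring daemon could trigger such a switch when the observed request mix changes.

/* Minimal sketch: select a different block I/O scheduler for one device by
 * writing its name to the device's sysfs scheduler file (requires root).
 * The device and scheduler names below are assumptions. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/block/sda/queue/scheduler";   /* assumed device */
    FILE *f = fopen(path, "w");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    /* Writing a scheduler name selects it for this device from the kernel's
     * currently available choices. */
    if (fputs("mq-deadline\n", f) == EOF) {
        perror("fputs");
        fclose(f);
        return 1;
    }
    fclose(f);
    return 0;
}

Reading the same sysfs file lists the schedulers the kernel currently offers for the device, with the active one shown in brackets.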
Some of these problems are illustrated on real systems, along with general mitigating approaches, using a custom-designed kernel I/O stack that provides new mechanisms for affinity management, buffer management, synchronization, and scheduling in the I/O path.
The rest of this chapter is organized as follows: Section 32.2 discusses challenges in storage I/O, Section 32.3 presents ideas about the future, Section 32.4 provides solutions to improve the I/O stack, and finally, Section 32.5 summarizes observations and predictions with concluding remarks.
32.2 Storage I/O at Present
Currently, storage I/O faces new challenges. Until recently, storage I/O
was primarily limited by the storage devices themselves, as HDDs were the
main limiting factor for the number of IOPS and the I/O throughput a server
could sustain. As such, the main goal of the host-level I/O path has been to
reduce the number of I/O operations, e.g., by properly managing metadata.
However, with the advent of new storage device technologies, such as SSDs,
modern multicore servers can operate at a performance regime that is two
 