Experiment Results with an Aged File System

The free space of an aged file system is usually fragmented, and it is sometimes difficult to find a large chunk of contiguous space when creating or extending files. This usually causes large files to consist of a number of fragments of various sizes, and files in the same directory to be dispersed across the disk. This non-contiguous allocation of logically related blocks of data worsens the performance of I/O-intensive applications. However, it can give DULO more opportunities to show its effectiveness by trying to keep small fragments in memory.

The experiments with an aged file system show that, for workloads dominated by long sequential accesses such as TPC-H, an aged file system degrades performance. For example, with a memory size of 448MB, the execution time of TPC-H on an aged file system is 107% longer than on a fresh file system. This is because on an aged file system the large data files scanned by TPC-H are broken into pieces of various sizes, and accessing small pieces of data on disk significantly increases I/O times. When dealing with the sequences of various sizes caused by an aged file system, DULO can reduce execution time by a larger percentage than it does on a fresh file system. For TPC-H, DULO can hardly reduce the execution time on a fresh file system. On an aged file system, however, DULO manages to identify the small sequences and give them a high caching priority, so that their high I/O costs can be avoided. This results in a 16.3% reduction of its execution time with the memory size of 448MB. DULO similarly identifies short sequences, and accordingly reduces execution time by 2.1% with a memory size of 384MB. However, for the almost-all-random workload diff, more than 80% of the sequences are shorter than 4 blocks. Unsurprisingly, DULO cannot create sequential disk requests from workload requests consisting of purely random blocks, and as expected it cannot reduce the execution time.

For workloads whose patterns mix sequential and random accesses, such as BLAST and PostMark, an aged file system has different effects on DULO's performance, depending on the sequentiality of the workload and the memory size. For BLAST, which abounds in long sequences, DULO reduces execution time by a larger percentage on an aged file system than on a fresh file system when the memory size is large. For workloads with a relatively small percentage of long sequences, the loss of long sequences makes the access pattern close to that of almost-all-random applications, where the lack of sufficiently long sequences causes short sequences to be replaced quickly. Thus we expect that DULO may reduce execution time by a smaller percentage with an aged file system than with a fresh one, and this is confirmed by our experimental results.

While programs and file systems are designed to preserve sequential accesses for efficient disk accesses, DULO is important in keeping system performance from degrading on an aged file system and in helping retain the expected performance advantage associated with sequential accesses.
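The core idea above, that blocks belonging to short on-disk sequences are expensive to re-fetch (one seek amortized over few blocks) and so deserve higher caching priority, can be sketched as follows. This is a simplified illustration under our own assumptions, not DULO's actual algorithm: real replacement decisions also weigh recency, and sequence detection operates on disk placement, not a simple sorted list.

```python
# Toy sketch of sequence-size-aware eviction: prefer evicting blocks
# from LONG sequences, since they can be brought back with one cheap
# sequential read; blocks in short sequences stay cached.
# (Illustrative only; not DULO's actual implementation.)

def detect_sequences(block_numbers):
    """Group sorted disk block numbers into runs of contiguous blocks."""
    sequences = []
    run = [block_numbers[0]]
    for b in block_numbers[1:]:
        if b == run[-1] + 1:
            run.append(b)
        else:
            sequences.append(run)
            run = [b]
    sequences.append(run)
    return sequences

def choose_eviction_victims(cached_blocks, n):
    """Pick n blocks to evict, taking them from the longest sequences
    first: their re-fetch cost per block is lowest."""
    seqs = detect_sequences(sorted(cached_blocks))
    seqs.sort(key=len, reverse=True)  # longest (cheapest to re-read) first
    victims = []
    for seq in seqs:
        for b in seq:
            if len(victims) < n:
                victims.append(b)
    return victims
```

For example, with cached blocks {1..5, 10, 20, 21}, the five-block run 1..5 is sacrificed before the isolated block 10, which would cost a full seek to re-read.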
Experiments on Virtual Memory Paging
In order to study the influence of the DULO scheme on VM paging performance, we use a representative scientific computing benchmark, sparse matrix multiplication (SMM), from the NIST benchmark suite SciMark2. The total working set, including the result vector and the index arrays, is around 348MB.
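The SMM kernel multiplies a sparse matrix, stored in a compressed row format, by a dense vector. A minimal sketch of this kind of computation is shown below; the tiny matrix is illustrative, not SciMark2's actual data or its exact code.

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """Multiply a CSR-format sparse matrix by a dense vector x.
    values/col_idx hold the nonzeros; row_ptr[i]..row_ptr[i+1]
    delimits row i. Accesses to x are data-dependent and scattered,
    which is what makes the working set stress the paging system."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# 3x3 example matrix:  [[2, 0, 1],
#                       [0, 3, 0],
#                       [4, 0, 5]]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0,   2,   1,   0,   2]
row_ptr = [0, 2, 3, 5]
```

At SciMark2's problem sizes the index arrays and vectors dominate memory use, which is why the working set quoted above includes them.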
To cause the system to page and to stress swap space accesses, we have to adopt small memory sizes, from 336MB to 440MB, including the memory used by the kernel and applications.
To increase spatial locality of swapped-out
pages in the disk swap space, Linux tries to allo-
cate contiguous swap slots on disk to sequentially