Various operating systems implement the virtual memory concept using the paging model, i.e., the operating system loads a memory page into physical memory only when a process asks for it. If no free memory frame is available, the operating system swaps out a page from physical memory to secondary memory (the hard disk). Different techniques for choosing which pages the operating system will swap out to disk have been suggested over the years (Belady, 1966).
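As an illustration of such a victim-selection policy, the sketch below simulates demand paging with LRU replacement, one of the classic schemes studied in this line of work. The function name and the reference-string format are our own illustrative choices, not taken from the cited work:

```python
from collections import OrderedDict

def simulate_lru(references, num_frames):
    """Count page faults for a page-reference string under LRU replacement."""
    frames = OrderedDict()  # resident pages; insertion order tracks recency
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)       # hit: refresh the page's recency
        else:
            faults += 1                    # page fault: page must be loaded
            if len(frames) >= num_frames:  # no free frame: evict the LRU page
                frames.popitem(last=False)
            frames[page] = None
    return faults
```

For example, with three frames the reference string 1, 2, 3, 1, 4, 2 incurs five faults: the re-reference of page 1 is the only hit.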
When a large memory space is needed, swapping pages in and out of memory will consume a large portion of the CPU cycles. This situation is called thrashing (Abrossimov et al., 1989). Thrashing causes severe time overhead and, as a result, a substantial slowdown of the system. Several studies aimed at alleviating the unwanted consequences of thrashing have been carried out over the years (Galvin and Silberschatz, 1998).
In (Jiang and Zhang, 2001), (Jiang and Zhang, 2002), and (Jiang, 2009), the authors propose granting one of the interactive processes a privilege: the process's pages will not be swapped out. As a result, the privileged process will execute faster and therefore will free its memory allocation earlier. This feature may help the operating system free enough memory quickly and return to normal behavior. However, this technique is advantageous only if the memory allocations slightly exceed the physical memory.
If many processes produce a large memory excess, this technique behaves like a First-In-First-Out scheduler: the FIFO queue keeps advancing while the system keeps on thrashing. Linux 2.6 has a similar mechanism, and its scheduler is described in section 1.
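The benefit and the limit of such a privilege can be seen in a toy model. The sketch below is our own construction, not the cited authors' code: processes run round-robin over a shared pool of frames with LRU eviction, and optionally one process's pages are pinned. Pinning helps only while the pinned working set leaves room for the others:

```python
from collections import OrderedDict

def run(working_sets, frames, quanta, privileged=None):
    """Toy round-robin model: each quantum, one process touches every page
    in its working set. Resident pages are evicted in LRU order, except
    pages of the privileged process, which are pinned (never evicted).
    Assumes the pinned working set is smaller than the frame pool.
    Returns the total number of page faults."""
    resident = OrderedDict()          # (pid, page) -> None, in LRU order
    faults = 0
    for t in range(quanta):
        pid = t % len(working_sets)
        for page in range(working_sets[pid]):
            key = (pid, page)
            if key in resident:
                resident.move_to_end(key)
            else:
                faults += 1
                if len(resident) >= frames:
                    # evict the least recently used non-pinned page
                    victim = next(k for k in resident if k[0] != privileged)
                    del resident[victim]
                resident[key] = None
    return faults
```

With two processes of six pages each competing for eight frames, pinning one process removes all of its faults after its first quantum, while the other process still thrashes; with many oversized processes, each one would simply wait its turn for the privilege, which is the FIFO behavior described above.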
In (Batat and Feitelson, 2000), the authors suggest not admitting jobs that do not fit into the currently available memory. The system waits for several processes to finish their execution, and only when enough memory has been freed can a new job be admitted. The authors also discuss the dilemma of how the memory size needed by a new job can be assessed. This technique is essentially very similar to the VMS technique that uses the "Balance Sets" method; however, the authors of this paper implemented the "Balance Sets" concept for distributed systems.
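A minimal sketch of such memory-based admission control, assuming strict first-come-first-served order and a per-job memory estimate (the job names, units, and blocking-at-the-head policy are our illustrative assumptions, not details from the cited paper):

```python
def admit(pending, free_mb):
    """Admit queued jobs head-first while their estimated memory demand
    fits into the currently free memory; the head job (and everything
    behind it) waits otherwise, until running jobs release memory.
    `pending` is a list of (job_name, estimated_mb) tuples in arrival
    order; admitted jobs are removed from it in place."""
    admitted = []
    while pending and pending[0][1] <= free_mb:
        name, demand = pending.pop(0)
        free_mb -= demand          # reserve memory for the admitted job
        admitted.append(name)
    return admitted, free_mb
```

For instance, with 512 MB free, a queue of jobs estimated at 300, 500, and 100 MB admits only the first job; the 500 MB job blocks the queue even though the 100 MB job behind it would fit, which avoids starving large jobs.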
In (Nikolopoulos, 2003), the author handles the thrashing problem by adjusting the memory needs of a process to the currently available memory. This solution is quite different from the other solutions because it modifies the processes instead of modifying the operating system.
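For example, a process could size its own working buffer against the memory currently available, along these lines (the function, its name, and the safety fraction are our illustration of the idea, not values or code from the paper):

```python
def choose_buffer_size(requested_bytes, available_bytes, safety=0.8):
    """Process-side adaptation sketch: cap the working buffer at a
    fraction of the currently available physical memory, so the process
    does not push the system into thrashing. `safety` leaves headroom
    for other processes."""
    return min(requested_bytes, int(available_bytes * safety))
```

A process that would have requested 2 GB on a machine with only 1 GB available would instead allocate about 800 MB and process its data in passes, trading extra computation for the absence of page swapping.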
Some hardware solutions for thrashing, implemented in the cache, have also been suggested (Gonzalez et al., 1997), (Chu and Ito, 2000). Typically, LRU is the basic scheme that both hardware and software victim-selection algorithms employ. However, the LRU algorithm is manipulated differently by hardware and software implementations. Naturally, hardware solutions must be much simpler to implement; on the other hand, hardware solutions can use data that the operating system does not know, e.g. the cache can distinguish between an instruction block and a data block, while the operating system cannot. Clearly, this information can be very useful for victim-selection algorithms.
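To make the distinction concrete, the sketch below shows a victim-selection policy over LRU-ordered cache blocks that prefers evicting data blocks over instruction blocks, the kind of block-type information a hardware cache has but the operating system lacks. The data structure and the type tags are our own illustration, not the schemes of the cited papers:

```python
from collections import OrderedDict

def select_victim(blocks, protect_instructions=True):
    """Pick an eviction victim from cache blocks kept in LRU order.
    `blocks` maps a block address to its type tag, 'inst' or 'data'.
    With protection on, the least recently used *data* block is chosen;
    if every resident block is an instruction block (or protection is
    off), the policy falls back to plain LRU."""
    if protect_instructions:
        for addr, kind in blocks.items():  # iterate in LRU order
            if kind == 'data':
                return addr
    return next(iter(blocks))              # plain LRU victim
```

An operating-system-level replacement algorithm cannot apply this refinement, because page tables do not record whether a page holds code or data in a way the reclaim path can cheaply exploit.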
This chapter suggests a technique that modifies the traditional process scheduling method by adding a new layer of scheduling (Reuven and Wiseman, 2005), (Reuven and Wiseman, 2006). With this modification, the operating system can swap fewer pages in and out, thereby alleviating the slowdown stemming from thrashing. The technique suggested in this chapter is not restricted to a specific operating system; therefore, any multitasking paging system can employ it. The figures and the results given in this chapter have been produced by running benchmarks on the Linux operating system (Card et al., 1998). However, as has been noted, Linux is just an example used to show the feasibility of our concept.
The rest of the chapter is organized as follows. Section 1 describes the Linux scheduling algorithms. Section 2 explains the Bin Packing