problem. Section 3 presents the reduced paging algorithm. Finally, Section 4 presents and evaluates the results.
Thrashing in the Linux Operating System
Traditionally, the UNIX scheduler is priority-based (Vahalia, 1996). The process scheduling algorithm of Linux is based on the traditional UNIX scheduler. The Linux scheduler is well known, and descriptions of it can be found in many places, e.g. (Beck et al., 1998), (Komarinski and Collett, 1998).
The Linux virtual memory mechanism, along with its paging techniques, gives Linux the ability to manage many processes even when the real memory requirements are larger than the available physical memory. However, the virtual memory mechanism cannot handle every circumstance. If too much memory is requested within a short time, the swapping mechanism cannot satisfy the memory requirements quickly; as a result, pages are swapped in and out again and again and little progress is made.
Linux will kill processes if thrashing occurs and the system is out of swap space. In some sense there is nothing else the kernel can do in such a situation, because memory is needed but no more physical or swap memory is available (Gorman, 2004), (Marti, 2002). If thrashing occurs, the Linux kernel will kill the most memory-consuming processes. This behavior is very harsh, and its implications can be drastic. For example, if a server runs several applications with mutual dependencies, killing one of these applications may yield unexpected results.
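As a rough illustration of this victim-selection idea (not the actual Linux out-of-memory code), the following sketch simply picks the process that consumes the most memory; the Process record and its fields are assumptions made for the example.

from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    resident_pages: int   # pages currently held in physical memory (assumed field)
    swap_pages: int       # pages currently held in swap space (assumed field)

def pick_victim(processes):
    """Return the process that consumes the most memory overall."""
    return max(processes, key=lambda p: p.resident_pages + p.swap_pages)

procs = [Process(101, 5000, 2000), Process(102, 12000, 8000), Process(103, 300, 0)]
print(pick_victim(procs).pid)   # prints 102, the largest memory consumer

The real kernel weighs several additional factors when choosing a victim; the sketch only captures the "largest consumer" idea described above.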
Linux version 2.6 has implemented the token-ordered LRU policy (Jiang and Zhang, 2005). The key idea of this policy is to eliminate page swapping in certain cases, called by the developers "false LRU pages". Sometimes a page of a sleeping process is swapped out, even though it would not have been swapped out had the process not been sleeping.

The concept of the token-ordered LRU policy is to set one or more tokens in the system. The token is taken by a process when a page fault occurs. The system prevents false LRU pages from occurring for the process holding the token. This feature allows the process holding the token to establish its working set quickly. By giving this privilege to a process, the total number of false LRU pages is reduced and the pool of competing pages becomes ordered. However, this policy is beneficial only when the memory needs slightly exceed the physical memory space. A large memory excess involving many processes will be treated by this method as First-In-First-Out, while the other processes still vie for memory allocations and thrash; therefore, in (Jiang and Zhang, 2005) the traditional killing approach of Linux is kept for severe situations. In this chapter another technique is suggested that can also handle the cases that were handled by the killing methods.

Another problem is that the process selection algorithm of Linux can mistakenly select a process executing an endless loop. Such a selection will even worsen the thrashing. Selecting a very long process that has already been executing for some hours will also be damaging. The selection algorithm can only estimate which process is the shortest, and its estimation might be wrong.
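To make the token idea above concrete, the sketch below is a simplified, user-level simulation of a single swap token; it is not the Linux 2.6 implementation, and every structure and helper here is an assumption made for the example.

token_holder = None          # pid of the process currently holding the token

def page_fault(pid):
    """On a page fault, hand the token to the faulting process if it is free."""
    global token_holder
    if token_holder is None:
        token_holder = pid

def release_token(pid):
    """Give the token back, e.g. once the working set has been established."""
    global token_holder
    if token_holder == pid:
        token_holder = None

def select_victim_page(lru_pages):
    """Pick a page to evict from lru_pages, a list of (pid, page_no) pairs
    ordered from least to most recently used.  Pages of the token holder
    are skipped, so they cannot become 'false LRU' victims."""
    for pid, page_no in lru_pages:
        if pid != token_holder:
            return (pid, page_no)
    # If every resident page belongs to the token holder, fall back to true LRU.
    return lru_pages[0] if lru_pages else None

The actual policy described by Jiang and Zhang is more involved, for instance in how and when the token moves between processes; the sketch only captures the basic effect of exempting the token holder's pages from eviction.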
Bin Packing
The suggested technique needs the set of all the processes that are currently in the virtual memory. This set is split into several groups, such that the total memory size of each group is as close as possible to the size of the available real memory.

These groups of processes can be built as follows: there is a set of processes P_i, each with a memory allocation. Let M_i denote the maximal working set size that might be needed by process P_i. We need an algorithm that splits these M_i's into as few groups as possible, with the sum of the M_i's in each group not exceeding the size of the available real memory.
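One standard way to build such groups is the first-fit-decreasing bin-packing heuristic, sketched below; it is offered only as an illustration under the stated constraint and is not necessarily the algorithm developed in this chapter. The memory figures in the example are invented.

def group_processes(working_sets, real_memory):
    """working_sets: dict mapping pid -> M_i (maximal working set size).
    real_memory: capacity of one group, in the same units as M_i.
    Returns a list of groups, each a list of pids."""
    groups = []   # each entry: [remaining_capacity, [pids]]
    # Consider the largest working sets first (first-fit-decreasing).
    for pid, size in sorted(working_sets.items(), key=lambda kv: kv[1], reverse=True):
        if size > real_memory:
            # A process larger than real memory gets a group of its own.
            groups.append([0, [pid]])
            continue
        for g in groups:
            if g[0] >= size:          # first group with enough room
                g[0] -= size
                g[1].append(pid)
                break
        else:
            groups.append([real_memory - size, [pid]])
    return [pids for _, pids in groups]

# Example with invented sizes (MB) and 1024 MB of real memory:
# group_processes({1: 700, 2: 400, 3: 300, 4: 250}, 1024)  ->  [[1, 3], [2, 4]]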