the lowest priority processes belong to another.
Then, when Linux switches between the processes
within the groups, the priority is not taken into
account.
One solution is to determine how many bins there should be by calculating the total size of the memory requirements and dividing it by the size of the available physical memory (the bin size), just as the medium-term scheduler always does; then to sort the process list by priority; and finally to take the processes from the sorted list and fill the bins in a Round-Robin manner, as sketched below. This solution cannot be implemented together with the shared-pages solution, because the shared-pages solution requires sorting by the number of shared pages rather than by priority.
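As a concrete illustration of this bin-filling scheme, consider the following user-space sketch; the process_t structure, its fields, and the helper names are assumptions made for this example and are not taken from the actual scheduler code.

    /* Illustrative user-space sketch of the priority-based bin filling
     * described above; the process_t fields are assumptions made for
     * this example, not the actual scheduler data structures. */
    #include <stdlib.h>

    typedef struct {
        int  pid;
        long pages;      /* memory demand of the process, in pages        */
        int  priority;   /* higher value assumed to mean higher priority  */
        int  bin;        /* bin the process is assigned to                */
    } process_t;

    static int by_priority_desc(const void *a, const void *b)
    {
        const process_t *p = a, *q = b;
        return q->priority - p->priority;
    }

    /* Number of bins = total memory demand / available physical memory
     * (rounded up); the priority-sorted processes are then dealt out to
     * the bins in a Round-Robin manner.  Per-bin capacity checks are
     * omitted from this sketch. */
    void fill_bins(process_t *procs, int n, long available_pages)
    {
        long total = 0;
        for (int i = 0; i < n; i++)
            total += procs[i].pages;

        int nbins = (int)((total + available_pages - 1) / available_pages);
        if (nbins < 1)
            nbins = 1;

        qsort(procs, n, sizeof(process_t), by_priority_desc);

        for (int i = 0; i < n; i++)
            procs[i].bin = i % nbins;
    }

Dealing the sorted processes out in this way spreads high-priority and low-priority processes evenly over the bins, so no single bin collects all of the high priorities.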
Another solution is to assign a different time slice to each group according to the average priority of the processes inside the group. The average priority is calculated for each group, and a group with a higher average priority is awarded a longer time slice. This solution was chosen based on the results that are shown in section 5.4.
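As a rough illustration of this rule, the following sketch assigns a group time slice that grows with the group's average priority; the base value, the scaling factor, and the group_t layout are assumptions made for this example rather than the values used in the modified kernel.

    /* Illustrative sketch of per-group time-slice assignment; the base
     * slice and the scaling factor are assumptions for this example. */
    #define BASE_SLICE_MS    100
    #define MS_PER_PRIORITY   10

    typedef struct {
        const int *priorities;   /* priorities of the processes in the group */
        int        nprocs;
        int        timeslice_ms; /* time slice awarded to the whole group    */
    } group_t;

    void assign_group_timeslice(group_t *g)
    {
        long sum = 0;
        for (int i = 0; i < g->nprocs; i++)
            sum += g->priorities[i];

        int avg = (g->nprocs > 0) ? (int)(sum / g->nprocs) : 0;

        /* A group with a higher average priority gets a longer slice. */
        g->timeslice_ms = BASE_SLICE_MS + avg * MS_PER_PRIORITY;
    }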
PERFORMANCE RESULTS

The best way to evaluate the medium-term scheduler is by considering its performance results. The following subsections describe an extensive evaluation of the medium-term scheduler.

TESTBED

The benchmarks were executed on a Pentium 2.4GHz with 128MB of RAM and a 1MB cache, running Linux kernel 2.6.9 with the Fedora Core 2 distribution. The page size was 4KBytes. It should be noted that even though the platform machine had 128MBytes of physical memory, the bin size calculation should take into account that a certain portion of this memory is occupied by the Linux/RedHat daemons and the X-Windows system, plus the kernel itself along with its threads.

We tested the performance of the kernel with the new scheduling approach using five different benchmarks, in order to get the widest view we could:

1. SPEC cpu2000 (SPEC, 2000). The SPEC manual explicitly notes that attempting to run the suite with less than 256MBytes of memory will result in measuring the speed of the paging system instead of the speed of the CPU. This suits us well, because our aim is precisely to measure the speed of the paging system; hence, we used a machine with just 128MB of RAM. Using a machine with more RAM would have forced us not to use SPEC.
2. A synthetic benchmark that forks processes, each of which demands a constant number of pages (8MBytes). The processes access their memory at random offsets and therefore cause thrashing. This benchmark was tested within the range of 16MBytes-136MBytes: the parent process forks children whose total size equals the required size and collects the information from them. Let us denote this test by SYN8; a sketch of it is given after the list.
3. The formal Matlab benchmark, which executes six different Matlab tasks described in (MATLAB, 2004).
4. Another synthetic benchmark, using massive shared-memory allocations. The test has two processes that share 16MBytes, with 2 additional MBytes of private memory for each process. The processes copy parts of their private memory into the shared memory and parts of the shared memory into their private memory at random offsets. The benchmark consists of a number of such tests, according to the desired total size. Let us denote this test as SYNSHARED; a sketch of one such pair is also given after the list.
5. For interactive and real-time processes, we used the Xine MPEG viewer, which was used to show a short video clip in a loop.
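To make the structure of SYN8 concrete, here is a rough sketch of the benchmark, assuming each child allocates 8MBytes and touches it at random offsets; the iteration count, the command-line handling, and the omission of the timing collection are simplifications made for this example.

    /* Rough sketch of SYN8: the parent forks children until the requested
     * total size is reached; every child demands 8MBytes and touches it
     * at random offsets, so the combined working set causes thrashing. */
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define CHILD_SIZE (8 * 1024 * 1024)     /* 8MBytes per child */

    static void child_work(void)
    {
        char *buf = malloc(CHILD_SIZE);
        if (buf == NULL)
            _exit(1);
        srand(getpid());
        for (long i = 0; i < 5L * 1000 * 1000; i++) {
            /* pick a random 1KB block and a random offset inside it */
            size_t off = (size_t)(rand() % 8192) * 1024 + (size_t)(rand() % 1024);
            buf[off]++;
        }
        _exit(0);
    }

    int main(int argc, char *argv[])
    {
        int total_mb  = (argc > 1) ? atoi(argv[1]) : 16;  /* e.g. 16..136 */
        int nchildren = total_mb / 8;

        for (int i = 0; i < nchildren; i++)
            if (fork() == 0)
                child_work();                /* never returns */

        for (int i = 0; i < nchildren; i++)  /* parent collects the children */
            wait(NULL);
        return 0;
    }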
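Similarly, the following sketch outlines one SYNSHARED pair, assuming the 16MByte shared region is created with an anonymous shared mapping and each process holds 2MBytes of private memory; the chunk size, the iteration count, and the lack of synchronization are assumptions made for the example.

    /* Rough sketch of one SYNSHARED pair: two processes share a 16MByte
     * anonymous mapping and each has 2MBytes of private memory; they
     * copy chunks back and forth at random offsets. */
    #define _DEFAULT_SOURCE
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define SHARED_SIZE  (16 * 1024 * 1024)
    #define PRIVATE_SIZE ( 2 * 1024 * 1024)
    #define CHUNK        4096

    static void worker(char *shared)
    {
        char *priv = malloc(PRIVATE_SIZE);
        if (priv == NULL)
            _exit(1);
        srand(getpid());
        for (long i = 0; i < 1000L * 1000; i++) {
            size_t s = (size_t)(rand() % (SHARED_SIZE  / CHUNK)) * CHUNK;
            size_t p = (size_t)(rand() % (PRIVATE_SIZE / CHUNK)) * CHUNK;
            if (i & 1)
                memcpy(shared + s, priv + p, CHUNK);   /* private -> shared */
            else
                memcpy(priv + p, shared + s, CHUNK);   /* shared -> private */
        }
        _exit(0);
    }

    int main(void)
    {
        char *shared = mmap(NULL, SHARED_SIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED)
            return 1;

        if (fork() == 0)
            worker(shared);   /* first process of the pair  */
        if (fork() == 0)
            worker(shared);   /* second process of the pair */

        wait(NULL);
        wait(NULL);
        return 0;
    }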