A typical example of this configuration is a campus or enterprise network with many heterogeneous computers. In order to simulate such a scenario with the disk I/O traces that we collected from a very busy production web server, we assume that the servers in our scenario are all web servers with many users. The disk I/O traces used in our simulation have already been described above.
We have built a discrete-event simulator of the environment with 1000 nodes. The 1000-node simulation topology is generated using the AS Waxman model in the topology generator BRITE (Medina et al., 2001). We use the top-down method in BRITE to generate a two-level network topology that comprises 10 ASes, each with 100 router-level nodes; node placement follows a heavy-tailed distribution. The generated topology is a DAG whose vertices are simulation nodes and whose edges are overlay paths between two vertices. The route between any two vertices is the shortest path between them, computed with Dijkstra's algorithm.
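BRITE emits the generated topology as a file of nodes and weighted edges; the routing step itself is standard. The following Python sketch shows Dijkstra's algorithm over a weighted adjacency list, using a small hypothetical graph in place of the 1000-node BRITE output; the node names and link weights are illustrative only.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths over a weighted adjacency list
    {node: [(neighbor, weight), ...]}; returns distances and predecessors."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def route(prev, src, dst):
    """Reconstruct the src -> dst routing path from the predecessor map."""
    hops = [dst]
    while hops[-1] != src:
        hops.append(prev[hops[-1]])
    return hops[::-1]

# Hypothetical stand-in for the BRITE topology; weights are link latencies (ms).
adj = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 2.0), ("D", 5.0)],
    "C": [("A", 4.0), ("B", 2.0), ("D", 1.0)],
    "D": [("B", 5.0), ("C", 1.0)],
}
dist, prev = dijkstra(adj, "A")
print(route(prev, "A", "D"), dist["D"])  # ['A', 'B', 'C', 'D'] 4.0
```

In the simulator, the same computation would simply be run over the full 1000-node graph, with the resulting routes cached for every pair of communicating nodes.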
We define parameters of the hard disks and the remote memory to calculate local and remote I/O overheads. When performing a disk read of $n$ successive blocks, the overhead is given by:

$$T_S + T_L + (n - 1) \times T_W + n \times \frac{S_p}{B_d}$$

where $T_S$ is the seek time, $T_L$ the rotational latency, $T_W$ the waiting time between two successive readings, $S_p$ the block size, and $B_d$ the disk bandwidth. Typical values of these parameters are $T_S = 4.9$ milliseconds, $T_L = 3.0$ milliseconds, $T_W = 0.2$ milliseconds, $S_p = 4$ KB, and $B_d = 80$ MB/s.

For the remote memory, the read overhead for $n$ successive block readings is given by:

$$T_U + T_{RTT} + n \times \frac{S_p}{B_N}$$

where $T_U$ is the start-up time, $T_{RTT}$ the round-trip time, and $B_N$ the network bandwidth. In our simulation, $T_U$ is set to 5 microseconds, $T_{RTT}$ varies from 1 millisecond to 4 milliseconds following a uniform distribution, and $B_N$ varies from 0.5 MB/s to 3 MB/s following a uniform distribution. These parameters come from actual measurements of our campus network.
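To make these formulas concrete, the sketch below evaluates both overheads for a few block counts. It is only an illustration: the function names and the sample values of n are ours, and T_RTT and B_N are fixed at the midpoints of the uniform ranges quoted above rather than drawn at random.

```python
def disk_read_overhead(n, t_s=4.9e-3, t_l=3.0e-3, t_w=0.2e-3,
                       s_p=4 * 1024, b_d=80 * 1024 * 1024):
    """Local disk overhead (seconds) for n successive blocks:
    T_S + T_L + (n - 1) * T_W + n * S_p / B_d."""
    return t_s + t_l + (n - 1) * t_w + n * s_p / b_d

def remote_read_overhead(n, t_u=5e-6, t_rtt=2.5e-3,
                         s_p=4 * 1024, b_n=1.75 * 1024 * 1024):
    """Remote memory overhead (seconds) for n successive blocks:
    T_U + T_RTT + n * S_p / B_N (midpoint values for T_RTT and B_N)."""
    return t_u + t_rtt + n * s_p / b_n

for n in (1, 2, 8, 64):
    print(f"n={n:3d}  disk={disk_read_overhead(n) * 1e3:7.3f} ms"
          f"  remote={remote_read_overhead(n) * 1e3:7.3f} ms")
```

With these midpoint values, remote memory is cheaper only for very short reads, where the disk's fixed seek and rotational costs dominate; for longer sequential runs the disk's far higher bandwidth wins.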
Results
Simulation 1. The effect of the proportion of user nodes
In this set of simulations, we test the effect of the proportion of user nodes on average overheads. The proportion of user nodes is set to $c$, $0 \le c \le 1$, in our simulation.
Both the overhead of RAM Grid without prefetching and the overhead with prefetching change as the proportion of user nodes changes. As illustrated in Figure 2, the overhead changes rapidly when the proportion of user nodes is in the range from 20% to 30%; outside this range, the curves become flat. This is reasonable: when the proportion of user nodes is below 20%, most of them can obtain sufficient memory resources, and when it exceeds 30%, the number of user nodes that can capture resources becomes small, so the curves change little as the proportion of user nodes grows. The bounds 20% and 30% can therefore be considered critical proportions.
can be considered as critical proportions. In Fig-
ure 3, we compare three types of hit ratios in the
proposed scheme: 1) the hit ratios of local and
remote memory, which means the percentage of
all accesses except the ones that do not hit any
type of cache and cause the actual disk I/O op-
erations; 2) the hit ratios of local buffer cache,
meaning the percentage of all accesses which hit
the local cache of file system, or hit the prefetch-
ing buffer in our scheme; 3) the hit ratios of
prefetching buffer only, that is, the probability of
hitting the prefetching buffer if the access does