Model vs. Reality
One way of answering what happens under different load conditions is to simulate the different loads, so you can see exactly how response time varies. For a precise answer, it matters exactly when each job arrives. But we also want to understand the behavior of a system under a range of conditions, and for that we construct a model, an approximation of reality.
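To make "simulate the different loads" concrete, here is a minimal sketch of a single-server queue simulation (names and the Poisson-arrival/fixed-service assumptions are mine, not from the text). It shows how mean response time grows as load increases:

```python
import random

def simulate(arrival_rate, service_time, n_jobs=100_000, seed=0):
    """Simulate a single FIFO server; return mean response time.

    Illustrative assumptions: Poisson arrivals (exponential
    interarrival times) and a fixed service time per job.
    """
    rng = random.Random(seed)
    clock = 0.0          # current arrival time
    free_at = 0.0        # time the server next becomes idle
    total = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(arrival_rate)
        start = max(clock, free_at)      # wait if the server is busy
        free_at = start + service_time
        total += free_at - clock         # response = queueing + service
    return total / n_jobs

# Response time grows sharply as utilization approaches 1.
for rate in (0.5, 0.9, 0.98):            # utilization = rate * service_time
    print(rate, round(simulate(rate, 1.0), 2))
```

Running the same simulator over a range of arrival rates is exactly the "range of conditions" experiment the text describes.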
Models are not true or false. If, by abstracting detail, we can still be approximately correct, then the model can be useful. After all, systems get used in ways that are quite different from how we predict they might!
Picking a workload
(anecdote: students were glum because they thought a system they built had horrible performance, e.g., 700ms response time. It turned out the system was fine, but their clients were overloading it.)
what if the rate of arrivals depends on response time? E.g., a fixed number of customers, where each one can't ask for more work until you respond. That means you can increase the load arbitrarily and still remain in steady state.
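One standard way to reason about this closed-loop case (not from the text, but consistent with its point) is the interactive response-time law: with N clients, think time Z, and service time S per request, throughput X = N / (R + Z) cannot exceed 1/S, so at saturation R ≈ N·S − Z. A sketch, with illustrative parameter names:

```python
def closed_response_time(n_clients, service_time, think_time):
    """Rough response time in a closed system, via the interactive
    response-time law: X = N / (R + Z) and X <= 1/S, so at
    saturation R = N*S - Z. Names here are illustrative."""
    return max(service_time, n_clients * service_time - think_time)

# Adding clients past saturation grows response time linearly,
# but the system never diverges -- it stays in steady state.
print(closed_response_time(10, 0.01, 0.05))    # near saturation
print(closed_response_time(1000, 0.01, 0.05))  # slow but stable
```

This is why a closed system can be pushed arbitrarily hard and still reach steady state: the clients themselves throttle the arrival rate.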
what if there are multiple servers? Is it better to have one queue for everyone, or one queue per server, or does it matter? The issue is that if a server can sit idle while others have work to do, that is effectively like reducing system capacity, which increases response time.
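The single-queue-vs-per-server-queue question can be tested directly in simulation. This sketch (my construction, assuming Poisson arrivals and exponential service) compares a shared queue, where an arriving job goes to whichever server frees up first, against committing each job to a random server's private queue:

```python
import random

def mean_response(n_servers, shared, n_jobs=50_000,
                  rate=1.8, service=1.0, seed=1):
    """Mean response time for n_servers identical FIFO servers.

    shared=True:  one queue for everyone (job takes the earliest-free server).
    shared=False: each job is assigned to a random per-server queue.
    Parameters are illustrative; rate=1.8, service=1.0 gives 90% utilization.
    """
    rng = random.Random(seed)
    free_at = [0.0] * n_servers     # when each server next goes idle
    clock = total = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(rate)
        if shared:
            s = min(range(n_servers), key=free_at.__getitem__)
        else:
            s = rng.randrange(n_servers)   # committed to one queue up front
        start = max(clock, free_at[s])
        free_at[s] = start + rng.expovariate(1.0 / service)
        total += free_at[s] - clock
    return total / n_jobs

# The shared queue never leaves a server idle while jobs wait,
# so it gives a noticeably lower mean response time at the same load.
print(mean_response(2, shared=True))
print(mean_response(2, shared=False))
```

The gap between the two numbers is exactly the "idle server while others have work" effect the text describes.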
what if we double the speed of a CPU that's 90% utilized? Unfortunately, you can't say that response time halves: suppose each job needs to do both CPU and I/O. If I/O was 80% of each job's time, doubling the CPU speed leaves the I/O time untouched, so the overall improvement is far less than 2x.
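The CPU/I/O point is a quick Amdahl-style calculation. Assuming the 80% figure refers to the I/O share of each job's time (the arithmetic below is mine):

```python
# Suppose each job spends 80% of its time in I/O and 20% on CPU.
io, cpu = 0.8, 0.2
new_time = io + cpu / 2     # doubling CPU speed halves only the CPU part
speedup = 1.0 / new_time    # overall speedup: about 1.11x, not 2x
print(speedup)
```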
lessons:
- load increases response time
- burstiness increases response time
- overload is catastrophic ⇒ design systems to operate with low to moderate load (if load hits 99%, performance collapses) ⇒ have an overload control plan
- scheduling often doesn't matter
- measuring systems: know your load ⇒ measure 1 request at a time ⇒ min latency ⇒ increase load to saturate the system ⇒ max throughput
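The measurement recipe above can be sketched as a closed-loop load generator: one client measures minimum latency, and raising the client count until throughput stops growing finds maximum throughput. The `request` callable and all names are hypothetical:

```python
import threading
import time

def benchmark(request, n_clients, duration=1.0):
    """Closed-loop load generator: n_clients threads each issue
    back-to-back calls to `request` (a hypothetical callable that
    performs one request) for `duration` seconds.
    Returns (mean_latency_seconds, throughput_requests_per_second)."""
    counts, latencies = [], []
    lock = threading.Lock()
    stop = time.perf_counter() + duration
    def worker():
        n, total = 0, 0.0
        while time.perf_counter() < stop:
            t0 = time.perf_counter()
            request()
            total += time.perf_counter() - t0
            n += 1
        with lock:
            counts.append(n)
            latencies.append(total)
    threads = [threading.Thread(target=worker) for _ in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    n = sum(counts)
    return sum(latencies) / n, n / duration

# n_clients=1 approximates min latency; sweep n_clients upward
# until throughput plateaus to find max throughput.
```

Measuring at one client first matters because under load, queueing delay inflates every latency sample; the unloaded number is the baseline.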