the receiving end. This generates significant improvements in terms of bandwidth and
CPU cycles. The maximum size of a jumbo frame is 9 KB, which means a single jumbo frame
is equivalent to six standard 1.5 KB frames, resulting in a net reduction of five frames, fewer
CPU cycles used on both ends, and only one set of TCP/IP and Ethernet headers instead of
six. This works out to a savings of 290 bytes sent over the network for every jumbo frame.
It takes roughly 80,000 standard frames per second to fill a Gigabit Ethernet pipe, and as
you can imagine, that is a lot of interrupt requests for the CPU to process, which is an
enormous overhead. In comparison, only about 14,000 jumbo frames per second are needed
to fill that pipe, which removes roughly 4 MBps (about 32 Mbps) of header overhead from
the wire. The savings in bandwidth and CPU time can produce significant increases in
network throughput and performance.
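To make the arithmetic above concrete, here is a minimal Python sketch of the frame-count and overhead calculation. The frame sizes and the 290-byte savings figure come from the text above; the 58-byte combined Ethernet/IP/TCP header size and the simplifying assumption that the full gigabit line rate carries payload are working assumptions for the estimate:

```python
# Back-of-the-envelope comparison of standard vs. jumbo frames.
# Assumed per-frame header overhead: Ethernet header + FCS (18 bytes)
# + IP header (20 bytes) + TCP header (20 bytes) = 58 bytes.
HEADER_BYTES = 58
STANDARD_PAYLOAD = 1_500        # bytes per standard frame
JUMBO_PAYLOAD = 9_000           # bytes per jumbo frame (6x standard)
LINK_BPS = 1_000_000_000        # Gigabit Ethernet line rate (assumed)

def frames_per_second(payload_bytes: int) -> float:
    """Frames needed each second to keep the link full of payload."""
    return LINK_BPS / (payload_bytes * 8)

std_fps = frames_per_second(STANDARD_PAYLOAD)    # ~80,000 frames/s
jumbo_fps = frames_per_second(JUMBO_PAYLOAD)     # ~14,000 frames/s

# Each jumbo frame replaces six standard frames, saving five header sets.
saved_per_jumbo = 5 * HEADER_BYTES               # 290 bytes
saved_per_sec = jumbo_fps * saved_per_jumbo      # bytes of overhead avoided

print(f"standard frames/s: {std_fps:,.0f}")
print(f"jumbo frames/s:    {jumbo_fps:,.0f}")
print(f"overhead saved: {saved_per_sec / 1e6:.1f} MB/s "
      f"({saved_per_sec * 8 / 1e6:.1f} Mbps)")
```

Running this reproduces the figures in the text: about 14,000 jumbo frames per second, each avoiding 290 bytes of headers, for roughly 4 MB (32 Mb) of overhead removed from the wire every second.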
FIGURE 5.3 Data frame receiving process (the NIC raises interrupt requests, or IRQs, to the CPU, which reads the incoming frames and their TCP headers)
Network Latency
Network latency, especially as it relates to cloud systems and applications, is not a simple thing
to isolate and define. Before there was a real Internet, latency was simply measured as the
number of hops between the user and the application and the inherent delays resulting from
that travel from source to destination. In a privately owned network such as an enterprise
network, latency would remain constant because the number of nodes and the traffic density
within that network stayed more or less the same. But with thousands of different networks
making up the Internet, plus the virtual networks and spaces that are inherent to cloud
computing, network latency calculations are not simple, to say the least.
First and foremost, on the Internet, endpoints are not fixed as they can be in an enterprise
or local network. Cloud application users can be anywhere in the world, and simply
moving within a city block and switching to another cellular tower or Wi-Fi hotspot
changes the path the data travels, the ISP, and essentially the network from which the
application is being accessed. The flexibility, failover protection, and rapid allocation that
the cloud offers may also result in an application being transferred to a different availability
zone or server, which again changes the path data has to take in order to reach the user.
That is the beauty of the cloud, but flexibility in this case has its price: the resulting latency
can be unpredictable, even if an application is being accessed through a relatively fast ISP.
In a contained network, latency can be effectively measured, and it traditionally has
three measures: round-trip time (RTT), jitter, and endpoint computational speed.
Round-trip time (RTT) is the time it takes for a single trace packet to traverse a network
from source to destination and back again, or the time it takes for a source client to hail
a server and then receive a reply. RTT is depicted in Figure 5.4. This is quite useful in
gauging the current responsiveness of a network path.
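As an illustration of how RTT (and, from repeated samples, jitter) can be measured in practice, here is a minimal Python sketch that times TCP connection handshakes to a host. The target host and port are hypothetical placeholders, and a TCP connect time is only an approximation of the true packet-level RTT that tools like ping report:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate one RTT by timing a TCP three-way handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; the handshake took roughly one RTT
    return (time.perf_counter() - start) * 1000

def sample_latency(host: str, samples: int = 10) -> None:
    """Collect several RTT samples and report spread and jitter."""
    rtts = [tcp_rtt_ms(host) for _ in range(samples)]
    # Jitter is taken here as the standard deviation of the RTT samples;
    # other definitions (e.g., mean delta between consecutive samples) exist.
    print(f"min/avg/max RTT: {min(rtts):.1f}/{statistics.mean(rtts):.1f}/"
          f"{max(rtts):.1f} ms, jitter (stdev): {statistics.stdev(rtts):.1f} ms")

# 'example.com' is a placeholder target, not one named in the text.
sample_latency("example.com")
```

In day-to-day practice, ping and traceroute measure RTT at the ICMP level rather than via TCP handshakes, but the principle is the same: timestamp a request, timestamp the reply, and take the difference.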