Polling is a good solution if you know the exact interval of message delivery, because
you can synchronize the client to send a request only when you know information will be
available on the server. However, real-time data is often not that predictable, so making
unnecessary requests and therefore opening superfluous connections is inevitable. Consequently,
you may open and close many connections needlessly when messages arrive infrequently.
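The fixed-interval polling described above can be sketched as follows. This is a minimal simulation, not a real network client: `checkForMessages` is a hypothetical stand-in for an HTTP request, and the one-message-per-three-polls schedule is invented purely to show wasted requests.

```javascript
// Simulated polling: the client asks the server for new data on a fixed
// interval, whether or not anything is available.
let pollCount = 0;
function checkForMessages() {            // stands in for an HTTP request
  pollCount++;
  // Simulated server: only every third poll finds a message waiting.
  return pollCount % 3 === 0 ? "message #" + pollCount / 3 : null;
}

const received = [];
for (let i = 0; i < 6; i++) {            // six polls at a fixed interval
  const msg = checkForMessages();
  if (msg !== null) received.push(msg);  // most polls return nothing
}
// Six requests (and six connections) were spent to retrieve two messages.
```

In a real page the loop would be driven by `setInterval` and an XMLHttpRequest or `fetch` call; the wasted-request ratio is the same.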
Long polling is another popular communication method, in which the client requests
information from the server and keeps the connection open for a set period. If the
server does not have any information, it holds the request open until it has information
for the client, or until it reaches the end of a designated timeout. At that point, the client
re-requests the information from the server. Long polling is also known as Comet, which
we mentioned earlier, or Reverse AJAX. Comet delays the completion of the HTTP
response until the server has something to send to the client, a technique often called a
hanging-GET or pending-POST. It's important to understand that when you have a high
message volume, long polling does not provide significant performance improvements
over traditional polling, because the client must constantly reconnect to the sever to fetch
new information, resulting in the network behavior equivalent to rapid polling. Another
issue with long polling is the lack of standard implementations.
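The hold-then-re-request cycle can be sketched as below. This is a hedged simulation under assumed names: `longPoll` stands in for an HTTP request that the server holds open until data arrives or a timeout expires, and the per-request message delays are invented to show both outcomes.

```javascript
// Simulated long poll: the "server" resolves with data if a message
// becomes ready within the timeout, otherwise with an empty response.
function longPoll(timeoutMs, messageDelayMs) {
  return new Promise(resolve => {
    if (messageDelayMs <= timeoutMs) {
      setTimeout(() => resolve({ status: 200, body: "update" }), messageDelayMs);
    } else {
      // Timeout reached with nothing to send: 204 No Content.
      setTimeout(() => resolve({ status: 204, body: null }), timeoutMs);
    }
  });
}

async function longPollLoop() {
  const delays = [30, 200, 30];          // second message isn't ready in time
  const messages = [];
  for (const d of delays) {
    const res = await longPoll(100, d);  // server holds the request up to 100 ms
    if (res.status === 200) messages.push(res.body);
    // On timeout the client simply re-requests immediately -- under a high
    // message rate this loop degenerates into rapid polling.
  }
  return messages;
}
```

Note that each iteration is a full request/response cycle with its own headers, which is exactly why long polling saves little when messages are frequent.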
With streaming, the client sends a request, and the server sends and maintains an
open response that is continually updated and kept open (either indefinitely or for a
set period of time). The server updates the response whenever a message is ready to be
delivered. While streaming sounds like a great solution to accommodate unpredictable
message delivery, the server never signals to complete the HTTP response, and thus the
connection remains open continuously. In such situations, proxies and firewalls may
buffer the response, resulting in increased latency of the message delivery. Therefore,
many streaming attempts are brittle on networks where firewalls or proxies are present.
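The streaming model can be sketched as a single response that the server appends to as each message becomes ready. This is a simplified, hypothetical simulation: `streamMessages` stands in for a server writing chunks to one never-completed response, and `onChunk` for the client callback that receives each partial update.

```javascript
// Simulated streaming: one request, one open response, many chunks.
function streamMessages(messages, onChunk) {
  for (const msg of messages) {
    // Each message is flushed as its own chunk on the same response;
    // the server never ends the response, so the connection stays open.
    // Intermediaries (proxies, firewalls) may buffer these chunks.
    onChunk(msg + "\n");
  }
}

const chunks = [];
streamMessages(["a", "b", "c"], c => chunks.push(c));
// Three messages arrive over a single connection, with no re-requesting.
```

In a real server this maps to writing chunked-transfer data on a response object without ever ending it; the buffering caveat above is why such streams often stall behind proxies.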
These methods provide almost-real-time communication, but each message travels
with full HTTP request and response headers, adding unnecessary overhead and
latency. Additionally, in each case, the client must wait for an outstanding request
to return before it can initiate a subsequent one, which further increases
latency.
Figure 1-2 shows the half-duplex nature of these connections over the Web,
integrating into an architecture where you have full-duplex connections over TCP in your
intranet.
Figure 1-2. Half duplex over the Web; Full duplex over TCP on the back-end
 