Streamlining client-server communication
Client-server communication is a common pattern in many systems, so it is natural to
ask: how can we improve its performance? One step is to recognize that both the client
and the server issue a write immediately followed by a read to wait for the other side
to reply; at the cost of adding a new system call to the kernel interface, the two can be
combined, eliminating two kernel crossings per round trip. Further, the client always needs
to wait for the server, so it makes sense for the client to donate its processor to run the
server code, reducing delay.
Microsoft added support for this optimization to Windows in the early 1990s when it
converted to a microkernel design (explained a bit later in this chapter). However, as
we noted earlier, modern computer architectures make extensive use of caches, so for
this to work the code and data of both the client and the server must be able to reside
in the cache simultaneously. We will discuss mechanisms for accomplishing that in a
later chapter.
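To make the combined call concrete, here is a sketch of the client side. The name sendAndReceive is ours, introduced only for illustration; it is not an actual Windows or POSIX interface.

Client, with separate calls (two system calls per round trip):
    // send the request: one crossing into the kernel and one back out
    write(serverInput, request, RequestSize);
    // wait for the reply: another crossing in and out
    read(serverOutput, reply, ReplySize);

Client, with a combined call (one system call per round trip):
    // the kernel delivers the request, can run the server code on this
    // processor, and returns only when the reply is ready
    sendAndReceive(serverInput, request, RequestSize,
                   serverOutput, reply, ReplySize);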
We can take this streamlining even further. On a multicore system, it is possible or
even likely that the client and the server each have their own processor. If the kernel
sets up a shared memory region accessible to both the client and the server and to no
other process, then the client and server can safely pass requests and replies back
and forth, as fast as the memory system will allow, without ever trapping into the kernel
or relinquishing their processors.
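In sketch form, the shared region can hold a request slot and a reply slot per client, each guarded by a flag that one side sets and the other clears. The layout and names below are illustrative assumptions, not a specific kernel's interface; a real implementation would also need memory barriers and a way to block instead of spinning.

    // One channel shared between a single client and the server.
    struct SharedChannel {
        bool requestFull;            // set by the client, cleared by the server
        char request[RequestSize];
        bool replyFull;              // set by the server, cleared by the client
        char reply[ReplySize];
    };

    // Client side: post the request, then spin until the reply appears.
    memcpy(channel->request, myRequest, RequestSize);
    channel->requestFull = true;     // no system call; the server polls this flag
    while (!channel->replyFull)
        ;                            // busy-wait; neither side gives up its processor
    memcpy(myReply, channel->reply, ReplySize);
    channel->replyFull = false;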
Server:
    char request[RequestSize];
    char reply[ReplySize];
    FileDescriptor clientInput[NumClients];
    FileDescriptor clientOutput[NumClients];

    // loop waiting for a request from any client
    while (fd = select(clientInput, NumClients)) {
        // read incoming command from a specific client
        read(clientInput[fd], request, RequestSize);
        // do operation
        // send result
        write(clientOutput[fd], reply, ReplySize);
    }

Figure 3.12: Server code for communicating with multiple clients.