encoding the data into a message format and sending the encoded data over the wire. The socket interface allowed message passing using send and receive primitives over the transmission control protocol (TCP) or user datagram protocol (UDP) transports for low-level messaging over Internet protocol (IP) networks. Applications communicated by sending and receiving text messages, and in most cases the messages exchanged conformed to an application-level protocol defined by the programmers. This worked, but it was cumbersome: the data had to be encoded at one end and decoded at the other, and every programmer working on a distributed application had to know exactly what the others were doing to the data.
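To make the burden concrete, the following is a minimal sketch of the kind of hand-rolled client messaging this style required, written here in Python for illustration; the host, port, and text-based request format are invented for the example and are not taken from the original text.

import socket

# Hypothetical server address and a made-up, line-oriented text protocol.
HOST, PORT = "127.0.0.1", 9000

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.connect((HOST, PORT))
    # Encode the application data into the agreed text message format.
    request = "GET_BALANCE account=42\n".encode("utf-8")
    sock.sendall(request)
    # Receive the raw reply and decode it; parsing and interpreting the
    # fields is entirely the application's responsibility.
    reply = sock.recv(4096).decode("utf-8")
    print("server replied:", reply.strip())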
Programmers had to spend a significant amount of time specifying a messaging protocol and mapping the various data structures to and from the common transmission format. As distributed computing applications multiplied, new mechanisms and approaches became necessary to facilitate their construction. The first distributed computing technology to gain widespread use was remote procedure call (RPC), developed in the 1980s by Sun Microsystems. RPC uses the client/server model and extends the capabilities of traditional procedure calls across a network. Remote procedure calls are designed to resemble local procedure calls: in the traditional local paradigm the calling code and the procedure it calls reside in the same address space, whereas in a remote procedure call the called procedure runs in another process and address space, across the network on another processor.
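The call model is easy to see in code. The sketch below uses Python's built-in xmlrpc modules rather than Sun's original ONC RPC, but the pattern is the same: the client invokes what looks like a local procedure, while the procedure body actually executes in the server's process and address space. The function name and port are chosen only for the example.

from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def add(a, b):
    # The procedure body runs in the server process, not in the caller's.
    return a + b

# Server side: register the procedure and serve requests in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy makes the remote call look like a local one;
# the RPC layer marshals the arguments and the result over the network.
client = ServerProxy("http://127.0.0.1:8000")
print(client.add(2, 3))  # prints 5, computed in the server process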
RPC proved to be an adequate solution for the development of two-tier client/server architectures. As distributed computing became more widespread, however, the need to develop n-tier applications emerged, and RPC could not provide the flexibility and functionality required. In such applications, multiple machines may need to operate simultaneously on the same set of data, and hence the state of that data became a major concern. Research in the area of distributed objects addressed this problem and produced two competing specifications: the common object request broker architecture (CORBA) and the distributed component object model (DCOM). Later, Java remote method invocation (RMI) was developed and also became a competitor.
The CORBA standard was developed by the Object Management Group (OMG) starting in the 1990s and defines an architecture that specifies interoperability between distributed objects on a network. With CORBA, distributed objects can communicate regardless of the operating system they are running on (e.g., Linux, Solaris, Microsoft Windows, or Mac OS). Another primary feature of CORBA is its interoperability across programming languages: distributed objects can be written in Java, C++, C, Ada, and other languages. The main component of CORBA is the ORB
(object request broker). Objects residing in a client make remote requests through an interface to the ORB running on the local machine. The local ORB then forwards the request across the network to the ORB on the machine that hosts the target object, which dispatches the call to that object and returns any result to the client.
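As a sketch of what this looks like to an application programmer, the Python client below assumes the omniORBpy ORB and a hypothetical Demo.Echo interface whose stubs would be generated from an IDL file; the interface, its IDL, and the IOR file are illustrative assumptions, not part of the original text.

# Hypothetical IDL (Demo.idl) that "omniidl -bpython Demo.idl" would compile
# into the Demo stub module imported below:
#
#   module Demo {
#     interface Echo {
#       string echoString(in string msg);
#     };
#   };
import sys
from omniORB import CORBA
import Demo  # generated stubs; an assumption of this sketch

# Initialise the local ORB; it brokers all remote requests for this process.
orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)

# Obtain an object reference, here from a stringified IOR saved by the server.
ior = open("echo.ior").read()
obj = orb.string_to_object(ior)

# Narrow the generic reference to the Demo.Echo interface before using it.
echo = obj._narrow(Demo.Echo)

# The call looks local, but the local ORB marshals it and sends it (via IIOP)
# to the server-side ORB, which dispatches it to the target object.
print(echo.echoString("Hello from a CORBA client"))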