(Leiner et al. 2003). In the IETF, Robert Kahn and Vint Cerf devised a protocol that took into account, among other considerations, four key factors, as cited below (Leiner et al. 2003):
1. Each distinct network would have to stand on its own and no internal changes
could be required to any such network to connect it to the Internet.
2. Communications would be on a best effort basis. If a packet didn't make it to the
final destination, it would shortly be retransmitted from the source.
3. Black boxes would be used to connect the networks; these would later be called
gateways and routers. There would be no information retained by the gateways
about the individual flows of packets passing through them, thereby keeping them
simple and avoiding complicated adaptation and recovery from various failure
modes.
4. There would be no global control at the operations level.
In this protocol, data is subdivided into 'packets' that are all treated independently by the network. Data is first divided into relatively equal-sized packets by TCP (Transmission Control Protocol), which then sends the packets over the network using IP (Internet Protocol). Together, these two protocols form a single protocol, TCP/IP (Cerf and Kahn 1974). Each computer is named by an Internet Number, a 4-byte destination address such as 152.2.210.122, and IP routes the packets through various black boxes, such as gateways and routers, that do not try to reconstruct the original data from the packets. At the recipient's end, TCP collects the incoming packets and then reconstructs the data.
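To make this division into packets concrete, the minimal Python sketch below (not drawn from the original protocol specification; the function names, packet size, and loss rate are illustrative assumptions) divides a message into equal-sized packets, delivers them over a lossy, reordering 'network', retransmits whatever went missing from the source, and reassembles the original data in sequence order; the final lines pack a 4-byte Internet Number such as 152.2.210.122.

```python
import random
import struct

# A toy payload size; real IP packets carry far larger payloads.
PACKET_SIZE = 8

def to_packets(data: bytes) -> list[tuple[int, bytes]]:
    """Divide the data into roughly equal-sized packets, each tagged with a sequence number."""
    return [(seq, data[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(data), PACKET_SIZE))]

def best_effort_network(packets: list[tuple[int, bytes]]) -> list[tuple[int, bytes]]:
    """Deliver packets independently of one another: out of order, with some simply lost."""
    delivered = [p for p in packets if random.random() > 0.2]  # roughly 20% loss
    random.shuffle(delivered)
    return delivered

def transmit(data: bytes) -> bytes:
    """Send data over the lossy network, retransmitting from the source until every packet arrives."""
    packets = to_packets(data)
    received: dict[int, bytes] = {}
    while len(received) < len(packets):
        missing = [p for p in packets if p[0] not in received]
        for seq, payload in best_effort_network(missing):
            received[seq] = payload
    # The recipient reassembles the original data from the packets in sequence order.
    return b"".join(received[seq] for seq in sorted(received))

# An Internet Number such as 152.2.210.122 is just a 4-byte destination address.
destination = struct.pack("!4B", 152, 2, 210, 122)

message = b"Hello, packet-switched world!"
assert transmit(message) == message
```

Note that the simulated network keeps no per-flow state, in keeping with the 'black box' gateways described above: all of the bookkeeping needed to recover the data sits at the endpoints.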
The Internet connects computers over space, and so provides the physical layer
over which the universal information space of the Web is implemented. However,
it was a number of decades before the latency of space and time became low
enough for something like the Web to become not only universalizing in theory, but
universalizing in practice, and so actually come into being rather than being merely a gleam in a researcher's eye. An historical example of attempting a Web-like
system before the latency was acceptable would be the NLS (oNLine System) of
Engelbart (1962). The NLS was literally the second node of the Internet, serving as the Network Information Centre, the ancestor of the domain name system. The
NLS allowed any text to be hierarchically organized in a series of outlines, with
summaries, giving the user freedom to move through various levels of information
and link information together. The most innovative feature of the NLS was a journal in which users could publish information and others could link to and comment upon it, a precursor of blogs and wikis (Waldrop 2001). However, Engelbart's vision
could not be realized on the slow computers of his day. Although time-sharing
computers reduced temporal latency on single machines, too many users sharing
a single machine made the latency unacceptably high, especially when using an
application like NLS. Furthermore, his zeal for reducing latency made the NLS far
too difficult to use, as it depended on obscure commands too complex for the average user to master within a reasonable amount of time. It was only
after the failure of the NLS that researchers at Xerox PARC developed the personal
computer, which by providing each user with their own machine reduced the temporal latency to an acceptable level (Waldrop 2001). When these computers were