able to offer data processing services (e.g., payrolls) to organizations. Providers of such
services operated many 'service bureaus' where customers would bring their data for
processing in return for a fee. Organizations that could not afford to purchase such data
processing equipment found it economically viable to pay for these services instead. Then came
mainframe computers in the 1950s and 1960s, and the practice continued under what became
known as 'timesharing'. Organizations that could not afford to buy mainframe
computers would rent the data processing functionality of those machines from a number of
providers. Connections to mainframes were made over ordinary telephone lines linking
those massive machines to 'teletype' terminals at the clients' end, later replaced by better
visual display units. Then came personal computers, which effectively killed
timesharing thanks to their affordability and the flexibility of the
software that ran on them. Campbell-Kelly (2009) argues that the very things that killed the
timesharing industry in the 1980s have since reversed: computing infrastructure has become
increasingly complex and expensive to maintain, owing, for example, to security concerns
and constant software upgrades, making cloud computing a more economically
viable alternative. One analyst called cloud computing 'timesharing 2.0' in reference to the
old practice of buying computing resources on demand (Campbell, 2009).
However, despite the popularity of personal computers, a form of utility computing still
existed. Many software providers, known as Application Service Providers (ASPs), emerged
in the 1990s to provide organizations with software as a service, this time via the medium of
the Internet. These early attempts at 'utility computing' did not prove popular, for two main
reasons. First, most of the software offered by those providers was proprietary, which meant
organizations using this type of service could not change providers very easily, i.e., they
were vendor-locked. Second, bandwidth was insufficient: during the 1990s, broadband was
neither cheap nor plentiful enough to deliver
computing services with the required speed and reliability (Carr, 2009). Then came Web
services (especially those based on the XML-based SOAP message protocol) that promised to
deliver highly portable software remotely, through the medium of the Internet, without
ties to any particular platform (i.e., operating system) or programming language. Web services
heralded a new era of 'software as a service' (SaaS).
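To give a flavour of what such a message looks like, the following short Python sketch constructs a minimal SOAP 1.1 request envelope. The payroll service namespace and the GetPayrollSummary operation are hypothetical, invented purely for illustration; the point is that the request is plain XML, tied to no particular operating system or programming language.

import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"
SERVICE_NS = "http://example.com/payroll"  # hypothetical service namespace

# Register a readable prefix for the SOAP envelope namespace.
ET.register_namespace("soap", SOAP_ENV)

# Build the standard Envelope/Body structure required by SOAP 1.1.
envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")

# The operation and its parameter are hypothetical; any client or server
# that speaks SOAP can produce or consume a message of this shape.
request = ET.SubElement(body, f"{{{SERVICE_NS}}}GetPayrollSummary")
ET.SubElement(request, f"{{{SERVICE_NS}}}employeeId").text = "12345"

print(ET.tostring(envelope, encoding="unicode"))

In practice such an envelope would be posted over HTTP to the provider's endpoint, which is what made delivering software functionality over the ordinary Internet feasible.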
The idea of delivering software remotely took a new turn with the emergence of cloud
computing. Not only can software be consumed remotely, but it can also be consumed as and
when needed, through a pay-as-you-go cost structure. Cloud computing also promises many
other exciting possibilities in which not only software but other computing-related
functionality can be consumed remotely as and when needed, thanks to other relatively new
technologies such as virtualization and grid computing.
Virtualization is a technology that masks the physical characteristics of computing
resources (e.g., a PC or a server) in order to simplify the way in which other systems,
applications, or end users interact with them. For example, a PC running Windows can use
virtualization to enable another operating system (e.g., Linux) to run alongside Windows.
Furthermore, the technology also enables a single physical resource (e.g., a server, an
operating system, an application, or a storage device) to appear as multiple logical resources.
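As a rough illustration of this last point, the following toy Python sketch presents a single 'physical' disk as two independent logical volumes. The classes, names, and sizes are invented for the example and do not correspond to any real virtualization product.

class PhysicalDisk:
    """The single physical resource: one contiguous block of storage."""
    def __init__(self, size):
        self.blocks = bytearray(size)

class LogicalVolume:
    """Masks the physical layout: each volume sees only its own slice."""
    def __init__(self, disk, offset, size):
        self.disk, self.offset, self.size = disk, offset, size

    def write(self, pos, data):
        assert pos + len(data) <= self.size, "write past end of volume"
        self.disk.blocks[self.offset + pos:self.offset + pos + len(data)] = data

    def read(self, pos, length):
        return bytes(self.disk.blocks[self.offset + pos:self.offset + pos + length])

disk = PhysicalDisk(1024)                           # one physical disk...
vol_a = LogicalVolume(disk, offset=0, size=512)     # ...presented as two
vol_b = LogicalVolume(disk, offset=512, size=512)   # separate logical resources

vol_a.write(0, b"accounts data")
vol_b.write(0, b"payroll data")
print(vol_a.read(0, 13), vol_b.read(0, 12))

Hypervisors perform the same kind of masking for processors, memory, and entire operating systems, which is what allows, for instance, Linux and Windows to share one physical PC.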
Grid computing involves the use of software to combine the computational power of
many computers, connected in a grid, in order to solve a single problem, often one that
requires a great deal of computer processing power. Furthermore, grid computing also uses
software that can divide and farm out pieces of a program to as many as several thousand
computers.
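The following minimal Python sketch illustrates the divide-and-farm-out idea. Worker processes on a single machine stand in for the many computers of a real grid, and the example problem, summing squares over a large range, is chosen purely for illustration.

from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(chunk):
    """Work carried out by one 'node' on its piece of the problem."""
    start, stop = chunk
    return sum(i * i for i in range(start, stop))

def split(n, pieces):
    """Divide the range [0, n) into roughly equal chunks."""
    step = n // pieces
    return [(i * step, n if i == pieces - 1 else (i + 1) * step)
            for i in range(pieces)]

if __name__ == "__main__":
    chunks = split(10_000_000, pieces=8)        # divide the problem
    with ProcessPoolExecutor() as pool:         # farm out the pieces
        partials = pool.map(sum_of_squares, chunks)
    print(sum(partials))                        # combine the results

A real grid would dispatch the same kind of chunks over a network to machines in many locations and gather the partial results back in much the same way.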