access to such economies of scale and will therefore be constrained both in the initial implementation and when adding capital for expansion and upgrades. On-premises solutions simply aren't as economical as off-premises or public offerings.
Scalability is another facet of cloud computing that is lost. On-premises solutions tend to be fixed in capacity because an organization is rarely willing to commit additional capital expenditure to expand the system unless the returns clearly justify it. Organizations that opt for on-premises solutions usually do not profit from their cloud solution directly; they choose it chiefly for control and security of data. A large capacity upgrade may therefore make little economic sense yet still be essential to maintaining operations. This is one of the biggest dilemmas an organization faces after a few years of running an on-premises solution.
Over- and Underutilization of Resources
In an on-premises solution, systems are built to exceed the capacity of their maximum usage scenario by a large margin. In practice, usage almost never reaches that margin, let alone the actual capacity of the system. The result is underutilization: most of the system's resources sit idle. This is equipment that was paid for with a large capital expense, yet it is not being used to its full potential. An organization has to size the system for its highest expected demand, not its everyday demand, even though that theoretical peak is seldom reached. In such cases, it is better to take advantage of the scalability and flexibility of true cloud services: you pay only for baseline usage and then pay more occasionally, during the rare moments when demand peaks.
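The economics described above can be made concrete with a small sketch. The functions and prices below are illustrative assumptions, not real provider rates: on-premises capacity must be purchased for the peak and paid for around the clock, while pay-per-use billing charges only for the demand each hour actually sees.

```python
# Hypothetical cost comparison: fixed peak-sized capacity vs. pay-per-use.
# All prices and demand figures are invented for illustration.

def on_prem_cost(peak_demand_units, cost_per_unit_day):
    """On-premises capacity is sized for peak demand and paid for all day."""
    return peak_demand_units * cost_per_unit_day

def cloud_cost(hourly_demand_units, cost_per_unit_hour):
    """Pay-per-use billing charges only what each hour actually consumes."""
    return sum(hourly_demand_units) * cost_per_unit_hour

# A day that idles at 10 units of demand but spikes to 100 for two hours.
demand = [10] * 22 + [100] * 2

print(on_prem_cost(max(demand), 24 * 1.0))  # pays for the spike all day: 2400.0
print(cloud_cost(demand, 1.0))              # pays only for actual usage: 420.0
```

Even with identical per-unit rates, sizing for the rare peak costs several times more than paying for the usage that actually occurs.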
The opposite scenario, overutilization, applies to off-premises cloud solutions. When provisioning new servers and virtual resources, it is easy to get so carried away by the prospect of virtually unlimited capacity that we forget there are still limits imposed by our subscriptions, with corresponding payments. These virtual machines and servers are semi-persistent: they stay on and keep consuming resources until they are explicitly shut down. And because they are so easy to provision, we often simply spin up more to get a fresh environment, especially for testing. This adds up quickly and translates directly into cost. The best remedy is proper resource management, including automatic deprovisioning of resources once they are no longer in use.
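A minimal sketch of such an idle-resource sweep is shown below. The `Instance` record and the idle threshold are hypothetical; a real implementation would query the provider's inventory API and call its delete/terminate endpoint for each reclaimed instance rather than just returning two lists.

```python
import time
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    last_used: float  # UNIX timestamp of the instance's last activity

def deprovision_idle(instances, max_idle_seconds, now=None):
    """Split instances into (keep, reclaim) based on idle time.

    Sketch only: 'reclaim' marks candidates for deprovisioning; an
    actual sweep would invoke the cloud provider's terminate API here.
    """
    now = time.time() if now is None else now
    keep, reclaim = [], []
    for inst in instances:
        if now - inst.last_used > max_idle_seconds:
            reclaim.append(inst)   # idle past the threshold: release it
        else:
            keep.append(inst)      # recently active: leave it running
    return keep, reclaim

# Example: one instance idle for 900 s, one used 10 s ago, 60 s threshold.
keep, reclaim = deprovision_idle(
    [Instance("test-env", 100.0), Instance("web-01", 990.0)],
    max_idle_seconds=60, now=1000.0)
print([i.name for i in reclaim])  # ['test-env']
```

Running a sweep like this on a schedule catches forgotten test environments before they accumulate into a surprise bill.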
Automatic provisioning is good for keeping a website available through spikes in traffic, but it has other implications too. In a non-cloud scenario, a server will eventually succumb to a distributed denial of service (DDoS) attack because it cannot handle the sheer number of requests. In a cloud environment with auto-provisioning, the system keeps adding servers to handle the increasing load, so it can always cope. With legitimate traffic, that is exactly what you want; under a DDoS attack, it is very bad, because costs skyrocket servicing false requests while no real customer traffic, and therefore no income, arrives. In cases like this, proper configuration and attack detection systems are important. Your system should be able to distinguish legitimate requests from attack traffic so that it can simply drop requests that aren't legitimate.
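One common way to drop illegitimate floods before they trigger auto-scaling is a per-client sliding-window rate limiter. The sketch below is an assumption about how such a filter might look, not a description of any particular product: each client gets a budget of requests per time window, and anything beyond it is rejected instead of being passed to the backend.

```python
from collections import defaultdict, deque

class RateLimiter:
    """Per-client sliding-window rate limiter.

    Requests beyond max_requests within window_seconds are dropped,
    so a flood from one source never reaches the auto-scaler.
    """
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # client id -> request timestamps

    def allow(self, client, now):
        q = self.history[client]
        # Discard timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over budget: drop instead of provisioning more
        q.append(now)
        return True

# Example: at most 3 requests per client per second.
limiter = RateLimiter(max_requests=3, window_seconds=1.0)
print([limiter.allow("1.2.3.4", t) for t in (0.0, 0.1, 0.2, 0.3)])
```

A real deployment would key on something harder to forge than a single source address, since DDoS traffic is distributed by definition, but the principle of rejecting over-budget traffic at the edge is the same.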