When your servers run in-house, you are limited only by your internal network infrastructure; switches and routers are usually enough to handle bandwidth-heavy applications inside the office walls. Cloud solutions make use of techniques like teaming and bonding, discussed in the previous chapter, to handle ever-increasing bandwidth requirements, but that doesn't change the fact that offloading heavy workloads to the cloud requires a wider pipe. Whatever technology your cloud provider uses to manage bandwidth, and however easily it handles large files on its end, the ultimate bottleneck is the end user's Internet connection.
This issue does not matter much when all processing and data movement occur within your virtualized environment; the problems arise when your actual workflow requires moving large data files to and from your cloud system. Multiply that traffic by the number of people in your office using the service for the same kinds of workflows, and you start to feel the strain of your bandwidth limitations. And then there is the latest trend among ISPs: bandwidth caps, under which your speed is throttled drastically after a set limit or exorbitant overage fees are added on top of your monthly bill.
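To see how quickly such a cap comes into play, the back-of-the-envelope sketch below multiplies per-user transfer volume across an office, as described above. Every figure in it (user count, file sizes, the 1 TB cap) is a hypothetical assumption for illustration, not data from any particular ISP.

# Back-of-the-envelope estimate of how fast an office burns through an
# ISP bandwidth cap. All figures are illustrative assumptions.
users = 20                # people running the same cloud workflow
transfers_per_day = 4     # large-file moves per user per workday
file_size_gb = 2.5        # average size of each transfer, in GB
workdays_per_month = 22
isp_cap_gb = 1024         # hypothetical 1 TB monthly cap

monthly_gb = users * transfers_per_day * file_size_gb * workdays_per_month
print(f"Estimated monthly transfer: {monthly_gb:,.0f} GB vs. cap of {isp_cap_gb:,} GB")
if monthly_gb > isp_cap_gb:
    days_to_cap = isp_cap_gb / (users * transfers_per_day * file_size_gb)
    print(f"Cap exhausted after roughly {days_to_cap:.1f} workdays")

At those assumed rates the office would move about 4,400 GB a month and exhaust a 1 TB cap in its first week.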
Because your cloud service provider does not control any variables at your
end, you end up having to look for solutions yourself. This is especially
problematic in countries with very slow Internet speed standards or in
areas served by poor ISPs, where users have no real choice.
There is another caveat from the cloud provider's end: providers impose a threshold on the bandwidth consumed by data going out of the system. This means that aside from your constant struggle with your ISP for bandwidth on your end, you also have to wrestle with restrictions from your cloud provider. Again, this is not much of a problem if your workload does not involve moving large chunks of data to and from your cloud environment. Providers like Microsoft Azure offer 5 GB of free outbound data transfer, while cloud giant Amazon EC2 allows only 1 GB. After those limits are reached, you might pay somewhere around $0.08 to $0.15 per gigabyte of excess.
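A quick way to see what that pricing means in practice is to compute the bill for a given monthly volume. The sketch below is a minimal illustration, assuming the free allowance and the per-gigabyte rate quoted above; actual tiers and rates vary by provider and region, so check the current price sheet.

def egress_cost(total_gb, free_gb, rate_per_gb):
    """Cost of outbound transfer beyond the free allowance."""
    billable_gb = max(0.0, total_gb - free_gb)
    return billable_gb * rate_per_gb

# 500 GB out in a month, 5 GB free (Azure-like tier), $0.12/GB mid-range rate
print(f"${egress_cost(500, free_gb=5, rate_per_gb=0.12):,.2f}")  # $59.40

Even at the low end of the quoted range, a workflow that pushes a few hundred gigabytes out of the cloud each month adds a noticeable line item to the bill.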
Another factor is the time it takes to move data between the cloud and local workstations. If your business relies heavily on moving large amounts of data every day, then off-premises hosting might not be for you. For example, an animation or game development studio doing heavy 3D work would be a poor fit for a cloud file-sharing service or even an off-premises cluster-rendering service; that much data is better served by in-house hardware.
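The transfer-time arithmetic behind that judgment is simple: divide the data size by the effective link speed. The sketch below uses a hypothetical 200 GB of render assets and a 100 Mbps office uplink, with an assumed 80 percent efficiency factor for protocol overhead and contention; none of these figures come from the text.

def transfer_hours(size_gb, link_mbps, efficiency=0.8):
    """Estimated hours to move size_gb over a link_mbps connection."""
    size_megabits = size_gb * 8 * 1000        # GB -> megabits (decimal units)
    effective_mbps = link_mbps * efficiency   # protocol overhead, contention
    return size_megabits / effective_mbps / 3600

# 200 GB of render assets over a 100 Mbps office uplink
print(f"{transfer_hours(200, 100):.1f} hours")  # about 5.6 hours

Round-tripping that kind of volume daily, per artist, makes a gigabit local network look very attractive by comparison.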
Hardware Replacements and Upgrades
Hardware replacements and upgrades are not uncommon, and their costs are usually categorized as maintenance and upgrade expenses. But when the decision to move to a public or private cloud is being made, planners sometimes fail to foresee what will happen in five years, when aging hardware entails capital expense, not merely upkeep.
The problem with having an on-premises solution is the inflexibility of the implementation. Compared to a public cloud provider that leverages economies of scale to grow its infrastructure, an organization providing cloud services for itself does not have the same economies of scale working in its favor.