financial muscle to invest in initial data center setup. This is not an easy feat. A specialized skill set is needed to properly set up, configure, and manage a server infrastructure. There were two problems: steep financial cost and unneeded complexity. Smaller startups and engineering teams could co-locate with an operational data center and set up a few servers to control cost, at least until the product was validated and experiencing initial adoption. This is precisely when things go bad, infrastructure-wise.
When YouTube started adding new users and the team started experiencing exponential growth, most of their time was spent keeping the product (the website) responsive and available. This meant adding dozens of servers to the data center every day and dealing with the increased complexity and maintenance. They were in Silicon Valley, with easy access to hardware they could purchase over the counter and plug into their co-located data center. Imagine scaling your application in a physical location where provisioning new servers would mean a lead time of several days. This is a familiar story for most web startups pre-cloud, or pre-AWS to be more precise.
Another angle on the financial component of this equation is to look at both expansions and contractions in the usage of the product or service. During spikes you need to invest in infrastructure and plug in more servers, but what happens when the spike normalizes or, worse, usage drops into a valley (a steep decline in application/service usage)? If you have not played your infrastructure cards right, you end up with dozens or even hundreds of servers you no longer need, yet you have to keep them operational in the data center just in case another usage spike knocks on the door.
Naturally, not every startup will have the financial and technical expertise needed to set up initial infrastructure and start serving end users. This is the case with every consumer-facing startup, where predictions of usage patterns may be nowhere near accurate, so the engineering team has no solid data on which to base resource provisioning or infrastructure setup. This may not hold true for most enterprises, where usage density is known in advance, as is the scale-out pattern. But then, large enterprises whose primary use case is internal applications consumed by their own workforce were not the initial targets for the cloud. It is only now that large enterprises and financial institutions have started moving to the cloud or building their own private clouds on which to host enterprise applications.
Pay-as-You-Grow Theory vs. Practice
In theory, pay as you grow means that cost per user is treated as a constant, so scaling out produces a linear increase in the cloud infrastructure and resource-usage bill. Let's consider the launch of awesome-product.com. Initially, in the pre-growth phase, an average of 100 users interact with the product monthly. The cloud engineer or team at awesome-product.com calculates the cost incurred per user to be $1/month. This includes the network bandwidth, storage, compute cycles (CPU/GPU usage), content distribution network (CDN), and database (DB) cost components.
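Under this assumption, the monthly bill is simply the number of active users multiplied by the constant per-user cost. The following is a minimal Python sketch of that idealized model, using the $1/user/month figure from the example; the 10,000-user data point is purely illustrative.

# Idealized pay-as-you-grow cost model for awesome-product.com.
# Assumption (from the example above): cost per user is a constant $1/month,
# covering bandwidth, storage, compute, CDN, and DB components.
COST_PER_USER_USD = 1.00

def monthly_bill(active_users):
    """In theory, the bill grows linearly with the number of active users."""
    return active_users * COST_PER_USER_USD

print(monthly_bill(100))     # pre-growth phase: 100 users -> $100/month
print(monthly_bill(10_000))  # illustrative growth spike -> $10,000/month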
Awesome-product.com has an SLA with AWS, where the whole product is hosted. In its third month, it starts to experience growth and users start accessing its product in droves. Adoption increases