and now it has 10,000 users. If you scale the cost linearly, you will simply multiply $1 × 10,000 to arrive at a $10,000 estimate for the monthly cloud infrastructure cost under a pay-as-you-grow model. This does not usually hold true, because the cost/user can be treated as a constant only in a deployment serving a limited user base, such as the 100 initial users in this example.
In practice, as engineers can easily relate to, the cost/user also gradually increases as more users are added. At a finer technical level, this increase in cost/user comes from packing more users onto the same server even while you are scaling out servers. In networking, it is a given that when more users start accessing the limited network backbone of a single server, performance and capacity degrade. This means that even if the servers are optimally configured, packing more users into them as you experience growth will negatively impact performance, so you will need additional servers to keep delivering the same experience. These additional infrastructure resources translate into an increased cost/user.
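The gap between the two estimates can be sketched in a few lines. This is a minimal illustration with hypothetical numbers (the $1/user base rate from the example, plus an assumed per-server overhead), not a real pricing model:

```python
# Contrast a naive linear estimate with a model in which the effective
# cost/user creeps up because each additional server adds fixed overhead.

def linear_estimate(users, cost_per_user=1.0):
    """Naive pay-as-you-grow estimate: cost/user assumed constant."""
    return users * cost_per_user

def scaled_estimate(users, base_cost_per_user=1.0, users_per_server=100,
                    overhead_per_server=20.0):
    """Each server added while scaling out carries fixed overhead
    (networking, redundancy), so cost/user rises with growth."""
    servers = -(-users // users_per_server)  # ceiling division
    return users * base_cost_per_user + servers * overhead_per_server

naive = linear_estimate(10_000)       # $10,000
realistic = scaled_estimate(10_000)   # $10,000 + 100 servers x $20 = $12,000
print(naive, realistic, realistic / 10_000)  # effective cost/user: $1.20
```

The overhead figure is arbitrary; the point is only that the effective cost/user drifts above the initial $1 as the deployment grows.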
Alternatively, the SLA may specify a growth-based model rather than a licensing-based one (in which a fixed set of compute, network, and storage resources is locked in up front, so the cost/user is inflated while the user base is still small and grows further with user growth). Under a growth-based model, additional compute, network, and storage resources are added to your "virtual cluster" whenever they are needed during the growth stage and released back into the main pool when they are not. This model computes the cost/user not from the total cost of the cloud infrastructure in the early stages but from the optimal number of users/server and from whether the network can optimally handle usage spikes (something the cloud provider will have to specify). This type of cost analysis yields a more optimal overall cost and stays true to the pay-as-you-grow paradigm.
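Under such a growth-based model, cost/user can be derived directly from the provider-specified users/server figure. A hypothetical sketch (the server cost and users/server values are made up for illustration):

```python
# Growth-based model: servers are added to (or released from) the
# "virtual cluster" so that capacity tracks the actual user count.

def servers_needed(users, optimal_users_per_server=100):
    """Right-size the cluster from the users/server figure the provider specifies."""
    return max(1, -(-users // optimal_users_per_server))  # ceiling division

def monthly_cost_per_user(users, server_cost=50.0, users_per_server=100):
    """Cost/user derived from cluster size, not total early-stage spend."""
    return servers_needed(users, users_per_server) * server_cost / users

# As the user base grows, the cluster grows with it, so cost/user stays
# close to the server_cost / users_per_server ratio instead of ballooning.
for users in (100, 5_000, 10_000):
    print(users, servers_needed(users), round(monthly_cost_per_user(users), 2))
```

The contrast with the fixed-provisioning case is that early-stage users are never billed for idle, locked-in capacity.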
Chargeback
Chargeback is a common term in the financial world. In IT, and specifically in cloud computing, chargeback refers to implementing a resource usage model in which customers or users of cloud resources can be billed based on predetermined granular units of compute, storage, networking, or other resource consumption. Every public cloud provider has a chargeback model implemented; without chargeback, these providers would not be able to operate commercially.
AWS, for example, publishes a price list for every cloud resource it offers: X dollars for every hour you keep a virtual machine running, Y dollars for every GB you put on the network, and so on. Its chargeback model therefore keeps a tab on the precise resource usage of every one of its customers and bills them accordingly. Chargeback is what makes the pay-as-you-go model of cloud computing possible.
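The mechanics of chargeback reduce to summing metered usage against a per-unit rate card. A toy sketch with invented rates (these are not actual AWS prices):

```python
# Illustrative chargeback calculation: multiply each metered unit of
# consumption by its published rate and sum the result into a bill.

RATES = {          # hypothetical price list, dollars per unit
    "vm_hours": 0.10,
    "network_gb": 0.05,
    "storage_gb": 0.02,
}

def bill(usage):
    """Compute a customer's bill from a dict of metered usage figures."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

customer_usage = {"vm_hours": 720, "network_gb": 100, "storage_gb": 50}
print(f"${bill(customer_usage):.2f}")  # 720*0.10 + 100*0.05 + 50*0.02 = $78.00
```

Real providers meter at much finer granularity (per second of compute, per request, per tier of storage), but the principle is the same.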
Some cloud providers, such as Amazon (AWS), Microsoft (Azure), and Google (Google Cloud), provide a set of resources for free initially, but this doesn't mean metering is disabled for those resources. Amazon, for example, offers a micro instance free for the first year, coupled with storage, bandwidth, and a few other resource offerings, but you can always log into your cloud management console and check your precise usage.