them with their full context. Virtualisation allows not only hosting the full service
environment, but also pausing, replicating and moving it with little additional
overhead and with little impact from the underlying hardware.
However, virtualisation technologies limit the performance obtainable from the actual
hardware, restrict scalability and make it difficult to share data and / or code between instances.
As such, the most straightforward use of virtualisation consists in providing
completely isolated images, where every user accesses his or her own instance. In cases
where applications share data (e.g. Wikipedia-like social environments) or even parts
of the logic (such as in stock market analysis), solutions become more complicated. In
these cases, it is more advisable to employ a dedicated execution framework for the
respective use case, on which the specific data and configurations, rather than the
code, are enacted. This means that every user effectively uses the same logic with
different distributions and instances of data and shared algorithms.
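As an illustration only (the names and the analysis routine below are invented for the sketch, not taken from the text), such a shared-logic setup might parameterise one common algorithm with per-user data and configuration:

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    """Per-user data and configuration; the logic itself is shared."""
    user_id: str
    config: dict
    data: list

def shared_analysis(ctx: UserContext) -> float:
    """One shared algorithm, enacted for every user on their own data instance."""
    threshold = ctx.config.get("threshold", 0.0)
    relevant = [x for x in ctx.data if x >= threshold]
    return sum(relevant) / len(relevant) if relevant else 0.0

# Every user runs the same logic over a different data instance and configuration.
alice = UserContext("alice", {"threshold": 10.0}, [5.0, 12.0, 20.0])
bob = UserContext("bob", {"threshold": 0.0}, [1.0, 2.0, 3.0])
print(shared_analysis(alice), shared_analysis(bob))
```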
A similar approach can be used to expose a dedicated application programming
interface for the respective usage domain, allowing users to develop their own
logic on top of a (cloud) managed infrastructure. This allows the best adjustment to the
underlying infrastructure and management of the enactment according to the specific
domain requirements, but at the same time it limits the application scope.
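A minimal sketch of this idea, assuming a hypothetical domain API whose names and methods are purely illustrative, could look as follows: the provider manages the infrastructure-facing primitives, while the user only supplies domain logic against that interface.

```python
from typing import Callable, Iterable, List

class ManagedDomainAPI:
    """Hypothetical domain-specific API managed by the provider.

    The provider decides how work is distributed over the cloud
    infrastructure; the user never touches instances directly.
    """

    def parallel_map(self, fn: Callable[[float], float],
                     items: Iterable[float]) -> List[float]:
        # Placeholder: a real provider would fan this out over managed instances.
        return [fn(x) for x in items]

# The user develops only the domain logic on top of the managed API,
# e.g. computing simple relative returns in a stock-market-style analysis.
api = ManagedDomainAPI()
prices = [101.2, 99.8, 103.5]
returns = api.parallel_map(lambda p: p / prices[0] - 1.0, prices)
print(returns)
```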
The essence of these approaches is the same: to retain complete control over the
systems and in particular over the execution of the hosted logic, since only in this fashion is it
possible to realise the essential cloud capability, namely dynamic adaptation to
load criteria. Elasticity focuses specifically on the number of instances to be
replicated in order to fulfil the respective quality of service criteria. The management
and adaptation framework must therefore be well adjusted to the actual application
case, in order to enact the required consistency mechanisms for shared data, to reroute
messaging according to the instance relationships, etc.
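As a rough sketch (the class and hook names are assumptions, not a concrete framework), such an adaptation framework could invoke application-specific hooks for data consistency and message rerouting whenever the instance count changes:

```python
class ElasticityManager:
    """Sketch of a framework that adapts the number of instances to load and
    calls application-specific hooks for consistency and routing."""

    def __init__(self, on_instance_added, on_instance_removed):
        self.instances = []
        self.on_instance_added = on_instance_added
        self.on_instance_removed = on_instance_removed

    def adapt(self, load: float, capacity_per_instance: float) -> None:
        required = max(1, int(load // capacity_per_instance) + 1)
        while len(self.instances) < required:
            instance_id = f"instance-{len(self.instances)}"
            self.instances.append(instance_id)
            self.on_instance_added(instance_id)      # e.g. sync shared data
        while len(self.instances) > required:
            instance_id = self.instances.pop()
            self.on_instance_removed(instance_id)    # e.g. reroute messages

manager = ElasticityManager(
    on_instance_added=lambda i: print(f"replicate shared state to {i}"),
    on_instance_removed=lambda i: print(f"reroute traffic away from {i}"),
)
manager.adapt(load=250.0, capacity_per_instance=100.0)  # scales out to 3 instances
```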
Management and adaptation create additional overhead that reduces execution
performance, thus restricting dynamicity considerably. Most cloud environments
therefore take a proactive and cautious approach towards elasticity, i.e. they create
instances ahead of time (before the availability criteria are at risk of being
violated) and keep instances alive even when no longer needed, in order to reduce re-instantiation time.
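A minimal sketch of such a proactive and cautious policy, with illustrative thresholds and grace periods that are assumptions rather than values from the text, could look as follows:

```python
import time

class ProactiveScaler:
    """Scale out before the availability criterion is actually violated and keep
    surplus instances alive for a grace period to avoid re-instantiation cost."""

    def __init__(self, scale_out_at=0.7, scale_in_below=0.3, keep_alive_s=300.0):
        self.scale_out_at = scale_out_at      # utilisation that triggers early scale-out
        self.scale_in_below = scale_in_below  # utilisation below which scale-in is considered
        self.keep_alive_s = keep_alive_s      # grace period before releasing an instance
        self.instances = 1
        self._idle_since = None

    def observe(self, utilisation, now=None):
        now = time.monotonic() if now is None else now
        if utilisation >= self.scale_out_at:
            self.instances += 1               # act before QoS is violated
            self._idle_since = None
        elif utilisation < self.scale_in_below and self.instances > 1:
            if self._idle_since is None:
                self._idle_since = now
            elif now - self._idle_since >= self.keep_alive_s:
                self.instances -= 1           # only release after the grace period
                self._idle_since = None
        else:
            self._idle_since = None
        return self.instances

scaler = ProactiveScaler(keep_alive_s=60.0)
print(scaler.observe(0.8, now=0.0))    # 2: scaled out ahead of time
print(scaler.observe(0.1, now=10.0))   # 2: kept alive within the grace period
print(scaler.observe(0.1, now=80.0))   # 1: released after 60 s of low load
```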
Again we can compare different means of instantiating and relocating an
application / service / image, this time comparing mechanisms rather than
domains (see Table 2). These figures neglect the additional overhead of
communicating the associated data over the network, as described in the preceding
section. Essentially, as the complexity of the mechanism grows (e.g. virtual machines
compared to processes), the amount of data that needs to be shifted with the new instance
increases too. The effective speed in the corresponding domain is therefore reduced by the
factor introduced by the typical interconnect setup (see above).
Table 2. Instantiation / replication handling performance

Mechanism                               Delay
Virtual Machines                        Minutes
Managed Processes / Services (PaaS)     Seconds
Threads (OS)                            Milliseconds
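To make the combined effect concrete, a back-of-the-envelope estimate (image sizes and interconnect bandwidth below are illustrative assumptions, not figures from the text) adds the transfer overhead of the preceding section to the delays of Table 2:

```python
# Rough estimate: effective instantiation time = mechanism delay + data transfer time.
# Payload sizes and bandwidth are illustrative assumptions only.
mechanisms = {
    "virtual machine": {"delay_s": 60.0, "payload_mb": 4000.0},   # full image
    "managed process": {"delay_s": 1.0, "payload_mb": 50.0},      # code + configuration
    "thread": {"delay_s": 0.001, "payload_mb": 0.0},              # shared address space
}
bandwidth_mb_s = 100.0  # assumed effective interconnect throughput

for name, m in mechanisms.items():
    transfer_s = m["payload_mb"] / bandwidth_mb_s
    total_s = m["delay_s"] + transfer_s
    print(f"{name}: {m['delay_s']:.3f}s start + {transfer_s:.1f}s transfer = {total_s:.1f}s")
```

The heavier the mechanism, the more data has to be moved along with the new instance, so the nominal delays of Table 2 widen further once the interconnect is taken into account.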
 