Technische Universität Dresden
Fakultät Informatik
Chair of computer networks
Master Thesis
Virtualised cloud computing environments
M.Sc. Andrii Chaichenko
Dr.-Ing. Josef Spillner
Dr. h. c. Alexander Schill
March 1, 2013
Fakultät Informatik, Institut für Systemarchitektur, Lehrstuhl Rechnernetze
Name, first name: Chaichenko, Andrii
DSE 2010, Service and Cloud
Supervisor: Dr.-Ing. J. Spillner
External supervisor:
Responsible professor: Prof. Dr. rer. nat. habil. Dr. h. c. Alexander Schill
Start date:
Submission date:
Virtualised cloud computing environments
Recently published nested virtualisation capabilities in hypervisors and operating systems, as well as established proxy techniques, let users operate a cloud without owning any hardware. Such environments function by purely using virtualised resources of existing cloud providers. This brings new advantages such as being able to select the most suitable underlay provider at any time, to create special offers and conditions for the access of others to otherwise unused resources, and to increase independence and decrease vendor lock-in through migration techniques even for complex setups which already involve virtualisation and private clouds. In this task, initial research on virtual cloud computing environments shall be extended and evaluated with a concrete installation of a private cloud or proxy-indirect access to a public test cloud. In particular, the master thesis shall analyse the current Highly-Virtualised Cloud Resource Broker (HVCRB) system and suggest, design and implement improvements until a smooth overall demonstration is possible.
This involves the cloud resource service provisioning and selection on a broker/marketplace, the service usage preparation phase, the contract-/configuration-dependent inclusion of additional layers of virtual machines, and the correct accounting depending on the idle resource allocations to users.

 Consolidating knowledge about nested virtualisation and cloud stacks
 Analysis of the SPACEflight/HVCRB system architecture
 Methodic iteration over suggestions, designs and implementations of improvements
 Validation of the improved system through sufficiently complex scenarios about the sharing of unused resources and the migration of virtualised environments into the cloud
Signature of the responsible professor
Prof. Dr. rer. nat. habil. Dr. h. c. Alexander Schill
Declaration of Authorship
I, Andrii Chaichenko, declare that this thesis, titled "Virtualized cloud computing environments", and the work presented in it have been done on my own without assistance. All information directly or indirectly taken from external sources is acknowledged and referenced in the bibliography.
1 Introduction
  1.1 Motivation
  1.2 Addressed Use Cases
  1.3 Goal and Research Questions
  1.4 Structure
2 Background and State of the Art
  2.1 Cloud Computing, Infrastructure as a Service
    2.1.1 Infrastructure as a Service
    2.1.2 Cloud Resource Markets
  2.2 Virtualization
    2.2.1 Hypervisors
    2.2.2 Nested Virtualization
  2.3 Cloud Stacks
    2.3.1 OpenStack
    2.3.2 Eucalyptus
    2.3.3 Nimbus
    2.3.4 OpenNebula
  2.4 Related Work
  2.5 Summary
3 Analysis of the SPACEflight/HVCRB system
  3.1 SPACE and SPACEflight
  3.2 Cloud Computing Extension – SPACE-Cloud
  3.3 HVCRB
  3.4 Summary
4 Concept
  4.1 Requirements
    4.1.1 Functional Requirements
    4.1.2 Non-functional Requirements
  4.2 User Interaction Model
  4.3 Architecture
    4.3.1 General Architecture
    4.3.2 Services Overview
  4.4 Marketplace
    4.4.1 Resource Publishing
    4.4.2 Resource Discovery
    4.4.3 Resource Acquiring
  4.5 Backend
    4.5.1 Resource Allocation
    4.5.2 Resource Management
  4.6 Summary
5 Implementation
  5.1 Development Environment
  5.2 Database Model
  5.3 Package Diagram
  5.4 Marketplace
    5.4.1 Chosen Tools
    5.4.2 Service Entity Model
    5.4.3 Selected Implementation Aspects
  5.5 Backend
    5.5.1 Chosen Tools
    5.5.2 Binding to Marketplace
    5.5.3 Defining Virtual Machine Domains
    5.5.4 Resource Allocation
    5.5.5 Resource Rescaling
  5.6 Summary
6 Evaluation
  6.1 Testbed
  6.2 Test Scenario
    6.2.1 Registration
    6.2.2 Computing Service Purchase
    6.2.3 Becoming Provider
    6.2.4 Computing Service Publishing
    6.2.5 Computing Service Runtime Rescaling
  6.3 Overhead Analysis
    6.3.1 Test Setup
    6.3.2 Testing Technique
    6.3.3 Results
  6.4 Summary
7 Conclusions and Outlook
  7.1 Conclusion
    7.1.1 Overall Summary
    7.1.2 Addressed Research Questions
  7.2 Future Work
List of Figures
List of Tables
A Implementation Summary
B Evaluation Summary
Chapter 1
Introduction
Nowadays cloud computing is becoming a more and more widely used technique throughout public and business domains. Many companies have built their businesses on providing infrastructure as a service to their customers. Consumers also benefit from such a product, as they do not need to establish their own server farms and spend large amounts of money at once on expensive hardware and system administration. In other words, cloud computing offers flexible and dynamic building of infrastructure with QoS-guaranteed services in a pay-as-you-go model [1]. Cloud computing brings not only cost efficiency, but also numerous advantages, such as agility, accessibility, improved reliability, monitoring and scalability together with elasticity [2, 3].
The core engine that powers cloud computing and helps build flexible and scalable system architectures is system virtualization technology. Since provisioning and managing cloud computing infrastructures through virtualization is an important challenge, cloud stacks such as Eucalyptus [4], OpenStack [5], OpenNebula [6] and Nimbus [7] may come to help. To achieve further flexibility, cloud stacks could be virtualized themselves, making it possible to select the most suitable cloud provider at any time.
This chapter presents the following sections: the motivation of this work (1.1), the addressed use cases (1.2), and the goals and research questions (1.3). Section 1.4 gives an overview of the structure of this thesis.
1.1 Motivation
Building highly scalable infrastructures on top of cloud providers is a very common trend nowadays [8]. Applications that are hosted in cloud environments can be divided into two types: constantly running applications (such as web applications and services) and limited-term computational jobs (such as number crunching, data analysis, etc.). Both application types, hosted in the cloud, may require complex multi-tier service platforms, which consist of load balancers, HTTP servers, application servers and databases. But generally, request intensity, workloads and traffic peaks are not evenly distributed throughout the day, so handling them with old-fashioned static capacity management strategies is not efficient from a cost and operations point of view. To overcome such issues by means of horizontal scale-out, running virtual machines with scalability-enabled software might help. But still, a number of drawbacks remain in these procedures: mainly resource overprovisioning, due to the coarse-grained resource units offered by providers, and dependence on the selected provider. Additionally, one should take into consideration aspects of energy efficiency and utility computing: allocate only as many resources as are needed at each point in time.
Therefore, this work presents initial research on virtual cloud computing environments, which is extended and evaluated with a concrete installation of a private cloud or proxy-indirect access to a public test cloud. The master thesis analyses the current Highly-Virtualized Cloud Resource Broker (HVCRB) system, which aims at overcoming the drawbacks mentioned above, proposes improvements to this system and describes their implementation.
1.2 Addressed Use Cases
The use cases are derived from the main shortcomings and limitations of currently existing cloud providers and emphasize what consumers cannot do in the standard consumption model:
 Resource repurposing. When a consumer of cloud computing experiences wide-ranging workload fluctuations and has therefore allocated a large amount of computing resources, but the infrastructure remains idle some of the time, the consumer may resell the overprovisioned resources via a spot market, giving other customers the opportunity to buy resources at a reduced price.
 Provider independence. In case of more attractive offers from other cloud providers, or if a consumer would like to establish a multi-provider infrastructure for higher availability, the consumer has the opportunity to migrate between different cloud providers, avoiding vendor lock-in. This also enables features such as vertical scaling, even if it is not supported on the provider side.
 Infrastructure hierarchy. The ability to run a hierarchical recursive infrastructure with nested virtual machines, which enables moving an already virtualized environment into the cloud entirely with much less effort, without the need to import each standalone virtual machine.
1.3 Goal and Research Questions
The goal of this work is to present a solution to the aforementioned shortcomings by extending the currently existing Highly-Virtualized Cloud Resource Broker system. The extensions include redesigning the resource market, which will give an opportunity to resell unused resources, and developing the underlying backend management software, which will perform in the background the tasks of proper resource allocation by starting and stopping nested virtual machines of corresponding capacity according to defined QoS parameters. In order to accomplish these goals, the following research questions should be addressed:
 Which design improvements should be implemented in the HVCRB system?
 Can the whole idea be realized completely, from the resource market down to the operating system level?
 How can the cloud resource market be designed and implemented?
 Which architecture should the backend mechanism have, and how should it be connected to the marketplace?
 Which software components might be applied to the system in order to support the mentioned use cases?
 How much overhead does running a hierarchical infrastructure introduce?
1.4 Structure
The remaining part of this work is structured as follows:
Chapter 2 presents the background, state of the art and related work.
Chapter 3 focuses on the analysis of the current SPACEflight system.
Chapter 4 introduces the concept and requirements for the improvements being developed.
Chapter 5 provides an implementation of the design.
Chapter 6 describes the evaluation of the introduced improvements.
Chapter 7 completes the thesis, listing overall achievements, summarizing answers to the research questions, and pointing out possible future research.
Chapter 2
Background and State of the Art
This chapter gives a short overview as well as a brief analysis of the notions and current technologies of cloud computing which are necessary for understanding the following chapters. In section 2.1 a brief introduction to cloud computing and its lower layer, which is of interest for this work, is given. Virtualization and nested virtualization, together with hypervisors as a basis technology, are described in section 2.2. Section 2.3 presents infrastructure-as-a-service cloud computing projects for building one's own clouds. Related work and conclusions are given in sections 2.4 and 2.5 correspondingly.
2.1 Cloud Computing,Infrastructure as a Service
Cloud computing is a modern trend for organizing computational resources. According to Wikipedia, "Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet)." [9]. This as-a-service approach brings numerous benefits for both providers and consumers and helps today's companies build their solutions without the need for expensive hardware or maintenance. Cloud computing can be of the following main types: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). The layered structure of cloud computing is depicted in figure 2.1. More and more features-as-a-service, like API as a Service (APIaaS) or Data as a Service (DaaS), are being provided with the development of the cloud approach, but these services are out of the scope of this thesis. This work mainly focuses on the IaaS aspect of cloud computing, as this is the most basic cloud model: it corresponds to classic infrastructure, provides a lower abstraction level, and offers more control to the end user in terms of infrastructure composition.
2.1.1 Infrastructure as a Service
IaaS is considered to be the lowest layer of the cloud computing ecosystem and is provided as the ability to use cloud infrastructure for one's own management of processing, storage, networks and other fundamental computing resources. Contrary to PaaS, where the user can only use tools and frameworks predefined by the provider, the user can install and run arbitrary software, which includes operating systems, platform and application software. The user has control over the operating system, the virtual storage system and installed applications, as well as limited control over the set of available services (e.g. firewall, DNS). This brings more flexibility on the one hand, but also more administration effort on the other hand. Control and management of the underlying physical and virtual infrastructure of the cloud, including network, servers, operating systems and storage, is maintained by the cloud provider. Consumers usually specify their requirements such as CPU, memory and storage, and are billed only for the resources they use (the pay-as-you-go approach).

Figure 2.1: Cloud computing layers
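The pay-as-you-go billing just described can be made concrete with a short Python sketch; the function and its per-hour rates are purely illustrative assumptions, not any real provider's price list.

```python
def usage_cost(hours, vcpus, ram_gb, storage_gb,
               vcpu_rate=0.02, ram_rate=0.01, storage_rate=0.0005):
    """Pay-as-you-go bill: the consumer is charged per hour only for the
    resources actually allocated (all rates are illustrative, in $/hour)."""
    hourly = vcpus * vcpu_rate + ram_gb * ram_rate + storage_gb * storage_rate
    return round(hours * hourly, 2)

# A consumer running 2 vCPUs, 4 GB RAM and 100 GB storage for 720 hours:
print(usage_cost(720, 2, 4, 100))  # 93.6
```

The point of the model is that cost scales with actual allocation time, so releasing resources during idle periods directly reduces the bill.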
To make use of IaaS, nowadays the user has a choice between running the infrastructure directly on top of available cloud providers, or running a private cloud with the help of cloud software called cloud stacks. Cloud stacks are described in more detail in section 2.3 of this chapter. Good examples of public IaaS providers are Amazon Web Services, Google App and Compute Engine, Rackspace and GoGrid. As Google Compute Engine is currently in a trial state, the App Engine is taken into account instead. Table 2.1 presents an overview of the providers' characteristics.
Table 2.1: Cloud Providers comparison (overview of supported operating systems, available SDKs and developer tools, uptime SLAs such as a monthly uptime agreement of at least 99.9% for Amazon S3 and an annual uptime of at least 99.95% for EC2, with other providers advertising 99% to 100% uptime, and data centre locations in the US, EU and Asia, across Amazon AWS, Google, the Azure platform, Rackspace and GoGrid)
2.1.2 Cloud Resource Markets
As the popularity of cloud computing grows and the usage of cloud services becomes a common occurrence for institutions and enterprises, the development of spot cloud resource markets becomes a relevant task. A spot market is also known as a cash market, in which financial goods or services are traded in cash at the price set by the market at present and delivered immediately or within a short amount of time. The trades are based on contracts made between the parties, and the contracts take effect immediately.
With a cloud resource market, providers have the possibility to publish computing and storage services and trade them in the same way as other material goods. Consumers then search for the services which suit them best from the performance and SLA points of view and establish a contract with a provider. In different scenarios consumers may act as providers and vice versa. For example, a user who has allocated a powerful computing resource from an IaaS provider, but does not utilize it at 100%, will have the possibility to resell the unused part of this resource on such a market and gain revenue, thus becoming a provider himself.
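This reselling scenario can be sketched as a simple calculation of how much of an allocation a consumer could safely offer on the market; the function and its safety margin are hypothetical simplifications of what a real broker contract would have to specify.

```python
def resellable_slice(allocated_vcpus, peak_used_vcpus, safety_margin=1):
    """vCPUs a consumer could resell on a spot market: everything above
    the observed peak usage plus a safety margin (never negative)."""
    return max(allocated_vcpus - peak_used_vcpus - safety_margin, 0)

# 8 vCPUs allocated, peak usage of 3: up to 4 could be offered for resale.
print(resellable_slice(8, 3))  # 4
```

A real system would base the margin on workload forecasts and SLA penalties rather than a fixed constant.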
Recently Amazon introduced a marketplace [10] where users can repurpose their instances if they need to increase or reduce their computing capacity, or rent an instance for the medium term. But it only concerns reserved instances (an RI is a type of instance which is rented with an upfront payment for a term of one or three years and a reduced hourly fee). Furthermore, increasing or decreasing capacity means reselling the user's current RIs (in case of decreasing capacity) or reselling current RIs and renting new ones (in case of increasing) for the desired capacity. This kind of workflow may not be very suitable, as the user must completely resell his instance and has no possibility to repurpose only a part of its resources. Additionally, Amazon allows bidding on Spot Instances if the user wants to save money and still have computing power for small-scale jobs. These jobs must be interruption-tolerant, though, as the spot instances may be terminated during calculations if someone overbids the user's price.
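The interruption behaviour of spot instances can be illustrated with a small simulation; this is a deliberately simplified model of the overbidding mechanics, not Amazon's actual pricing algorithm.

```python
def spot_hours(bid, hourly_prices):
    """Hours a spot instance survives: it runs while the market price
    stays at or below the user's bid and is terminated at the first
    price that exceeds it."""
    hours = 0
    for price in hourly_prices:
        if price > bid:
            break  # someone overbid: the instance is terminated
        hours += 1
    return hours

# Bidding $0.10/h against a fluctuating market price:
print(spot_hours(0.10, [0.05, 0.08, 0.12, 0.04]))  # 2
```

This is why spot workloads must checkpoint their state: the instance in the example is lost in the third hour even though the price later drops below the bid again.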
Another example is SpotCloud [11]. This project claims to be the world's first global market for cloud capacity. This market allows a consumer to select a provider based on costs and location, then create or choose an appliance (a VM in a special format with a preconfigured OS) that he wants to run. The benefit of SpotCloud is that it is not provider-specific. However, there are no service level agreements, and quality of service is not specified. Also, this market is tied to a Google Account, as it is built upon Google AppEngine.
A project called CloudBay [12] proposes an online resource marketplace for open clouds. The proposed CloudBay platform stands between HPC service sellers and buyers, and offers a comprehensive solution for resource advertising and stitching, transaction management, and application-to-infrastructure mapping. The authors of this work propose that providers run a so-called Cloud Appliance, which can either be a package of grid-enabled software executed on an existing VM, or the same package wrapped into a VM, which can be executed on a physical machine. As the motivation of this project differs from that of this thesis, CloudBay is focused only on the execution of HPC jobs and is PaaS-oriented. This means that customers will not be able to use these appliances for other needs or install their own software stacks. Furthermore, there is no mechanism for resource slicing, which means there is no possibility of repurposing a part of idle resources from the consumer's perspective, and utility is raised only from the provider's point of view.
2.2 Virtualization
A main building block of IaaS is virtualization technology. Virtualization in a broad sense is the process of presenting a set of computing resources, or a logical association of these resources, that gives some advantage over the original configuration. It is a new virtual view over the resources, not limited by the implementation or physical configuration of the underlying components. In IT, virtualization means the separation of computing resources. The main difference between a virtual machine and a physical computer is that a VM logically represents a complete computer in its classical sense, but is actually just a set of files and running software on top of a real physical machine. The main aims of virtualization are reducing administration effort, increasing hardware utilization, and creating flexible and adaptive infrastructure that can be easily maintained and scaled according to resource demand. Several types of virtualization exist: hardware, software, desktop, memory, storage and network. Hardware virtualization is of interest for the current topic.
Software virtualization, mainly operating-system-level virtualization, is a server virtualization method where the kernel of an operating system allows multiple isolated user-space instances instead of just one [13]. This approach is widely used in virtual hosting environments, where the resources of a single physical server are allocated across a large number of users. This kind of virtualization introduces almost no overhead, as an application in a virtual partition uses OS calls directly and is not subject to emulation. On the other hand, there is no possibility to run an OS different from the host OS as a guest. The most popular examples of this virtualization technique are chroot, LXC and Linux-VServer.
Hardware virtualization, or virtualization of platforms, is the creation of software systems based on existing hardware and software systems, dependent on or independent of them. The system that provides the hardware resources and software is called the host, and the simulated system the guest. Hardware virtualization can be further divided into the following types:
 Full. Also called hardware-assisted virtualization. This technology provides a virtual environment that almost completely simulates the underlying hardware, without the need to modify the guest software or operating system.
 Partial. Only a part of the environment is simulated; in this case some of the guest software needs to be modified in order to run in such an environment.
 Paravirtualization. In this case the guest operating system's kernel needs to be modified to be able to run in the virtualized environment. The OS interacts with the hypervisor through an API instead of accessing resources directly.
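On Linux, support for hardware-assisted virtualization can be detected from the CPU flags in /proc/cpuinfo: vmx marks Intel VT-x and svm marks AMD-V. The parsing helper below is an illustrative sketch.

```python
def virt_extensions(cpuinfo_text):
    """Detect hardware virtualization support from /proc/cpuinfo contents:
    the 'vmx' flag marks Intel VT-x, 'svm' marks AMD-V."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

# On a real system:
# with open("/proc/cpuinfo") as f:
#     print(virt_extensions(f.read()))
```

If the function returns None, only software or paravirtualization techniques are available on that machine.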
2.2.1 Hypervisors
The basis for system virtualization is the virtual machine manager (VMM) or, simply, the hypervisor. The VMM is responsible for the creation and execution of multiple virtual machines (VMs) on a single common physical machine; thus it multiplexes a single physical resource onto multiple virtual resources [1]. Each VM has its own operating system, the guest OS, and applications that run inside such a VM use its own virtual resources, such as a virtual CPU, virtual memory, virtual disks and a virtual network card.
A hypervisor is a computer program or a hardware part that allows the simultaneous execution of several guest operating systems on the same host computer. The hypervisor also provides isolation of the guests from each other, protection and security, and the distribution and management of resources between the different running operating systems. The VMM may also provide means of intercommunication for running guests, as if these operating systems were running on different physical machines.
The hypervisor itself may be considered a minimalistic operating system. It provides a virtual machine service for the guests running under its control, virtualizing or emulating the real physical hardware of a specific machine, manages the virtual machines, and controls the allocation and release of resources for them. The hypervisor allows independent start-up, reboot and shut-down of any of the virtual machines. The guest operating system running in a virtual machine under hypervisor control is not obliged to know whether it is running on real hardware or in a virtualized environment.
Two main types of hypervisors exist (figure 2.2):
 Native. Also called bare-metal. It has its own device drivers and scheduler, runs directly on the hardware and is independent of the host OS. This type is considered to be more performant. Examples of this type are KVM and Xen.
 Hosted. This is a component that runs in the same ring as the main OS kernel (ring 0). Guest code can be executed directly on the physical processor, but access to I/O devices from a guest OS goes through a second component, a normal process of the primary OS: the user-level monitor. To this type belong VirtualBox and VMware.

Figure 2.2: Types of hypervisors
KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU, although work is underway to get the required changes upstream [14]. Unlike Xen, in which the hypervisor performs scheduling and memory management and delegates I/O functions to a privileged guest, the driver domain, KVM takes the Linux kernel as a hypervisor by adding virtualization capabilities to a standard Linux kernel. Under this model, every VM is a regular Linux process handled by the standard Linux scheduler, extended with a guest execution mode which has its own kernel and user modes. Its memory is allocated by the Linux memory allocator.
The kernel component of KVM, the device driver (KVM driver) for managing the virtualization hardware, has been included in Linux since version 2.6.20. The other component that comprises KVM is the user-space program for emulating PC hardware, a modified version of QEMU. The I/O model is directly derived from QEMU's. The inclusion of the KVM driver in Linux 2.6.20 has dramatically improved Linux's ability to act as a hypervisor.
KVM runs inside the Linux kernel as a kernel module and thereby becomes a lightweight VMM. Compared with Xen, KVM consumes fewer resources, decreases the service overhead for interrupts, and eliminates context switches between the driver domain and the hypervisor. When performing intensive network I/O, KVM has some performance advantages. Hence KVM is a good choice for constructing architectures for cloud computing environments [1].
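Because each KVM guest is an ordinary Linux process, running guests can in principle be located by scanning process command lines for a QEMU invocation with KVM enabled. The helper below is an illustrative sketch that works on a prepared pid-to-command-line mapping instead of reading /proc directly.

```python
def kvm_guest_pids(cmdlines):
    """Given a mapping of pid -> command line, return the pids that look
    like KVM guests (a QEMU binary started with the -enable-kvm flag)."""
    return sorted(pid for pid, cmd in cmdlines.items()
                  if "qemu" in cmd and "-enable-kvm" in cmd)

procs = {
    1: "/sbin/init",
    777: "qemu-system-x86_64 -enable-kvm -m 512",
    4321: "qemu-system-x86_64 -enable-kvm -m 2048 -smp 2",
}
print(kvm_guest_pids(procs))  # [777, 4321]
```

This also illustrates why standard Linux tools (top, kill, cgroups) apply directly to KVM guests, which is part of KVM's appeal for cloud stacks.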
The Xen hypervisor, a powerful open-source industry standard for virtualization, offers a powerful, efficient and secure feature set for the virtualization of x86, x86_64, IA64, ARM and other CPU architectures [15]. Xen is a popular open-source VMM on the x86-based physical platform. Xen supports paravirtualization, which requires operating system changes to interact with the VMM.
The Xen hypervisor sits at the lowest level and runs at the most privileged processor level. There can be many domains (namely VMs) running simultaneously above the hypervisor. The driver domain (also called dom0), as a privileged VM, uses native device drivers to access I/O devices directly and performs I/O operations on behalf of the other, unprivileged VMs (also called guest domains). Guest domains use virtual I/O devices controlled by a paravirtualized driver (called a front-end driver) to communicate with the corresponding back-end driver in the driver domain, which in turn, through a software Ethernet bridge, multiplexes data for each guest onto the physical NIC device driver. Data transfer between the front-end driver and the back-end driver is achieved using a zero-copy page remapping mechanism. All I/O accesses being carried out within the hypervisor and the driver domain leads to longer I/O latency and higher CPU overhead due to the context switches between the domains and the hypervisor. The driver domain thus becomes a performance bottleneck for I/O-intensive workloads. In the page remapping mechanism, the paging and grant table management are the higher-overhead functions.
2.2.2 Nested Virtualization
Nowadays all public clouds and public IaaS providers that employ virtualized infrastructure focus on a flat model of running virtual machines, which means simple concurrent allocation of virtual servers for multitenant use, guided by a hypervisor which acts as a multiplexing middleware and shares the same resources between multiple users. This technique is mainly used to maximize hardware utilization from the provider's perspective.
The main shortcoming of such virtualized infrastructures from the customer's point of view is the dependence on a given virtual environment. For example, Amazon Elastic Compute Cloud (Amazon EC2) uses Xen as its hypervisor. Amazon EC2 invented special packaging for the guest VMs that are intended to run on top of its infrastructure, called Amazon Machine Images (AMIs). An AMI is a special type of preconfigured operating system and virtual application software which is used to create a virtual machine within Amazon EC2. It serves as the basic unit of deployment for services delivered using EC2 [16]. AMIs exist both fully preinstalled with software for specific needs (e.g. LAMP) and customizable with only the OS installed. This kind of package (which consists of metadata and a virtual disk format), and the resulting stickiness to a given hypervisor, is usually an obstacle for customers who want to make use of multi-cloud deployments or live migration.
To mitigate the listed shortcomings, the approach of nested virtualization, which is also called recursive virtualization, may come to help. If it is possible to virtualize a hypervisor on top of another hypervisor, then the provider's VM format is no longer an obstacle. The only dependence left is the format of the guest hypervisor itself. This way classic clouds evolve from simple flat propositions into highly flexible virtualized infrastructures with greater freedom for their customers. The new abstraction in the context of virtual platforms for hypervisors is shown in figure 2.3. Here the levels of virtualization are named as follows: L0 represents the hypervisor which runs directly on the hardware level, L1 the guest hypervisors, and L2 the guest VMs.

Figure 2.3: Simple illustration of traditional hypervisors vs. nesting hypervisors
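The L0/L1/L2 naming can be expressed with a tiny, purely illustrative data structure in which every virtualization layer records the layer hosting it:

```python
class VirtLayer:
    """One layer in a nested virtualization stack; level() counts how many
    hypervisors sit between this layer and the bare hardware."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def level(self):
        depth, node = 0, self.parent
        while node is not None:
            depth, node = depth + 1, node.parent
        return depth

l0 = VirtLayer("host hypervisor on hardware")   # L0
l1 = VirtLayer("guest hypervisor", parent=l0)   # L1
l2 = VirtLayer("guest VM", parent=l1)           # L2
print(l2.level())  # 2
```

Nothing limits the chain to depth two conceptually, although each added level costs additional overhead, as noted below for z/VM.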
This approach provides the possibility of either static or dynamic migration from a private cloud infrastructure to public clouds: the user now not only has the ability to package his VM for a specific provider, but, on the contrary, can package a whole set of VMs together with their specific hypervisor. The idea of such a migration is illustrated in figure 2.4. Here the private L0 hypervisor is transformed into the guest L1 hypervisor in recursive virtualization on top of the public cloud.
Nested virtualization is not a new concept but one that has been implemented for some
time in the IBM z/VM hypervisor. The IBM z/VM operating system is itself a hypervisor
that virtualizes not just processors and memory but also storage, networking hardware
assists, and other resources. The z/VM hypervisor represents the first implementation of
practical nested virtualization with hardware assists for performance. Furthermore, the z/VM
hypervisor supports any depth of nesting of VMs (with additional overhead, of course) [17].
More recently, x86 platforms have been driven toward virtualization assists based on the
growing usage models for the technique. The first hypervisor for commodity hardware to
implement nested virtualization was KVM. This addition to KVM was performed under
IBM's Turtles project [18] and permitted multiple unmodified hypervisors to run on top of
KVM (itself a hypervisor implemented as an extension of the Linux kernel). The Turtles project was motivated
in part by a desire to use commodity hardware in a way that IBM pioneered for the IBM System
p and System z operating systems. In this model, the server runs an embedded hypervisor
and allows the user to run the hypervisor of his or her choice on top of it. The approach
has gained interest from the virtualization community, as the capabilities (modifications to
KVM) are now part of the mainline Linux kernel. Official support for nested virtualization
was first included in Linux kernel version 3.1, which was released on 24 October 2011.
Figure 2.4: Guest hypervisor and host hypervisors in the nested cloud
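On Linux/KVM hosts, whether nested virtualization is actually enabled can be checked through the `nested` module parameter exposed under `/sys/module/kvm_intel` (or `kvm_amd`). A minimal hedged sketch; the helper names are illustrative, not part of any standard tool:

```python
# Check whether the KVM kernel module reports nested virtualization
# support; the sysfs parameter holds "Y"/"N" or "1"/"0" depending on
# the kernel version.

def nested_enabled(param_value: str) -> bool:
    """Interpret the contents of the 'nested' module parameter file."""
    return param_value.strip() in ("Y", "y", "1")

def read_nested_flag(path="/sys/module/kvm_intel/parameters/nested"):
    """Read the flag on a live host; returns None when KVM is not loaded."""
    try:
        with open(path) as f:
            return nested_enabled(f.read())
    except OSError:
        return None
```

On an AMD host the path would use `kvm_amd` instead of `kvm_intel`.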
2.3 Cloud Stacks
Together with the expansion of virtualization and IaaS, the idea of private clouds emerged. A
private cloud is a software infrastructure that enables end-users to acquire, configure, and
ultimately release data center resources on demand, using automated self-service tools and
software services within an enterprise's data center [19]. An analogy with web e-commerce
can help in understanding how a private cloud functions. A private cloud can be
viewed as a service venue that allows end-users to search for compute infrastructure that
fits their specific needs (products), to acquire that infrastructure, and, when it is no longer
needed, to release it back to the IT organization. A private cloud must be scalable in order
to serve multiple simultaneous consumer requests, and automated in order to execute each
user's request directly and with minimal administration effort.
The "product" in this case is typically a virtual machine that has either been preconfigured
with a specific set of software applications, or can be customized by the end-user directly once
the acquisition transaction is complete. To establish such a service, different cloud platforms
evolved, including OpenStack, Eucalyptus, Nimbus and OpenNebula. A short discussion
of these frameworks is provided in the following subsections.
2.3.1 OpenStack
OpenStack [5] is a collection of open source components to deliver public and private
clouds. These components currently include four main projects: OpenStack Compute
(Nova), OpenStack Networking (Quantum), OpenStack Object Storage (Swift) and Open-
Stack Image Service (Glance). In addition there are OpenStack Block Storage (Cinder) and
OpenStack Dashboard (Horizon).
Nova is designed to provision and manage large networks of virtual machines, creating a
redundant and scalable cloud computing platform. It is the computing fabric controller for
the OpenStack cloud. It has six components: Nova-API, Message Queue, Nova-Compute,
Nova-Network, Nova-Volume and Nova-Scheduler [20]. Nova supports the entire life
cycle of instances in the cloud, so it can be viewed as a management platform that controls
compute resources, networking, authorization and scalability. The architecture components
follow a shared-nothing and messaging-based approach, which means that each component
or group of components can be installed on any server. The queue server sits in the middle of
the architecture, enabling the cloud controller to communicate with all components via AMQP.
All communication follows an asynchronous, eventually consistent style of message transport:
a callback gets triggered once a response is received, with the benefit that none of the users'
actions get stuck for long in a waiting state.
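The callback style described above can be illustrated with a minimal sketch (not Nova's actual code): a caller publishes a request onto a queue together with a callback, returns immediately, and the callback fires when the "remote" response arrives.

```python
# Minimal illustration of asynchronous, callback-based messaging in the
# spirit of Nova's AMQP communication. All names are illustrative; a
# real deployment would use an AMQP broker instead of an in-process queue.
import queue
import threading

class MiniRpc:
    def __init__(self):
        self._q = queue.Queue()
        self._callbacks = {}
        self._next_id = 0
        # Stand-in for a remote service consuming from the queue.
        threading.Thread(target=self._worker, daemon=True).start()

    def call_async(self, method, arg, callback):
        """Publish a request and return immediately; 'callback' is
        triggered once the response message is received."""
        self._next_id += 1
        self._callbacks[self._next_id] = callback
        self._q.put((self._next_id, method, arg))
        return self._next_id

    def _worker(self):
        while True:
            msg_id, method, arg = self._q.get()
            result = method(arg)                  # "remote" execution
            self._callbacks.pop(msg_id)(result)   # deliver the response
```

The caller never blocks on the result; it simply registers what should happen once the response message comes back.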
Swift is a scalable object storage system comprising a Proxy Server, an Object Server, an
Account Server, a Container Server and the Ring. It is a long-term storage system for
more permanent, static data that can be retrieved, leveraged and updated. Its main
functions and features are: secure storage of objects in large numbers and of large size, data redundancy,
archival capabilities and media streaming. Glance provides discovery, registration, and
delivery services for virtual disk images.
2.3.2 Eucalyptus
Eucalyptus [4] promises the creation of on-premise private clouds, with no requirement
to retool the organization's existing IT infrastructure or to introduce specialized
hardware. Eucalyptus implements an IaaS (Infrastructure as a Service) private cloud that
is accessible via an API compatible with Amazon EC2 and Amazon S3.
The architecture of the system is simple, flexible and modular with a hierarchical design
reflecting common resource environments found in many academic settings [21]. Users are
able to start, control, access, and terminate entire virtual machines with the help of an
emulation of Amazon EC2's interfaces. That means that Eucalyptus users who are familiar
with EC2 interact with the system using the exact same tools and interfaces that they use to
interact with Amazon EC2. Currently VMs that run on the Xen, KVM and VMware hypervisors
are supported. Each high-level system component is implemented as a stand-alone Web service.
This has two benefits: first, each Web service exposes a well-defined, language-agnostic API
in the form of a WSDL document containing both the operations that the service can perform
and the input/output data structures. Second, existing Web-service features can be leveraged for
secure communication between components. There are four high-level components, each with
secure communication between components.There are four high-level components,each with
its own Web-service interface,that comprise a Eucalyptus installation [21]:Node Controller
controls the execution,inspection,and terminating of VM instances on the host where it
runs.Cluster Controller gathers information about and schedules VM execution on specific
node controllers,as well as manages virtual instance network.Storage Controller (Walrus) is
a put/get storage service that implements Amazon’s S3 interface,providing a mechanism for
storing and accessing virtual machine images and user data.Cloud Controller is the entry-
point into the cloud for users and administrators.It queries node managers for information
about resources,makes high-level scheduling decisions,and implements them by making
requests to cluster controllers.
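The hierarchical dispatch described above can be sketched in a few lines: the Cloud Controller asks Cluster Controllers for candidate nodes and places the instance greedily. The classes and the greedy policy are illustrative only, not the real Eucalyptus API.

```python
# Hypothetical sketch of Eucalyptus' hierarchical scheduling path:
# Cloud Controller -> Cluster Controllers -> Node Controllers.

class NodeController:
    def __init__(self, name, free_cores):
        self.name, self.free_cores = name, free_cores

class ClusterController:
    def __init__(self, nodes):
        self.nodes = nodes

    def candidates(self, cores_needed):
        """Report nodes in this cluster able to host the instance."""
        return [n for n in self.nodes if n.free_cores >= cores_needed]

class CloudController:
    def __init__(self, clusters):
        self.clusters = clusters

    def schedule(self, cores_needed):
        """High-level decision: pick the first fitting node, searching
        cluster by cluster, and reserve the capacity on it."""
        for cc in self.clusters:
            for node in cc.candidates(cores_needed):
                node.free_cores -= cores_needed
                return node.name
        return None  # no capacity anywhere
```

A real installation would make these calls over the components' Web-service interfaces rather than in-process.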
2.3.3 Nimbus
The Nimbus project [7] is working on two products that they term “Nimbus Infrastructure”
and “Nimbus Platform”.
The Nimbus project defines the Nimbus Infrastructure to be “an open source EC2/S3-
compatible IaaS implementation specifically targeting features of interest to the scientific
community such as support for proxy credentials, batch schedulers, best-effort allocations
and others”. To support this mission, Nimbus provides its own implementation of a
storage cloud compatible with S3 and enhanced by quota management, EC2-compatible
cloud services, and a convenient cloud client which internally uses WSRF.
The Nimbus Platform aims to provide additional tools to simplify the management
of the infrastructure services and to facilitate integration with other existing clouds
(OpenStack and Amazon). Currently, this includes the following tools: cloudinit.d, which
coordinates launching, controlling, and monitoring cloud applications, and a context broker service,
which coordinates large virtual cluster launches automatically and repeatably.
2.3.4 OpenNebula
OpenNebula [6] is an open source IaaS toolkit. Its design is flexible and modular to allow
integration with different storage and network infrastructure configurations, and hypervisor
technologies. It can deal with changing resource needs, resource additions, live migration,
snapshotting, and failure of physical resources [22].
OpenNebula's architecture is flexible and modular, which makes it easier than Open-
Stack to integrate multiple storage systems, network infrastructures and hypervisor technologies.
The OpenNebula architecture includes three layers: Drivers, Core and Tools [20].
The Drivers layer contains many important components of OpenNebula. It communicates
directly with the underlying operating system and encapsulates the underlying infrastructure
as an abstract service. Its functions are responsible for creating, starting up and shutting
down virtual machines (VMs), allocating storage for VMs and monitoring the operational
status of physical machines and VMs.
The Core layer is a centralized layer that manages the full life cycle of virtual machines.
This includes dynamically setting up a virtual network to allocate an IP address
for a VM, which makes the networking solution transparent to users, and managing a VM's
storage allocation, such as its disks.
The Tools layer provides interfaces such as a command line interface (CLI), a browser
interface and the libvirt API through which users can manage VMs.
A scheduler manages the functionality provided by the Core layer. External users are able
to share these functionalities through a cloud interface which is provided by the Tools layer.
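OpenNebula's native XML/RPC interface can be reached with Python's standard library; its documented methods (e.g. `one.vm.info`) take a session string of the form `user:password` as their first argument. A hedged sketch, where the endpoint and credentials are placeholders and no network call is made:

```python
# Build an OpenNebula XML-RPC session string and a lazy server proxy.
# ServerProxy construction does not contact the server; an actual call
# such as proxy.one.vm.info(session, vm_id) would require a running
# OpenNebula frontend at the endpoint.
import xmlrpc.client

def make_session(user: str, password: str) -> str:
    """OpenNebula expects 'user:password' as the session argument."""
    return f"{user}:{password}"

def make_proxy(endpoint: str = "http://localhost:2633/RPC2"):
    """Default port 2633 is OpenNebula's usual XML-RPC endpoint."""
    return xmlrpc.client.ServerProxy(endpoint)
```

Against a live frontend, querying a VM would then look like `make_proxy().one.vm.info(make_session("oneadmin", "..."), 42)`.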
The following table 2.2 contains a summary of cloud stacks characteristics:
Table 2.2: Cloud Stacks characteristics

OpenStack:
• Interfaces: EC2 and S3, REST
• Networking: flat modes; bridges are created; IP forwarding for public IPs, VMs only have private IPs
• Installation: software is composed of components that can be placed on different machines; compute nodes need their own installation
• Storage: Swift (http/s); Unix filesystem
• Release cycle: < 4 months
• License: Apache 2.0

Eucalyptus:
• Hypervisors: Xen, KVM and VMware vSphere (VMware only in the enterprise edition)
• Interfaces: EC2 and S3, REST
• Networking: four modes: (a) managed, (b) managed-novlan, (c) system, (d) static; in (a) and (b) bridges are created; IP forwarding for public IPs, VMs only have private IPs
• Installation: software is composed of components that can be placed on different machines; compute nodes need their own installation
• Storage: Walrus (http/s); Unix filesystem
• Release cycle: > 4 months
• License: open source

Nimbus:
• Hypervisors: KVM and Xen
• Interfaces: EC2 and S3, REST
• Networking: IPs assigned using a DHCP server that can be configured in two ways; bridges must exist on the compute nodes
• Installation: software is installed on the frontend and the compute nodes
• Storage: Unix filesystem
• Release cycle: < 4 months
• License: Apache 2.0

OpenNebula:
• Interfaces: native XML/RPC, EC2 and S3, OCCI, REST interface
• Networking: networks can be defined to support ebtables, Open vSwitch and 802.1Q tagging; bridges must exist on the compute nodes; IPs are set up inside the VM
• Installation: software is installed on the frontend
• Storage: Unix filesystem or LVM with CoW
• Release cycle: > 6 months
• License: Apache 2.0
2.4 Related Work
The idea of IaaS, as already mentioned, is to provide computing resources on demand.
The typical provisioning model follows the "pay as you go" approach, which means that
one can start with a small amount of resources and increase the amount gradually as
needs grow. This may seem promising, but it usually leads to overprovisioning of resources
due to certain limitations, which results in higher costs for the end customer.
Overprovisioning may occur due to resource granularity. In most IaaS clouds, clients
rent resources as a fixed bundle of compute, memory, and I/O resources. Amazon calls
these bundles "instance types" and GoGrid and Rackspace call them "server sizes" [8]. These
abstractions help users to find a classic server equivalent and understand how much capacity
and performance they actually rent. But the workload is never known in advance and is
generally not evenly distributed, so a user who started with small instance types while
the workload was small may need larger instances during peak hours. For the rest of
the time these larger instances may run nearly idle, because the user wants to be on the safe
side without having to change his infrastructure right during peak hours.
As mentioned in [23], three types of limitations exist: spatial (reservations in
fixed slices of resources, e.g. per gigabyte or per gigahertz), temporal (billing
by a fixed time interval, e.g. per hour) and structural (a flat strategy of running
VMs, i.e. the user cannot use a rented VM as a host machine and run several VMs inside of it, due
to the lack of nested virtualization support on the side of the IaaS provider).
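The spatial and temporal limitations can be made concrete with a small cost model: resources are billed in fixed instance sizes and in whole hours, so the bill can far exceed actual consumption. A hedged sketch with a made-up instance catalogue and prices:

```python
# Illustrative cost model for spatial and temporal granularity.
# Instance sizes and hourly prices are invented for the example.
import math

INSTANCE_TYPES = {"small": (1, 0.05), "medium": (2, 0.10), "large": (4, 0.20)}

def billed_cost(peak_cores, minutes_used):
    """Spatial limit: must rent the smallest bundle covering peak demand.
    Temporal limit: billing is rounded up to whole hours.
    Returns (rented cores, billed cost)."""
    fitting = [(c, p) for c, p in INSTANCE_TYPES.values() if c >= peak_cores]
    if not fitting:
        raise ValueError("demand exceeds the largest instance type")
    cores, price = min(fitting)
    hours = math.ceil(minutes_used / 60)
    return cores, hours * price
```

A workload peaking at 3 cores for 70 minutes forces a 4-core instance for 2 full hours, so one core-hour of demand is billed as several.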
The xClouds project [24] argues for flexibility from the user's point of view, such as live migration
between different IaaS providers. The authors described three possible designs for extensible
clouds: extensions downloaded into the VM, hardware exposed through the hypervisor, and
a middleware VM. The first two designs present the concept of a mutable hypervisor, a VMM
of which some part is customizable by the user. The third design makes use of the nested virtualization
approach. It was shown that nested virtualization is feasible with reasonable overhead. The
whole system was tested on top of Amazon EC2 with Xen acting as the level-1 VMM.
However, no management tools, such as brokers or resource markets, were proposed in this work.
The Xen-Blanket project [25] is a further development of the ideas behind xClouds, as it was
created by the same authors. They introduce a thin, immediately deployable virtualization
layer, called the Xen-Blanket, that can homogenize today's diverse cloud infrastructures.
It is shown that a user-centric approach to homogenizing clouds can achieve performance
similar to a paravirtualized environment while enabling previously impossible tasks such as cross-
provider live migration. The Xen-Blanket consists of a second-layer hypervisor that runs as
a guest inside a VM instance on a variety of public or private clouds, forming a Blanket
layer. The Blanket layer exposes a homogeneous interface to second-layer guest VMs, called
Blanket guests, but is completely user-centric and customizable. To achieve this, provider-
specific Blanket drivers were developed for both KVM- and Xen-based systems. It is shown
that performance with nested virtualization remains at a reasonable level. Tests and
migration were performed both on private servers and on Amazon EC2. However, this work
remains at the level of virtualization and does not suggest any broker or market integration.
In [26] a utility-based resource allocation policy with QoS constraints in virtualized environments
is proposed. Utility is defined as the optimal trade-off between the obtained performance
and the cost paid. The aim was to create a strategy that is simultaneously beneficial
from both points of view: maximizing utility and optimizing resource allocation. A suitable
model, algorithm and utility function were proposed. However, this research was tested only
on-premises and focuses only on the CPU resource, which was varied with the help of the credit
scheduler in Xen. Public cloud providers, where users have no access to the underlying hypervisor
configuration, and resource granularity are not taken into account.
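The idea of a utility-driven trade-off can be sketched generically: a utility function subtracts cost from a performance benefit with diminishing returns, and the allocation maximizing it is chosen. The concrete functions below are illustrative, not the model from [26]:

```python
# Generic sketch of utility-based allocation: utility = performance - cost,
# maximized over candidate CPU shares. Functions and prices are invented.

def utility(cpu_share, price_per_share=0.3):
    performance = 1 - 1 / (1 + cpu_share)   # diminishing returns
    cost = price_per_share * cpu_share
    return performance - cost

def best_allocation(candidate_shares):
    """Pick the candidate CPU share with maximal utility."""
    return max(candidate_shares, key=utility)
```

With these example curves, a mid-range share beats the maximal one: paying for the full CPU yields less additional performance than it costs.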
The virtual hypervisor concept was developed in [27]. The project proposes to enhance the cloud
model to give solution managers much better control over the resource allocation
for their specific VMs, while maintaining the separation of the management of the cloud's
physical infrastructure from the management of the solutions hosted on the cloud. Thus
the solution managers can concentrate on optimizing their well-known environments and the
cloud management can focus on making the most efficient use of the physical resources. To
achieve this separation, the concept of a Virtual Hypervisor (VH) is introduced, which interacts
with the solutions in a manner very similar to a real hypervisor and provides an abstraction
allowing the solution to manage the sharing of its resources and to decide how resource contention
among its VMs should be resolved. However, the VH is presented only as a model and only
simulations in Matlab were performed.
The research on nested virtualization called NestCloud [28] presents a practical, high-
performance three-level nested virtualization architecture which fully employs the hardware
virtualization extensions. KVM was chosen as the hypervisor. In NestCloud, no
modification of the L1 VMM or the L2 guest OS is needed, but some modifications can be made
to achieve a performance boost. The evaluation demonstrated that the implementation
of three-level virtualization introduced 5.22% overhead on CPU and 5.69% overhead on
memory, which may be considered close to conventional (single-level) setups.
The authors of [29] propose an approach for repurposing unused computational
resources in IaaS clouds. They developed BaTS, a scheduler that estimates possible budget
and makespan combinations using a tiny task sample, and then executes a bag of tasks within the
user's budget constraints. The idea is to replicate an already running task to an idle machine
in the tail phase, reducing the makespan and improving the tolerance to outlier tasks. Such a
scheduler could be incorporated to follow utility computing concepts, as opposed to
reselling unused capacity.
2.5 Summary
This chapter briefly introduced cloud computing. The main focus was given to its lower layer,
IaaS, and its building blocks: virtualization technology and hypervisors. The approach of nested
virtualization has been considered as suitable for provider independence and unused resource
reselling.
Then cloud stacks were described. Cloud stacks are software solutions that enable
building and managing private clouds on top of one's own hardware or on top of existing cloud
providers. OpenStack, Eucalyptus, Nimbus and OpenNebula were compared as the most
popular frameworks.
The idea of a cloud resource spot market was also introduced. Such a market would enable cloud
users to become providers themselves by giving them an opportunity to resell unused computa-
tional resources.
The related work overview concluded this chapter. The research on nested virtualization
showed that the overhead is reasonable, so this approach is feasible and can be used in the imple-
mentation within this work. Several papers described improvements to virtualization
technology as a solution to the limitations of IaaS providers. Others proposed different resource
allocation or management policies. However, coarse resource granularity was not taken into
account, and the proposed improvements often required cooperation with providers, which could be
a complex issue in reality.
Chapter 3
Analysis of the SPACEflight/HVCRB
This chapter provides an analysis of the system which is used as a basis for the development
in this thesis. The chapter is structured as follows: section 3.1 presents the SPACE
platform and describes service provisioning and management in general; in section 3.2
the cloud computing extension is discussed; section 3.3 focuses on running nested virtual machines
and outlines a baseline for this thesis; finally, section 3.4 concludes the chapter.
3.1 SPACE and SPACEflight
SPACE (Service Platform Architecture for Contracting and Execution) is a free, user-
centric, modular, powerful integrated set of platform services for the provisioning, configu-
ration, sharing, management and delivery of services in a broad sense (web, human, cloud,
XaaS). It contains both service management functionality (semantic registry, service discov-
ery and configuration, contract management, service deployment, monitoring visualization)
and heterogeneous service hosting functionality (deployment, authentication, execution,
monitoring and adaptation).
The SPACE ecosystem is designed to carry out the whole lifecycle of a service: services may be
traded, brokered and hosted all on the same platform. SPACE is structured in a component-
oriented way, consisting of loosely coupled platform core services. Each of these core services
has its own configuration, database and web frontend. SPACE makes as few assumptions as
possible about the nature of a service. The service which is being published, traded or hosted
may be described in different formats: a USDL-described human service, a WSDL-described
web service, a UISDL-described user interface service or a WADL-described infrastructure
service. Service implementations, if present, are also treated in a uniform way,
whether a Ruby servicelet, a Java servlet, a BPEL process, a PHP script or a virtual machine-alike.
The overall architecture of SPACE is shown in figure 3.1.
Figure 3.1:SPACE platform architecture
The Provider Wizard serves as the access point for providers on the SPACE platform and
registers services. The deployment mechanism then brings these registered services onto
the platform. To be able to use services, consumers search for them via the semantic
registry/discovery tool ConQo, which matches registered services against user
requirements based on the service description and the user's goal. After that, consumers may
configure selected services and enter a contract. The contract is generated by the Contract
Wizard and enables the negotiation of SLA and configuration parameters. After completing the
contract phase, the actual usage of the service may be carried out through the interactive
service frontend (user access) or via its invocation endpoint (programmatic access).
During the execution stage, in the hosting part of SPACE (Puq unified hosting), access
permissions to each service (which are granted during the contract phase) are first checked
by the authentication and authorization access control gate. Every service execution is moni-
tored by GrandSlam, which collects data that can later be used for analysis, service
adaptation and visualization. Consumers have the possibility to rate services. All ratings and
monitoring results can influence the service offers for the next cycle.
SPACEflight is a live demonstrator for the SPACE platform, which runs a GNU/Linux
OS with an Internet of Services desktop, the integrated SPACE service platform with a number of
execution containers, a cloud hosting extension and a number of development and provision-
ing tools. Additionally, it contains system-integrated and preconfigured SPACE components
in the form of packages which constitute the base of a service marketplace. It also features
several scenario services with full web service descriptions and corresponding rich clients as
service frontends, for Rusco and other implementations. A special service-integrated desktop
environment and a cloud computing environment with more complex scenario services, includ-
ing value-added services and resource service slices, are also present. A fully featured toolset
for service engineering and provisioning is included in SPACEflight. The system can
be extended with additional hosting technologies and cloud service options. SPACEflight
does not require any Internet connection for operation; however, if a connection is present,
additional features such as distributed grid computing integration become available.
SPACEflight has two aims: on the one hand, it allows desktop users to experi-
ence the power of the Internet of Services from a service provider or consumer perspective,
or from a top-down view on a complete service ecosystem. On the other hand, it can be used
as a fully functional, ready-to-operate server system for service hosting with rapid replication
enabled. A system set up in such a distributed way, linked to a central
marketplace SPACEflight instance, is useful for running experiments and simulations.
3.2 Cloud Computing Extension – SPACE-Cloud
The cloud computing extension to the SPACE service platform provides facilities for SLA-
driven hosting of complex services, delivered as virtual machines and hosted in a Eucalyptus-
based cloud environment. SLAs determine the initial configuration and subsequent reconfig-
uration through adaptation mechanisms such as re-allocation, migration etc. This extension
also refines the provider-consumer model into an image provider / instance provider /
consumer model.
Figure 3.2:SPACE-Cloud architecture and processes
A service is usually considered to be a package consisting of a description and executable
parts. In this scope, virtual machines can also be viewed as service packages, only bigger.
Such VM service packages are very useful for complex services that may need a special execu-
tion environment, such as their own specific operating system, a different kernel version, their own
database, custom tools, languages and dependencies, etc. Therefore, the cloud computing
extension extends the Unified Hosting part of SPACE with virtual machine detection, infor-
mation extraction and deployment. Furthermore, it extends the Contract Wizard application
with a configurator which is driven by SLA/configuration settings. It right-sizes the virtual
machine instance and configures aspects of the machine inside on startup.
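The right-sizing step can be sketched as picking the smallest instance bundle that satisfies the contracted SLA parameters. The catalogue, bundle names and parameter keys below are hypothetical, not the actual SPACE-Cloud configuration:

```python
# Hypothetical right-sizing helper: choose the smallest bundle that
# satisfies the SLA/configuration parameters agreed in the contract.
BUNDLES = [  # (name, cores, ram_mb), ordered smallest to largest
    ("m1.small", 1, 1024),
    ("m1.medium", 2, 2048),
    ("m1.large", 4, 4096),
]

def right_size(sla):
    """sla: dict with the contracted minima 'cores' and 'ram_mb'."""
    for name, cores, ram in BUNDLES:
        if cores >= sla["cores"] and ram >= sla["ram_mb"]:
            return name
    raise ValueError("no bundle satisfies the SLA")
```

Because both dimensions must be covered, a contract asking for little CPU but much memory still ends up on a large bundle, illustrating the granularity problem discussed in chapter 2.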
Figure 3.2 shows the architecture of the cloud computing extension together with the workflows
from the instance provider and consumer sides. It shows two parts of the system: the
Eucalyptus execution environment on the left side and the SPACE-Cloud platform on the right
side. A service provider of a complex service solution comes to the platform and publishes the
VM service and its image via the Provider Wizard. The VM is handled by the Eucalyptus cloud
stack, and information about the service is registered at the service registry and the contract manager.
A service consumer purchases a complex service packaged as a VM via the Contract
Wizard by defining SLA parameters together with operational and utilization parameters.
After the contract is established, the service gets instantiated: the instance-specific parameters
for correct sizing of the VM are passed to the node controller and the SLA/configuration
parameters are passed to the configurator. When deployment and instantiation are finished,
the final users are able to access and use the service.
The described system has the following strengths and weaknesses:
• Powerful existing underlying SPACE platform for service execution. A ready-to-use server platform which has core components for service provisioning.
• Integrated development environment. Contains a full-featured toolset for the development of service packages, from description to implementation.
• Quick start. Allows desktop users to experience the power of the Internet of Services from a service provider or consumer perspective.
• Unified treatment of services. The platform allows the provisioning of computing resources as conventional services.
• Very basic IaaS handling. It is only possible to start, stop and terminate VMs; there is no dynamic management.
• Classic IaaS limitations still present. No possibility to sub-allocate unused resources to other users.
• Lack of nesting support. Currently there is no possibility for recursive virtualization.
• No marketplace support for computing resources. The current implementation has no support for trading computational resources, only cloud storage.
• Still coarse granularity of resources. The platform makes use of the Eucalyptus cloud stack, which follows the same concept of instance sizes as Amazon EC2.
• No resource slicing. Inability to sub-allocate parts of resources for other needs or reselling.
3.3 Highly-Virtualizing Cloud Resource Broker
A Highly-Virtualizing Cloud Resource Broker (HVCRB) is a mediator system between cloud
computing infrastructure providers which offer compute and storage resources (IaaS) and
clients who have a need for short-term or best-effort resource capacity. However, as outlined
in the papers [23, 30] which propose the system's architecture, the described system is mostly a
concept and only a test environment with basic prototypes was established. According to the
aforementioned papers, the system consists, on the top level, of a market (spot market)
for resources and a Nested Cloud base virtual machine which delivers certain cloud manage-
ment functionality inside the VM. As such, it uses recursive virtualization technologies in
combination with elastic vertical scaling (i.e. dynamic resource redimensioning), which not
only allows clients to use their resources better by being able to sub-sell or sub-dedicate
parts of them, but also projects existing virtualized infrastructures better into single public
cloud allocations.
Figure 3.3 illustrates the conceptual architecture of the HVCRB system.
Figure 3.3: Integration of HVCRB into SPACE
The figure shows the dual nature of the core platform SPACEflight, which is used as
the basis for the HVCRB system. SPACEflight can be booted in two modes: normal
boot or customized boot.
In normal boot mode it boots a Debian "squeeze" operating system with Linux kernel
version 2.6 and the old KVM hypervisor version 0.12. In this mode all the conventional service
trading, provisioning and hosting mechanisms are enabled on the platform. However, due to the
lack of nested virtualization support in the old kernel, the cloud extension is not able to run
recursion-enabled virtual machines.
In the customized boot mode it also runs Debian "squeeze"; however, the kernel is a specially
built, maximally minimized custom Linux kernel version 3.6, and a newer KVM hypervisor
version 1.2 is included. In this case the platform is equipped with the latest kernel modules
for nested virtualization support and is able to run the NestedCloud base VM on top of the
Eucalyptus cloud stack. The base VM is built upon a custom Debian "wheezy" distribution
and includes an even more minimized kernel, so the base VM is designed to be as thin as possible
to mitigate the overhead of nested virtualization.
The current HVCRB implementation contains the cloud resource market prototype called
"SMCRS SpotMarket", which was developed as part of the thesis work [31]. It provides
users with services such as registration, service publishing, service review and rating, and service
discovery. However, this SpotMarket is capable of trading only cloud storage resources in the form of
services, so it lacks support for cloud computing resources. In addition, as it is only a proto-
type, it treats services only as abstract entities and no mechanisms for real
resource allocation and management are included. This means that this market implementation
needs to be extended with the possibility of trading computational services, with an interface
for managing the execution of purchased services, and with an underlying broker mechanism that
can perform real operations on real resources: allocating resources of the proper size
according to the service description, starting and managing instances for cloud computing
services, rescaling VM instances on the fly, enabling the slicing of purchased resources and
reselling unused capacity as cloud services on the same market, etc.
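The slicing and reselling mechanism the market lacks can be sketched as bookkeeping over a purchased allocation: slices handed to nested VMs are subtracted, and the idle remainder is what could be offered back on the spot market. All names below are illustrative:

```python
# Illustrative resource-slicing bookkeeping for a purchased instance.
class PurchasedInstance:
    def __init__(self, cores, ram_mb):
        self.cores, self.ram_mb = cores, ram_mb
        self.slices = []  # (cores, ram_mb) handed to nested VMs

    def allocate_slice(self, cores, ram_mb):
        """Reserve a slice for a nested VM, refusing oversubscription."""
        free_c, free_r = self.unused()
        if cores > free_c or ram_mb > free_r:
            raise ValueError("slice exceeds unused capacity")
        self.slices.append((cores, ram_mb))

    def unused(self):
        """Capacity that could be resold on the spot market."""
        used_c = sum(c for c, _ in self.slices)
        used_r = sum(r for _, r in self.slices)
        return self.cores - used_c, self.ram_mb - used_r
```

A broker built on such bookkeeping could then publish the `unused()` remainder as a new offer on the marketplace.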
3.4 Summary
This chapter presented the SPACE (Service Platform Architecture for Contracting and
Execution) platform and the demonstrator system SPACEflight, which serves as both a
development environment and a ready-to-use server installation. The service provisioning
process on the SPACE platform was described, then the cloud computing extension was analyzed
together with the workflow for trading cloud computing services and the system's strengths and
weaknesses. Afterwards, the initial state of the HVCRB system was introduced, giving a baseline
for this work.
The following table 3.1 summarizes the state of the HVCRB system at the time
this thesis was started and outlines what needs to be improved. It does not take into
account the SPACE platform core services and components, but focuses on the HVCRB-related
parts.
Table 3.1: HVCRB system baseline state

Marketplace: a web application that acts as the interface for users of the HVCRB system and enables cloud service provisioning. What needs to be done:
• enable trading of cloud computational resources
• give the possibility to manage service executions
• real resource allocation

Base VM: a virtual machine that will enable users to run nested virtual machines for unused resource slicing. What needs to be done:
• the KVM hypervisor needs an upgrade to the latest version with the most recent capabilities
• a mechanism for managing nested VMs should be included
• the set of packages inside the image should be revised to ensure that the VM is minimized and introduces as small an overhead as possible

Scheduler and task manager: the centralized dispatcher and controller for nested VMs. This component is not present and needs to be implemented from scratch. Its tasks:
• track changes on the marketplace
• perform scheduling of service executions
• issue proper tasks for Nested VM agents

Nested VM agent: the worker daemon that resides on the provider side and performs management tasks received from the task manager, e.g. dynamic resource allocation. This component is not present and needs to be implemented from scratch. Its tasks:
• gather system state and send it to the central controller
• execute tasks received from the controller, e.g. VM rescaling
Chapter 4
This chapter focuses on designing a solution that will make it possible to resell unused resources, together with the underlying management software that will, in the background, perform the tasks of proper resource allocation by starting and stopping nested virtual machines of corresponding capacity according to defined QoS parameters. The proposed system design should tackle the previously discussed limitations of cloud computation and meet the goals of this work.
The chapter is structured as follows: first, the requirements are stated in both their functional and non-functional aspects. Then the use cases and the user interaction from the provider and consumer perspectives are discussed. Afterwards the system architecture is presented, showing the system's components and their interaction. The chapter ends with a description of the market and backend mechanisms and a summary.
4.1 Requirements
When it comes to the requirements that are supposed to be fulfilled, it is important to distinguish between functional requirements (what the system ought to do) and non-functional requirements (what the system should be). This section describes both types of requirements that should be taken into consideration during the system design phase.
4.1.1 Functional Requirements
Web application for resource market
An interface for users needs to be created in the HVCRB system. It should be implemented as a web application for the resource market and give both consumers and providers the opportunity to register in order to buy and sell computing resources. During registration on the marketplace, users need to provide their personal data (password, contact information, etc.). After registration each user receives the status of consumer, which means that he is able to search for resources, choose between provider offers and carry out purchases.
Provider profile
In the HVCRB system each user can play the roles of consumer and provider simultaneously, meaning that in one scenario a user acts as a consumer, while in another he may act as a provider. To be able to sell unused computing capacity, a user needs to become a provider, which means that he has to fill out additional information in his profile, such as a website address and QoS information (availability, throughput and response time parameters). After completing his profile, the user receives the status of provider and therefore has the possibility to manage resources: publish, update and delete services.
Service rating and reviewing
Each consumer should have the opportunity to give his opinion on a service. This is carried out by posting reviews and giving a mark on a scale from 1 to 5. For each service an average rating is then calculated, which may be helpful for other consumers when they make their purchases.
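To make this requirement concrete, the rating calculation can be sketched in a few lines of Python; the class and method names below are illustrative assumptions, not part of any existing HVCRB code:

```python
class ServiceRating:
    """Collects 1-5 marks and review texts for a single service."""

    def __init__(self):
        self.reviews = []  # list of (mark, text) tuples

    def add_review(self, mark, text=""):
        # enforce the 1-to-5 grading scale described above
        if not 1 <= mark <= 5:
            raise ValueError("mark must be between 1 and 5")
        self.reviews.append((mark, text))

    def average(self):
        """Average mark shown on the service detail page; None if unrated."""
        if not self.reviews:
            return None
        return sum(mark for mark, _ in self.reviews) / len(self.reviews)
```

A service with marks 4 and 5 would thus display an average rating of 4.5, while an unrated service shows no rating at all rather than a misleading zero.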
Dynamic rescaling and controlling
The marketplace web application should also have an interface for service execution management, so that providers as well as consumers have a mechanism to view the current status of their instances and to stop, start or terminate their running service instances. It should also be possible to dynamically change the capacity parameters of already reserved and running services: storage capacity in the case of a storage service; CPU capacity, available RAM and storage in the case of a computing service.
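A minimal bookkeeping sketch of these control and rescaling operations is given below; all names are hypothetical, and a real implementation would additionally apply the new parameters through the hypervisor rather than only recording them:

```python
class ServiceInstance:
    """Bookkeeping model of a running service execution (sketch only)."""

    def __init__(self, cpus, ram_mb, storage_gb):
        self.state = "running"
        self.cpus, self.ram_mb, self.storage_gb = cpus, ram_mb, storage_gb

    def stop(self):
        self.state = "stopped"

    def start(self):
        self.state = "running"

    def rescale(self, cpus=None, ram_mb=None, storage_gb=None):
        """Dynamically change capacity parameters of a running instance."""
        if self.state != "running":
            raise RuntimeError("only running instances can be rescaled on-the-fly")
        if cpus is not None:
            self.cpus = cpus
        if ram_mb is not None:
            self.ram_mb = ram_mb
        if storage_gb is not None:
            self.storage_gb = storage_gb
```

The guard in `rescale` reflects the requirement that on-the-fly changes apply to running executions; stopped instances would first have to be started again.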
Flexible pricing
Resource providers should have the opportunity to change the price of their services according to the current market situation. Both consumers and providers can benefit from this feature: due to dynamic pricing, providers have the possibility to maximize their income on the one hand and still maintain attractive and competitive prices on the other. Consumers in turn can choose the most suitable service based on the capacity/price ratio. The price fluctuation should be visible in the market interface in the form of graphs and price trend designations; therefore the price history of each service should be stored. Additionally the pay-per-use approach should be maintained, so the service execution may be terminated by the user at any time, and he will be charged only for what was actually used.
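The stored price history and the trend designation can be sketched as follows; the class and attribute names are illustrative assumptions only:

```python
class PriceHistory:
    """Records every price change of a service so the market can plot the
    fluctuation graph and show a trend mark next to the current price."""

    def __init__(self, initial_price):
        self.history = [initial_price]

    def change_price(self, new_price):
        self.history.append(new_price)

    @property
    def current(self):
        return self.history[-1]

    def trend(self):
        """'up', 'down' or 'steady' compared to the previous price."""
        if len(self.history) < 2:
            return "steady"
        prev, cur = self.history[-2], self.history[-1]
        return "up" if cur > prev else "down" if cur < prev else "steady"
```

Because every change is appended rather than overwritten, the full fluctuation graph can always be reconstructed from `history`, and the trend mark only compares the last two entries.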
Resource trading
The marketplace should be capable of trading different resource types, such as cloud storage or cloud computing resources. The resource buying process includes a service level agreement negotiation step, which is carried out via a contract in which the SLA and QoS parameters of the service being traded are adjusted. After that, the actual resource allocation and provisioning starts.
4.1.2 Non-functional Requirements
The non-functional requirements are divided into two groups: primary and secondary. The primary group contains the requirements which are essential to consider during the first prototype implementation. The secondary group collects best-practice advice that is not relevant for the first prototype implementation but is essential for the system in production. Secondary requirements may be left out of the prototype, as they require considerable engineering effort and consume much time without contributing to the functional goals.
The primary non-functional requirements are:
Availability
The system being designed should consider availability as one of its main parameters. This means that the marketplace web application as well as its background logic should themselves be highly available even during peak request times, as they act as a service for the end providers and consumers. The service execution exemplars should also match the negotiated availability parameters; this can be achieved by ensuring the availability of the hosting and execution environments.
Scalability
The system should be able to handle a growing amount of requests and traffic as well as wide workload fluctuations in order to remain performant. In addition, it should provide mechanisms for dynamically scaling its service executions up and down.
Platform independence
As one of the main goals is cloud resource trading, these resources should be treated by the system uniformly, independent of their type. Furthermore, the system should be designed in a way that ensures its independence from the underlying operating system, the cloud stack software and the cloud provider. The only requirement might be that the underlying OS should be UNIX-like. Hence no platform-specific tools or components should be used. This enables heterogeneity and makes cloud migration possible.
While designing the system, it is important to plan it in such a way that it is able to cope with and control multiple service executions, which may run on different physical machines, even in different networks and on top of different providers. This requirement is important, as it also helps to achieve the aforementioned availability and scalability requirements.
The secondary non-functional requirements are:
Component crashes and/or communication failures should not impact independent components and should not lead to an unrecoverable system state.
Commodity hardware
The current trend for running highly loaded systems involves the usage of commodity hardware. Therefore the hardware used in the back-end infrastructure is supposed to consist of conventional personal computers, or of virtual machines equivalent to such computers.
Security
The system may store and operate on sensitive data, such as payment information or personal details. Therefore it should resist basic attacks, such as SQL injection, CSRF, etc. Additionally, all data manipulations should be carried out only by authorized users.
4.2 User Interaction Model
In the HVCRB marketplace a user may have two roles: consumer and provider. Each user who registers at the market automatically receives the status of consumer. After extending his profile with the necessary additional information, the user receives the provider status. Basically this means that the user gets the consumer profile assigned to him right after registration, and the provider profile after submitting the additional information.
Diagram 4.1 shows the logical user model together with its profiles and the distribution of attributes and methods between these components. Attributes represent information which may be specific to the user or to one of his profiles, and methods represent interactions with the market which are specific to each role.
As can be seen from the diagram, some properties and capabilities are common to both consumers and providers, while others are specific to each profile. The user's interactions are discussed below:
 Before accessing any services, users need to register themselves at the marketplace.
 After the registration users may perform login/logout, authenticate/authorize and update their consumer profiles, which are attached automatically.
 Consumers are able to perform service discovery to search for services which suit their needs.
 After finding the most suitable service, consumers purchase this service.
 Consumers may rate the service they have used and leave their opinion in the form of a review.
 After completing the purchase, consumers can operate their service execution: start, stop or terminate it.
 To become a provider, a user fills out provider-specific information and gets the provider profile attached to him.
 Providers are able to publish their unused computational resources at the marketplace to add revenue.
 When a consumer purchases a service, the provider enters the selling phase.
 Providers are able to manage services: change QoS parameters and adjust the price. They also have the possibility to manage service executions from their side: scale up and down, etc.
Figure 4.1: User interaction model
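The two-role model described above can be sketched in Python; this is a deliberately simplified model with hypothetical names, while the real profiles carry more attributes, as diagram 4.1 shows:

```python
class User:
    """User of the HVCRB marketplace. Every registered user receives a
    consumer profile automatically; the provider profile is attached only
    after the provider-specific fields have been filled out."""

    def __init__(self, name, password, contact):
        self.name = name
        self.consumer_profile = {"password": password, "contact": contact}
        self.provider_profile = None  # attached later on demand

    @property
    def is_provider(self):
        return self.provider_profile is not None

    def become_provider(self, website, qos):
        """qos: availability, throughput and response-time parameters."""
        self.provider_profile = {"website": website, "qos": qos}
```

Keeping both profiles on one user object mirrors the requirement that the same person can act as a consumer in one scenario and as a provider in another, without a second account.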
4.3 Architecture
This section focuses on the generalized, coarse-grained system architecture. Each component will be described in more detail in the upcoming sections.
4.3.1 General Architecture
Figure 4.2 depicts the overall architecture of the HVCRB system. It is split into three segments. The first segment represents the users who interact with the system. The second segment shows the system side together with the marketplace, its underlying backend logic and the core SPACE components. The third represents the provider side with the providers that come to the marketplace and publish their resources.
Figure 4.2: Overall system architecture
Users
The users of the HVCRB marketplace may be either consumers of cloud computing resources or providers of these resources. They interact with the market via the web application user interface or via API invocation. After registration each user automatically becomes a consumer and is able to purchase services. After filling out additional information, a user may become a provider in order to publish his unused cloud computational resources and add revenue.
HVCRB Marketplace and Backend
The HVCRB marketplace deals with cloud computational resource trading by providing all the services necessary for purchasing resources on the customer side and publishing resources on the provider side. The HVCRB backend performs the real resource allocation and management. Section 4.3.2 describes the services which the marketplace and the backend provide. The detailed architectures of the marketplace and the backend are described in sections 4.4 and 4.5 respectively.
Providers
Cloud resource providers come to the marketplace to resell their unused computational capacity and add revenue. The resources which may be published are cloud computing and cloud storage resources. In terms of computing resources, these are actually nested virtual machines and can be run on top of public providers, such as Amazon EC2, Rackspace, etc., private clouds or even private computers.
4.3.2 Services Overview
HVCRB Market. Provides the following features:
1. Registration. To be able to access the marketplace and use its services, users need to register and provide some personal information. This is needed for the further identification of users and for access control. At any time users can extend their profile to become providers and change their current data.
2. Service discovery. Users may navigate through the market interface to search for suitable services, or use service discovery. They need to specify their requirements for the service, such as the type of service, the storage capacity, etc. The service discovery mechanism will match the services that best fit the specified requirements.
3. Service publishing. After receiving the status of provider, users have the possibility to publish their unused capacity as services for other customers. Services can be of two types: either cloud computing services or cloud storage services. To publish a service, the market interface or programmatic access via the API can be used.
4. Service acquiring. To purchase a service, users first use discovery and obtain a list of the services corresponding most closely to their request. After selecting a suitable service and completing the contract phase, the service is reserved for the user, and the credentials for accessing it are sent to him. The average mark of the service is shown on the service detail page.
5. Review and rating. Each service consumer has the opportunity to leave his opinion on the provided service. He may also rate the service he has used on a scale from 1 to 5 (1 is lowest, 5 is highest).
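The matching step of service discovery can be sketched as a simple filter over the published services; the dictionary keys used here (`type`, `storage_gb`, `price`, `name`) are illustrative assumptions about the service description, not the actual registry schema:

```python
def discover(services, requirements):
    """Return the published services that satisfy the consumer's
    requirements: same service type and at least the requested capacity
    for every other requirement key. Cheapest offers are listed first."""
    matches = []
    for svc in services:
        if svc["type"] != requirements["type"]:
            continue
        if all(svc.get(key, 0) >= value
               for key, value in requirements.items() if key != "type"):
            matches.append(svc)
    return sorted(matches, key=lambda svc: svc["price"])
```

In the real system this matching is performed by the service registry component against WSML service descriptions; the sketch only illustrates the filter-then-rank idea.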
HVCRB Backend. Provides the following features:
1. Resource allocation. After completion of the contract phase, the resource allocation phase begins. The actual service execution starts with the described capacity parameters (storage amount for a storage service; CPU, RAM and storage for a computing service). The allocation is carried out by the worker daemon.
2. Scheduling. Each service may be available only for a specific time frame, which means that the actual execution may be scheduled to some time in the future. This frame can also be limited by only a start time (the beginning of service usage) or only an end time (the end of service usage). These time frames are enforced via the scheduler component.
3. Resource management. This part is responsible for managing the currently running service executions. Users might want to stop, suspend, start or terminate their instances. The resource management component is in charge of performing these operations.
4. Dynamic rescaling. During execution it may be essential to change the computing capacity and scale currently running instances up or down. This can be achieved by changing the CPU, available RAM or storage capacity amounts. This component is responsible for the on-the-fly rescaling.
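The time-frame enforcement performed by the scheduler can be reduced to a single check, where either bound may be absent; the function name and signature are illustrative only:

```python
from datetime import datetime

def allowed_to_run(now, start=None, end=None):
    """Enforce the availability time frame of a service execution.
    Either bound may be None, in which case that side is unrestricted."""
    if start is not None and now < start:
        return False  # execution is still scheduled for the future
    if end is not None and now >= end:
        return False  # the service usage period is over
    return True
```

The scheduler would evaluate this check periodically and issue start or terminate tasks to the worker daemon when an execution enters or leaves its frame.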
SPACE platform. Provides the following services:
1. Contracting. Users need to establish a contract with the provider before they are able to actually use a service; this component is responsible for establishing such a contract. In the contract phase the QoS parameters are negotiated and an SLA is reached. During service execution all negotiated parameters must be maintained by the provider, otherwise he might be charged a fee, which is stated in the contract.
2. Pricing. This gives providers the possibility to change the price of their published services. On the other side, users make their choice depending on the computational parameters and the price. This is important, as providers might want to remain competitive and be able either to lower the price to look more attractive to customers, or to raise the price and earn more money for the service. The price history is saved, and upon each change the plot which shows the price fluctuation is updated, so users have the possibility to see the history of changes. Each service also carries a designation mark of the current price trend in comparison to the previous price. This way customers know whether the price has been lowered or raised since last time.
3. Payment. This component handles all payment processes between customer and provider. Payments may be carried out in both directions: from customer to provider for the usage of the service, and backwards, if the provider has violated the SLA and needs to pay a fee back to the user.
4. Monitoring. This is a subsystem which measures and aggregates system and service properties and stores the monitoring information for further refinement. It reports compliance with and violations of service contracts and can incorporate measurements from external sensors such as instance-level monitoring.
5. Access control. All services may be accessed only by authorized users. Therefore it has to be checked whether a user has sufficient rights for a specific service. The access control component enforces the authentication and authorization mechanisms.
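The interplay of monitoring, contracting and payment can be illustrated with a small sketch that aggregates monitoring samples into a measured availability and derives the fee due on an SLA violation; the function name, the binary reachability samples and the flat fee are simplifying assumptions, as real contracts cover further QoS parameters:

```python
def check_sla(samples, negotiated_availability, fee):
    """Aggregate monitoring samples (True = service was reachable) into a
    measured availability and decide the fee the provider owes the user."""
    measured = sum(samples) / len(samples)
    violated = measured < negotiated_availability
    return {"measured": measured,
            "violated": violated,
            "fee_due": fee if violated else 0.0}
```

A check like this would run on the aggregated monitoring data; on a violation the payment component transfers the contractual fee back from the provider to the user.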
4.4 Marketplace
The marketplace is the entry point for both consumers and providers. It enables trading of computing resources like any other service. On the one hand, providers come to the marketplace, register and publish slices of their unused capacity as services; on the other, consumers are free to choose the resource with the most suitable capacity and QoS parameters. The HVCRB marketplace architecture is shown in figure 4.3.
Figure 4.3: HVCRB Market
The main task of the marketplace is to accept a service from the provider and present it to the consumer, so it needs to provide the necessary interfaces. As seen from the schema, it offers the publishing interface for providers on the one side and the discovery interface for customers on the other. Both types of users need to authenticate at the marketplace in order to perform any actions and to fill out the corresponding profile. The computational services which are currently traded can be of two types: cloud computing and cloud storage. Each service also has price information as well as ratings and reviews related to it. All the meta information required for the functioning of the marketplace is stored in the database; the service descriptions are propagated to the service registry component, which is also responsible for matching service parameters against the user's needs during the discovery process.
4.4.1 Resource Publishing
To be able to sell a computational service, providers need to publish it at the marketplace. Figure 4.4 illustrates the resource publishing mechanism.
Figure 4.4: Resource publishing
As can be seen from the diagram, the publishing of resources includes the following steps:
1. To publish a computing service, the provider interacts with the marketplace's user interface or API and submits the service information. This information contains the type of the service, the amount of storage, RAM, CPU capacity, etc.
2. On the marketplace side, the request is then parsed.
3. The next step is the generation of service description files, i.e. a wsml file. If necessary, other description formats can also be generated.
4. To save the service which is being published, the marketplace starts a transaction. During the transaction it tries to save the service to the database and the service registry.
5. If the service registry is reachable and responded with a "success" message, then the