A wireless network is any type of computer network that utilizes some form of wireless network connection. It is a method by which homes, telecommunications networks and enterprise (business) installations avoid the costly process of introducing cables into a building, or as a connection between various equipment locations. Wireless telecommunications networks are generally implemented and administered using radio communication. This implementation takes place at the physical level (layer) of the OSI model network structure.
Types of wireless networks
Wireless personal area networks (WPANs) interconnect devices within a relatively small area that is generally within a person's reach. For example, both Bluetooth radio and invisible infrared light provide a WPAN for interconnecting a headset to a laptop. ZigBee also supports WPAN applications. Wi-Fi PANs are becoming commonplace (2010) as equipment designers start to integrate Wi-Fi into a variety of consumer electronic devices. Intel "My WiFi" and Windows 7 "virtual Wi-Fi" capabilities have made Wi-Fi PANs simpler and easier to set up and configure.
A wireless local area network (WLAN) links two or more devices over a short distance using a wireless distribution method, usually providing a connection through an access point for Internet access. The use of spread-spectrum or OFDM technologies may allow users to move around within a local coverage area, and still remain connected to the network. Products using the IEEE 802.11 WLAN standards are marketed under the Wi-Fi brand name. Fixed wireless technology implements point-to-point links between computers or networks at two distant locations, often using dedicated microwave or modulated laser light beams over line-of-sight paths. It is often used in cities to connect networks in two or more buildings without installing a wired link.
Wireless mesh network
A wireless mesh network is a wireless network made up of radio nodes organized in a mesh topology. Each node forwards messages on behalf of the other nodes. Mesh networks can "self-heal", automatically re-routing around a node that has lost power.
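As a toy illustration of that self-healing behaviour, the sketch below recomputes a route around a failed node with a breadth-first search; the topology and node names are invented for the example.

    # Minimal sketch of mesh "self-healing": recompute a route with BFS
    # after a node fails. The topology and node names are illustrative only.
    from collections import deque

    def find_route(links, src, dst, failed=frozenset()):
        """Breadth-first search that ignores failed nodes."""
        queue = deque([[src]])
        seen = {src}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == dst:
                return path
            for nxt in links.get(node, ()):
                if nxt not in seen and nxt not in failed:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
    print(find_route(mesh, "A", "D"))                # e.g. ['A', 'B', 'D']
    print(find_route(mesh, "A", "D", failed={"B"}))  # re-routed: ['A', 'C', 'D']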
Wireless metropolitan area networks are a type of wireless network that connects several wireless LANs. WiMAX is a type of Wireless MAN, described by the IEEE 802.16 standard.
Wireless wide area networks are wireless networks that typically cover large areas, such as between neighbouring towns and cities, or city and suburb. These networks can be used to connect branch offices of business or as a public Internet access system. The wireless connections between access points are usually point-to-point microwave links using parabolic dishes on the 2.4 GHz band, rather than omnidirectional antennas used with smaller networks. A typical system contains base station gateways, access points and wireless bridging relays. Other configurations are mesh systems where each access point acts as a relay also. When combined with renewable energy systems such as photovoltaic solar panels or wind systems, they may be stand-alone systems.
A cellular network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a base station. In a cellular network, each cell characteristically uses a different set of radio frequencies from all their immediate neighbouring cells to avoid any interference. When joined together these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission.
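The frequency-reuse rule above can be pictured as a graph-colouring problem: cells are vertices, interference defines adjacency, and no two neighbouring cells may share a frequency set. A greedy sketch follows, with an invented four-cell layout.

    # Illustrative sketch of frequency reuse: neighbouring cells must not
    # share a frequency set. A simple greedy graph colouring assigns each
    # cell the lowest-numbered set not used by any adjacent cell.
    def assign_frequency_sets(neighbours):
        assignment = {}
        for cell in sorted(neighbours):
            used = {assignment[n] for n in neighbours[cell] if n in assignment}
            f = 0
            while f in used:
                f += 1
            assignment[cell] = f
        return assignment

    cells = {"c1": ["c2", "c3"], "c2": ["c1", "c3"],
             "c3": ["c1", "c2", "c4"], "c4": ["c3"]}
    print(assign_frequency_sets(cells))  # {'c1': 0, 'c2': 1, 'c3': 2, 'c4': 0}

Distant cells (here c1 and c4) can safely reuse the same set, which is what lets a limited radio band cover an arbitrarily large area.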
Although originally intended for cell phones, with the development of smartphones, cellular telephone networks routinely carry data in addition to telephone conversations:
Global System for Mobile Communications (GSM): The GSM network is divided into three major systems: the switching system, the base station system, and the operation and support system. The cell phone connects to the base system station, which then connects to the operation and support station; it then connects to the switching station where the call is transferred to where it needs to go. GSM is the most common standard and is used for a majority of cell phones.
Personal Communications Service (PCS): PCS is a radio band that can be used by mobile phones in North America and South Asia. Sprint happened to be the first service to set up a PCS.

D-AMPS: Digital Advanced Mobile Phone Service, an upgraded version of AMPS, is being phased out due to advancement in technology. The newer GSM networks are replacing the older system.
Some examples of usage include cellular phones, which are part of everyday wireless networks, allowing easy personal communications. Another example, intercontinental network systems, uses radio satellites to communicate across the world. Emergency services such as the police utilize wireless networks to communicate effectively as well. Individuals and businesses use wireless networks to send and share data rapidly, whether it be in a small office building or across the world. In a general sense, wireless networks offer a vast variety of uses by both business and home users.

"Now, the industry accepts a handful of different wireless technologies. Each wireless technology is defined by a standard that describes unique functions at both the Physical and the Data Link layers of the OSI Model. These standards differ in their specified signaling methods, geographic ranges, and frequency usages, among other things. Such differences can make certain technologies better suited to home networks and others better suited to network larger organizations." Each standard varies in geographical range, thus making one standard more ideal than the next depending on what it is one is trying to accomplish with a wireless network.
The performance of wireless networks satisfies a variety of applications such as voice and video. The use of this technology also gives room for expansions, such as from 2G to 3G and, most recently, 4G technology, which stands for the fourth generation of cell phone mobile communications standards. As wireless networking has become commonplace, sophistication increases through configuration of network hardware and software, and greater capacity to send and receive larger amounts of data, faster, is achieved.
Space is another characteristic of wireless networking. Wireless networks offer many advantages when it comes to difficult-to-wire areas trying to communicate, such as across a street or river, a warehouse on the other side of the premises, or buildings that are physically separated but operate as one. Wireless networks allow users to designate a certain space within which the network will be able to communicate with other devices. Space is also created in homes as a result of eliminating clutters of wiring.
Wireless technology provides an alternative to installing physical network mediums (cabling), which can also be expensive. For homeowners, wireless technology is an effective option for sharing printers, scanners, and high-speed Internet connections. WLANs help save the cost of installation of cable mediums, save time from physical installation, and also create mobility for devices connected to the network. Wireless networks are simple and require as few as one single wireless access point connected directly to the Internet via a router.
Wireless Network Elements
The telecommunications network at the physical layer also consists of many interconnected wireline network elements (NEs). These NEs can be stand-alone systems or products that are either supplied by a single manufacturer or are assembled by the service provider (user) or system integrator with parts from several different manufacturers.
Wireless NEs are products and devices used by a wireless carrier to provide support for the backhaul network as well as a mobile switching center (MSC). Reliable wireless service depends on the network elements at the physical layer being protected against all operational environments and applications (see GR-3171, Generic Requirements for Network Elements Used in Wireless Networks: Physical Layer Criteria).
Especially important are the NEs that are located on the cell tower to the base station (BS) cabinet. The attachment hardware and the positioning of the antenna and associated closures and cables are required to have adequate strength, robustness, corrosion resistance, and rain/solar resistance for expected wind, storm, ice, and other weather conditions. Requirements for individual components, such as hardware, cables, connectors, and closures, shall take into consideration the structure to which they are attached.
2. Network management

Network management refers to the activities, methods, procedures, and tools that pertain to the operation, administration, maintenance, and provisioning of networked systems. Network management is essential to command and control practices and is generally carried out of a network operations center.
Operation deals with keeping the network (and the services that the network provides) up and running smoothly. It includes monitoring the network to spot problems as soon as possible, ideally before users are affected.

Administration deals with keeping track of resources in the network and how they are assigned. It includes all the "housekeeping" that is necessary to keep the network under control.

Maintenance is concerned with performing repairs and upgrades: for example, when equipment must be replaced, when a router needs a patch for an operating system image, or when a new switch is added to a network. Maintenance also involves corrective and preventive measures to make the managed network run "better", such as adjusting device configuration parameters.

Provisioning is concerned with configuring resources in the network to support a given service. For example, this might include setting up the network so that a new customer can receive voice service.
A common way of characterizing network management functions is FCAPS: Fault, Configuration, Accounting, Performance and Security. Functions that are performed as part of network management accordingly include controlling, planning, allocating, deploying, coordinating, and monitoring the resources of a network, as well as network planning.
Data for network management is collected through several mechanisms, including agents installed on infrastructure, synthetic monitoring that simulates transactions, logs of activity, and real user monitoring. In the past, network management mainly consisted of monitoring whether devices were up or down; today performance management has become a crucial part of the IT team's role, which brings about a host of challenges, especially for global organizations. Network management does not include user terminal equipment.
A small number of accessory methods exist to support network and network device management. Access methods include SNMP, command-line interfaces (CLIs), custom XML, CMIP, Windows Management Instrumentation (WMI), Transaction Language 1 (TL1), CORBA, NETCONF, and the Java Management Extensions (JMX). Internet service providers (ISPs) use a technology known as deep packet inspection in order to examine and manage the traffic crossing their networks. Schemas include the WBEM, the Common Information Model, and MTOSI, amongst a number of others.
Medical service providers represent a niche market for managed service providers, as HIPAA legislation consistently increases demands for knowledgeable providers. Medical service providers are liable for the protection of their clients' confidential information, including in an electronic realm. This liability creates a significant need for managed service providers who can provide secure infrastructure for transportation of medical data.
3. Systems management

Systems management refers to enterprise-wide administration of distributed systems, including (and commonly in practice) computer systems. Systems management is strongly influenced by network management initiatives in telecommunications. Application performance management (APM) technologies are now a subset of systems management. Maximum productivity can be achieved more efficiently through event correlation, system automation and predictive analysis, which are now all part of APM.
Centralized management has a time and effort trade-off that is related to the size of the company, the expertise of the IT staff, and the amount of technology being used:

For a small business with ten computers, automated centralized processes may take more time to learn how to use and implement than just doing the management work manually on each computer.

A very large business with thousands of similar employee computers may clearly be able to save time and money by having IT staff learn to do systems management automation.

A small branch office of a large corporation may have access to a central IT staff, with the experience to set up automated management of the systems in the branch office, without need for local staff in the branch office to do the work.
System management may involve one or more of the following tasks:

Server availability monitoring and metrics (a minimal example follows this list).
Software inventory and installation.
User activity monitoring.
Network capacity and utilization monitoring.
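As a minimal sketch of the first task, server availability monitoring, the following tries a TCP connection to each host/port and reports up or down. The hostnames and ports are placeholders; a real tool would also record metrics over time and raise alerts.

    # Minimal availability check: attempt a TCP connection to each server.
    # Hostnames and ports are invented placeholders for the example.
    import socket

    def is_up(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    servers = [("intranet.example.com", 80), ("db.example.com", 5432)]
    for host, port in servers:
        print(f"{host}:{port}", "UP" if is_up(host, port) else "DOWN")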
Functional groups are provided according to the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) X.700 standard. This framework is also known as FCAPS: fault management, configuration management, accounting management (billing and statistics gathering), performance management, and security management.
However, this standard should not be treated as comprehensive; there are obvious omissions. Some are recently emerging sectors, some are implied and some are just not listed. The primary ones are: Business Impact functions (also known as Business Systems Management), Real-time Application Relationship Discovery (which supports Configuration Management), and Security Information and Event Management (SIEM). Performance management functions can also be split into end-to-end measuring and infrastructure component measuring functions. Another recently emerging sector is operational intelligence (OI), which focuses on real-time monitoring of business events that relate to business processes, not unlike business activity monitoring (BAM).
Relevant standards include, from the Distributed Management Task Force (DMTF), the Common Information Model (CIM), the Desktop and mobile Architecture for System Hardware (DASH), Directory Enabled Networking (DEN) and the Systems Management Architecture for Server Hardware (SMASH), as well as the Java Management Extensions (JMX).
1. Electromagnetic force

The electromagnetic force is one of the four fundamental interactions in nature, the other three being the strong interaction, the weak interaction, and gravitation. This force is described by electromagnetic fields, and has innumerable physical instances including the interaction of electrically charged particles and the interaction of uncharged magnetic force fields with electrical conductors. The word electromagnetism is a compound form of two Greek terms, ἤλεκτρον, "amber", and μαγνήτης, "magnet". The science of electromagnetic phenomena is defined in terms of the electromagnetic force, sometimes called the Lorentz force, which includes both electricity and magnetism as elements of one phenomenon.
The electromagnetic force is the interaction responsible for almost all the phenomena encountered in daily life, with the exception of gravity. Ordinary matter takes its form as a result of intermolecular forces between individual molecules. Electrons are bound by electromagnetic wave mechanics into orbitals around atomic nuclei to form atoms, which are the building blocks of molecules. This governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms, which are in turn determined by the interaction between electromagnetic force and the momentum of the electrons.
There are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described as electric potential and electric current, magnetic fields are associated with electromagnetic induction and magnetism, and Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. The theoretical implications of electromagnetism, in particular the establishment of the speed of light based on properties of the "medium" of propagation (permeability and permittivity), led to the development of special relativity by Albert Einstein in 1905.
Originally electricity and magnetism were thought of as two separate forces. This view changed, however, with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism, in which the interactions of positive and negative charges were shown to be regulated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:
Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: unlike charges attract, like ones repel (this inverse-square relationship is written out after this list).

Magnetic poles (or states of polarization at individual points) attract or repel one another in a similar way and always come in pairs: every north pole is yoked to a south pole.

An electric current in a wire creates a circular magnetic field around the wire, its direction (clockwise or counter-clockwise) depending on that of the current.

A current is induced in a loop of wire when it is moved towards or away from a magnetic field, or a magnet is moved towards or away from it, the direction of the current depending on that of the movement.
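The first of these effects is Coulomb's law. In SI units it reads

$$ F = \frac{1}{4\pi\varepsilon_0}\,\frac{|q_1 q_2|}{r^2}, $$

where q_1 and q_2 are the charges, r is their separation, and ε_0 is the vacuum permittivity; the force acts along the line joining the charges, repulsive for like charges and attractive for unlike ones.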
While preparing for an evening lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation. As he was setting up his materials, he noticed a compass needle deflect from magnetic north when the electric current from the battery he was using was switched on and off. This deflection convinced him that magnetic fields radiate from all sides of a wire carrying an electric current, just as light and heat do, and that it confirmed a direct relationship between electricity and magnetism.
At the time of discovery, Ørsted did not suggest any satisfactory explanation of the
phenomenon, nor did he try to represent the phenomenon in a mathematical framework.
However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic induction (the oersted) is named in honor of his contributions to the field of electromagnetism.
His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's development of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy. This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed in electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances which have been called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest.
Ørsted was not the only person to examine the relation between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle by electrostatic charges. Actually, no galvanic current existed in the setup and hence no electromagnetism was present. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community.
The electromagnetic force is one of the four known fundamental forces. The other fundamental forces are: the weak nuclear force, which binds to all known particles in the Standard Model and causes certain forms of radioactive decay (in particle physics, though, the electroweak interaction is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction); the strong nuclear force, which binds quarks to form nucleons and binds nucleons to form nuclei; and gravitation. All other forces (e.g., friction) are ultimately derived from these fundamental forces and the momentum carried by the movement of particles.
The electromagnetic force is the one responsible for practically all the phenomena one encounters in daily life above the nuclear scale, with the exception of gravity. Roughly speaking, all the forces involved in interactions between atoms can be explained by the electromagnetic force acting on the electrically charged particles inside and around the atoms, together with how these particles carry momentum by their movement. This includes the forces we experience in "pushing" or "pulling" ordinary material objects, which come from the intermolecular forces between the individual molecules in our bodies and those in the objects. It also includes all forms of chemical phenomena.
A necessary part of understanding the intra-atomic to intermolecular forces is the effective force generated by the momentum of the electrons' movement, and the fact that electrons move between interacting atoms, carrying momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behaviour of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves.
William Gilbert proposed, in his De Magnete (1600), that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle, but the link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752. One of the first to discover and publish a link between man-made electric current and magnetism was Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to produce a theory of electromagnetism that set the subject on a mathematical foundation.
A theory of electromagnetism, known as classical electromagnetism, was developed by various physicists over the course of the 19th century, culminating in the work of James Clerk Maxwell, who unified the preceding developments into a single theory and discovered the electromagnetic nature of light. In classical electromagnetism, the electromagnetic field obeys a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.
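For reference, in SI units Maxwell's equations (differential form) are

$$ \nabla \cdot \mathbf{E} = \rho/\varepsilon_0, \qquad \nabla \cdot \mathbf{B} = 0, $$
$$ \nabla \times \mathbf{E} = -\,\partial \mathbf{B}/\partial t, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0\, \partial \mathbf{E}/\partial t, $$

and the Lorentz force law on a charge q moving with velocity v is

$$ \mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B}). $$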
One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in a vacuum is a universal constant, dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905 Albert Einstein solved the problem with the introduction of special relativity, which replaces classical kinematics with a new theory of kinematics that is compatible with classical electromagnetism. (For more information, see History of special relativity.)
In addition, relativity theory shows that in moving frames of reference a magnetic field transforms to a field with a nonzero electric component and vice versa, thus firmly showing that they are two sides of the same coin; hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)
In another paper published in that same year, Albert Einstein undermined the very foundations of classical electromagnetism. In his theory of the photoelectric effect (for which he won the Nobel prize for physics), and inspired by the idea of Max Planck's "quanta", he posited that light could exist in discrete particle-like quantities as well, which later came to be known as photons. Einstein's theory of the photoelectric effect extended the insights that appeared in the solution of the ultraviolet catastrophe presented by Max Planck in 1900. In his work, Planck showed that hot objects emit electromagnetic radiation in discrete packets ("quanta"), which leads to a finite total energy emitted as black-body radiation. Both of these results were in direct contradiction with the classical view of light as a continuous wave. Planck's and Einstein's theories were progenitors of quantum mechanics, which, when formulated in 1925, necessitated the invention of a quantum theory of electromagnetism. This theory, completed in the 1940s and 1950s, is known as quantum electrodynamics (or "QED") and, in situations where perturbation theory is applicable, is one of the most accurate theories known to physics.
Cloud computing is a colloquial expression used to describe a variety of different types of computing concepts that involve a large number of computers connected through a real-time communication network such as the Internet. Cloud computing is a jargon term without a commonly accepted, unambiguous scientific or technical definition. In science, cloud computing is a synonym for distributed computing over a network, and means the ability to run a program on many connected computers at the same time. The phrase is also, more commonly, used to refer to network-based services which appear to be provided by real server hardware but which in fact are served up by virtual hardware, simulated by software running on one or more real machines. Such virtual servers do not physically exist and can therefore be moved around and scaled up (or down) on the fly without affecting the end user, arguably rather like a cloud.
The popularity of the term can be attributed to its use in marketing to sell hosted services in the sense of application service provisioning that run client-server software at a remote location. Cloud computing relies on sharing of resources to achieve coherence and economies of scale (similar to a utility, like the electricity grid) over a network.
At the foundation of cloud computing is the broader concept of converged infrastructure and shared services. The cloud also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically re-allocated per demand. This can work for allocating resources to users in different time zones. For example, a cloud computer facility that serves European users during European business hours with a specific application (e.g. email) may reallocate the same resources to serve North American users during North America's business hours with another application (e.g. a web server). This approach should maximize the use of computing power, thus reducing environmental damage as well, since less power, air conditioning, rack space, and so on are required for a variety of functions.
The term "moving to cloud" also refers to an organization moving away from a traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to the OPEX model (use a shared cloud infrastructure and pay as you use it).
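As a back-of-the-envelope illustration of that CAPEX/OPEX trade-off, the sketch below compares the two models; every figure in it is invented for the example, not a real price quote.

    # Back-of-the-envelope CAPEX vs. OPEX comparison. Every figure here is
    # an invented example, not a real price quote.
    server_price = 12_000          # buy: dedicated server, depreciated over 4 years
    years = 4
    capex_per_hour = server_price / (years * 365 * 24)

    cloud_rate = 0.50              # rent: per-hour rate for a comparable instance
    hours_used_per_day = 8         # pay only while the instance actually runs

    capex_daily = capex_per_hour * 24          # owned hardware costs money 24/7
    opex_daily = cloud_rate * hours_used_per_day
    print(f"owned: ${capex_daily:.2f}/day")    # ~ $8.22/day
    print(f"cloud: ${opex_daily:.2f}/day")     # $4.00/day at 8 h/day of use

The crossover depends entirely on utilization: hardware that would sit mostly idle favours the pay-as-you-use model, while sustained full load can favour ownership.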
Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs and focus on projects that differentiate their businesses instead of on infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand.
In marketing, cloud computing is mostly used to sell hosted services in the sense of application service provisioning that run client-server software at a remote location. Such services are given popular acronyms like 'SaaS' (Software as a Service), 'PaaS' (Platform as a Service), 'IaaS' (Infrastructure as a Service), 'HaaS' (Hardware as a Service) and finally 'EaaS' (Everything as a Service). End users access cloud-based applications through a web browser or a light-weight desktop or mobile app, while the business software and the user's data are stored on servers at a remote location.
The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframe computers became available in academia and corporations, accessible via thin clients and terminal computers, often referred to as "dumb terminals" because they were used for communications but had no internal processing capacities. To make more efficient use of costly mainframes, a practice evolved that allowed multiple users to share both the physical access to the computer from multiple terminals and the CPU time. This eliminated periods of inactivity on the mainframe and allowed for a greater return on the investment. The practice of sharing CPU time on a mainframe became known in the industry as time-sharing. John McCarthy opined in the 1960s that "computation may someday be organized as a public utility." Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public, private, government, and community forms were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility.
Other scholars have shown that cloud computing's roots go all the way back to the 1950s, when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers.
Due to the expense of these powerful computers, many corporations and other entities could avail themselves of computing capability through time sharing, and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek and Newman (BBN) marketed time sharing as a commercial venture.
In the 1990s, telecommunications companies, who previously offered primarily dedicated point-to-point data circuits, began offering virtual private network (VPN) services with comparable quality of service, but at a lower cost. By switching traffic as they saw fit to balance server use, they could use overall network bandwidth more effectively. They began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extends this boundary to cover servers as well as the network infrastructure.
As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time sharing, experimenting with algorithms to provide the optimal use of the infrastructure, platform and applications with prioritized access to the CPU and efficiency for the end users.
Amazon played a key role in the development of cloud computing by modernizing their data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving "two-pizza teams" (teams small enough to feed with two pizzas) could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. Also in early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds.
In the same year, efforts were focused on providing quality-of-service guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment.
By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."
On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet. Among the various components of the Smarter Computing foundation, cloud computing is a critical piece.
The development of the Internet from being document-centric via semantic data towards more and more services was described as the "Dynamic Web". This contribution focused in particular on the need for better meta-data able to describe not only implementation details but also conceptual details of model-based applications.
The present availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, have led to a growth in cloud computing. Cloud vendors are experiencing growth rates of 90% per annum.
Origin of the term
The origin of the term cloud computing is unclear. The expression cloud is commonly used in science to describe a large agglomeration of objects that visually appear from a distance as a cloud, and describes any set of things whose details are not inspected further in a given context:

Meteorology: a weather cloud is an agglomeration of water droplets seen as one object from a distance.
Mathematics: a large number of points in a coordinate system is seen as a point cloud.
Astronomy: stars that appear crowded together in the sky are known as nebulae (Latin for mist or cloud), e.g. the Milky Way.
Physics: the indeterminate position of electrons around an atomic kernel appears like a cloud to a distant observer.
In analogy to the above usage, the word cloud was used as a metaphor for the Internet, and a standardized cloud-like shape was used to denote a network on telephony schematics and later to depict the Internet in computer network diagrams. The cloud symbol was used to represent the Internet as early as 1994, in which servers were then shown connected to, but external to, the cloud symbol.
References to cloud computing in its modern sense can be found as early as 2006. Some claim that usage of the expression is directly derived from the practice of using drawings of stylized clouds to denote networks in diagrams of computing and communications systems; others hold that it derived from a marketing term. The term became popular after Amazon.com introduced its Elastic Compute Cloud in 2006.
Similar systems and concepts
Cloud computing is the result of evolution and adoption of existing technologies and paradigms. The goal of cloud computing is to allow users to take benefit from all of these technologies, without the need for deep knowledge about or expertise with each one of them. The cloud aims to cut costs, and helps the users focus on their core business instead of being impeded by IT obstacles.
The main enabling technology for cloud computing is virtualization. Virtualization abstracts the physical infrastructure, which is the most rigid component, and makes it available as a soft component that is easy to use and manage. By doing so, virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. Autonomic computing, in turn, automates the process through which the user can provision resources on demand. By minimizing user involvement, automation speeds up the process and reduces the possibility of human errors.
Users face difficult business problems every day. Cloud computing adopts concepts from service-oriented architecture (SOA) that can help the user break these problems into services that can be integrated to provide a solution. Cloud computing provides all of its resources as services, and makes use of the well-established standards and best practices gained in the domain of SOA to allow global and easy access to cloud services in a standardized way.

Cloud computing also leverages concepts from utility computing in order to provide metrics for the services used. Such metrics are at the core of the public cloud pay-per-use models. In addition, measured services are an essential part of the feedback loop in autonomic computing, allowing services to scale on demand and to perform automatic failure recovery.
Cloud computing is a kind of grid computing; it has evolved from grid computing by addressing the QoS (quality of service) and reliability problems. Cloud computing provides the tools and technologies to build data/compute-intensive parallel applications at much more affordable prices compared to traditional parallel computing techniques. Cloud computing shares characteristics with:
Client-server model: client-server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).

Grid computing: "a form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."

Mainframe computer: powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, police and secret intelligence services, enterprise resource planning, and financial transaction processing.

Utility computing: the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."

Peer-to-peer: a distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client-server model).

Cloud gaming: also known as on-demand gaming, this is a way of delivering games to computers. Gaming data is stored in the provider's server, so that gaming is independent of the client computers used to play the game.
Cloud computing exhibits the following key characteristics:
Agility improves with users' ability to re-provision technological infrastructure resources.

Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.
Cost: cloud providers claim that computing costs reduce. A public-cloud delivery model converts capital expenditure to operational expenditure. This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer IT skills are required for implementation in-house. The e-FISCAL project's state-of-the-art repository contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
Device and location independence enable users to access systems using a web browser regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.

Virtualization technology allows sharing of servers and storage devices and increased utilization. Applications can be easily migrated from one physical server to another.
Multitenancy enables sharing of resources and costs across a large pool of users, thus allowing for centralization of infrastructure in locations with lower costs (such as real estate or electricity), peak-load capacity increases (users need not engineer for highest possible load levels), and utilisation and efficiency improvements for systems that are often only 10-20% utilised.
Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery.

Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time, without users having to engineer for peak loads.

Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to tackle. However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.

Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
The National Institute of Standards and Technology's definition of cloud computing identifies "five essential characteristics":

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

The self-service requirement of cloud computing prompts infrastructure vendors to create cloud computing templates, which are obtained from cloud service catalogues.
Manufacturers of such templates or blueprints include BMC Software, with Service Blueprints as part of its cloud management platform, and Hewlett-Packard, which names its templates HP Cloud Maps.
The templates contain predefined configurations used by consumers to set up cloud services. The templates or blueprints provide the technical information necessary to build ready-to-use clouds. Each template includes specific configuration details for different cloud infrastructures, with information about servers for specific tasks such as hosting applications, databases, websites and so on. The templates also include predefined configurations for the web service, the operating system, the database, security settings and load balancing.
Cloud computing consumers use cloud templates to move applications between clouds through a self-service portal. The predefined blueprints define all that an application requires to run in different environments. For example, a template could define how the same application could be deployed in cloud platforms based on Amazon Web Services, VMware or Red Hat. The user organization benefits from cloud templates because the technical aspects of cloud configurations reside in the templates, letting users deploy cloud services with the push of a button. Developers can use cloud templates to create a catalog of cloud services.
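As a rough illustration of what such a template might carry, the sketch below models a blueprint as plain data plus a one-click deploy step. The structure and field names are invented for the example; each real product (HP Cloud Maps, BMC Service Blueprints, and so on) defines its own schema.

    # A generic illustration of a cloud template: predefined configuration a
    # consumer could pick from a service catalogue. All fields are invented.
    web_app_template = {
        "name": "two-tier-web-app",
        "servers": [
            {"role": "web", "os": "linux", "count": 2, "service": "http"},
            {"role": "db", "os": "linux", "count": 1, "service": "postgresql"},
        ],
        "load_balancer": {"algorithm": "round-robin", "port": 80},
        "security": {"open_ports": [80, 443], "admin_network": "10.0.0.0/24"},
    }

    def deploy(template, cloud="private"):
        """Stand-in for a self-service portal's one-click deployment."""
        for server in template["servers"]:
            print(f"[{cloud}] provisioning {server['count']} x {server['role']} "
                  f"({server['os']}, {server['service']})")

    deploy(web_app_template)

Because the technical detail lives in the data, the same template could in principle be handed to different back ends (a private cloud, AWS, VMware), which is exactly the portability benefit described above.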
Cloud computing providers offer their services according to several fundamental models: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), where IaaS is the most basic and each higher model abstracts from the details of the lower models. Other key components in anything as a service (XaaS) are described in a comprehensive taxonomy model published in 2009, such as Business Process as a Service. In 2012, network as a service (NaaS) and communication as a service (CaaS) were officially included by the ITU (International Telecommunication Union) as part of the basic cloud computing models, recognized service categories of a telecommunication-centric cloud ecosystem.
Infrastructure as a service (IaaS)
In the most basic cloud-service model, providers of IaaS offer computers (physical or, more often, virtual machines) and other resources. A hypervisor, such as Xen or KVM, runs the virtual machines as guests. Pools of hypervisors within the cloud operational support system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles. IaaS cloud providers supply these resources on demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks).
To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.
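As a hedged sketch of how such on-demand provisioning looks to an IaaS customer, the following uses the boto3 client for Amazon EC2; the AMI ID, region and instance type are placeholders, and valid AWS credentials are assumed.

    # Minimal sketch of on-demand IaaS provisioning with Amazon EC2 via boto3.
    # The AMI ID, region and instance type are placeholders; valid AWS
    # credentials are assumed. Billing then reflects the resources consumed.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder operating-system image
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("launched virtual machine:", instance_id)

    # When the workload is finished, release the resource to stop the metering:
    ec2.terminate_instances(InstanceIds=[instance_id])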
Examples of IaaS providers include Google Compute Engine and Amazon EC2. Related cloud communications offerings complement, rather than replace, local computing, for example by replacing the local telecommunications infrastructure with Voice over IP and other off-site Internet services.
Platform as a service (PaaS)
In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, a programming-language execution environment, a database, and a web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offers, the underlying computer and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually.
Examples of PaaS include:
AWS Elastic Beanstalk
Google App Engine
Windows Azure Cloud Services
Software as a service (SaaS)

In the software-as-a-service (SaaS) model, users are provided access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis. SaaS providers generally price applications using a subscription fee.
In the SaaS model, cloud providers install and operate application software in the cloud, and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications are different from other applications in their scalability, which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point.
To accommodate a large number of cloud users, cloud applications can be multitenant, that is, any machine serves more than one cloud-user organization. It is common to refer to special types of cloud-based application software with a similar naming convention: desktop as a service, business process as a service, test environment as a service, communication as a service.
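A minimal sketch of the multitenancy idea follows: one application instance holds data for several tenant organizations, and every data access is scoped by a tenant identifier so tenants never see each other's rows. The schema and records are invented for the example.

    # Minimal multitenancy sketch: one instance, several tenant organizations,
    # with the tenant_id column as the isolation boundary. Data is invented.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
    db.executemany("INSERT INTO invoices VALUES (?, ?)",
                   [("acme", 100.0), ("acme", 250.0), ("globex", 75.0)])

    def invoices_for(tenant_id):
        # The WHERE clause is what keeps one tenant's data from another's.
        rows = db.execute("SELECT amount FROM invoices WHERE tenant_id = ?",
                          (tenant_id,))
        return [amount for (amount,) in rows]

    print(invoices_for("acme"))    # [100.0, 250.0]
    print(invoices_for("globex"))  # [75.0]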
The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so price is scalable and adjustable if users are added or removed at any point.
Examples of SaaS include:
Microsoft Office 365
Proponents claim SaaS allows a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS is that the users' data are stored on the cloud provider's server. As a result, there could be unauthorized access to the data.
Network as a service (NaaS)
NaaS is a category of cloud services where the capability provided to the cloud service user is to use network/transport connectivity services and/or inter-cloud network connectivity services. NaaS involves the optimization of resource allocations by considering network and computing resources as a unified whole. Traditional NaaS services include flexible and extended VPN, and bandwidth on demand. NaaS concept materialization also includes the provision of a virtual network service by the owners of the network infrastructure to a third party (VNP/VNO).
Legacy management infrastructures, which are based on the concept of dedicated system relationships and architecture constructs, are not well suited to cloud environments where instances are continually launched and decommissioned. Instead, the dynamic nature of cloud computing requires monitoring and management tools that are adaptable, extensible and customizable.
Cloud management challenges
Cloud computing presents a number of management challenges. Companies using public clouds do not have ownership of the equipment hosting the cloud environment, and because the environment is not contained within their own networks, public cloud customers do not have full visibility or control. Users of public cloud services must also integrate with an architecture defined by the cloud provider, using its specific parameters for working with cloud components. Integration includes tying into the cloud APIs for configuring IP addresses, subnets, firewalls and data service functions for storage. Because control of these functions is based on the cloud provider's infrastructure and services, public cloud users must integrate with the cloud infrastructure management.
Capacity management is a challenge for both public and private cloud environments because end users have the ability to deploy applications using self-service portals. Applications of all sizes may appear in the environment, consume an unpredictable amount of resources, then disappear at any time.
Chargeback, or pricing resource use on a granular basis, is a challenge for both public and private cloud environments. Chargeback is a challenge for public cloud service providers because they must price their services competitively while still creating profit. Users of public cloud services may also find chargeback challenging, because it is difficult for IT groups to assess actual resource costs on a granular basis due to overlapping resources within an organization that may be paid for by an individual business unit, such as electrical power. For private cloud operators, chargeback is fairly straightforward, but the challenge lies in guessing how to allocate resources as closely as possible to actual resource usage to achieve the greatest operational efficiency. Exceeding budgets can be a risk.
Hybrid cloud environments, which combine public and private cloud services, sometimes with traditional infrastructure elements, present their own set of management challenges. These include security concerns if sensitive data lands on public cloud servers, budget concerns around overuse of storage or bandwidth, and proliferation of mismanaged images. Managing the information flow in a hybrid cloud environment is also a significant challenge: on-premises clouds must share information with applications hosted off-premises by public cloud providers, and this information may change constantly. Hybrid cloud environments also typically include a complex mix of policies, permissions and limits that must be managed consistently across both public and private clouds.
Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets and smartphones. Some of these devices (cloud clients) rely on cloud computing for all or a majority of their applications so as to be essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5, these web user interfaces can achieve a similar, or even better, look and feel to native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line-of-business applications that until now have been prevalent in thin-client computing) are delivered via a screen-sharing technology.
Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party and hosted internally or externally. Undertaking a private cloud project requires a significant level and degree of engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "lacking the economic model that makes cloud computing such an intriguing concept".
A cloud is called a "public cloud" when the services are rendered over a network that is open for public use. Technically there is no difference between public and private cloud architecture; however, security considerations may be substantially different for services (applications, storage, and other resources) that are made available by a service provider for a public audience and when communication is effected over a non-trusted network. Generally, public cloud service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure and offer access only via the Internet (direct connectivity is not offered).
Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party and hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost savings potential of cloud computing is realized.
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models. Such composition expands deployment options for cloud services, allowing IT organizations to use public cloud computing resources to meet temporary needs. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds. Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization only pays for extra compute resources when they are needed. Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and use cloud resources from public or private clouds during spikes in processing demands.
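The bursting decision itself can be illustrated in a few lines: keep work in the private cloud up to its capacity and pay for public-cloud resources only for the overflow. The capacity and demand figures below are invented for the example.

    # Sketch of a cloud-bursting decision: the private cloud handles demand
    # up to its capacity; only the overflow is sent to (and paid for in) the
    # public cloud. All numbers are invented for illustration.
    PRIVATE_CAPACITY = 100  # requests/sec the in-house infrastructure handles

    def place_workload(demand):
        private = min(demand, PRIVATE_CAPACITY)
        burst = max(0, demand - PRIVATE_CAPACITY)
        return {"private": private, "public_burst": burst}

    for demand in (60, 100, 180):   # average load, full load, traffic spike
        print(demand, "->", place_workload(demand))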
By utilizing "hybrid cloud" architecture, companies and individuals are able to obtain degrees of fault tolerance combined with locally immediate usability without dependency on Internet connectivity. Hybrid cloud architecture requires both on-premises resources and off-site (remote) server-based cloud infrastructure. While hybrid clouds lack the flexibility, security and certainty of in-house applications, a hybrid cloud provides the flexibility of in-house applications with the fault tolerance and scalability of cloud-based services.
Personal cloud is an application of cloud computing for individuals, similar to a private cloud. While a vendor organization may help manage or maintain a personal cloud, it never takes possession of the data on the personal cloud, which remains under the control of the user.
Cloud computing can also be provided by a distributed set of machines that are running at different locations, while still connected to a single network or hub service. Older examples of this include distributed computing platforms such as BOINC and Folding@home, as well as newer crowd-sourced cloud providers.
Cloud management strategies
Public clouds are managed by public cloud service providers, whose responsibilities include the public cloud environment's servers, storage, networking and data center operations. Users of public cloud services can generally select from three basic categories:

User self-provisioning: customers purchase cloud services directly from the provider, typically through a web form or console interface. The customer pays on a per-transaction basis.

Advance provisioning: customers contract in advance a predetermined amount of resources, which are prepared in advance of service. The customer pays a flat fee or a monthly fee.

Dynamic provisioning: the provider allocates resources when the customer needs them, then decommissions them when they are no longer needed. The customer is charged on a pay-per-use basis.
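A small sketch of the metering behind such pay-per-use charging follows; the rates and usage records are invented for the example.

    # Sketch of pay-per-use metering: the provider records each resource
    # while it is allocated and bills only for what was used. The rates and
    # usage records are invented for illustration.
    RATES = {"vm_hours": 0.25, "storage_gb_days": 0.01}   # $ per unit

    usage = [("vm_hours", 40), ("storage_gb_days", 500), ("vm_hours", 8)]

    def monthly_charge(usage_records):
        total = 0.0
        for resource, quantity in usage_records:
            total += RATES[resource] * quantity
        return total

    print(f"charged: ${monthly_charge(usage):.2f}")  # 48*0.25 + 500*0.01 = $17.00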
Managing a private cloud requires software tools to help create a virtualized pool of compute resources, provide a self-service portal for end users, and handle security, resource allocation, tracking and billing. Management tools for private clouds tend to be service-driven, as opposed to resource-driven, because cloud environments are typically highly virtualized and organized in terms of portable workloads.
In hybrid cloud environments, compute, network and storage resources must be managed across multiple domains, so a good management strategy should start by defining what needs to be managed, and where and how to do it. Policies to help govern these domains should include configuration and installation of images, access control, and budgeting and reporting.
Aspects of cloud management systems
A cloud management system is a combination of software and technologies designed to manage cloud environments. The industry has responded to the management challenges of cloud computing with cloud management systems: HP, Novell, Eucalyptus, Citrix and others are among the vendors with management systems built specifically for managing cloud environments. At a minimum, a cloud management solution should be able to manage a pool of heterogeneous compute resources, provide access to end users, monitor security, manage resource allocation and manage tracking.
Enterprises with large cloud implementations may require more robust cloud management tools with specific characteristics, such as the ability to manage multiple platforms from a single point of reference and intelligent analytics to automate processes like application lifecycle management. High-end cloud management tools should also be able to handle system failures automatically, with capabilities such as self-monitoring, an explicit notification mechanism, and failover and self-healing capabilities.
4. Utility computing
Utility computing is the packaging of computing resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented. This repackaging of computing services became the foundation of the shift to "on-demand" computing, software as a service and cloud computing models that further propagated the idea of computing, application and network as a service.
There was some initial skepticism about such a significant shift. However, the new model of computing caught on and eventually became mainstream. IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications.
Utility computing can support grid computing, which has the characteristic of very large computations or sudden peaks in demand that are supported via a large number of computers.

"Utility computing" has usually envisioned some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer. The technique of running a single calculation on multiple computers is known as distributed computing.
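As a toy illustration of running a single calculation on multiple workers, the sketch below splits a sum across processes; the local processes stand in for the multiple back-end servers a utility provider would dispatch work to.

    # Sketch of splitting one calculation across several workers. Here the
    # "computers" are local processes standing in for back-end servers; a
    # real utility service would dispatch the chunks to separate machines.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        chunks = [(0, 250_000), (250_000, 500_000), (500_000, 1_000_000)]
        with Pool(processes=3) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(total)  # equals the serial sum of squares below 1,000,000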
The term "grid computing" is often used to describe a particular form of distributed computing, where the supporting nodes are geographically distributed or cross administrative domains. To provide utility computing services, a company can "bundle" the resources of members of the public for sale, who might be paid with a portion of the revenue from clients.
One model, common among volunteer computing applications, is for a central server to dispense tasks to participating nodes, at the behest of approved end-users (in the commercial case, the paying customers). Another model, sometimes called the virtual organization, is more decentralized, with organizations buying and selling computing resources as needed or as they go idle.
The definition of "utility computing" is sometimes extended to specialized tasks, such as web services.
Utility computing merely means "pay and use", with regard to computing power. Utility computing is not a new concept, but rather has quite a long history; among the earliest references is John McCarthy's remark, at the MIT centennial in 1961, that computing may someday be organized as a public utility. IBM and other mainframe providers conducted this kind of business in the following two decades, often referred to as time-sharing, offering computing power and database storage to banks and other large organizations from their worldwide data centers. To facilitate this business model, mainframe operating systems evolved to include process control facilities, security, and user metering. The advent of minicomputers changed this business model, by making computers affordable to almost all companies. As Intel and AMD increased the power of PC-architecture servers with each new generation of processor, data centers became filled with thousands of servers.
In the late 1990s, utility computing resurfaced. InsynQ, Inc. launched on-demand applications and desktop hosting services in 1997 using HP equipment. In 1998, HP set up the Utility Computing Division in Mountain View, CA, assigning former Bell Labs computer scientists to begin work on a computing power plant, incorporating multiple utilities to form a software stack. Services such as "IP billing-on-tap" were marketed. HP introduced the Utility Data Center in 2001. Sun announced its utility computing service to consumers in 2000. In December 2005, Alexa launched Alexa Web Search Platform, a Web search building tool for which the underlying power is utility computing; Alexa charges users for storage, utilization, etc. There is space in the market for specific industries and applications as well as other niche applications powered by utility computing. For example, PolyServe Inc. offered a clustered file system based on commodity server and storage hardware that creates highly available utility computing environments for mission-critical applications, including Oracle and Microsoft SQL Server databases, as well as workload-optimized solutions specifically tuned for bulk storage, high-performance computing, vertical industries such as financial services, seismic processing, and content serving. The Database Utility and File Serving Utility enable IT organizations to independently add servers or storage as needed, retask workloads to different hardware, and maintain the environment without disruption.
In spring 2006, 3tera announced its AppLogic service, and later that summer Amazon launched EC2 (Elastic Compute Cloud). These services allow the operation of general-purpose computing applications. Both are based on Xen virtualization software, and the most commonly used operating system on the virtual computers is Linux, though Windows and Solaris are supported. Common uses include web applications, SaaS, image rendering and processing, but also general-purpose business applications.