1. Wireless network



MARKS

1. Wireless network

A wireless network refers to any type of computer network that utilizes some form of wireless network connection. It is a method by which homes, telecommunications networks and enterprise (business) installations avoid the costly process of introducing cables into a building, or as a connection between various equipment locations.

Wireless telecommunications networks are generally implemented and administered using radio communication. This implementation takes place at the physical level (layer) of the OSI model network structure.

Types of wireless networks



Wireless PAN

Wireless personal area networks (WPANs) interconnect devices within a relatively small area that is generally within a person's reach. For example, both Bluetooth radio and invisible infrared light provide a WPAN for interconnecting a headset to a laptop. ZigBee also supports WPAN applications. Wi-Fi PANs are becoming commonplace as equipment designers start to integrate Wi-Fi into a variety of consumer electronic devices. Intel "My WiFi" and Windows "virtual Wi-Fi" capabilities have made Wi-Fi PANs simpler and easier to set up and configure.

Wireless LAN




A wireless local area network (WLAN) links two or more devices over a short distance using a wireless distribution method, usually providing a connection through an access point for Internet access. The use of spread-spectrum or OFDM technologies may allow users to move around within a local coverage area, and still remain connected to the network. Products using the IEEE 802.11 WLAN standards are marketed under the Wi-Fi brand name.

Fixed wireless technology implements point-to-point links between computers or networks at two distant locations, often using dedicated microwave or modulated laser light beams over line of sight paths. It is often used in cities to connect networks in two or more buildings without installing a wired link.

Wireless mesh network



A wireless mesh network is a wireless network made up of radio nodes organized in a mesh topology. Each node forwards messages on behalf of the other nodes. Mesh networks can "self heal", automatically re-routing around a node that has lost power.
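To make the "self-healing" idea concrete, the sketch below (plain Python) recomputes a route with a breadth-first search after a node fails; the topology and node names are illustrative assumptions, not part of the original text.

from collections import deque

def shortest_path(topology, src, dst, failed=frozenset()):
    """Breadth-first search over an adjacency list, skipping failed nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in topology.get(node, ()):
            if neighbor not in seen and neighbor not in failed:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route left

# Illustrative mesh: A-B-C-D plus a detour A-E-D.
mesh = {"A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
        "D": ["C", "E"], "E": ["A", "D"]}
print(shortest_path(mesh, "A", "D"))                # ['A', 'E', 'D']
print(shortest_path(mesh, "A", "D", failed={"E"}))  # re-routes: ['A', 'B', 'C', 'D']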

Wireless MAN



Wireless metropolitan area networks are a type of wireless network that connects several wireless LANs.

WiMAX is a type of wireless MAN and is described by the IEEE 802.16 standard.

Wireless WAN



Wireless wide area networks are wireless networks that typically cover large areas, such as between neighboring towns and cities, or city and suburb. These networks can be used to connect branch offices of business or as a public Internet access system. The wireless connections between access points are usually point-to-point microwave links using parabolic dishes on the 2.4 GHz band, rather than the omnidirectional antennas used with smaller networks. A typical system contains base station gateways, access points and wireless bridging relays. Other configurations are mesh systems where each access point acts as a relay also. When combined with renewable energy systems such as photovoltaic solar panels or wind systems they can be stand-alone systems.

Cellular network

A cellular network or mobile network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station. In a cellular network, each cell characteristically uses a different set of radio frequencies from all their immediate neighbouring cells to avoid any interference.

When joined together these cells provide radio coverage over a wide geographic area. This
enables a large number of portable transceivers (e.g., mobile phones,

pagers
, etc.) to
communicate with each other and with fixed transceivers and telephones anywhere in the
network, via base stations, even if some of the transceivers are moving through more than
one cell

during transmission.
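As a rough illustration of the frequency-reuse idea described above, the short Python sketch below greedily assigns channel groups so that no two neighbouring cells share one; the cell layout and the number of channel groups are illustrative assumptions, and a denser layout could need more groups than this greedy pass allows.

def assign_channels(neighbours, groups=("f1", "f2", "f3")):
    """Greedy graph colouring: give each cell a channel group
    different from every already-assigned neighbour."""
    assignment = {}
    for cell in neighbours:
        used = {assignment[n] for n in neighbours[cell] if n in assignment}
        assignment[cell] = next(g for g in groups if g not in used)
    return assignment

# A small illustrative cluster of four adjacent cells.
cells = {"C1": ["C2", "C3"], "C2": ["C1", "C3", "C4"],
         "C3": ["C1", "C2", "C4"], "C4": ["C2", "C3"]}
print(assign_channels(cells))  # {'C1': 'f1', 'C2': 'f2', 'C3': 'f3', 'C4': 'f1'}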

Although originally intended for cell phones, with the development of

smartphones
,

cellular
telephone networks

routinely carry data in addition to telephone conversations



Global System for Mobile Communications (GSM): The GSM network is divided into three major systems: the switching system, the base station system, and the operation and support system. The cell phone connects to the base station system, which then connects to the operation and support station; it then connects to the switching station where the call is transferred to where it needs to go. GSM is the most common standard and is used for a majority of cell phones.



Personal Communications Service

(PCS) PCS is a radio band that can
be used by mobile
phones in North America and South Asia. Sprint happened to be the first service to set
up a PCS.



D-AMPS: Digital Advanced Mobile Phone Service, an upgraded version of AMPS, is being phased out due to advancement in technology. The newer GSM networks are replacing the older system.

Uses



Some examples of usage include cellular phones, which are part of everyday wireless networks, allowing easy personal communications. Another example is intercontinental network systems, which use radio satellites to communicate across the world. Emergency services such as the police utilize wireless networks to communicate effectively as well. Individuals and businesses use wireless networks to send and share data rapidly, whether it be in a small office building or across the world.

Properties



General



In a
general sense, wireless networks offer a vast variety of uses by both business and home
users.

"Now, the industry accepts a handful of different wireless technologies. Each

wireless
technology is defined by a standard that describes unique functions at both the Physical and
the Data Link layers of the OSI Model. These standards differ in their specified signaling
methods, geographic ranges, and frequency usages, among other
things. Such differences
can make certain technologies better suited to home networks and others better suited to
network larger organizations."

Performance



Each standard varies in geographical range, thus making one standard more ideal than the next depending on what it is one is trying to accomplish with a wireless network. The performance of wireless networks satisfies a variety of applications such as voice and video. The use of this technology also gives room for expansions, such as from 2G to 3G and, most recently, 4G technology, which stands for the fourth generation of cell phone mobile communications standards. As wireless networking has become commonplace, sophistication increases through configuration of network hardware and software, and greater capacity to send and receive larger amounts of data, faster, is achieved.

Space



Space is another characteristic of wireless networking. Wireless networks offer many advantages when it comes to difficult-to-wire areas trying to communicate, such as across a street or river, a warehouse on the other side of the premises, or buildings that are physically separated but operate as one. Wireless networks allow users to designate a certain space within which the network will be able to communicate with other devices. Space is also created in homes as a result of eliminating clutters of wiring. This technology allows for an alternative to installing physical network mediums such as twisted pairs (TPs), coaxial cables, or fiber optics, which can also be expensive.

Home



For homeowners, wireless technology is an effective option compared to Ethernet for sharing printers, scanners, and high-speed Internet connections. WLANs help save the cost of installation of cable mediums, save time from physical installation, and also create mobility for devices connected to the network. Wireless networks are simple and require as few as one single wireless access point connected directly to the Internet via a router.

Wireless Network Elements



The telecommunications network at the physical layer also consists of many interconnected wireline Network Elements (NEs). These NEs can be stand-alone systems or products that are either supplied by a single manufacturer, or are assembled by the service provider (user) or system integrator with parts from several different manufacturers.

Wireless NEs are products and devices used by a wireless carrier to provide support for the backhaul network as well as a Mobile Switching Center (MSC).

Reliable wireless service depends on the network elements at the physical layer being protected against all operational environments and applications (see the Telcordia Generic Requirements for Network Elements Used in Wireless Networks - Physical Layer Criteria).

What are especially important are the NEs that are located on the cell tower to the Base Station (BS) cabinet. The attachment hardware and the positioning of the antenna and associated closures/cables are required to have adequate strength, robustness, corrosion resistance, and rain/solar resistance for expected wind, storm, ice, and other weather conditions. Requirements for individual components, such as hardware, cables, connectors, and closures, shall take into consideration the structure to which they are attached.

2. Network management

Network management refers to the activities, methods, procedures, and tools that pertain to the operation, administration, maintenance, and provisioning of networked systems. Network management is essential to command and control practices and is generally carried out of a network operations center.



Operation deals with keeping the network (and the services that the network provides) up and running smoothly. It includes monitoring the network to spot problems as soon as possible, ideally before users are affected.

Administration deals with keeping track of resources in the network and how they are assigned. It includes all the "housekeeping" that is necessary to keep the network under control.

Maintenance is concerned with performing repairs and upgrades - for example, when equipment must be replaced, when a router needs a patch for an operating system image, or when a new switch is added to a network. Maintenance also involves corrective and preventive measures to make the managed network run "better", such as adjusting device configuration parameters.

Provisioning is concerned with configuring resources in the network to support a given service. For example, this might include setting up the network so that a new customer can receive voice service.

A common way of characterizing network management functions is FCAPS: Fault, Configuration, Accounting, Performance and Security.

Functions that are performed as part of network management accordingly include controlling, planning, allocating, deploying, coordinating, and monitoring the resources of a network, network planning, frequency allocation, predetermined traffic routing to support load balancing, cryptographic key distribution authorization, configuration management, fault management, security management, performance management, bandwidth management, route analytics and accounting management.

Data for network management is collected through several mechanisms, including agents installed on infrastructure, synthetic monitoring that simulates transactions, logs of activity, sniffers and real user monitoring. In the past, network management mainly consisted of monitoring whether devices were up or down; today, performance management has become a crucial part of the IT team's role, which brings about a host of challenges, especially for global organizations.
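As a concrete illustration of the "synthetic monitoring" mechanism mentioned above, the short Python sketch below periodically simulates a user transaction against a service and records its latency; the URL, interval, and number of probes are illustrative assumptions.

import time
import urllib.request

def probe(url, timeout=5.0):
    """Simulate one user transaction and return (ok, latency_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            ok = 200 <= response.status < 400
    except OSError:
        ok = False
    return ok, time.monotonic() - start

if __name__ == "__main__":
    # Hypothetical service endpoint; poll it a few times and log the result.
    for _ in range(3):
        ok, latency = probe("https://example.com/")
        print(f"up={ok} latency={latency:.3f}s")
        time.sleep(10)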

Note

Network management does not include user

terminal equipment
.

Technologies




A small number of access methods exist to support network and network device management. Access methods include SNMP, command-line interfaces (CLIs), custom XML, CMIP, Windows Management Instrumentation (WMI), Transaction Language 1, CORBA, NETCONF, and the Java Management Extensions (JMX).
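For instance, a minimal SNMP read might look like the sketch below, assuming the third-party pysnmp library and a device at a placeholder address with the "public" community string; this is an illustrative sketch rather than part of the original text.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Read the device's sysDescr via SNMPv2c (address and community are placeholders).
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=1),
           UdpTransportTarget(("192.0.2.1", 161)),
           ContextData(),
           ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0))))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")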

Internet service
providers

(ISP) use a technology known as

deep packet inspection

in order to
regulate

network congestion

and lessen

Internet bottlenecks
.

Schemas include the

WBEM
, the

Common Information Model
, and

MTOSI

amongst others.

Medical service providers provide a niche marketing utility for managed service providers, as HIPAA legislation consistently increases demands for knowledgeable providers. Medical service providers are liable for the protection of their clients' confidential information, including in an electronic realm. This liability creates a significant need for managed service providers who can provide secure infrastructure for the transportation of medical data.

3. Systems management

Systems management refers to enterprise-wide administration of distributed systems including (and commonly in practice) computer systems. Systems management is strongly influenced by network management initiatives in telecommunications. The application performance management (APM) technologies are now a subset of systems management. Maximum productivity can be achieved more efficiently through event correlation, system automation and predictive analysis, which are now all part of APM.

Centralized management has a time and effort trade-off that is related to the size of the company, the expertise of the IT staff, and the amount of technology being used:



For a small business startup with ten computers, automated centralized processes may take more time to learn how to use and implement than just doing the management work manually on each computer.

A very large business with thousands of similar employee computers may clearly be able to save time and money by having IT staff learn to do systems management automation.

A small branch office of a large corporation may have access to a central IT staff, with the experience to set up automated management of the systems in the branch office, without need for local staff in the branch office to do the work.

System management may involve one or more of the following tasks:

Hardware inventories.

Server availability monitoring and metrics (see the sketch after this list).

Software inventory and installation.

Anti-virus and anti-malware management.

User's activities monitoring.

Capacity monitoring.

Security management.

Storage management.

Network capacity and utilization monitoring.

Anti-manipulation management.
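The following minimal Python sketch illustrates the "server availability monitoring" task above by checking whether a TCP port answers; hostnames and ports are illustrative assumptions.

import socket

def is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical inventory of servers to watch.
servers = [("intranet.example.com", 80), ("mail.example.com", 25)]
for host, port in servers:
    status = "UP" if is_reachable(host, port) else "DOWN"
    print(f"{host}:{port} {status}")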

Functions




Functional groups are provided according to the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Common management information protocol (X.700) standard. This framework is also known as Fault, Configuration, Accounting, Performance, Security (FCAPS).

Software deployment

Accounting management

Billing

and statistics gathering

Performance management

Software metering

Security management

Identity management

Policy management

However, this standard should not be treated as comprehensive; there are obvious omissions. Some are recently emerging sectors, some are implied and some are just not listed. The primary ones are:

Business Impact functions (also known as Business Systems Management)

Capacity management

Real-time Application Relationship Discovery (which supports Configuration Management)

Security Information and Event Management functions (SIEM)

Workload scheduling

Performance management functions can also be split into end-to-end performance measuring and infrastructure component measuring functions. Another recently emerging sector is operational intelligence (OI), which focuses on real-time monitoring of business events that relate to business processes, not unlike business activity monitoring (BAM).

Standards




Distributed Management Task Force

(DMTF)

Alert

Standard Format

(ASF)

Common Information Model

(CIM)

Desktop and mobile Architecture for System Hardware

(DASH)

Directory Enabled Networking

(DEN)

Systems Management Arch
itecture for Server Hardware

(SMASH)

Java Management Extensions

(JMX)


20 marks


1. Electromagnetic force

The electromagnetic force is one of the four fundamental interactions in nature, the other three being the strong interaction, the weak interaction, and gravitation. This force is described by electromagnetic fields, and has innumerable physical instances including the interaction of electrically charged particles and the interaction of uncharged magnetic force fields with electrical conductors.

The word electromagnetism is a compound form of two Greek terms, ἤλεκτρον, ēlektron, "amber", and μαγνήτης, magnētēs, "magnet". The science of electromagnetic phenomena is defined in terms of the electromagnetic force, sometimes called the Lorentz force, which includes both electricity and magnetism as elements of one phenomenon.

The electromagnetic force is the interaction responsible for almost all the phenomena encountered in daily life, with the exception of gravity. Ordinary matter takes its form as a result of intermolecular forces between individual molecules in matter. Electrons are bound by electromagnetic wave mechanics into orbitals around atomic nuclei to form atoms, which are the building blocks of molecules. This governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms, which are in turn determined by the interaction between the electromagnetic force and the momentum of the electrons.

There are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described as electric potential and electric current in Ohm's law, magnetic fields are associated with electromagnetic induction and magnetism, and Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents.
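For reference, Maxwell's equations mentioned above can be written (in SI units, differential form) as:

\nabla \cdot \mathbf{E} = \rho / \varepsilon_0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\,\partial \mathbf{B} / \partial t, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t.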

The theoretical implications of electromagnetism, in particular the establishment of the speed of light based on properties of the "medium" of propagation (permeability and permittivity), led to the development of special relativity by Albert Einstein in 1905.
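Concretely, Maxwell's equations fix the propagation speed in vacuum in terms of those two constants:

c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.00 \times 10^{8}\ \text{m/s}.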

Originally electricity and magnetism were thought of as two separate forces. This view changed, however, with the publication of James Clerk Maxwell's 1873 Treatise on Electricity and Magnetism, in which the interactions of positive and negative charges were shown to be regulated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments (the first and third are written out as equations after this list):

1. Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: unlike charges attract, like ones repel.

2. Magnetic poles (or states of polarization at individual points) attract or repel one another in a similar way and always come in pairs: every north pole is yoked to a south pole.

3. An electric current in a wire creates a circular magnetic field around the wire, its direction (clockwise or counter-clockwise) depending on that of the current.

4. A current is induced in a loop of wire when it is moved towards or away from a magnetic field, or a magnet is moved towards or away from it, the direction of current depending on that of the movement.
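Written out, effect 1 is Coulomb's law, and the force on a charge moving in the resulting fields is the Lorentz force named earlier in this section:

F = \frac{1}{4\pi\varepsilon_0} \frac{q_1 q_2}{r^2}, \qquad
\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right).

Effect 3, the field circling a long straight wire carrying current I at distance r, is similarly captured by Ampère's law, B = \mu_0 I / (2\pi r).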

While preparing for an evening lecture on 21 April 1820,

Hans Christian Ørsted

made a
surprising observation. As he was setting up his materials, h
e noticed a

compass
needle

deflected from

magnetic north

when the electric current from the battery he was
using was switched on and off. This deflection convinced him that magnetic fields radiate
from all sides of a wire carrying an electric current, just as light and heat do, and that it
confirmed a direct rel
ationship between electricity and magnetism.

At the time of discovery, Ørsted did not suggest any satisfactory explanation of the
phenomenon, nor did he try to represent the phenomenon in a mathematical framework.
However, three months later he began more
intensive investigations. Soon thereafter he
published his findings, proving that an electric current produces a magnetic field as it flows
through a wire. The

CGS

unit of

magnetic induction

(
oersted
) is named in honor of his
contributions to the field of electromagnetism.

His findings

resulted in intensive research throughout the scientific community
in

electrodynamics
. They influenced French physicist
André
-
Marie Ampère
's developments
of a single mathematical form to represent the magnetic forces between current
-
carrying
conductors. Ørsted's discovery also represented a major step toward a unified concept of
e
nergy.

This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th century mathematical physics. It had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed in electromagnetism, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances which have been called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.

Ørsted was not the only person to examine the relation between electricity and magnetism.
In 1802

Gian Domenico Romagnosi
, an Italian le
gal scholar, deflected a magnetic needle by
electrostatic charges. Actually, no

galvanic

current existed in the setup and hence no
electromagnetism was present. An account of the

discovery was published in 1802 in an
Italian newspaper, but it was largely overlooked by the contemporary scientific community.
1

Overview




Th
e electromagnetic force is one of the four known

fundamental forces
. The other
fundamental forces are



The weak nuclear force, which binds to all known particles in the Standard Model, and causes certain forms of radioactive decay. (In particle physics, though, the electroweak interaction is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction);

the strong nuclear force, which binds quarks to form nucleons, and binds nucleons to form nuclei; and

the gravitational force.

All other forces (e.g.,

friction
) are ultimately derived from these fundamental forces and
momentum carried by the movement of particles.

The electromagnetic force is the one responsible for practically all the phenomena one
encounters in daily life a
bove the nuclear scale, with the exception of gravity. Roughly
speaking, all the forces involved in interactions between

atoms

can be explained by the
electromagnetic force acting on the
electrically charged

atomic nuclei

and

electrons

inside
and around the atoms, together with how these partic
les carry momentum by their
movement. This includes the forces we experience in "pushing" or "pulling" ordinary
material objects, which come from the

forces between the individual

molecule
s

in our
bodies and those in the objects. It also includes all forms of

chemical phenomena
.

A necessary part of understanding the intra
-
atomic to intermolecular forces is the effective
f
orce generated by the momentum of the electrons' movement, and that electrons move
between interacting atoms, carrying momentum with them. As a collection of electrons
becomes more confined, their minimum momentum necessarily increases due to the

Pauli
Exclusion Principle
. The behaviour of matter at the molecular scale including its density is
determined by the balance between the electromagnetic force and
the force generated by
the exchange of momentum carried by the electrons themselves.

Classical electrodynamics




The scientist

William Gilbert

proposed, in his

De Magnete

(1600), that electricity and
magnetism, while both capable of causing attraction and repulsion of objects, were distinct
effects.
Mariners had noticed that lightning strikes had the ability to disturb a compass
needle, but the link between lightning and electricity was not confirmed until

Benjamin
F
ranklin
's proposed experiments in 1752. One of the first to discover and publish a link
between man
-
made electric current and magnetism was

Romagnosi
, who in 1802 noticed
that connecting

a wire across a

voltaic pile

deflected a nearby

compass

needle. However,
the effect did not become widely known u
ntil 1820, when Ørsted performed a similar
experiment.
2

Ørsted's work influenced Ampère to produce a theory of electromagnetism
that set the subject on a mathematical fo
undation.

A theory of electromagnetism, known as

classical electromagnetism
, was developed by
various

physicists

over the course of the 19th century, culminating in the work of

James
Clerk Maxwell
, wh
o unified the preceding developments into a single theory and discovered
the electromagnetic nature of light. In classical electromagnetism, the electromagnetic field
obeys a set of equations known as

Maxwell's equations
, and the electromagnetic force is
given by the

Lorentz force law
.

One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in a vacuum is a universal constant, dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaces classical kinematics with a new theory of kinematics that is compatible with classical electromagnetism. (For more information, see History of special relativity.)

In addition, relativity theory shows that in moving frames of reference a magnetic field transforms to a field with a nonzero electric component and vice versa, thus firmly showing that they are two sides of the same coin, hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)

Photoelectric effect




In another paper published in that same year, Albert Einstein undermined the very foundations of classical electromagnetism. In his theory of the photoelectric effect (for which he won the Nobel prize for physics), inspired by the idea of Max Planck's "quanta", he posited that light could exist in discrete particle-like quantities as well, which later came to be known as photons. Einstein's theory of the photoelectric effect extended the insights that appeared in the solution of the ultraviolet catastrophe presented by Max Planck in 1900. In his work, Planck showed that hot objects emit electromagnetic radiation in discrete packets ("quanta"), which leads to a finite total energy emitted as black body radiation. Both of these results were in direct contradiction with the classical view of light as a continuous wave. Planck's and Einstein's theories were progenitors of quantum mechanics, which, when formulated in 1925, necessitated the invention of a quantum theory of electromagnetism. This theory, completed in the 1940s-1950s, is known as quantum electrodynamics (or "QED"), and, in situations where perturbation theory is applicable, is one of the most accurate theories known to physics.
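The quantization described above is usually summarized by Planck's relation for the energy of one quantum and Einstein's photoelectric equation for the maximum kinetic energy of an electron ejected from a surface with work function \varphi:

E = h\nu, \qquad K_{\max} = h\nu - \varphi.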




2. Cloud computing

Cloud computing is a colloquial expression used to describe a variety of different types of computing concepts that involve a large number of computers connected through a real-time communication network (typically the Internet). [1] Cloud computing is a jargon term without a commonly accepted non-ambiguous scientific or technical definition. In science, cloud computing is a synonym for distributed computing over a network and means the ability to run a program on many connected computers at the same time. The phrase is also, more commonly, used to refer to network-based services which appear to be provided by real server hardware, but which in fact are served up by virtual hardware, simulated by software running on one or more real machines. Such virtual servers do not physically exist and can therefore be moved around and scaled up (or down) on the fly without affecting the end user - arguably, rather like a cloud.

The popularity of the term can be attributed to its use in marketing to sell hosted services in the sense of application service provisioning that run client-server software on a remote location.

Advantages




Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. [2] At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.

The cloud also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically re-allocated per demand. This can work for allocating resources to users in different time zones. For example, a cloud computing facility may serve European users during European business hours with a specific application (e.g. email), while the same resources are reallocated to serve North American users during North America's business hours with another application (e.g. a web server). This approach should maximize the use of computing power and thus reduce environmental damage as well, since less power, air conditioning, rack space, and so on, is required for the same functions.

The term "moving
to cloud" also refers to an organization moving away from a
traditional

CAPEX

model (buy the dedicated hardware and depreciate it over a period of
time) to the

OPEX

model (use a shared cloud infrastructure and pay as you use it).
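A toy comparison of the two models, with purely illustrative numbers (the hardware price, depreciation period, and hourly cloud rate are assumptions, not figures from the text):

# CAPEX: buy a server outright and depreciate it over its useful life.
hardware_cost = 6000.0          # illustrative purchase price
useful_life_years = 3
capex_per_year = hardware_cost / useful_life_years

# OPEX: rent equivalent capacity and pay only for the hours actually used.
hourly_rate = 0.20              # illustrative pay-as-you-go price
hours_used_per_year = 2500      # e.g. business hours only
opex_per_year = hourly_rate * hours_used_per_year

print(f"CAPEX model: {capex_per_year:.0f} per year")   # 2000 per year
print(f"OPEX model:  {opex_per_year:.0f} per year")    # 500 per year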

Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of on infrastructure. [3] Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand. [3][4][5]

Hosted services




In marketing, cloud computing is mostly used to sell hosted services in the sense of application service provisioning that run client-server software at a remote location. Such services are given popular acronyms like 'SaaS' (Software as a Service), 'PaaS' (Platform as a Service), 'IaaS' (Infrastructure as a Service), 'HaaS' (Hardware as a Service) and finally 'EaaS' (Everything as a Service). End users access cloud-based applications through a web browser or a light-weight desktop or mobile app while the business software and user's data are stored on servers at a remote location.

History




The 1950s




The underlying concept of cloud computing dates back to the 1950s, when large
-
scale

mainframe computers

became available in academia and corporations, accessible
via

thin clients
/
t
erminal computers
, often referred to as "dumb terminals", because they
were used for communications but had no internal processing capacities. To make more
efficient use of costly mainframes, a practice evolved that allowed multiple users to share
both the

physical access to the computer from multiple terminals as well as to share
the

CPU

time. This eliminated periods of inactivity on the mainframe and allowed for a
greater return on the investment.
The practice of sharing CPU time on a mainframe became
known in the industry as

time
-
sharing
.
6

The 1960s

1990s




John

McCarthy

opined in the 1960s that "computation may someday be organized as
a

public utility
."
7

Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public, private, government, and community forms were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing's roots go all the way back to the 1950s when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers. [8] Due to the expense of these powerful computers, many corporations and other entities could avail themselves of computing capability through time sharing, and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek and Newman (BBN), marketed time sharing as a commercial venture.

The 1990s




In the 1990s, telecommunications companies
, who

previously offered primarily dedicated
point
-
to
-
point data circuits, began offering

virtual private network

(VPN) services with
comparable quality of service, but at a lower cost. By switching traffic as they saw fit to
balance server use, they

could use overall network bandwidth more effectively. They began
to use the cloud symbol to denote the demarcation point between what the provider was
responsible for and what users were responsible for. Cloud computing extends this
boundary to cover serv
ers as well as the network infrastructure.
9

As computers became more prevalent, scientists and technologists explored ways to make
large
-
scale computing power available to more u
sers through time sharing, experimenting
with algorithms to provide the optimal use of the infrastructure, platform and applications
with prioritized access to the CPU and efficiency for the end users.
10

Since 2000




After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernizing their data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving "two-pizza teams" (teams small enough to feed with two pizzas) could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006. [11][12]

In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds. [13] In the same year, efforts were focused on providing quality guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment. [14]

By mid
-
2008, Gartner saw an
opportunity for cloud computing "to shape the relationship among consumers of IT services,
those who use IT services and those who sell them"
15

and observed that "organizations are
switching from company
-
owned hardware and software assets to per
-
use service
-
based
models" so that the "projected shift to computing

... will result in dramatic growth in IT
products in some areas and significant
reductions in other areas."
16

On March 1, 2011, IBM announced the

I
BM
Smart Cloud

framework to support Smarter
Planet.
17

Among the various components of the Smarter Computing foundation, cloud
computing is a critical piece.

Growth and popularity




The development of the Internet from being document-centric via semantic data towards more and more services was described as the "Dynamic Web". [18] This contribution focused in particular on the need for better meta-data able to describe not only implementation details but also conceptual details of model-based applications.

The present availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, autonomic, and utility computing, have led to a growth in cloud computing. [19][20][21]

Financials: cloud vendors are experiencing growth rates of 90% per annum. [22]

Origin of the term




The origin of the term

cloud computing

is unclear. The expression

cloud

is commonly used in
science to describe a large agglomeration of objects that visually appear from a distance as a
cloud and describes any set of things whose details are not inspected further in a given
context.



Meteorology: a weather cloud is an agglomeration.

Mathematics: a large number of points in a coordinate system in mathematics is seen as a point cloud.

Astronomy: stars that appear crowded together in the sky are known as nebulae (Latin for mist or cloud), e.g. the Milky Way.

Physics: the indeterminate position of electrons around an atomic nucleus appears like a cloud to a distant observer.

In analogy to the above usage, the word cloud was used as a metaphor for the Internet, and a standardized cloud-like shape was used to denote a network on telephony schematics and later to depict the Internet in computer network diagrams. The cloud symbol was used to represent the Internet as early as 1994, [23][24] in which servers were then shown connected to, but external to, the cloud symbol.

References to cloud
computing in its modern sense can be found as early as 2006, with the
earliest known mention to be found in a

Compaq

internal document.
25

Urban legends

claim that usage of the expression is directly derived from the practice of
using drawings of stylized clouds to denote networks in diagrams
of computing and
communications systems or that it derived from a marketing term.

The term became
popular after

Amazon.com

introduced the

Elastic Compute Cloud

in 2006.

Similar systems and concepts




Cloud Computing is the result of evolution and adoption of existing technologies and
paradigms.

The goal of cloud computing is to allow users to take benefit from all of these
technologies, without the need for deep knowledge about or expertise with each one of
them. The cloud aims to cut costs, and help the users focus on their core business instead

of
being impeded by IT obstacles.
26

The main enabling technology for cloud computing is virtualization. Virtualization abstracts the physical infrastructure, which is the most rigid component, and makes it available as a soft component that is easy to use and manage. By doing so, virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. On the other hand, autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process and reduces the possibility of human errors. [26]

Users face difficult business problems every day. Cloud computing adopts concepts from Service-oriented Architecture (SOA) that can help the user break these problems into services that can be integrated to provide a solution. Cloud computing provides all of its resources as services, and makes use of the well-established standards and best practices gained in the domain of SOA to allow global and easy access to cloud services in a standardized way.

Cloud computing also leverages concepts from

utility computing

in order to
provide

metrics

for the services used. Such metrics are at the core of the public
cloud pay
-
per
-
use models. In addition, measured services are an essential part of the feedback loop in
autonomic computing, allowing services to scale on
-
demand and to perform automatic
failure recovery.

Cloud computing is a kind of

grid computing
; it has evolved from grid computing by
addressing the

QoS

(quality of service) and

reliability

problems. Cloud computing provides
the tools and technologies to build data/compute intensive parallel applications with much
more affordab
le prices compared to traditional

parallel computing

techniques.
26

Cloud computing shares characteristics with:

Client-server model: Client-server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requesters (clients). [27]

Grid computing: "A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."

Mainframe computer: Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, police and secret intelligence services, enterprise resource planning, and financial transaction processing. [28]

Utility computing: The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity." [29][30]

Peer-to-peer: A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client-server model).

Cloud gaming: Also known as on-demand gaming, this is a way of delivering games to computers. Gaming data is stored in the provider's server, so that gaming is independent of the client computers used to play the game.

Characteristics




Cloud computing exhibits the following key characteristics:

Agility improves with users' ability to re-provision technological infrastructure resources.

Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.



Cost
: cloud providers claim that computing costs reduce. A public
-
cloud delivery model
converts

capital expenditure

to

operational expenditure
.
31

This
purportedly
lowers

barriers to entry
, as infrastructure is typically provided by a third
-
party and does
not need to be purchased for one
-
time or infrequent intensive comp
uting tasks. Pricing
on a utility computing basis is fine
-
grained, with usage
-
based options and fewer IT skills
are required for implementation (in
-
house).
32

The e
-
FISCAL pr
oject's state
-
of
-
the
-
art
repository
33

contains several articles looking into cost aspects in more detail, most of
them concluding that costs savings depend on the type of activi
ties supported and the
type of infrastructure available in
-
house.



Device and location independence
34

enable users to access systems using a web
browser regardless of their location or what device they use (e.g., PC, mobile phone). As
infrastructure is off
-
site (typically provided by a third
-
party) and accessed via the
Internet, users
can connect from anywhere.
32



Virtualization technology allows sharing of servers and storage devices and increased utilization. Applications can be easily migrated from one physical server to another.

Multitenancy enables sharing of resources and costs across a large pool of users, thus allowing for:

centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)

peak-load capacity increases (users need not engineer for highest possible load-levels)

utilisation and efficiency improvements for systems that are often only 10-20% utilised. [11][35]



Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery. [36]

Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time, [37][38] without users having to engineer for peak loads. [39][40][41]



Performance

is monitored, and consistent and loosely coupled architectures are
constructed using

web services

as the system interface.
32



Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. [42] Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to tackle. [43] However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.

Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.

The National Institute of Standards and Technology's definition of cloud computing
identifies "five essential characteristics":

On
-
demand self
-
service
:

A consumer can unilaterally p
rovision computing capabilities, such
as server time and network storage, as needed automatically without requiring human
interaction with each service provider.

Broad network access
:

Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms
(e.g., mobile phones, tablets, laptops, and workstations).

Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. ...

Rapid elasticity
:

Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensur
ate with demand. To the
consumer, the capabilities available for provisioning often appear unlimited and can be
appropriated in any quantity at any time.

Measured service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

The self-service requirement of cloud computing prompts infrastructure vendors to create cloud computing templates, which are obtained from cloud service catalogues. Manufacturers of such templates or blueprints include BMC Software (BMC), with Service Blueprints as part of their cloud management platform; [46] Hewlett-Packard (HP), which names its templates HP Cloud Maps; [47] RightScale; [48] and Red Hat, which names its templates CloudForms. [49]

The templates contain predefined configurations used by consumers to set up cloud services. The templates or blueprints provide the technical information necessary to build ready-to-use clouds. [48] Each template includes specific configuration details for different cloud infrastructures, with information about servers for specific tasks such as hosting applications, databases, websites and so on. [48] The templates also include the predefined Web service, the operating system, the database, security configurations and load balancing. [49]

Cloud computing consumers use cloud templates to move applications between clouds through a self-service portal. The predefined blueprints define all that an application requires to run in different environments. For example, a template could define how the same application could be deployed in cloud platforms based on Amazon Web Services, VMware or Red Hat. [50] The user organization benefits from cloud templates because the technical aspects of cloud configurations reside in the templates, letting users deploy cloud services with a push of a button. [51][52] Developers can use cloud templates to create a catalog of cloud services. [53]

Service models




Cloud computing providers offer their services according to several fundamental models: [2][54] infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), where IaaS is the most basic and each higher model abstracts from the details of the lower models. Other key components in anything as a service (XaaS) are described in a comprehensive taxonomy model published in 2009, [55] such as Strategy-as-a-Service, Collaboration-as-a-Service, Business Process-as-a-Service, Database-as-a-Service, etc. In 2012, network as a service (NaaS) and communication as a service (CaaS) were officially included by the ITU (International Telecommunication Union) as part of the basic cloud computing models, recognized service categories of a telecommunication-centric cloud ecosystem. [56]

Infrastructure as a service (IaaS)




In the most basic cloud-service model, providers of IaaS offer computers - physical or (more often) virtual machines - and other resources. (A hypervisor, such as Xen or KVM, runs the virtual machines as guests. Pools of hypervisors within the cloud operational support system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements.) IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles. [57] IaaS-cloud providers supply these resources on-demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks).

To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.
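As a concrete illustration of on-demand IaaS provisioning against one of the providers listed below (Amazon EC2), the sketch assumes the boto3 SDK and uses placeholder values for region, image ID and instance type; it is illustrative only, not a prescribed procedure.

import boto3

# Placeholder region, AMI and instance type; credentials come from the
# standard AWS configuration (environment variables or ~/.aws/credentials).
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("launched:", instances[0].id)

# Releasing the resource later is just as programmatic:
# instances[0].terminate()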

Examples of IaaS providers include:

Amazon EC2
,

Google Compute Engine
,

HP
Clou
d
,

Joyent
,

Linode
,

NaviSite
,

Rackspace
,
Windows Azure

Cloud Services
,

Ready Space

Cloud Services
, and

Internap

Agile.


Cloud communications

and

cloud telephony
, rather than replacing local computing
infrastructure, replac
e local telecommunications infrastructure with

Voice over IP

and other
off
-
site Internet services.

Platform as a service (PaaS)




In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, a programming language execution environment, a database, and a web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offers, the underlying computer and storage resources scale automatically to match application demand so that the cloud user does not have to allocate resources manually.

Examples of PaaS include:

AWS Elastic Beanstalk
,

Cloud Foundry
,

Heroku
,

Force.com
,

Engine
Yard
,

Mendix
,

Open Shift
,

Google App Engine
,

AppScale
,

Windows Azure Cloud
Services
,
OrangeScape

and

Jelastic
.

Software as a service
(SaaS)




In the

business model

using
software as a service (SaaS), users are provided access to
application software and databases. Cloud providers manage the infrastructure and
platforms that run the applications. SaaS is sometimes referred to as "on
-
demand software"
and is usually priced on

a pay
-
per
-
use basis. SaaS providers generally price applications
using a subscription fee.

In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability, which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. [58] Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point. To accommodate a large number of cloud users, cloud applications can be multitenant, that is, any machine serves more than one cloud user organization. It is common to refer to special types of cloud-based application software with a similar naming convention: desktop as a service, business process as a service, test environment as a service, communication as a service.
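A minimal sketch of the load-balancing behaviour just described: requests are spread round-robin over a set of cloned virtual-machine workers behind a single access point; the worker names are illustrative assumptions.

import itertools

class RoundRobinBalancer:
    """Single access point that fans requests out over cloned workers."""
    def __init__(self, workers):
        self._workers = itertools.cycle(workers)

    def handle(self, request):
        worker = next(self._workers)
        return f"{worker} served {request}"

balancer = RoundRobinBalancer(["vm-1", "vm-2", "vm-3"])   # cloned instances
for req in ["req-a", "req-b", "req-c", "req-d"]:
    print(balancer.handle(req))
# vm-1 served req-a, vm-2 served req-b, vm-3 served req-c, vm-1 served req-d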

The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, [59] so the price is scalable and adjustable if users are added or removed at any point. [60]

Examples of SaaS include: Google Apps, Microsoft Office 365, Petrosoft, Onlive, GT Nexus, Marketo, Casengo, TradeCard, Rally Software, Salesforce, ExactTarget and CallidusCloud.

Proponents claim SaaS allows a business the potential to reduce IT operational costs by
outsourcing hardware and software maintenance and support to the cloud provider. This
enables the business to reallocate IT operations costs away from hardware/
software
spending and personnel expenses, towards meeting other goals. In addition, with
applications hosted centrally, updates can be released without the need for users to install
new software. One drawback of SaaS is that the users' data are stored on t
he cloud
provider's server. As a result, there could be unauthorized access to the data.

Network as a service (NaaS)




A category of cloud services where the capability provided to the cloud service user is to use network/transport connectivity services and/or inter-cloud network connectivity services. [61] NaaS involves the optimization of resource allocations by considering network and computing resources as a unified whole. [62]

Traditional NaaS services include flexible and extended VPN, and bandwidth on demand. [61] NaaS concept materialization also includes the provision of a virtual network service by the owners of the network infrastructure to a third party (VNP - VNO). [63][64]


3.

Cloud management


Legacy management infrastructures, which are based on the concept of dedicated system
relationships and architecture constructs, are not well suited to cloud environments where
instances are continually launched and decommissioned.
65

Instead, the dynamic nature of
cloud computing requires monitoring and management tools that are adaptable, extensible
and customizable.
66

Cloud management challenges

Cloud computing presents a number of management challenges. Companies using public clouds do not have ownership of the equipment hosting the cloud environment, and because the environment is not contained within their own networks, public cloud customers do not have full visibility or control. [66] Users of public cloud services must also integrate with an architecture defined by the cloud provider, using its specific parameters for working with cloud components. Integration includes tying into the cloud APIs for configuring IP addresses, subnets, firewalls and data service functions for storage. Because control of these functions is based on the cloud provider's infrastructure and services, public cloud users must integrate with the cloud infrastructure management. [67]

Capacity management is a challenge for both public and private cloud environments because end users have the ability to deploy applications using self-service portals. Applications of all sizes may appear in the environment, consume an unpredictable amount of resources, then disappear at any time. [68]

Chargeback, or pricing resource use on a granular basis, is a challenge for both public and private cloud environments. [69] Chargeback is a challenge for public cloud service providers because they must price their services competitively while still creating profit. [68] Users of public cloud services may find chargeback challenging because it is difficult for IT groups to assess actual resource costs on a granular basis, due to overlapping resources within an organization that may be paid for by an individual business unit, such as electrical power. [69] For private cloud operators, chargeback is fairly straightforward, but the challenge lies in estimating how to allocate resources as closely as possible to actual resource usage to achieve the greatest operational efficiency. Exceeding budgets can be a risk. [68]
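A minimal sketch of granular chargeback, assuming made-up usage figures: a shared cost such as data-center power is split across business units in proportion to their metered resource consumption.

    def chargeback(shared_cost, usage_by_unit):
        # Split a shared cost (e.g. electrical power) across business
        # units in proportion to their metered usage.
        total = sum(usage_by_unit.values())
        return {unit: shared_cost * used / total
                for unit, used in usage_by_unit.items()}

    usage_kwh = {"finance": 1200, "engineering": 3400, "marketing": 400}
    print(chargeback(shared_cost=10000.0, usage_by_unit=usage_kwh))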

Hybrid cloud environments, which combine public and private cloud services, sometimes with traditional infrastructure elements, present their own set of management challenges. These include security concerns if sensitive data lands on public cloud servers, budget concerns around overuse of storage or bandwidth, and proliferation of mismanaged images. [70]

Managing the information flow in a hybrid cloud environment is also a significant challenge. On-premises clouds must share information with applications hosted off-premises by public cloud providers, and this information may change constantly. [71] Hybrid cloud environments also typically include a complex mix of policies, permissions and limits that must be managed consistently across both public and private clouds. [71]

Cloud clients

Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets and smartphones. Some of these devices (cloud clients) rely on cloud computing for all or a majority of their applications, so as to be essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5 these web user interfaces can achieve a similar, or even better, look and feel than native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line-of-business applications that until now have been prevalent in thin client computing) are delivered via a screen-sharing technology.

Deployment models

Private cloud

Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted internally or externally. [2] Undertaking a private cloud project requires a significant level and degree of engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. [72]

Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, [73] essentially "lacking the economic model that makes cloud computing such an intriguing concept". [74][75]

Public cloud

A cloud is called a "public cloud" when the services are rendered over a network that is open for public use. Technically there is no difference between public and private cloud architecture; however, security considerations may be substantially different for services (applications, storage, and other resources) that are made available by a service provider for a public audience and when communication is effected over a non-trusted network. Generally, public cloud service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure and offer access only via the Internet (direct connectivity is not offered). [32]

Community cloud

Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party and hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost-savings potential of cloud computing is realized. [2]

Hybrid cloud

Hybrid cloud is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models. [2] Such composition expands deployment options for cloud services, allowing IT organizations to use public cloud computing resources to meet temporary needs. [76] This capability enables hybrid clouds to employ cloud bursting for scaling across clouds. [2]

Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization only pays for extra compute resources when they are needed. [77] Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and to use cloud resources from public or private clouds during spikes in processing demand. [78]
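A minimal sketch of the cloud-bursting decision, assuming an illustrative private-cloud capacity: work is served in-house up to that capacity and only the overflow is sent to (and paid for on) the public cloud.

    PRIVATE_CAPACITY = 100   # illustrative: concurrent tasks the private cloud can run

    def place_workload(tasks):
        # Fill the private cloud first; "burst" the remainder to the
        # public cloud, which is billed only when actually used.
        private = min(tasks, PRIVATE_CAPACITY)
        public = max(0, tasks - PRIVATE_CAPACITY)
        return {"private": private, "public_burst": public}

    print(place_workload(80))    # {'private': 80, 'public_burst': 0}
    print(place_workload(140))   # {'private': 100, 'public_burst': 40}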

By utilizing "hybrid cloud" architecture, companies and individuals are able to obtain degrees of fault tolerance combined with locally immediate usability, without dependency on Internet connectivity. Hybrid cloud architecture requires both on-premises resources and off-site (remote) server-based cloud infrastructure.

Critics argue that hybrid clouds lack the flexibility, security and certainty of in-house applications; [79] proponents counter that hybrid cloud provides the flexibility of in-house applications with the fault tolerance and scalability of cloud-based services.

Personal cloud

Personal cloud is an application of cloud computing for individuals, similar to a personal computer. While a vendor organization may help manage or maintain a personal cloud, it never takes possession of the data on the personal cloud, which remains under control of the individual. [80]

Distributed cloud

Cloud computing can also be provided by a distributed set of machines that are running at different locations, while still connected to a single network or hub service. Older examples of this include distributed computing platforms such as BOINC and Folding@Home, as well as newer crowd-sourced cloud providers such as Slicify.

Cloud management strategies

Public clouds are managed by public cloud service providers; this management covers the public cloud environment's servers, storage, networking and data center operations. [81] Users of public cloud services can generally select from three basic categories:

User self-provisioning: Customers purchase cloud services directly from the provider, typically through a web form or console interface. The customer pays on a per-transaction basis.

Advance provisioning: Customers contract in advance for a predetermined amount of resources, which are prepared in advance of service. The customer pays a flat fee or a monthly fee.

Dynamic provisioning: The provider allocates resources when the customer needs them, then decommissions them when they are no longer needed. The customer is charged on a pay-per-use basis (see the sketch after this list).
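A minimal sketch of dynamic provisioning, with an invented rate and resource names: resources are allocated when requested, decommissioned when no longer needed, and charged only for the time they were actually in use.

    import time

    class DynamicProvisioner:
        RATE_PER_SECOND = 0.0005   # illustrative pay-per-use rate

        def __init__(self):
            self._allocated = {}   # resource id -> allocation time

        def allocate(self, resource_id):
            self._allocated[resource_id] = time.time()

        def decommission(self, resource_id):
            started = self._allocated.pop(resource_id)
            used_seconds = time.time() - started
            return used_seconds * self.RATE_PER_SECOND   # charge for use only

    p = DynamicProvisioner()
    p.allocate("vm-worker-1")
    time.sleep(0.1)                      # pretend the customer used it briefly
    print(p.decommission("vm-worker-1"))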

Managing a private cloud requires software tools to help create a virtualized pool of compute resources, provide a self-service portal for end users and handle security, resource allocation, tracking and billing. [82] Management tools for private clouds tend to be service driven, as opposed to resource driven, because cloud environments are typically highly virtualized and organized in terms of portable workloads. [83]

In hybrid cloud environments, compute, network and storage resources must be managed across multiple domains, so a good management strategy should start by defining what needs to be managed, and where and how to do it. [70] Policies to help govern these domains should include configuration and installation of images, access control, and budgeting and reporting. [70]

Aspects of cloud management systems

A cloud management system is a combination of software and technologies designed to manage cloud environments. [84] The industry has responded to the management challenges of cloud computing with cloud management systems. HP, Novell, Eucalyptus, OpenNebula and Citrix are among the vendors that offer management systems specifically for managing cloud environments. [82]

At a minimum, a cloud management solution should be able to manage a pool of heterogeneous compute resources, provide access to end users, monitor security, manage resource allocation and manage tracking. [82]

Enterprises with large-scale cloud implementations may require more robust cloud management tools with specific characteristics, such as the ability to manage multiple platforms from a single point of reference and intelligent analytics to automate processes like application lifecycle management. High-end cloud management tools should also be able to handle system failures automatically, with capabilities such as self-monitoring, an explicit notification mechanism, and failover and self-healing. [70]
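The following sketch illustrates the self-monitoring, notification and failover behaviour described above, using invented instance names and health data rather than any real management product: a failed instance is reported and its workload is moved to a standby.

    def monitor_and_failover(health, standby):
        # health: instance name -> True (healthy) or False (failed).
        # Returns where each workload now runs, plus alert messages.
        placement, notifications = {}, []
        for name, healthy in health.items():
            if healthy:
                placement[name] = name
            else:
                placement[name] = standby
                notifications.append(
                    "ALERT: %s failed; workload moved to %s" % (name, standby))
        return placement, notifications

    placement, alerts = monitor_and_failover(
        {"app-1": True, "app-2": False, "app-3": True}, standby="app-standby")
    print(placement)
    for alert in alerts:
        print(alert)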



4. Utility computing

Utility computing is the packaging of computing resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, computational resources are essentially rented.

This repackaging of computing services became the foundation of the shift to "on demand" computing, software as a service and cloud computing models that further propagated the idea of computing, application and network as a service.

There was some initial skepticism about such a significant shift. [1] However, the new model of computing caught on and eventually became mainstream.

IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications.

Utility computing can support grid computing, which has the characteristic of very large computations or sudden peaks in demand that are supported via a large number of computers.

"Utility computing" has usually envisioned some form of

virtualization

so that the amount of
storage or computing power available is consi
derably larger than that of a single

time
-
sharing

computer. Multiple servers are used on the "back end" to make this possible. These
might be a dedicated

computer cluster

specifically built for the purpose of being rented out,
or even an under
-
utilized supercomputer. The technique of running a single calculation on
multiple computers is known as

distributed computing
.

The term "
grid computing
" is often used to describe a particu
lar form of distributed
computing, where the supporting nodes are geographically distributed or
cross

administrative domains
. To provide utility computing service
s, a company can
"bundle" the res of members of the public for sale, who might be paid with a portion of the
revenue from clients.

One model, common among volunteer computing applications, is for a central server to dispense tasks to participating nodes, at the behest of approved end users (in the commercial case, the paying customers). Another model, sometimes called the Virtual Organization (VO), is more decentralized, with organizations buying and selling computing resources as needed or as they go idle.
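A minimal sketch of the central-server model, with hypothetical task and node names: the server keeps a queue of work and hands the next task to whichever participating node asks for one.

    from collections import deque

    class TaskDispatcher:
        # Central server in a volunteer-computing style setup: nodes
        # pull tasks; the server records which node received which task.
        def __init__(self, tasks):
            self._queue = deque(tasks)
            self.assignments = {}

        def request_task(self, node_id):
            if not self._queue:
                return None                      # nothing left to compute
            task = self._queue.popleft()
            self.assignments[task] = node_id
            return task

    dispatcher = TaskDispatcher(["work-unit-%d" % i for i in range(4)])
    for node in ["node-a", "node-b", "node-a"]:
        print(node, "->", dispatcher.request_task(node))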

The definition of "utility computing" is sometimes extended to specialized tasks, such as web services.

History

Utility computing essentially means "pay and use" with regard to computing power. Utility computing is not a new concept, but rather has quite a long history. Among the earliest references is John McCarthy's 1961 suggestion that computing may someday be organized as a public utility.

IBM and other mainframe providers conducted this kind of business in the following two decades, often referred to as time-sharing, offering computing power and database storage to banks and other large organizations from their worldwide data centers. To facilitate this business model, mainframe operating systems evolved to include process control facilities, security, and user metering. The advent of minicomputers changed this business model, by making computers affordable to almost all companies. As Intel and AMD increased the power of PC-architecture servers with each new generation of processor, data centers became filled with thousands of servers.

In the late 1990s utility computing re-surfaced. InsynQ, Inc. [1] launched on-demand applications and desktop hosting services in 1997 using HP equipment. In 1998, HP set up the Utility Computing Division in Mountain View, CA, assigning former Bell Labs computer scientists to begin work on a computing power plant, incorporating multiple utilities to form a software stack. Services such as "IP billing-on-tap" were marketed. HP introduced the Utility Data Center in 2001. Sun announced the Sun Cloud service to consumers in 2000. In December 2005, Alexa launched Alexa Web Search Platform, a Web search building tool for which the underlying power is utility computing. Alexa charges users for storage, utilization, etc. There is space in the market for specific industries and applications as well as other niche applications powered by utility computing. For example, PolyServe Inc. offers a clustered file system based on commodity server and storage hardware that creates highly available utility computing environments for mission-critical applications, including Oracle and Microsoft SQL Server databases, as well as workload-optimized solutions specifically tuned for bulk storage, high-performance computing, vertical industries such as financial services, seismic processing, and content serving. The Database Utility and File Serving Utility enable IT organizations to independently add servers or storage as needed, retask workloads to different hardware, and maintain the environment without disruption.

In spring 2006, 3tera announced its AppLogic service, and later that summer Amazon launched Amazon EC2 (Elastic Compute Cloud). These services allow the operation of general-purpose computing applications. Both are based on Xen virtualization software, and the most commonly used operating system on the virtual computers is Linux, though Windows and Solaris are supported. Common uses include web applications, SaaS, image rendering and processing, but also general-purpose business applications.