Improved Delivery of Dynamic Documents In Edge Networks


Proceedings of the International Conference "Computational Systems and Communication Technology", Jan. 9, 2009, by Lord Venkateshwaraa Engineering College, Kanchipuram Dt., PIN-631605, INDIA

Copyright @CSE/IT/ECE/MCA-LVEC-2009


Improved Delivery of Dynamic Documents In Edge Networks


D. Ram Kumar
Department of Computer Science
Jaya Engineering College
ramduraikumar@yahoo.co.in

S. Godfrey Winster
Department of Information Technology
Jaya Engineering College
godfrey_17ngl@yahoo.com

Abstract:

Edge computing has emerged as a popular mechanism for delivering dynamic Web content to clients. However, many existing edge cache networks have not been able to harness the full potential of edge computing technology. In this project, we demonstrate cooperation among the individual edge caches coupled with scalable server-driven document consistency mechanisms. There are many research challenges in designing large-scale cooperative edge cache networks. Towards addressing these challenges, this project uses a cooperative edge cache (EC) grid. The design of the cooperative EC grid focuses on the scalability and reliability of dynamic content delivery in addition to cache hit rates. We introduce the concept of cache clouds and an intermediate server as a generic framework for cooperation in large-scale edge cache networks. The architectural design of the cache clouds includes dynamic hashing-based document lookup and update protocols, which dynamically balance lookup and update loads among the caches in the cloud and the intermediate server.


I. INTRODUCTION

The rapid increase of dynamic web content over the past few years has created a serious challenge to the WWW in terms of performance and scalability. The research community found a reasonable solution in caching at the edge of the network. The underlying concept of edge caching is to bring data, and possibly some parts of the application, closer to the user. By adopting cost-effective cooperation, the performance of the edge cache network in delivering dynamic web content is improved in several ways: cooperative miss handling, collaborative document freshness maintenance, and cooperative resource management. The growth of dynamic web content also imposes a heavy load on server and network infrastructure. To reduce the effect of this increased usage, there are different techniques for web caching. Web caching reduces the load on the server, reduces the overall network bandwidth required, and lowers the latency of responses. Content Delivery Service Providers (CDSPs) retrieve content from the origin sites and distribute it to servers on the edge of the network, i.e., edge servers. The distribution may be based on push technologies (multicasting through satellite links) or pull technologies (such as those used by proxies). The objective is to decrease the latency of user access to objects by delivering the objects from an edge server close to the user. The delivery of dynamic documents in edge networks is improved by combining the following concepts: 1. efficient creation of edge cache groups, 2. cooperative caching in edge networks, and 3. request redirection in content delivery networks over a scalable web architecture.

This paper is organized as follows: Section 2 describes the efficient creation of edge cache groups, Section 3 describes cooperative caching in edge networks, Section 4 describes request redirection in content delivery networks, Section 5 deals with edge caching/offloading for dynamic content delivery, Section 6 describes the experimental setup, and Section 7 presents the conclusion and future work.


II. EFFICIENT CREATION OF EDGE CACHE GROUPS

Designing an edge cache network has many challenges: finding the appropriate number of edge caches for the edge cache network and their locations, and dividing the edge caches into cooperative groups such that the cooperation is efficient and effective. Our study shows that the two important parameters that need to be considered while forming cooperative cache groups are: 1. the network proximities of the edge caches, and 2. the network distances of the various caches to the origin server.



There are two edge cache clustering techniques discussed in [14]:

1. The Selective Landmarks-based group formation scheme (SL scheme) is a clustering technique that uses Internet landmarks to cluster the caches into cooperative groups based on their mutual network proximities.

2. The Server Distance sensitive Selective Landmarks scheme (SDSL scheme) is a clustering technique in which clustering is based on both cache proximities and server distances. SDSL is found to perform better than SL.

Consider an edge cache network with an origin server Os and N edge caches. Let the set of edge caches be represented as EcSet = {Ec0, Ec1, ..., EcN−1}. The locations of the edge caches and the server are pre-decided. We partition the caches into K disjoint groups, represented as CGSet = {CGroup0, CGroup1, ..., CGroupK−1}, where K is a pre-specified parameter. Let Size(CGroupi) give the number of edge caches in CGroupi. A factor affecting the performance of the cooperative edge cache network is the communication cost. The caches belonging to a group interact with one another very frequently. This leads us to one of the important performance criteria that needs to be optimized while forming cache groups, namely, the average group interaction cost. The average group interaction cost is the average cost of transferring documents between any two caches of the same group. Let the interaction cost between two edge caches Eci and Ecj (ICost(Eci, Ecj)) be the cost of transferring an average-sized document between edge caches Eci and Ecj. The group interaction cost of a cooperative group CGroupl (GICost(CGroupl)) is the average of the interaction costs of all pairs of edge caches belonging to the cooperative group CGroupl. The average group interaction cost of the edge cache network is the mean of the group interaction costs of all groups within the edge cache network. The average group interaction cost is a measure of the efficiency of cooperation in the edge network.
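For illustration, the two cost measures defined above can be computed directly from a pairwise interaction-cost matrix. The matrix values and the grouping below are hypothetical toy data, not measurements from the paper:

```python
from itertools import combinations

def group_interaction_cost(group, icost):
    """GICost: average ICost over all pairs of caches in one group."""
    pairs = list(combinations(group, 2))
    return sum(icost[i][j] for i, j in pairs) / len(pairs)

def avg_group_interaction_cost(groups, icost):
    """Mean of the group interaction costs across all K groups."""
    return sum(group_interaction_cost(g, icost) for g in groups) / len(groups)

# Toy example: 4 edge caches with symmetric pairwise transfer costs (ms).
icost = [
    [0, 2, 9, 8],
    [2, 0, 7, 9],
    [9, 7, 0, 3],
    [8, 9, 3, 0],
]
groups = [[0, 1], [2, 3]]  # two cooperative groups (K = 2)
print(avg_group_interaction_cost(groups, icost))  # (2 + 3) / 2 = 2.5
```

A lower result indicates cheaper intra-group document transfers, i.e., more efficient cooperation.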


Server Distance Sensitive Selective Landmarks Scheme

If the caches of a group are in close proximity, the group interaction cost of the cache group will be low, and vice versa. The SDSL scheme works in three steps:

(1) Choosing a high-quality landmark set.

(2) Determining the relative node positions by probing the landmarks.

(3) Creating groups by clustering the edge caches.


Choosing a High-Quality Landmark Set

The first step in constructing the cache groups is to choose a set of landmarks, which collectively serve as the frame of reference. The caches and the origin server of the cooperative edge cache network measure their network distance to these landmark nodes in order to determine their relative positions.





Figure 1: Choosing High-Quality Landmarks for Cloud Construction


One of the most important properties of a good landmark set is that the landmark nodes have to be well dispersed among the set of edge caches. If the landmarks are well distributed, then the position information obtained by using them is more accurate, thereby yielding better-quality cache groups. One way of ensuring that the landmarks are well distributed is to choose them such that the minimum distance between any two nodes in the landmark set (denoted as MinDist(LmSet)) is maximized.
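A common heuristic for this max-min objective is greedy farthest-point selection; the sketch below assumes a precomputed distance matrix and is only one way to approximate maximizing MinDist(LmSet), not necessarily the selection procedure used in [14]:

```python
def choose_landmarks(nodes, dist, k):
    """Greedy farthest-point selection: grow the landmark set so that
    MinDist(LmSet), the minimum pairwise distance, stays large."""
    lm = [nodes[0]]  # seed with an arbitrary node
    while len(lm) < k:
        # pick the node whose nearest chosen landmark is farthest away
        best = max((n for n in nodes if n not in lm),
                   key=lambda n: min(dist[n][m] for m in lm))
        lm.append(best)
    return lm

# Toy distance matrix over 5 candidate nodes.
dist = [
    [0, 1, 4, 9, 8],
    [1, 0, 3, 8, 7],
    [4, 3, 0, 5, 6],
    [9, 8, 5, 0, 2],
    [8, 7, 6, 2, 0],
]
print(choose_landmarks(list(range(5)), dist, 3))  # → [0, 3, 2]
```

The greedy choice guarantees the selected landmarks are mutually well separated, which is exactly the dispersion property argued for above.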

Determining Relative Positions of Nodes

The next step of the SDSL scheme finds the relative positions of the server and the caches in the edge cache network. This is done using the landmarks from the previous step. In the SDSL scheme the relative positions of the nodes are represented using simple feature vectors. There are various techniques for representing the relative positions of the
for representing the relative positions of the


nodes in wide-area networks [12, 13, 08]. The edge caches construct their respective feature vectors by probing the landmark nodes multiple times and recording the average RTT values to each of them. The top portion of Figure 2 shows this step of the SL scheme, and it also indicates the feature vectors of all the edge caches in our example network.

Figure 2: Determining Feature Vectors and K-means Clustering
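The probing step can be sketched as follows; the probe function, the latency values, and the node names are all hypothetical stand-ins for real RTT measurements:

```python
import random

def feature_vector(cache, landmarks, measure_rtt, probes=5):
    """A cache's feature vector: the average RTT to each landmark,
    averaged over several probes to smooth out transient jitter."""
    return [sum(measure_rtt(cache, lm) for _ in range(probes)) / probes
            for lm in landmarks]

# Stand-in probe: a base latency per (cache, landmark) pair plus noise.
BASE = {("EcA", "L1"): 10.0, ("EcA", "L2"): 40.0,
        ("EcB", "L1"): 42.0, ("EcB", "L2"): 12.0}

def fake_rtt(cache, lm):
    return BASE[(cache, lm)] + random.uniform(-1.0, 1.0)

print(feature_vector("EcA", ["L1", "L2"], fake_rtt))  # ≈ [10, 40]
```

Two caches with similar vectors sit at similar network positions relative to the landmarks, which is what the clustering step exploits.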

Creating Cache Groups through Clustering

The final step of the SDSL scheme clusters the edge caches into K groups on the basis of their feature vectors. This step utilizes the well-known K-means clustering algorithm to group the edge caches into cooperative groups [14]. While a number of architectures, protocols and techniques have been developed to improve cooperation within cache groups, the problem of partitioning the edge caches into cooperative groups has not received much attention. We found that the server distance sensitive selective landmarks scheme provides significant performance benefits.
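As a minimal sketch, standard K-means (Lloyd's algorithm) over the feature vectors looks like this; the four toy vectors are hypothetical average-RTT pairs, and the simple first-k initialisation is for illustration only:

```python
def kmeans(vectors, k, iters=20):
    """Minimal K-means (Lloyd's algorithm) over cache feature vectors."""
    centroids = vectors[:k]  # simple initialisation for the sketch
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            # assign each cache to its nearest centroid (squared distance)
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(v, centroids[c])))
            groups[nearest].append(v)
        # recompute each centroid as the mean of its group
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else centroids[i]
            for i, g in enumerate(groups)]
    return groups

# Feature vectors (avg RTTs to two landmarks) for four edge caches.
fvs = [[10, 40], [12, 38], [41, 11], [39, 13]]
print(kmeans(fvs, 2))  # two proximity-based cache groups
```

Caches with similar landmark RTTs end up in the same group, giving groups with low interaction cost.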

III. COOPERATIVE CACHING IN EDGE NETWORKS

For the potential benefits to be high, it is essential to design an efficient cooperative edge caching scheme. If an edge cache receives a request for a document that is not available locally, it tries to retrieve the document from nearby caches rather than contacting the origin server immediately. Retrieving a document from a nearby cache can reduce the latency of a local miss. It also reduces the load on the origin server by reducing the number of requests reaching the remote servers. Another benefit of cooperation among edge caches is the reduction in the load generated by document consistency maintenance. When the caches are arranged as cooperative groups, the server can communicate an update message to a single cache in a cache group, which then distributes it to the other edge caches within its group. Creating a large-scale cooperative edge cache network poses several challenges. (1) An effective mechanism is needed to calculate the number of edge caches needed and the locations where they have to be placed. (2) These caches need to be organized into cooperative groups such that the cooperation is effective. (3) A dynamic and adaptive architecture is required for efficient cooperation within each cache group for dynamic content delivery. Here we discuss:

(1) the architecture of the cache clouds;

(2) a dynamic hashing-based cooperation scheme for efficient document lookups and document updates within each cache cloud;

(3) a utility-based document placement scheme for strategically placing documents among the caches of a cache cloud.

Cache Clouds

A cache cloud is a group of edge caches from an edge network that cooperate among themselves for efficient delivery of dynamic web content. The caches in a cache cloud cooperate in several ways to improve the performance of edge cache networks. First, when a cache experiences a miss, it tries to retrieve the document from another cache within the cache cloud, instead of immediately contacting the server. Second, the caches in a cache cloud collaboratively share the cost of document updates, i.e., the server needs to send a document update message to only one cache in the cache cloud, which is then distributed to the other caches that are currently holding the document. Third, the edge caches in a cache cloud collaborate with each other to optimally utilize their collective resources by adopting smart strategies for document lookups, updates, placements and replacements. A cache that needs to retrieve a document needs to locate the copies of the document existing within the cache cloud. We refer to the mechanism of locating the copies of a document within a cache cloud for the purpose of retrieving it as the document lookup protocol. The mechanism used by the cache


cloud to communicate a document update to all of its caches that are currently holding the document is the document update protocol.

Each cache is responsible for handling the lookup and update operations for a set of documents assigned to it. In a cache cloud, if the cache Pc_i is responsible for the lookup and update operations of a document Dc, then we call the cache Pc_i the beacon point of Dc. The beacon point of a document maintains the up-to-date lookup information, which includes a list of caches in the cloud that currently hold the document. A cache that requires document Dc contacts Dc's beacon point and obtains its lookup information. Then it retrieves the document from one of the caches currently holding the document. Similarly, if the server wants to update document Dc, it sends an update message to Dc's beacon point, which then distributes this message to all the holders of the document.

Consider a cache cloud with N edge caches. All these caches have lookup information about a set of documents. In the dynamic hashing scheme [15], the assignment of documents to beacon points can vary over time so that load balance is maintained even when the load patterns change. In our scheme, the edge caches of a cache cloud are organized into subgroups called beacon rings. A cache cloud has one or more beacon rings, and each beacon ring has two or more beacon points. Figure 3 shows a cache cloud with 4 beacon rings, where each beacon ring has 2 beacon points. All the beacon points in a particular beacon ring are collectively responsible for maintaining the lookup information of a unique set of documents.


Figure 3. Architecture of Edge Cache Cloud
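The static part of the document-to-ring mapping can be sketched with a plain hash; note that the dynamic hashing scheme of [15] additionally rebalances this assignment over time, which this illustration omits, and the ring layout and URLs are hypothetical:

```python
import hashlib

def beacon_ring(doc_url, num_rings):
    """Hash a document's URL to the beacon ring responsible for it."""
    digest = hashlib.sha1(doc_url.encode()).hexdigest()
    return int(digest, 16) % num_rings

def beacon_points(doc_url, rings):
    """All beacon points of the document's ring share its lookup/update
    state, so the lookup survives a single beacon point failing."""
    return rings[beacon_ring(doc_url, len(rings))]

# Cache cloud as in Figure 3: 4 beacon rings, 2 beacon points each.
rings = [["Ec0", "Ec1"], ["Ec2", "Ec3"], ["Ec4", "Ec5"], ["Ec6", "Ec7"]]
print(beacon_points("http://example.com/quote?sym=XYZ", rings))
```

Every cache can evaluate the same hash locally, so a requesting cache finds a document's beacon ring without any directory queries.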


Document Placement in Cache Clouds

A good document placement scheme is essential for a cache cloud to optimally utilize the available resources. A simple document placement scheme would be to place a document at each cache that has received a request for that document. This is the ad hoc document placement scheme. It may lead to uncontrolled replication of documents, causing higher disk-space contention. Another approach to document placement, called beacon point caching, would be to store each document only at its beacon point. This policy results in the beacon points of hot documents encountering heavy loads.

We now consider the design of a utility-based document placement scheme, in which the caching decisions rely upon the utility of a document copy to the cache storing it and to the entire cache cloud. The utility value of a copy of document Dc is represented as Utility(Dc). It estimates the benefit-to-cost ratio of storing and maintaining the new copy. A higher utility value indicates that the benefits outweigh the costs, and vice versa. When a cache retrieves a document, it calculates the utility value and decides whether or not to store the document based on this value. Utility is calculated using four factors:

Document Availability Improvement Component: quantifies the improvement in the availability of the document in the cache cloud achieved by storing the document copy at Pci.

Disk-Space Contention Component: captures the storage costs of caching the document copy at Pci in terms of the disk-space contention at Pci.

Consistency Maintenance Component: accounts for the costs incurred for maintaining the consistency of the new document copy at Pci.

Access Frequency Component: quantifies how frequently the document Dc is accessed in comparison to other documents stored in the cache.
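One plausible way to combine the four components into a benefit-to-cost ratio is sketched below; the exact weighting used in the cache clouds scheme [15] may differ, so the formula, parameter names, and threshold here are illustrative assumptions:

```python
def utility(availability_gain, access_freq, disk_contention, update_cost):
    """Illustrative benefit-to-cost ratio for storing a copy of Dc:
    benefit = availability improvement x how often Dc is accessed;
    cost    = disk-space contention + consistency-maintenance load.
    (Assumed combination -- the paper's exact formula may differ.)"""
    benefit = availability_gain * access_freq
    cost = disk_contention + update_cost
    return benefit / cost

def should_store(u, threshold=1.0):
    # store only when the estimated benefits outweigh the costs
    return u >= threshold

u = utility(availability_gain=0.8, access_freq=5.0,
            disk_contention=1.0, update_cost=1.0)
print(u, should_store(u))  # 2.0 True
```

With any such formulation, hot, cheap-to-maintain documents are replicated widely, while cold or frequently updated documents stay near their beacon points.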

In this section we have studied the challenges of designing a cooperative edge network for caching dynamic web content, and proposed the cache clouds as a framework for cooperation in large-scale edge cache networks.




IV. REQUEST REDIRECTION IN CONTENT DELIVERY NETWORKS

This section discusses user-specific redirection of user requests in Content Delivery Networks (CDNs) for personalizing responses. User-specific request redirection is defined as the process of redirecting user requests to a server on the content delivery network based on user-specific information carried in the user request. The redirection schemes in CDNs are based either on the authoritative DNS model or the URL rewrite model. User-specific request redirection refers to the process of redirecting requests for the same set of embedded objects in a top-level page, arriving from different users, to different CDSs, based on information contained in the user request. The information found in the user requests that could be used includes the user IP address, cookies present in the user request, user browser information, etc. Other information that is not contained in the user request could be used to make redirection decisions as well. This includes the time of day, the priority of the user, the cost of accessing the CDSP network and the performance of the CDSP network.

Content Delivery Service Providers (CDSPs) [1,2,3,4,5] distribute content from origin sites to cache servers on the edge of the network and deliver content to the users from these edge servers (referred to as Content Delivery Servers or CDSs). The distribution mechanism can be based on push technologies, such as multicasting the data to all the edge servers through terrestrial or satellite links, or pull technologies, such as those used by proxies. The goal is to decrease the latency of user access to the objects by delivering the objects from an edge server closest to the user.
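A user-specific redirection decision can be sketched as a small dispatch over request attributes; the cookie name, tier names, and CDS hostnames below are hypothetical:

```python
def pick_cds(request, cds_by_tier):
    """Route a request to a CDS using user-specific information carried
    in the request (a cookie-borne priority here; IP address, browser,
    or time of day could feed the same decision)."""
    priority = request.get("cookies", {}).get("priority", "standard")
    return cds_by_tier.get(priority, cds_by_tier["standard"])

cds_by_tier = {"premium": "cds-fast.example.net",
               "standard": "cds-std.example.net"}

gold = {"ip": "203.0.113.7", "cookies": {"priority": "premium"}}
anon = {"ip": "198.51.100.9", "cookies": {}}
print(pick_cds(gold, cds_by_tier))  # cds-fast.example.net
print(pick_cds(anon, cds_by_tier))  # cds-std.example.net
```

The same top-level page thus resolves to different edge servers for different users, which is the essence of user-specific redirection.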

Authoritative DNS: One approach is for the CDSP to "take over" the DNS functionality of the origin site and become the authoritative DNS for the origin site. This approach is easy to implement, but the problem is that all the objects from the domain that has been taken over need to be served from the CDSs.

URL Rewrite: In this approach, the authoritative DNS functionality stays with the origin site's DNS. Any top-level page requested by a user is served from the origin server. But before the page is served, all the embedded links found in the top-level page are rewritten to point to the CDSP DNS so that requests for embedded objects can be redirected by the CDSP DNS to the closest CDS.

Service differentiation using DNS hierarchy: With user-specific redirection, it becomes possible for the content provider to instruct the CDSP to provide different service levels to different users.

Service differentiation using URL prefix: If URL rewriting is used to redirect user requests, then when URLs are rewritten, different prefixes can be added to the URL path to indicate to the CDS the service level that should be provided to the user.
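The URL rewrite model with a service-level prefix can be sketched as follows; the hostnames and the single-attribute regex are simplified stand-ins for a real HTML rewriter:

```python
import re

def rewrite_links(html, cdsp_host, tier):
    """URL-rewrite model: point embedded objects of a top-level page at
    the CDSP's DNS, prefixing the path with the user's service tier so
    the CDS can apply the corresponding service level."""
    def repl(m):
        return 'src="http://%s/%s%s"' % (cdsp_host, tier, m.group(1))
    return re.sub(r'src="http://[^/"]+(/[^"]*)"', repl, html)

page = '<img src="http://origin.example.com/img/logo.gif">'
print(rewrite_links(page, "cdsp-dns.example.net", "premium"))
# <img src="http://cdsp-dns.example.net/premium/img/logo.gif">
```

The top-level page still comes from the origin, while each embedded object is fetched from a CDS that can read the service tier directly out of the URL path.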


V. EVALUATION OF EDGE CACHING/OFFLOADING FOR DYNAMIC CONTENT DELIVERY

Dynamic content delivery needs architectural change in tandem. In particular, resources that are already deployed near the client, such as proxies that are otherwise underutilized for such content, should be employed. The strategies include offloading some of the processing to the proxy, or simply enhancing its caching abilities to cache fragments of dynamic pages and perform page composition. Although there is research on optimizations for dynamic content processing and caching, there is still a lack of understanding of which offloading and caching strategies are best, and of their design/deployment tradeoffs.

In this section, using a representative e-commerce benchmark [7,16], we study several partitioning strategies. We found that offloading and caching at edge proxy servers has advantages without pulling the database out near the client. The results show that, under typical user browsing patterns and network conditions, a two- to three-fold latency reduction can be achieved. Also, over 70% of server requests are filtered at the proxies, resulting in server load reduction. This advantage can be obtained largely by caching dynamic page fragments and composing the page at the proxy. Advanced offloading strategies can be very complex and can even hurt performance if not done carefully. If end-to-end security


is in place for a particular application, then offload all the way up to the database; otherwise, augment the proxy with page fragment caching and page composition.

The main objectives are to reduce the client response time, network traffic and server load caused by the high volume of requests over wide-area links. Fragment caching is an effective technique to accelerate current Web applications, which usually generate heterogeneous contents with complex layouts. It is provided by today's common application server products such as Microsoft ASP.NET and IBM WebSphere Application Server. ESI proposes caching fragments at CDN stations to further reduce network traffic and response time.

Application offloading is another way to improve performance. In Active Cache, it is proposed that a piece of code be associated with a resource and be cacheable as well. The cache executes the code over the cached object on behalf of the server and returns the result directly to the client when the object is requested at a later time. With the blurring of application and data on the current Web, this scheme becomes less effective. To do more aggressive application offloading, the WebSphere Edge Services Architecture suggests that portions of the application, such as the presentation tier and business logic tier, be pushed to the edge server and communicate with the remaining application at the origin server when necessary via the application offload runtime engine. The full application is replicated on the edge server, and database accesses are handled by a data cache which can cache query results and fulfill subsequent queries by means of query containment analysis without going to the back end. We focus on the proxies that are already installed near clients. We also confine our examination to offloading and caching of anything other than the database content, as we believe mature technologies to manage hard states in a scalable fashion across wide areas are yet to be developed. To the best of our knowledge, we are the first to report design and implementation tradeoffs involved in devising partitioning and offloading strategies, along with detailed evaluations. There has also been no work evaluating offloading versus advanced caching mechanisms. Finally, this is the first work we know of that experiments with the .NET framework in this respect.


OFFLOADING AND CACHING OPTIONS ENUMERATED

There are a number of issues to be considered for distributing, offloading and caching dynamic content processing and delivery:

1) the available resources and their characteristics,

2) the nature of these applications, and

3) a set of design criteria and guidelines.

In this section, we discuss these issues in turn.

Resources Where Offloading Can Be Done

Figure 4 graphically shows the various resources involved.

Figure 4. Resources available for offloading and caching


Client. As a user-side agent, the client (typically a browser) is responsible for some of the presentation tasks; it can also cache some static content such as images and logos. The number of clients is potentially large; however, they usually have limited capacities and are (generally speaking) not trusted.

Proxies. In terms of scale, proxies come second. Proxies are placed near the clients and are thus far from the server end. The typical functionalities of proxies include firewalling and caching of static content. They are usually shared by many clients and are reasonably powerful and stable. However,


except in the case of intranet applications, content providers do not have much control over them.

Reverse Proxies. Reverse proxies are placed near the back-end server farm and act as an agent of the application provider. They serve Web requests on behalf of the back-end servers. Content providers can fully control their behavior. However, the scale of reverse proxies only goes as far as a content provider's network bandwidth allows. In this paper, we consider them part of the server farm.

Server. Servers are where the content provider has full control. In the context of this section, we speak of the "server" as one logical entity. However, as shall become clear later, the "server" itself is a tiered architecture comprised of many machines hosting the various tiers of the Web application.

As far as dynamic content is concerned, typically only the servers and clients are involved. Proxies, as of today, are incapable of caching and processing dynamic content. In this discussion, we have also omitted CDN stations, as we believe they can be logically considered an extension of either proxies or reverse proxies.


VI. EXPERIMENT SETUP

A user with a particular information need submits a query. The client requests are routed to a nearby cache. When an edge cache receives a request for a document, the search process is carried out. First it checks for the document in the local caches; if it is not available locally, it tries to retrieve the document from nearby caches and from the intermediate server rather than immediately contacting the remote server. First the document is searched for at the beacon point and retrieved from there. If the content is not at the beacon point, the search continues across all nearby cache clouds and the document is retrieved from one of them. If the content is not in the cloud, the search moves to the intermediate server. If the content is not in the intermediate server, the search finally falls through to the origin server. The recently viewed document is stored at the beacon point, so if the user searches for the same document again, the origin server is not contacted: the beacon point itself holds the data. If the user requests another document, the previous document is moved to the cache clouds, and the recently viewed document is stored at the beacon point. The most frequently accessed pages are moved to the intermediate server. When many requests are posted by users, only a limited number of previously viewed pages remain in the cache clouds; older pages are moved out of cache cloud memory. For the most frequently accessed pages, if previously cached contents were often moved out of the cache clouds, they would have to be fetched again and again from the origin server. Instead, we identify the most frequently accessed pages and store them at the intermediate server. When a request arrives for such a page, the search is made first at the beacon point, then in the cache cloud, and the page is then retrieved from the intermediate server. Using this server, frequent access to the origin server is avoided for frequently accessed pages.
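The search order described above can be sketched as a tiered lookup; the dictionaries below stand in for the actual network hops, and the document names are hypothetical:

```python
def lookup(doc, beacon, cloud, intermediate, origin):
    """Search order used in the experiment: beacon point, then nearby
    cache clouds, then the intermediate server, then the origin server.
    A document fetched from the origin is kept at the beacon point."""
    for tier, store in [("beacon", beacon), ("cloud", cloud),
                        ("intermediate", intermediate)]:
        if doc in store:
            return tier, store[doc]
    content = origin[doc]      # last resort: origin server
    beacon[doc] = content      # recently viewed -> beacon point
    return "origin", content

beacon, cloud = {}, {"a.html": "<a>"}
intermediate = {"hot.html": "<hot>"}
origin = {"a.html": "<a>", "hot.html": "<hot>", "new.html": "<new>"}
print(lookup("new.html", beacon, cloud, intermediate, origin))  # origin hit
print(lookup("new.html", beacon, cloud, intermediate, origin))  # beacon hit
```

The second request for the same document is answered by the beacon point, which is exactly why repeated accesses avoid the origin server.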


VII. CONCLUSION AND FUTURE WORK

In the proposed system, our cooperative EC grid architecture systematically addresses various aspects of cooperation such as collaborative miss handling, cooperative consistency management, efficient and failure-resilient document lookups and updates, and cost-sensitive document placements. The cooperative EC grid utilizes cache cooperation in multiple ways to enhance dynamic content delivery, and it provides stronger document consistency guarantees through low-cost, server-driven protocols. Further, the low-cost cache cooperation techniques can improve the efficiency and scalability of these consistency maintenance schemes. Our dynamic hashing scheme can be viewed as a combination of directory and hashing techniques, wherein documents are mapped to beacon points via the dynamic hashing mechanism, and each beacon point maintains up-to-date lookup information on the documents that are mapped to it. These schemes are very effective for dynamic content delivery.




REFERENCES

[01] Akamai Technologies, "http://www.akamai.com/en/html/services/content_delivery.html".

[02] Speedera Network, "http://www.speedera.com/technology/technologyimplement.html".

[03] Mirror Image Internet, "http://www.mirror-image.com/services/instacontent_datasheet.html".

[04] Cable and Wireless, "http://www.exodus.net/solutions/cdns/index.html".

[05] Web-Caching, "http://www.web-caching.com/cdns.html".

[06] J. Challenger, P. Dantzig, A. Iyengar, M.S. Squillante, and L. Zhang, "Efficiently Serving Dynamic Data at Highly Accessed Web Sites," IEEE/ACM Trans. Networking, vol. 12, no. 2, Apr. 2004.

[07] A. Datta, K. Dutta, H. Thomas, D. VanderMeer, Suresha, and K. Ramamritham, "Proxy-Based Acceleration of Dynamically Generated Content on the World Wide Web: An Approach and Implementation," Proc. ACM SIGMOD Int'l Conf. Management of Data, 2002.

[08] F. Dabek, R. Cox, F. Kaashoek, and R. Morris, "Vivaldi: A Decentralized Network Coordinate System," Proc. ACM SIGCOMM 2004, Aug. 2004.

[09] L. Fan, P. Cao, J. Almeida, and A. Broder, "Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol," Proc. ACM SIGCOMM '98, 1998.

[10] A. Iyengar, E. Nahum, A. Shaikh, and R. Tewari, "Web Caching, Consistency, and Content Distribution," The Practical Handbook of Internet Computing, M.P. Singh, ed., Chapman and Hall/CRC Press, 2005.

[11] A.K. Jain, M.N. Murty, and P.J. Flynn, "Data Clustering: A Review," ACM Computing Surveys, 31(3), 1999.

[12] E. Ng and H. Zhang, "Predicting Internet Network Distance with Coordinates-Based Approaches," Proc. IEEE INFOCOM, June 2002.

[13] A. Ninan, P. Kulkarni, P. Shenoy, K. Ramamritham, and R. Tewari, "Scalable Consistency Maintenance in Content Distribution Networks Using Cooperative Leases," IEEE Trans. Knowledge and Data Eng., vol. 15, no. 4, July/Aug. 2003.

[14] L. Ramaswamy, L. Liu, and J. Zhang, "Efficient Formation of Edge Cache Groups for Dynamic Content Delivery," Proc. Int'l Conf. Distributed Computing Systems (ICDCS), 2006.

[15] L. Ramaswamy, L. Liu, and A. Iyengar, "Cache Clouds: Cooperative Caching of Dynamic Documents in Edge Networks," Proc. 25th Int'l Conf. Distributed Computing Systems (ICDCS '05), 2005.

[16] S. Rangarajan, S. Mukherjee, and P. Rodriguez, "User Specific Request Redirection in a Content Delivery Network," Proc. Int'l Workshop Web Content Caching and Distribution (IWCW), 2003.

[17] S. Michel, K. Nguyen, A. Rosenstein, and L. Zhang, "Adaptive Web Caching: Towards a New Caching Architecture."

[18] C. Yuan, Y. Chen, and Z. Zhang, "Evaluation of Edge Caching/Offloading for Dynamic Content Delivery," IEEE Trans. Knowledge and Data Eng., vol. 16, no. 11, Nov. 2004.

[19] J. Zhang, L. Liu, C. Pu, and M. Ammar, "Reliable Peer-to-Peer End System Multicasting through Replication," Proc. IEEE P2P, 2004.