AVANCER 2010 PROJECT LIST



NETWORKING AND SECURITY:


1. ON WIRELESS SCHEDULING ALGORITHMS FOR MINIMIZING THE QUEUE-OVERFLOW PROBABILITY

In this paper, we are interested in wireless scheduling algorithms for the downlink of a single cell that can minimize the queue-overflow probability. Specifically, in a large-deviation setting, we are interested in algorithms that maximize the asymptotic decay-rate of the queue-overflow probability, as the queue-overflow threshold approaches infinity. We first derive an upper bound on the decay-rate of the queue-overflow probability over all scheduling policies. We then focus on a class of scheduling algorithms collectively referred to as the "α-algorithms". For a given α ≥ 1, the α-algorithm picks the user for service at each time that has the largest product of the transmission rate multiplied by the backlog raised to the power α. We show that when the overflow metric is appropriately modified, the minimum-cost-to-overflow under the α-algorithm can be achieved by a simple linear path, and it can be written as the solution of a vector-optimization problem. Using this structural property, we then show that when α approaches infinity, the α-algorithms asymptotically achieve the largest decay-rate of the queue-overflow probability. Finally, this result enables us to design scheduling algorithms that are both close-to-optimal in terms of the asymptotic decay-rate of the overflow probability, and empirically shown to maintain small queue-overflow probabilities over queue-length ranges of practical interest.
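
The selection rule above is concrete enough to sketch. The following is a minimal illustration, not the paper's implementation: the per-user transmission rates and queue backlogs are hypothetical inputs, and ties break toward the lowest index.

    # Sketch of the alpha-algorithm rule: serve the user maximizing
    # (transmission rate) * (backlog ** alpha). Inputs are hypothetical.
    def alpha_schedule(rates, backlogs, alpha):
        scores = [r * (q ** alpha) for r, q in zip(rates, backlogs)]
        return max(range(len(scores)), key=scores.__getitem__)

    # With alpha = 1 this reduces to the classic MaxWeight rule; as alpha
    # grows, long queues dominate the choice regardless of rate.
    print(alpha_schedule([2.0, 1.0, 3.0], [4, 10, 2], alpha=2))  # -> 1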


2. EFFICIENT AND DYNAMIC ROUTING TOPOLOGY INFERENCE FROM END-TO-END MEASUREMENTS

Inferring the routing topology and link performance from a node to a set of other nodes is an important component in network monitoring and application design. In this paper we propose a general framework for designing topology inference algorithms based on additive metrics. The framework can flexibly fuse information from multiple measurements to achieve better estimation accuracy. We develop computationally efficient (polynomial-time) topology inference algorithms based on the framework. We prove that the probability of correct topology inference of our algorithms converges to one exponentially fast in the number of probing packets. In particular, for applications where nodes may join or leave frequently, such as overlay network construction, application-layer multicast, and peer-to-peer file sharing/streaming, we propose a novel sequential topology inference algorithm which significantly reduces the probing overhead and can efficiently handle node dynamics. We demonstrate the effectiveness of the proposed inference algorithms via Internet experiments.






KNOWLEDGE AND DATA MINING:


3. BINRANK: SCALING DYNAMIC AUTHORITY-BASED SEARCH USING MATERIALIZED SUBGRAPHS

Dynamic authority-based keyword search algorithms, such as ObjectRank and personalized PageRank, leverage semantic link information to provide high quality, high recall search in databases and the Web. Conceptually, these algorithms require a query-time PageRank-style iterative computation over the full graph. This computation is too expensive for large graphs, and not feasible at query time. Alternatively, building an index of precomputed results for some or all keywords involves very expensive preprocessing. We introduce BinRank, a system that approximates ObjectRank results by utilizing a hybrid approach inspired by materialized views in traditional query processing. We materialize a number of relatively small subsets of the data graph in such a way that any keyword query can be answered by running ObjectRank on only one of the subgraphs. BinRank generates the subgraphs by partitioning all the terms in the corpus based on their co-occurrence, executing ObjectRank for each partition using the terms to generate a set of random walk starting points, and keeping only those objects that receive non-negligible scores. The intuition is that a subgraph that contains all objects and links relevant to a set of related terms should have all the information needed to rank objects with respect to one of these terms. We demonstrate that BinRank can achieve subsecond query execution time on the English Wikipedia data set, while producing high-quality search results that closely approximate the results of ObjectRank on the original graph. The Wikipedia link graph contains about 10^8 edges, which is at least two orders of magnitude larger than what prior state-of-the-art dynamic authority-based search systems have been able to demonstrate. Our experimental evaluation investigates the trade-off between query execution time, quality of the results, and storage requirements of BinRank.
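
As a rough illustration of the query-time step, the sketch below runs a personalized-PageRank-style iteration over one materialized subgraph; the adjacency-list representation, damping factor, and iteration count are assumptions, not BinRank's actual parameters.

    # ObjectRank-style iteration on one materialized subgraph (a sketch).
    # subgraph: {node: [out-neighbors]}; base_set: random-walk start nodes.
    # Assumes every out-neighbor is itself a node of the subgraph.
    def object_rank(subgraph, base_set, d=0.85, iters=30):
        rank = {v: (1.0 / len(base_set) if v in base_set else 0.0)
                for v in subgraph}
        for _ in range(iters):
            nxt = dict.fromkeys(subgraph, 0.0)
            for v, out in subgraph.items():
                for w in out:                      # spread d of v's mass
                    nxt[w] += d * rank[v] / len(out)
            for v in base_set:                     # restart into the base set
                nxt[v] += (1 - d) / len(base_set)
            rank = nxt
        return rank

At query time, BinRank only has to pick the right bin's subgraph and run this computation on it instead of on the full graph.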


4. CLOSENESS: A NEW PRIVACY MEASURE FOR DATA PUBLISHING

The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain "identifying" attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of ℓ-diversity has been proposed to address this; ℓ-diversity requires that each equivalence class has at least ℓ well-represented values for each sensitive attribute. In this article, we show that ℓ-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. Motivated by these limitations, we propose a new notion of privacy called "closeness". We first present the base model t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We then propose a more flexible privacy model called (n, t)-closeness that offers higher utility. We describe our desiderata for designing a distance measure between two probability distributions and present two distance measures. We discuss the rationale for using closeness as a privacy measure and illustrate its advantages through examples and experiments.
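
The base requirement lends itself to a small check. The sketch below uses total variation distance purely for illustration; the paper itself argues for a measure such as Earth Mover's Distance, so treat the metric here as a simplifying assumption.

    # t-closeness check for one equivalence class (sketch; TVD stands in
    # for the paper's preferred distance measure).
    from collections import Counter

    def is_t_close(class_vals, table_vals, t):
        p, q = Counter(class_vals), Counter(table_vals)
        support = set(p) | set(q)
        dist = 0.5 * sum(abs(p[v] / len(class_vals) - q[v] / len(table_vals))
                         for v in support)
        return dist <= t

    # A class mirroring the table's sensitive-value distribution passes:
    print(is_t_close(["flu", "flu", "cancer"],
                     ["flu"] * 6 + ["cancer"] * 3, t=0.1))  # -> True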




5. DATA LEAKAGE DETECTION

We study the following problem: A data distributor has given sensitive data to a set of supposedly trusted agents (third parties). Some of the data is leaked and found in an unauthorized place (e.g., on the web or somebody's laptop). The distributor must assess the likelihood that the leaked data came from one or more agents, as opposed to having been independently gathered by other means. We propose data allocation strategies (across the agents) that improve the probability of identifying leakages. These methods do not rely on alterations of the released data (e.g., watermarks). In some cases we can also inject "realistic but fake" data records to further improve our chances of detecting leakage and identifying the guilty party.
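
The likelihood assessment can be sketched along these lines: a leaked record implicates an agent more strongly when few other agents also received it. The independent-acquisition probability p and the data layout are illustrative assumptions, not the paper's exact model.

    # Rough guilt estimate for one agent (sketch, hypothetical model).
    # p: probability a record could have been gathered independently.
    def guilt(leaked, agent_data, all_agents_data, p=0.2):
        innocent = 1.0
        for rec in leaked:
            if rec in agent_data:
                holders = sum(rec in d for d in all_agents_data)
                innocent *= 1 - (1 - p) / holders  # another holder may be the source
        return 1 - innocent

Fake "trap" records given to only one agent drive holders to 1 for that agent, which is exactly why injecting them sharpens identification.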



6. PAM: AN EFFICIENT AND PRIVACY-AWARE MONITORING FRAMEWORK FOR CONTINUOUSLY MOVING OBJECTS

Efficiency and privacy are two fundamental issues in moving object monitoring. This paper proposes a privacy-aware monitoring (PAM) framework that addresses both issues. The framework distinguishes itself from the existing work by being the first to holistically address the issues of location updating in terms of monitoring accuracy, efficiency, and privacy, particularly when and how mobile clients should send location updates to the server. Based on the notions of safe region and most probable result, PAM performs location updates only when they would likely alter the query results. Furthermore, by designing various client update strategies, the framework is flexible and able to optimize accuracy, privacy, or efficiency. We develop efficient query evaluation/reevaluation and safe region computation algorithms in the framework. The experimental results show that PAM substantially outperforms traditional schemes in terms of monitoring accuracy, CPU cost, and scalability while achieving close-to-optimal communication cost.
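
The safe-region idea reduces to a simple client-side test: stay silent while no query answer can change. The circular region and coordinate-pair positions below are assumptions for illustration; PAM computes safe regions from the actual registered queries.

    # Client-side location-update rule (sketch, circular safe region assumed).
    import math

    def needs_update(pos, region_center, region_radius):
        # Inside the safe region, no monitored query result can change,
        # so the client sends nothing and the server's answers stay valid.
        return math.dist(pos, region_center) > region_radius

Each update lets the server reevaluate the affected queries and issue the client a fresh safe region.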


7. P2P REPUTATION MANAGEMENT USING DISTRIBUTED IDENTITIES AND DECENTRALIZED RECOMMENDATION CHAINS

Peer-to-peer (P2P) networks are vulnerable to peers who cheat, propagate malicious code, leech on the network, or simply do not cooperate. The traditional security techniques developed for centralized distributed systems, like client-server networks, are insufficient for P2P networks by virtue of their centralized nature. The absence of a central authority in a P2P network poses unique challenges for reputation management in the network. These challenges include identity management of the peers, secure reputation data management, Sybil attacks, and above all, availability of reputation data. In this paper, we present a cryptographic protocol for ensuring secure and timely availability of the reputation data of a peer to other peers at extremely low costs. The past behavior of the peer is encapsulated in its digital reputation, and is subsequently used to predict its future actions. As a result, a peer's reputation motivates it to cooperate and desist from malicious activities. The cryptographic protocol is coupled with self-certification and cryptographic mechanisms for identity management and countering Sybil attacks. We illustrate the security and the efficiency of the system analytically and by means of simulations in a completely decentralized Gnutella-like P2P network.


8. MANAGING MULTIDIMENSIONAL HISTORICAL AGGREGATE DATA IN UNSTRUCTURED P2P NETWORKS

A P2P-based framework supporting the extraction of aggregates from historical multidimensional data is proposed, which provides efficient and robust query evaluation. When a data population is published, data are summarized in a synopsis, consisting of an index built on top of a set of subsynopses (storing compressed representations of distinct data portions). The index and the subsynopses are distributed across the network, and suitable replication mechanisms taking into account the query workload and network conditions are employed that provide the appropriate coverage for both the index and the subsynopses.


9. Deriving Concept-Based User Profiles from Search Engine Logs

User profiling is a fundamental component of any personalization application. Most existing user profiling strategies are based on objects that users are interested in (i.e., positive preferences), but not the objects that users dislike (i.e., negative preferences). In this paper, we focus on search engine personalization and develop several concept-based user profiling methods that are based on both positive and negative preferences. We evaluate the proposed methods against our previously proposed personalized query clustering method. Experimental results show that profiles which capture and utilize both the user's positive and negative preferences perform the best. An important result from the experiments is that profiles with negative preferences can increase the separation between similar and dissimilar queries. The separation provides a clear threshold for an agglomerative clustering algorithm to terminate and improve the overall quality of the resulting query clusters.


10. Parallelizing Itinerary-Based KNN Query Processing in Wireless Sensor Networks

Wireless sensor networks have been proposed for facilitating various monitoring applications (e.g., environmental monitoring and military surveillance) over a wide geographical region. In these applications, spatial queries that collect data from wireless sensor networks play an important role. One such query is the K-Nearest Neighbor (KNN) query that facilitates collection of sensor data samples based on a given query location and the number of samples specified (i.e., K). Recently, itinerary-based KNN query processing techniques, which propagate queries and collect data along a predetermined itinerary, have been developed. Prior studies demonstrate that itinerary-based KNN query processing algorithms are able to achieve better energy efficiency than other existing algorithms developed upon tree-based network infrastructures. However, how to derive itineraries for a KNN query based on different performance requirements remains a challenging problem. In this paper, we propose a Parallel Concentric-circle Itinerary-based KNN (PCIKNN) query processing technique that derives different itineraries by optimizing either query latency or energy consumption. The performance of PCIKNN is analyzed mathematically and evaluated through extensive experiments. Experimental results show that PCIKNN outperforms the state-of-the-art techniques.


11. Towards an Automatic Detection of Sensitive Information in a Database

In order to validate user requirements, tests are often conducted on real data. However, developments and tests are more and more outsourced, leading companies to provide external staff with real confidential data. A solution to this problem is known as Data Scrambling. Many algorithms aim at smartly replacing true data with false but realistic ones. However, nothing has been developed to automate the crucial task of detecting the data to be scrambled. In this paper we propose an innovative approach, and its implementation as an expert system, to achieve the automatic detection of the candidate attributes for scrambling. Our approach is mainly based on semantic rules that determine which concepts have to be scrambled, and on a linguistic component that retrieves the attributes that semantically correspond to these concepts. Since attributes cannot be considered independently from each other, we also address the challenging problem of the propagation of the scrambling across the whole database. An important contribution of our approach is to provide a semantic modeling of sensitive data. This knowledge is made available through production rules, operationalizing the sensitive data detection.


12. ViDE: A Vision-Based Approach for Deep Web Data Extraction

Deep Web contents are accessed by queries submitted to Web databases, and the returned data records are enwrapped in dynamically generated Web pages (they will be called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have inherent limitations because they are Web-page-programming-language dependent. Since Web pages are a popular two-dimensional medium, the contents on Web pages are always displayed regularly for users to browse. This motivates us to seek a different way for deep Web data extraction that overcomes the limitations of previous works by utilizing some interesting common visual features on the deep Web pages. In this paper, a novel vision-based approach that is Web-page-programming-language independent is proposed. This approach primarily utilizes the visual features on the deep Web pages to implement deep Web data extraction, including data record extraction and data item extraction. We also propose a new evaluation measure, "revision", to capture the amount of human effort needed to produce perfect extraction. Our experiments on a large set of Web databases show that the proposed vision-based approach is highly effective for deep Web data extraction.



13. An UpDown Directed Acyclic Graph Approach for Sequential Pattern Mining

Traditional pattern growth-based approaches for sequential pattern mining derive length-(k+1) patterns based on the projected databases of length-k patterns recursively. At each level of recursion, they unidirectionally grow the length of detected patterns by one along the suffix of detected patterns, which needs k levels of recursion to find a length-k pattern. In this paper, a novel data structure, UpDown Directed Acyclic Graph (UDDAG), is invented for efficient sequential pattern mining. UDDAG allows bidirectional pattern growth along both ends of detected patterns. Thus, a length-k pattern can be detected in ⌊log2 k⌋ + 1 levels of recursion at best, which results in fewer levels of recursion and faster pattern growth. When minSup is large such that the average pattern length is close to 1, UDDAG and PrefixSpan have similar performance because the problem degenerates into a frequent item counting problem. However, UDDAG scales up much better. It often outperforms PrefixSpan by almost one order of magnitude in scalability tests. UDDAG is also considerably faster than Spade and LapinSpam. Except for extreme cases, UDDAG uses memory comparable to that of PrefixSpan and less memory than Spade and LapinSpam. Additionally, the special feature of UDDAG enables its extension toward applications involving searching in large spaces.
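
The depth claim can be made concrete with a short calculation; the numbers below simply instantiate the bound quoted above.

    % Unidirectional growth adds one item per recursion level, so a
    % length-k pattern needs k levels. Bidirectional growth can extend
    % both ends, roughly doubling the length per level in the best case:
    \[
      \text{PrefixSpan-style: } k \text{ levels}
      \qquad\text{vs.}\qquad
      \text{UDDAG (best case): } \lfloor \log_2 k \rfloor + 1 \text{ levels}
    \]
    % Example: k = 16 gives 16 levels versus floor(log2 16) + 1 = 5.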


14. Domain-Driven Classification Based on Multiple Criteria and Multiple Constraint-Level Programming for Intelligent Credit Scoring

Extracting knowledge from the transaction records and the personal data of credit card holders has great profit potential for the banking industry. The challenge is to detect/predict bankruptcy and to keep and recruit the profitable customers. However, grouping and targeting credit card customers by traditional data-driven mining often does not directly meet the needs of the banking industry, because data-driven mining automatically generates classification outputs that are imprecise, meaningless, and beyond users' control. In this paper, we provide a novel domain-driven classification method that takes advantage of multiple criteria and multiple constraint-level programming for intelligent credit scoring. The method involves credit scoring to produce a set of customers' scores that makes the classification results actionable and controllable by human interaction during the scoring process. Domain knowledge and experts' experience parameters are built into the criteria and constraint functions of mathematical programming, and human-machine conversation is employed to generate an efficient and precise solution. Experiments based on various data sets validated the effectiveness and efficiency of the proposed methods.


15. Feature Selection Using f-Information Measures in Fuzzy Approximation Spaces

The selection of nonredundant and relevant features of real-valued data sets is a highly challenging problem. A novel feature selection method is presented here based on fuzzy-rough sets by maximizing the relevance and minimizing the redundancy of the selected features. By introducing the fuzzy equivalence partition matrix, a novel representation of Shannon's entropy for fuzzy approximation spaces is proposed to measure the relevance and redundancy of features suitable for real-valued data sets. The fuzzy equivalence partition matrix also offers an efficient way to calculate many more information measures, termed as f-information measures. Several f-information measures are shown to be effective for selecting nonredundant and relevant features of real-valued data sets. This paper compares the performance of different f-information measures for feature selection in fuzzy approximation spaces. Some quantitative indexes are introduced based on fuzzy-rough sets for evaluating the performance of the proposed method. The effectiveness of the proposed method, along with a comparison with other methods, is demonstrated on a set of real-life data sets.
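
The maximize-relevance, minimize-redundancy loop can be sketched generically. Below, the relevance and redundancy scores are placeholder lookups standing in for the f-information measures computed from the fuzzy equivalence partition matrix; the greedy scheme itself is a common selection pattern, not necessarily the paper's exact procedure.

    # Greedy relevance-minus-redundancy feature selection (sketch).
    # relevance[f]: score of feature f vs. the class variable.
    # redundancy[f][g]: score between features f and g.
    def select_features(features, relevance, redundancy, k):
        chosen, remaining = [], set(features)
        while remaining and len(chosen) < k:
            def score(f):
                red = (sum(redundancy[f][g] for g in chosen) / len(chosen)
                       if chosen else 0.0)
                return relevance[f] - red
            best = max(remaining, key=score)
            chosen.append(best)
            remaining.remove(best)
        return chosen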


MOBILE COMPUTING:

16. SECURE DATA COLLECTION IN WIRELESS SENSOR NETWORKS USING RANDOMIZED DISPERSIVE ROUTES

Compromised-node and denial-of-service are two key attacks in wireless sensor networks (WSNs). In this paper, we study routing mechanisms that circumvent (bypass) black holes formed by these attacks. We argue that existing multi-path routing approaches are vulnerable to such attacks, mainly due to their deterministic nature. So once an adversary acquires the routing algorithm, it can compute the same routes known to the source, and hence endanger all information sent over these routes. In this paper, we develop mechanisms that generate randomized multipath routes. Under our design, the routes taken by the "shares" of different packets change over time. So even if the routing algorithm becomes known to the adversary, the adversary still cannot pinpoint the routes traversed by each packet. Besides randomness, the routes generated by our mechanisms are also highly dispersive and energy-efficient, making them quite capable of bypassing black holes at low energy cost. Extensive simulations are conducted to verify the validity of our mechanisms.
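
The "shares" mechanism pairs naturally with a threshold split of each packet. The sketch below uses a simple XOR n-of-n split so that no single intercepted share reveals anything; the route randomization itself, which is the paper's actual contribution, is only indicated in the comments.

    # Split a packet into n XOR shares; all n are needed to reconstruct.
    import os
    from functools import reduce

    def make_shares(packet: bytes, n: int):
        shares = [os.urandom(len(packet)) for _ in range(n - 1)]
        last = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                      shares, packet)
        return shares + [last]   # each share then travels its own
                                 # independently randomized route to the sink

    def reconstruct(shares):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

An adversary sitting on a black hole captures at most a few random-looking shares, never the packet, and the routes change from packet to packet.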


17. A Novel Dual-Index Design to Efficiently Support Snapshot Location-Based Query Processing in Mobile Environments

Location-based services have become increasingly popular in recent years. Many applications aim to support a large number of users in a metropolitan area (i.e., dense networks). To cope with this challenge, we present a framework that supports location-based services on MOVing objects in road Networks (MOVNet, for short) [26]. MOVNet's dual-index design utilizes an on-disk R-tree to store the network connectivities and an in-memory grid structure to maintain moving object position updates. In this paper, we extend the functionality of MOVNet to support snapshot range queries as well as snapshot k nearest neighbor queries. Given an arbitrary edge in the space, we analyze the minimum and maximum number of grid cells that are possibly affected. We show that the maximum bound can be used in snapshot range query processing to prune the search space. We demonstrate via theoretical analysis and experimental results that MOVNet yields excellent performance with various networks while scaling to a very large number of moving objects.
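
The maximum-cells bound has a compact standard form for a straight segment on a uniform grid; the formula below is that generic geometric bound, offered as an illustration rather than the paper's exact result.

    # Upper bound on grid cells a straight edge can touch (sketch).
    import math

    def max_cells(p1, p2, cell):
        cols = abs(math.floor(p2[0] / cell) - math.floor(p1[0] / cell))
        rows = abs(math.floor(p2[1] / cell) - math.floor(p1[1] / cell))
        return cols + rows + 1      # one new cell per grid line crossed

    print(max_cells((0.5, 0.5), (3.5, 1.5), cell=1.0))  # -> 5

Pruning then only needs to inspect objects registered in those candidate cells.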


PARALLEL AND DISTRIBUTED SYSTEMS:


18. PRIVACY-CONSCIOUS LOCATION-BASED QUERIES IN MOBILE ENVIRONMENTS

In location-based services, users with location-aware mobile devices are able to make queries about their surroundings anywhere and at any time. While this ubiquitous computing paradigm brings great convenience for information access, it also raises concerns over potential intrusion into user location privacy. To protect location privacy, one typical approach is to cloak user locations into spatial regions based on user-specified privacy requirements, and to transform location-based queries into region-based queries. In this paper, we identify and address three new issues concerning this location cloaking approach. First, we study the representation of cloaking regions and show that a circular region generally leads to a small result size for region-based queries. Second, we develop a mobility-aware location cloaking technique to resist trace analysis attacks. Two cloaking algorithms, namely MaxAccu_Cloak and MinComm_Cloak, are designed based on different performance objectives. Finally, we develop an efficient polynomial algorithm for evaluating circular-region-based kNN queries. Two query processing modes, namely bulk and progressive, are presented to return query results either all at once or in an incremental manner. Experimental results show that our proposed mobility-aware cloaking algorithms significantly improve the quality of location cloaking in terms of an entropy measure without compromising much on query latency or communication cost. Moreover, the progressive query processing mode achieves a shorter response time than the bulk mode by parallelizing the query evaluation and result transmission.
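
The first of the three issues, circular cloaking, can be sketched directly: publish a circle that contains the true position at a uniformly random interior point, so the region itself reveals nothing beyond the privacy radius. The radius parameter standing in for the user's privacy requirement is an assumption.

    # Cloak an exact position into a circular region (sketch).
    import math, random

    def cloak(pos, radius):
        theta = random.uniform(0.0, 2.0 * math.pi)
        d = radius * math.sqrt(random.random())    # uniform over the disk
        center = (pos[0] - d * math.cos(theta),
                  pos[1] - d * math.sin(theta))
        return center, radius                      # region-based query uses this

The server then answers the kNN or range query for the whole circle, and the client refines the result locally.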


SECURE COMPUTING:


19. Using Web-Referral Architectures to Mitigate Denial-of-Service Threats

The web is a complicated graph, with millions of websites interlinked together. In this paper, we propose to use this web sitegraph structure to mitigate flooding attacks on a website, using a new web referral architecture for privileged service ("WRAPS"). WRAPS allows a legitimate client to obtain a privilege URL through a simple click on a referral hyperlink from a website trusted by the target website. Using that URL, the client can get privileged access to the target website in a manner that is far less vulnerable to a distributed denial-of-service (DDoS) flooding attack than normal access would be. WRAPS does not require changes to web client software and is extremely lightweight for referrer websites, which makes its deployment easy. The massive scale of the web sitegraph could deter attempts to isolate a website through blocking all referrers. We present the design of WRAPS and the implementation of a prototype system used to evaluate our proposal. Our empirical study demonstrates that WRAPS enables legitimate clients to connect to a website smoothly in spite of a very intensive flooding attack, at the cost of small overheads on the website's ISP's edge routers. We discuss the security properties of WRAPS and a simple approach to encourage many small websites to help protect an important site during DoS attacks.


SOFTWARE ENGINEERING:


20. Vulnerability Discovery with Attack Injection

The increasing reliance put on networked computer systems demands higher levels of dependability. This is even more relevant as new threats and forms of attack are constantly being revealed, compromising the security of systems. This paper addresses this problem by presenting an attack injection methodology for the automatic discovery of vulnerabilities in software components. The proposed methodology, implemented in AJECT, follows an approach similar to hackers and security analysts to discover vulnerabilities in network-connected servers. AJECT uses a specification of the server's communication protocol and predefined test case generation algorithms to automatically create a large number of attacks. Then, while it injects these attacks through the network, it monitors the execution of the server in the target system and the responses returned to the clients. The observation of an unexpected behavior suggests the presence of a vulnerability that was triggered by some particular attack (or group of attacks). This attack can then be used to reproduce the anomaly and to assist the removal of the error. To assess the usefulness of this approach, several attack injection campaigns were performed with 16 publicly available POP and IMAP servers. The results show that AJECT could effectively be used to locate vulnerabilities, even on well-known servers tested throughout the years.
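
The injection loop itself is straightforward to outline. In this sketch the attack payloads, the POP3-style reply check, and the timeout are illustrative assumptions; AJECT's real test-case generation works from a full protocol specification and also monitors the server process itself.

    # Minimal attack-injection loop against a network server (sketch).
    import socket

    def inject(host, port, attacks, timeout=2.0):
        suspicious = []
        for payload in attacks:
            try:
                with socket.create_connection((host, port), timeout=timeout) as s:
                    s.sendall(payload)
                    reply = s.recv(4096)
            except (OSError, socket.timeout) as exc:
                # A crash or hang on this input hints at a triggered fault.
                suspicious.append((payload, repr(exc)))
                continue
            if not reply.startswith((b"+OK", b"-ERR")):  # POP3-style answers
                suspicious.append((payload, reply[:80]))
        return suspicious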



21. On Event-Based Middleware for Location-Aware Mobile Applications

As mobile applications become more widespread, programming paradigms and middleware architectures designed to support their development are becoming increasingly important. The event-based programming paradigm is a strong candidate for the development of mobile applications due to its inherent support for the loose coupling between components required by mobile applications. However, existing middleware that supports the event-based programming paradigm is not well suited to supporting location-aware mobile applications in which highly mobile components come together dynamically to collaborate at some location. This paper presents a number of techniques, including location-independent announcement and subscription coupled with location-dependent filtering and event delivery, that can be used by event-based middleware to support such collaboration. We describe how these techniques have been implemented in STEAM, an event-based middleware with a fully decentralized architecture, which is particularly well suited to deployment in ad hoc network environments. The cost of such location-based event dissemination and the benefits of distributed event filtering are evaluated.
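
Location-dependent filtering reduces to a membership test at delivery time. The dictionary shapes and circular scopes below are assumptions for illustration; STEAM's actual filters are richer and are evaluated in a distributed fashion.

    # Deliver an event only to nearby, matching subscribers (sketch).
    import math

    def deliver(event, subscribers):
        return [s for s in subscribers
                if event["type"] in s["topics"]            # subject filter
                and math.dist(s["pos"], event["origin"])   # location filter
                    <= event["scope"]]

Because each filter can run near the subscriber, events can be scoped without any centralized broker.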


CLOUD COMPUTING:


22. An Efficient Hybrid Peer-to-Peer System for Distributed Data Sharing

Peer-to-peer overlay networks are widely used in distributed systems. Based on whether a regular topology is maintained among peers, peer-to-peer networks can be divided into two categories: structured peer-to-peer networks in which peers are connected by a regular topology, and unstructured peer-to-peer networks in which the topology is arbitrary. Structured peer-to-peer networks usually can provide efficient and accurate services but need to spend a lot of effort in maintaining the regular topology. On the other hand, unstructured peer-to-peer networks are extremely resilient to frequent peer joining and leaving, but this is usually achieved at the expense of efficiency. The objective of this work is to design a hybrid peer-to-peer system for distributed data sharing which combines the advantages of both types of peer-to-peer networks and minimizes their disadvantages. The proposed hybrid peer-to-peer system is composed of two parts: the first part is a structured core network which forms the backbone of the hybrid system; the second part is made of multiple unstructured peer-to-peer networks each of which is attached to a node in the core network. The core structured network can narrow down the data lookup within a certain unstructured network accurately, while the unstructured networks provide a low-cost mechanism for peers to join or leave the system freely. A data lookup operation first checks the local unstructured network, and then the structured network. This two-tier hierarchy can decouple the flexibility of the system from the efficiency of the system. Our simulation results demonstrate that the hybrid peer-to-peer system can utilize both the efficiency of the structured peer-to-peer network and the flexibility of the unstructured peer-to-peer network and achieve a good balance between the two types of networks.
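
The lookup order described above is captured in a few lines. The flood() and dht_route() calls are placeholders for the unstructured and structured halves of the system, so this is only a sketch of the control flow.

    # Two-tier data lookup in the hybrid system (sketch).
    def lookup(key, local_cluster, core):
        hit = local_cluster.flood(key)        # cheap, scope-limited search
        if hit is not None:
            return hit
        node = core.dht_route(key)            # structured core picks the one
        return node.cluster.flood(key)        # unstructured network to search

The core thus pays the maintenance cost only for its own regular topology, while peers churn freely inside the attached clusters.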


23. A Model for Reducing Power Consumption in Peer-to-Peer Systems

Information systems based on the cloud computing model and the peer-to-peer (P2P) model are now getting popular. In the cloud computing model, a cloud of servers supports thin clients with various types of services like Web pages and databases. On the other hand, in the P2P model every computer is a peer and there is no centralized coordinator. It is getting more significant to discuss how to reduce the total electric power consumption of computers in information systems to realize an eco-society. In this paper, we consider a Web type of application on P2P overlay networks. First, we discuss a model for showing how much electric power each server peer consumes to perform Web requests from client peers. Then, we discuss algorithms for a client peer to select a server peer in a collection of server peers so that the total power consumption can be reduced while some constraint, like a deadline, is satisfied. Lastly, we evaluate the algorithms in terms of the total power consumption and throughput compared with traditional round-robin algorithms.
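
The selection step can be outlined as constrained minimization: filter by deadline, then pick the cheapest server in power terms. The estimator methods on the server objects are assumptions standing in for the paper's power-consumption model.

    # Deadline-constrained, power-aware server selection (sketch).
    def select_server(servers, request):
        feasible = [s for s in servers
                    if s.estimated_latency(request) <= request.deadline]
        if not feasible:
            return None                       # constraint cannot be met
        return min(feasible, key=lambda s: s.expected_power(request))

A round-robin baseline ignores both terms, which is why a power-aware policy can reduce total consumption at comparable throughput.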



WEB APPLICATION:


24. A Solution Model and Tool for Supporting the Negotiation of Security Decisions in E-Business Collaborations

Sharing, comparing, and negotiating security-related actions and requirements across businesses has always been a complicated matter. Issues arise due to semantic gaps, disparity in security documentation and formats, and incomplete security-related information during negotiations, to say the least. As collaborations amongst e-businesses in particular increase, there is a growing, implicit need to address these issues and ease companies' deliberations on security. Our research has investigated this topic in substantial detail, and in this paper we present a novel solution model and tool for supporting businesses through these tasks. Initial evaluation results and feedback from interviewed security professionals affirm the use and suitability of our proposals in supporting the security actions negotiation process.