Faculty of Technology and Environment
School of Computing and Mathematical Sciences

18th Annual Postgraduate Research Conference

Wednesday 14 & Thursday 15 March 2012
To be held in Room 705, Byrom Street









ANNUAL POSTGRADUATE RESEARCH CONFERENCE

Wednesday 14th March 2012

Opening and Welcome by Professor Madjid Merabti (09:15 - 09:30)
(Tea/Coffee from 9.00am)

Session Chair: Brett Lempereur
Session Rapporteur: Dr Kashif Kifayat
TIME   NAME              YEAR  TITLE

9.30   Andrew Attwood    3rd   Mobility and state protection in wireless sensor networks

9.50   Chelsea Dobbins   2nd   Peer-to-Peer Mobile Services for Capturing Human Digital Memories

10.10  Amal Fadak        1st   Design framework of dialogue generation for digital interactive storytelling

10.30  Mohssen Ghaderi   2nd   State Consistency in Large Scale Distributed Systems

Tea/Coffee Break (10.50 - 11.10)


Session Chair: Farooq Alam
Session Rapporteur: Dr Simon Cooper

11.10  William Hurst   2nd  An Intelligent Middleware For Critical Infrastructure Support

11.30  Andrew Jones    1st  Towards Resilient Critical Infrastructure

11.50  Zaynab Ahmed    3rd  Enhanced Computation Time for Fast Block Matching Algorithm

12.10  Ibrahim Idowu   2nd  Ad-hoc cloud networks: a probabilistic model for vulnerability detection in critical infrastructure using Bayesian networks

Lunch (12.30 - 13.30, Post Room)



Session Chair: Stewart Blakeway
Session Rapporteur: Dr Michael Mackay

13.30  Mohd Rizuan Baharon  1st     An Overview of Cloud Computing - Benefits, Threats and Research Directions

13.50  Mark Evans           4th PT  Policy Signatures of Viable Autonomous Agents in Highly Dynamic Environments

14.10  Ricardo Duarte       2nd     Novel Framework for 3D Embodied Character Animation

14.30  Aine MacDermott      1st     Intrusion detection for critical infrastructure protection

Tea/Coffee Break (14:50 - 15:10)


Session Chair: Muhammad Zahid Khan
Session Rapporteur: Dr Chris Carter

15.10  Abdulraqeb Al-Selwi  2nd  Cyber Security and Bio-Inspiration Network Intrusion Monitoring

15.30  Paul Hanley          1st  Development of a Prognostics Engine for the Determination of Human Biological Signals

15.50  Michael Kennedy      3rd  A Framework to assess System of Systems security composition

Close 16:10


Thursday 15th March 2012

(Tea/Coffee from 9.00am)

Session Chair: Andrew Attwood
Session Rapporteur: Dr David Lamb


TIME   NAME                 YEAR  TITLE

9.30   Andrew Hardy         3rd   A platform for distributed energy management of appliances

9.50   Stathis Goudoulakis  2nd   Study of Planning and Re-planning Algorithms for Digital Interactive Storytelling

10.10  Philip England       1st   Trust Management in Ad-hoc Networks: A Survey

10.30  Behnam Bazli         3rd   Using Network Structure to Enhance Security and Trust of Ubiquitous Computing Services

Tea/Coffee Break (10.50 - 11.10)


Session Chair: Rob Hegarty
Session Rapporteur: Dr Bo Zhou

11.10  Laura Pla Beltran  2nd  Using a Multiplayer Game Interface to Manage Critical Infrastructure Protection

11.30  Mark Sabino        1st  Infrastructure-as-a-Service for Systems-of-Systems in Crisis and Repair

11.50  Rabea F Kurdi      3rd  A Readiness Study into E-Government Information Systems Migration to Cloud Computing

12.10  Shamaila Iram      2nd  Personalised Healthcare: Computational Data Modelling for the Detection of Alzheimer’s

Lunch (12.30 - 13.30, Post Room)


Session Chair: Chunlin Song
Session Rapporteur: Dr Paul Fergus

13.30  Haya Alaska                1st  An Investigation into Clustering of High Dimension Data

13.50  Norulzahrah Mohd Zainudin  3rd  Online Social Networks As Supporting Evidence: A Digital Forensic Investigation Model and Its Application Design

14.10  Emma Higgins               2nd  Developing a Statistical Methodology for Improved Identification of Geographical Areas at Risk of Accidental Dwelling Fires

14.30  Simon Chambers             1st  Data mining in real-world health scenarios

Tea/Coffee Break (14:50 - 15.10)


Session Chair: Chris Dennett
Session Rapporteur: Dr Muhammad Asim

15.10  Faisal Alsrheed  3rd  Autonomic negotiation mechanism to support service level agreement management for cloud computing

15.30  Nathan Shone     2nd  Real-time System-of-Systems Security Monitoring

15.50  Hector Ruiz      3rd  Application examples of the Fisher information metric

Close 16:10

An Intelligent Middleware for Critical Infrastructure Support

William Hurst

w.hurst@2009.ljmu.ac.uk



The current threat levels facing critical infrastructures are higher than ever before. Not only are critical infrastructures having to cope with changing environmental conditions, but the volume of sophisticated cyber-attacks is starting to put a strain on the defences currently in place. In addition, factoring in the consequences of critical infrastructure failure only reinforces the need to ensure effective protection is in place to safeguard the continuing delivery of the essential services they provide. It is, therefore, increasingly important to ensure that the protection measures being used are continually evolving to keep up to date with new and emerging threats. In response, we propose an innovative approach towards critical infrastructure support based on using behavioural observation techniques to develop an effective way of supporting security. This approach involves the development of an intelligent middleware which will be combined with current control systems to create a SCADA+ system. The aim of the research within these areas is to have an impact on national security for critical infrastructures such as power plants, transport systems and industrial production, to name a few, and to improve on the security each currently uses.









Trust Management in Ad-hoc Networks: A Survey

Philip England

P.England@2011.ljmu.ac.uk



Distributed ad-hoc networks with no centralised node of command, such as MANETs and sensor networks, offer an interesting challenge in establishing trust between nodes for the purposes of network security. As a subjective measure of belief about behaviour, trust values can be calculated and propagated between nodes to promote confidence between cooperating nodes while identifying untrustworthy and misbehaving nodes that can cause considerable issues within such networks.

This presentation will give a summary of current approaches and general considerations when developing such systems. This will include: an initial overview of the history and background of trust and trust management; a summary of the trust metrics used to record levels of trust; trust dynamics and how trust can change with time, experience and state, with an outline of trust propagation, trust aggregation and trust prediction; before discussing the ideas behind the application of trust management in previous research.

Further to this, a summary of some of the areas in the field requiring further research will be explored. A brief reflection will be made on the potential of incorporating social context and network dynamics into trust management, as well as considering how the general behavioural properties of mobile nodes are key to the computational calculation of trust, and why this opens the greater challenge of how to determine trust between heterogeneous nodes with differing properties.
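To make the trust metrics and aggregation mentioned above concrete, the sketch below shows one common family of approaches from the literature, not the survey's own method: a beta-reputation style direct trust value blended with trust-discounted recommendations. All names and the 0.7 weighting are illustrative assumptions.

```python
# Hypothetical sketch: beta-reputation direct trust plus weighted aggregation
# of neighbours' recommendations, each discounted by trust in the recommender.

class TrustRecord:
    def __init__(self):
        self.positive = 0  # cooperative interactions observed
        self.negative = 0  # misbehaviour observed (e.g. dropped packets)

    def direct_trust(self):
        # Expected value of a Beta(positive + 1, negative + 1) distribution.
        return (self.positive + 1) / (self.positive + self.negative + 2)

def aggregate_trust(direct, recommendations, weight=0.7):
    """Blend own observation with recommendations, given as a list of
    (trust_in_recommender, recommended_value) pairs."""
    if not recommendations:
        return direct
    indirect = sum(t_rec * val for t_rec, val in recommendations) \
               / sum(t_rec for t_rec, _ in recommendations)
    return weight * direct + (1 - weight) * indirect

r = TrustRecord()
r.positive, r.negative = 8, 2
print(round(r.direct_trust(), 2))  # 0.75
print(round(aggregate_trust(r.direct_trust(), [(0.9, 0.6), (0.5, 0.8)]), 3))  # 0.726
```

Under this kind of scheme, a node whose trust value falls below a threshold can be excluded from routing decisions, which is one way misbehaving nodes are isolated in such systems.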







Game Content Model: An Ontology for Documenting Serious
Game Design

Stephen Tang

O.T.Tang@ljmu.ac.uk



Computer games are a form of real-time interactive software wrapped in creatively crafted media that offers game-players engaging, goal-directed play. Designing computer games requires adequate experience and great attention to detail to describe the rules, play and aesthetics that compose the interactive experience. For inexperienced game designers, formalised methods such as game design languages and game meta-models can provide a guide and language to produce a game design specification that is correct by design. This paper introduces a new game content model that can aid game designers in documenting game design specifications.







Transformational eGovernment Success Through Enhanced Project Management

Shauneen Furlong

SFurlong@territorialcommunications.com


The first decade of eGovernment was a dot.com era of high hope and heavy promise. Much has been done, but much more remains. While eGovernment's first decade has arguably been much more transactional than transformational, radical changes affecting eGovernment are needed in this decade: culture; relationships with all stakeholders; organizational arrangements; business processes; and resource management. Proponents of transformational eGovernment must be dedicated to this complicated work that lies before them. To this end, this research uncovered and documented a composite and holistic compendium of the ten interrelated eGovernment challenges and barriers that constrain transformational eGovernment progress, and it proposed a solution for them. It proposed an enhancement to the Project Management Institute (PMI, USA) Project Management Body of Knowledge (PMBOK) methodology which will contribute to the theory and practice of transformational eGovernment.




A Platform for Distributed Energy Management of Appliances

Andrew Hardy


A.Hardy@2009.ljmu.ac.uk


Appliances are a very significant contributor to electricity demand. Climate change, the switch to renewable generation and the rising cost of energy have made energy management a pressing concern. Appliances can often waste energy by operating when their situation does not require it. More appliances should be connected to environmental sensing, enabling them to operate more appropriately and save energy. In addition, huge reserves have been necessary to meet daily peaks in demand, but these reserves are now depleting and renewable generation is more volatile. Appliances ought to collaborate with each other, with off-peak storage and with personal generation, to schedule demand against supply and minimize peaks.

There are many other energy management challenges, but tailored solutions to single problems have limited value. What is needed is effective and empowering communication, but communication is changing. The future internet of things will support more machine-to-machine communication and greater device autonomy, and prospective smart appliances will make local decisions in response to energy management events. In light of this we have designed a distributed communication architecture to marshal a potential wealth of energy management data and present it efficiently to interested appliances and devices.

The communication architecture employs data-centric routing with a dedicated energy management naming scheme. The communication mechanism has a common base but operates in a number of different modes to support the varied energy management applications. The network is ad-hoc, and communication is between many potentially heterogeneous appliances and devices; for this reason we are designing a dedicated link cost scheme to optimize routing for this setting.

We are working on a number of test-bed case studies in which we plan to demonstrate significant outcomes for a number of energy management and energy saving goals. While our particular design provides an effective tool to address these functional goals, it goes further by offering significant impact over existing solutions in accessibility, usability and portability. We plan also to demonstrate these qualities and, in particular, a significant contribution to simplification in deployment.
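The abstract's own naming scheme is not specified, but data-centric routing generally means that appliances subscribe to named data rather than to hosts. The sketch below illustrates that idea with an invented "energy/..." name hierarchy; the names and classes are assumptions for illustration only.

```python
# Hypothetical sketch of name-based (data-centric) matching: an event is
# delivered to every appliance whose subscribed name prefix matches the
# published name, with no host addresses involved.

class NameRouter:
    def __init__(self):
        self.subscriptions = []  # list of (prefix_tuple, appliance)

    def subscribe(self, name_prefix, appliance):
        self.subscriptions.append((tuple(name_prefix.split("/")), appliance))

    def publish(self, name, payload):
        """Return the appliances whose subscription prefix matches `name`."""
        parts = tuple(name.split("/"))
        return [app for prefix, app in self.subscriptions
                if parts[:len(prefix)] == prefix]

router = NameRouter()
router.subscribe("energy/price", "washing-machine")
router.subscribe("energy/peak-warning", "storage-heater")
print(router.publish("energy/price/offpeak", {"pence_per_kwh": 7}))
# -> ['washing-machine']
```

A link cost scheme such as the one the abstract mentions would then sit underneath this layer, deciding which radio hops carry each named event.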










Data mining in real
-
world health scenarios

Simon Chambers

S.J.Chambers@ljmu.ac.uk


Healthcare providers in the 21st century are constantly improving both the quality and the quantity of the data that they collect and archive. Traditional techniques used by researchers within the Medical and Public Health communities are woefully inadequate when presented with hundreds of millions of personal medical records.

Models have been developed which allow researchers to delve into these huge data sources and attempt to tease out relevant information and relationships between not just patients, but also variables. This presentation will focus on the problems inherent in real-world public health data, and how this impacts the choice of appropriate modelling solutions to obtain insights.





Mobility and state protection in wireless sensor networks

Andrew Attwood

A.Attwood@2006.ljmu.ac.uk



Ubiquitous networks will provide seamless access to systems and services provided by the Internet of Things. It is envisaged that devices will cooperate to provide greater system knowledge than the sum of its parts. Devices that have a specific intention at the time of installation will provide additional functions and services to higher level systems, creating emergent overall system behaviour. In an emergency situation, the flow of data across the Internet of Things may be disrupted, giving rise to a requirement for machine-to-machine interaction within the remaining ubiquitous environment. Devices can make decisions based on interactions with other machines’ state. There is a requirement that a device can distribute its service state in a robust manner, in order to ensure the optimal state of the system as a whole. It is imperative that this state is available to other services even when the original device is disrupted or destroyed, in order to allow system operation during or after a partial system failure or interruption. Additionally, post-failure analysis will consume and interpret last-known state data from devices that are no longer available. This will require self-organisation and entirely distributed operation, without any element of central organisation or control responsibility. We propose DASM (Distributed Application State Management), a routing and state rendezvous mechanism that uses a hashing technique to manage physical location, routing and optimal state distribution to preserve distributed data in the case of a (series of) localised failures. To manage the mobility of Internet of Things devices, without the constraints of a single point of failure, a new mobility protocol is required. We propose IoMANETs (Indirection Overlay for MESH Ad Hoc Networks).

THE IMPLEMENTATION OF COGNITIVE ATTRIBUTES IN A MOBILE AD-HOC NETWORK TO BETTER MEET MISSION OBJECTIVES

Stewart Blakeway

blakews@hope.ac.uk

Mobile Ad-Hoc Networks can be defined as networks with no predefined infrastructure that support end-to-end communications through all nodes within the network participating in the forwarding of packets; the nodes are usually mobile and are therefore often battery powered. MANETs are easily deployed and offer versatility not found in conventional wired or wireless networks; this versatility does, however, present challenges in areas such as routing, quality of service and allocation of spectrum. A cognitive communications network is able to adapt to changes and predicted changes in the network or the application; the network should also be able to remember the consequences of any actions it imposes. This memory of past actions and consequences should help the network make better informed future decisions. A control loop gathers information and analyses it to determine whether resources are being used efficiently and whether the needs of the application are being met.

This research presents a MANET infrastructure with the implementation of a Control Mechanism that forms the basis of a control loop consisting of four stages: observe, learn, plan and execute. It is intended that the Control Mechanism role will be performed by a neighbouring node of an active node in a MANET infrastructure, to help alleviate the routing, quality of service and spectrum allocation challenges faced by traditional MANETs, in order to better meet mission objectives and to improve end-to-end delay.
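The four-stage loop above can be sketched in miniature. This is not the author's Control Mechanism; the metrics (end-to-end delay), the actions ("reroute"/"hold") and the memory structure are assumptions chosen only to show how remembered outcomes feed back into planning.

```python
# Hypothetical sketch of an observe -> learn -> plan -> execute control loop
# with a memory of past action outcomes, as named in the abstract.

class CognitiveControlLoop:
    def __init__(self):
        self.memory = {}  # action -> list of observed end-to-end delays (ms)

    def observe(self, network):
        return {"delay_ms": network["delay_ms"], "load": network["load"]}

    def learn(self, action, outcome_delay):
        self.memory.setdefault(action, []).append(outcome_delay)

    def plan(self, state, actions):
        # Explore any untried action first; otherwise pick the action with
        # the best (lowest) remembered average delay.
        untried = [a for a in actions if a not in self.memory]
        if untried:
            return untried[0]
        return min(actions,
                   key=lambda a: sum(self.memory[a]) / len(self.memory[a]))

    def execute(self, network, action):
        # Stub: a real mechanism would reconfigure routes or spectrum here.
        return network["delay_ms"] - (5 if action == "reroute" else 0)

loop = CognitiveControlLoop()
network = {"delay_ms": 40, "load": 0.6}
for _ in range(3):
    state = loop.observe(network)
    action = loop.plan(state, ["reroute", "hold"])
    loop.learn(action, loop.execute(network, action))
print(loop.plan(loop.observe(network), ["reroute", "hold"]))  # prints "reroute"
```

After a few iterations the loop's memory steers it towards the action with the historically lower delay, which is the "better informed future decisions" behaviour the abstract describes.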




Novel Framework for 3D Embodied

Character Animation

Ricardo Duarte

R.L.Duarte@2010.ljmu.ac.uk



Virtual Character Animation provides a vast area of research in which both academia and industry have spent the past 30 years stretching the boundaries of what is considered to be a realistic character.

As a research area, Virtual Character Animation has been the subject of numerous studies, such as work in 3D modeling and rendering that attempts to increase realism, and work on virtual character behavior and animation.

The Moving Picture Experts Group (MPEG), in the late 90’s and early 2000’s, created the MPEG-4 facial animation (FA) specification in an effort to gather the advances achieved until then and use them to create a standard. The standard’s goal is to provide research guidance and a framework for future research work.

In our research project we propose to investigate existing techniques in facial animation and discuss their strengths and weaknesses in order to devise novel approaches.

Our objectives are to create a new animation framework, Charisma, and to gather a set of techniques that establishes new directions within facial animation. Our framework combines the advances of the MPEG-4 standard with the latest advances in 3D graphic rendering and character animation, in an attempt to achieve a highly realistic and efficient model capable of simulating expressions and voice synchronization in real-time. We will focus not only on showing where existing solutions failed to give satisfactory results but will also contribute to the creation of novel approaches for 3D Embodied Character Animation.

The work completed so far proposes a novel approach: an adaptive LOD algorithm which prioritizes the human face features in the facial mesh, as defined in MPEG-4, and also simplifies animation streams, reducing animation complexity and overhead. Our approach is to apply continuous LOD to an MPEG-4 compliant FA model using an extension of Garland’s quadric-based surface simplification algorithm, and to evaluate the framework and discuss the potential gain in performance when we apply our techniques to a high LOD 3D rigged model. This presentation will introduce the research conducted in the past year, presenting our published results, and the future research we will undertake to complete the thesis work.
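The quadric error metric underlying Garland's algorithm (the assumed background here, not the Charisma extension itself) is compact enough to sketch: each plane p = (a, b, c, d) of a triangle incident to a vertex contributes a quadric Q = p pᵀ, and the cost of placing the vertex at v is vᵀQv in homogeneous coordinates. Edge contractions with the lowest cost are applied first.

```python
# Illustrative sketch of the quadric error metric used by quadric-based
# surface simplification: the error of a vertex position is its squared
# distance to the accumulated planes of its incident triangles.
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental 4x4 error quadric for the plane ax + by + cz + d = 0
    (with unit normal)."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, v):
    vh = np.append(np.asarray(v, dtype=float), 1.0)  # homogeneous coords
    return float(vh @ Q @ vh)

# A vertex lying on the plane z = 0 has zero error; moving it off the plane
# costs the squared distance to that plane.
Q = plane_quadric(0, 0, 1, 0)
print(vertex_error(Q, (2.0, 3.0, 0.0)))  # 0.0
print(vertex_error(Q, (2.0, 3.0, 0.5)))  # 0.25
```

A face-prioritizing extension of the kind the abstract describes would, plausibly, scale these costs so that contractions near MPEG-4 facial feature points become more expensive and are simplified last; that weighting is the author's contribution and is not reproduced here.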





A Readiness Study into E-Government Information Systems Migration to Cloud Computing

Rabea Kurdi

R.Kurdi@2009.ljmu.ac.uk

The fast pace of change and rapidly evolving customer requirements in the use of government services have forced governments to consider improving or developing new methods to deliver their services. Widespread uptake of E-government applications has therefore emerged as a major factor in government interaction with the citizen, and there is wide scope for further adoption in future administrations over the coming years. Moreover, the e-government wave, along with cloud computing, seems to present a significant opportunity to improve government services by providing them over the internet. However, in analysing current research in the field of e-government it can be observed that there are limitations to the use of e-government services, that there are many issues facing e-government adoption, and that e-government readiness is a major concern. The recently completed literature review by the author indicated that there is a gap between practice and theory, identified by the absence of a comprehensive assessment framework for e-government systems and readiness in both the public and private sectors; most of the assessment frameworks reviewed for the study vary in terms of philosophies, objectives, methodologies, approaches and results. This implies that no one assessment framework is likely to cover all e-government readiness aspects. To this end, this study aims to develop a comprehensive framework of associated guidelines and tools to support e-government Information Systems (EGIS) readiness, with a specific focus on EGIS migration to the Cloud Computing provisioning model. Moreover, the proposed framework aims to provide a method to guide the assessment of EGIS migration readiness, including assessing the degree of maturity of a considered e-government system. This work is also intended to complement ongoing e-government initiatives in the field, taking the perspective of citizens and officials as well as staff. The outcomes ought to aid authorities in understanding the key issues that influence the implementation of e-government systems and their institutional readiness, as well as in assessing the migration to cloud computing. The framework developed is therefore currently being tested and validated in a real environment via fieldwork, surveying 600 citizens, 125 staff, and 25 officials in the Kingdom of Saudi Arabia.











Policy Signatures of Viable Autonomous Agents in Highly
Dynamic Environments

Mark Evans

M.Evans@2001.ljmu.ac.uk



Agents within highly dynamic environments often have to make decisions that benefit themselves and, often, others simultaneously. When such individual agents are participants within a collective, the actions of many such entities are ostensibly intended to serve the goals of the group. This is considered to be determined by the amount of autonomy they each possess, the level of cohesion present within the collective and the level of shared focus upon the common aim. The environments that such entities occupy are often highly dynamic, and the decisions made by the individual agents and the collective must keep pace with such varying conditions. It is contended that this must be the case if the benefit to the collective is to be maximised, losses minimised and its viability assured. Telemetry from a real collective of autonomous agents with a common goal, exposed to dynamically varying threat conditions, has been analysed. A test has been made of the hypothesis that a common signature of decision formation (Policy) exists for such teleological systems, both real and synthetic. Some early results of the initial work will be presented for consideration.





Development of a Prognostics Engine for the Determination of Human Biological Signals

Mr Paul Hanley

P.S.Hanley@2011.ljmu.ac.uk

The development and consequent use of a medical prognostics engine could be very advantageous to the medical diagnostics community. Body Area Networks (BANs) in Telehealth are considered by many to be instrumental in the future of the NHS. Determination of the ‘sensory’ signals received from a BAN is crucial to the proper diagnosis of a patient’s health. Prognostics could provide a means of patient interception before discomfort and further health deterioration occur. It is proposed that a combination of sensory signals could be analysed by ‘Sensor Fusion’ methods, and the resulting data algorithmically processed by either the ‘JDL Data Fusion Method’ or the ‘Four Level Fusion Process’ in order to obtain the necessary prognostic results.
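The abstract names the JDL and Four Level fusion models without giving an algorithm, so the sketch below shows only an elementary, generic fusion step: inverse-variance weighting of two noisy readings of the same biosignal. The sensor values are invented for illustration and none of this reflects the proposed engine's design.

```python
# Illustrative sensor-fusion building block: combine independent noisy
# readings of one signal by inverse-variance weighting; the fused estimate
# has lower variance than any single sensor.
def fuse(readings):
    """readings: list of (value, variance) pairs from independent sensors.
    Returns (fused_estimate, fused_variance)."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    estimate = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    return estimate, 1.0 / total

# Two heart-rate sensors: a precise chest strap and a noisier wrist sensor.
est, var = fuse([(62.0, 1.0), (66.0, 4.0)])
print(round(est, 2), round(var, 2))  # 62.8 0.8
```

A prognostics layer would then watch the fused signal over time for trends, rather than reacting to any single noisy sensor.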












Self-Care Information System for Diabetes Management

Nonso A. Nnamoko

N.A.Nnamoko@2011.ljmu.ac.uk

Diabetes mellitus (DM) is a chronic condition that affects over 4.3% of the UK population, and this is likely to increase due to the ageing population and a rise in obesity. With millions of people affected in the UK, DM places a great strain on NHS resources, costing an estimated £9bn per annum; a disproportionate 10% of the total health care budget. Quality DM management is also inherently a complex and time-consuming task for both patients and clinicians. Self-care, in the form of blood glucose monitoring in patients with type 1 and insulin-treated type 2 diabetes mellitus, forms a critical part of the current integrated management package for DM. Current glucose monitors are invasive, cause discomfort and risk infection, and hence adherence can be patchy. Non-invasive monitoring technology offers a potential solution, and technology is emerging which could promise enhanced self-care for patients. Advances in technology are the driving force behind much of today's success within the healthcare sector, so there is a need to harness the power of technology to improve DM management in a user-friendly way that fits the workflow of the parties involved (patients and clinicians).

This paper proposes a whole new patient-centric and cost-effective approach to DM management through technology. Based on the principle that “what gets measured gets done”, an interactive information system is proposed which will empower and encourage patients to take ownership of their condition, as well as help clinicians make informed decisions on patient management and prescriptions for follow-on care. The project promises enhanced quality of DM management by offering a new framework to support patient-clinician engagement.



A Framework to assess System of Systems security composition

Michael Kennedy

M.Kennedy@2006.ljmu.ac.uk



In a System of Systems environment, it is imagined that services can be combined to create software systems composed of services provided by multiple complex and disparate systems, offering goal-orientated solutions that present functionality greater than their component parts. The idea that a system can be created out of existing services will provide many benefits and allow multiple complex systems to be created, deployed and operated with a reduced investment of both time and money in the creation of a new system. However, this approach poses many security questions and challenges when attempting to provide a secure system that users can have faith in. These challenges include ensuring secure operation in a multi-system environment where the management of some services is outside of the system operator’s control, and ensuring that one poorly secured service cannot harm the composite system and cause damage or loss to the users of that system. Our current work is focused on methods of identifying a secure composition, and on a method of analysing a composition to provide information about the status of the security available within it. We propose a framework that will give service users and system composers the means to identify poorly secured systems, or those exhibiting security issues, and assist in deciding whether to utilise a service or select another, more secure, service.

We will be exploring and discussing some of the concerns and challenges that are faced when composing a System of Systems with respect to security. We will also examine some of the current approaches available for securing a System of Systems, and suggest an approach that could be used to provide an indication of the security of a System of Systems composition.
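The proposed framework is not specified in the abstract, so the fragment below is only one plausible reading of "an indication of the security of a composition": a weakest-link score that surfaces poorly secured services before the composition is used. The service names, scores and threshold are all invented for illustration.

```python
# Hypothetical weakest-link scoring sketch: a composition is only as secure
# as its least secure service, and services below a threshold are flagged
# so a composer can substitute a more secure alternative.
def composition_security(services, threshold=0.6):
    """services: {name: security score in [0, 1]}.
    Returns (overall score, list of services below the threshold)."""
    weakest = min(services.values())
    offenders = [name for name, score in services.items() if score < threshold]
    return weakest, offenders

score, weak = composition_security(
    {"billing": 0.9, "auth": 0.8, "legacy-feed": 0.4})
print(score, weak)  # 0.4 ['legacy-feed']
```

In practice the per-service scores themselves would come from the kind of analysis the abstract describes (management locus, observed security issues), which is the hard part the research addresses.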





State Management in Large-Scale Distributed Systems

Mohssen Ghaderi

M.Ghaderi@2010.ljmu.ac.uk



There is a need to undertake a critical review of existing Systems Management technology and ethos, with the ultimate aim of proposing a new approach to the discipline by exploring and utilizing new and emerging technologies to deal with large scale distributed systems, enabling state management and scalability.

The rapid and somewhat exponential growth of IT over the last two decades has equipped humans with an amazingly versatile tool. Projections are that this rapidly evolving sphere of human activity will gather further speed, complexity and reach. As systems grow, the need to manage them also grows in complexity, depth and necessity.

We propose an Active State Consistency for Enterprise Systems Management (ASCESM) architecture and framework to manage Large-Scale Distributed Systems, whose goal is to attain and maintain consistency throughout the system.

This design provides a novel approach to the discipline by providing a mechanism to make hitherto idle resources available to participate adaptively within a System of Systems framework. The collaborative element of this proposal will enforce the business logic and ensure that validation components are distributed across all nodes, facilitating a nodal composition in which end nodes are actively involved in establishing and maintaining consistency across the whole functioning system.

The research involves the development of a new framework in a scalable Distributed Systems Management protocol based on autonomic systems, client-server, Peer-to-Peer and Multi-Agent Systems.

We will focus on weaving together separate technologies to pave the way for a new generation of Systems Management tools which are more autonomic, and therefore far more agile and horizontally scalable, to deal with state management in large scale distributed systems.

The presentation will give an overview of the shortcomings of current systems management technology and introduce the progress so far with the design of the architecture and logic of our proposed ASCESM.










The User Experience and Service Quality in Online Saudi Banks

Majed Shafi

M.M.Shafi@2011.ljmu.ac.uk

E-banking has changed the conventional patterns of banking transactions. These changes in tools, technology, competition and lifestyles all make a significant impact on how banks operate and offer services today. Currently, researchers and practitioners are focused on the user experience and service quality because they are key concepts for understanding customer satisfaction and creating competitive advantages and customer loyalty.

The thesis will discuss the e-banking experience of banks in Saudi Arabia and examine the service quality required to maintain a good relationship with their customers. It is not sufficient to adopt good banking practice to solve the problems of Saudi banks, because the Saudi banking environment is very different from others: the service quality offered by the Saudi banks does not follow British or American standards and criteria, and the technical regulations and information technology laws are not followed or, in some cases, do not even exist.

The thesis question is: how should the provided service quality be improved to build a good relationship with customers? Service quality includes reliability, accessibility, availability, security and privacy. Thus, we will consider the following:

- User Experience
- Customer service quality
- Online systems quality
- Banking service product quality

The thesis objectives are:

1. The current HCI service quality will be reviewed. As a result, a comparison amongst existing approaches will be performed to determine their strengths, weaknesses and limitations, and to provide some recommendations.

2. User experience evaluations will be carried out in different banking branches in Jeddah, Saudi Arabia. The evaluation instruments will be developed on the basis of the existing literature, observations, the pilot study and expert opinion.

3. A systematic approach will be proposed to enhance online service quality in the Saudi banks.

4. A sample of Saudi banks will be approached for the prototyping of the proposed improvements, and the results will then be evaluated.

5. The thesis study will be documented.














Critical Infrastructures & Systems
-
of
-
Syst
ems: A
Comprehensive Survey


Mark E.G. Sabino


M.Sabino@2007.ljmu.ac.uk


Critical Infrastructures and Systems-of-Systems have developed into disparate networks of computers and devices, each working independently towards a common goal, often over vast geographies. Research within the field has been renewed in the face of an apparent susceptibility to both innocent and malicious impediments, so their protection, survivability and resilience are now of paramount importance to all stakeholders.

Within this refreshed field of research, this initial paper discusses common and emerging themes related to Systems-of-Systems and critical infrastructure, covering the definition of systems, how they are networked, their security and their monitoring, and gives examples of how this affects our field differently from common consumer systems and devices. We also look into existing work related to the design, engineering and testing of these complex systems, and at frameworks that provide ideas on how to move forward with the management of Systems-of-Systems. In addition, we survey situation awareness and resilience, how cloud computing could assist both, and how it could aid in a crisis.


We introduce other disciplines being adopted by researchers in this field, and choose a selection of these themes and disciplines to highlight potential research challenges in the field and for our future work.


AD-HOC CLOUD NETWORKS: A PROBABILISTIC MODEL FOR VULNERABILITY DETECTION IN CRITICAL INFRASTRUCTURE USING BAYESIAN NETWORKS

Ibrahim Olatunji Idowu

I.O.Idowu@2009.ljmu.ac.uk


The use of sensor nodes in our society is becoming more ubiquitous and looks set to expand further in the near future. This is because most recent technological devices now have built-in wireless sensor nodes, which raises questions about how the device capabilities could support heterogeneous distributed applications to help improve the quality of service of critical infrastructure systems.

This presentation explains how the quality of service of critical infrastructure systems could be improved through the use of a probabilistic model. The model should enable mobile sensor nodes to combine together, allow permissions to detect vulnerabilities of a system in real-time sequences, act as a form of distributed system using a Bayesian network, and be ad-hoc in a self-organizing manner through a Cloud Computing platform.




An Investigation into Clustering of High Dimension Data

HAYA ALASKAR

H.ALASKAR@2011.ljmu.ac.uk


Clustering methods have attracted growing interest in different research areas, e.g. machine learning and pattern recognition. Clustering has been proposed as a tool for exploring information on the structure of data, which is an unknown natural grouping. However, there are many challenges in clustering research, for example clustering high dimension data. This project focuses on that challenge: the overall aim of the research is to investigate the possibility of developing a new method or technique for overcoming one of the most challenging areas in clustering, analysing high dimension data sets.
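As background on the base technique involved, the following pure-Python sketch runs a minimal k-means on synthetic 10-dimensional data. The data, the farthest-point initialisation and all parameters are illustrative assumptions; real high-dimensional work would typically add dimensionality reduction or subspace methods on top of such a base method.

```python
import random

def kmeans(points, k=2, iters=20):
    # Deterministic farthest-point initialisation: start from the first
    # point, then repeatedly add the point farthest from chosen centroids.
    dist2 = lambda p, c: sum((a - b) ** 2 for a, b in zip(p, c))
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points,
                             key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign to nearest centroid
            idx = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[idx].append(p)
        for i, cl in enumerate(clusters):    # move centroid to cluster mean
            if cl:
                centroids[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centroids, clusters

# Two well-separated groups of 10-dimensional points.
random.seed(1)
blob = lambda centre, n: [tuple(centre + random.gauss(0, 0.1) for _ in range(10))
                          for _ in range(n)]
data = blob(0.0, 20) + blob(5.0, 20)
centroids, clusters = kmeans(data, k=2)
print(sorted(len(c) for c in clusters))   # [20, 20]
```

On well-separated blobs this recovers the two groups exactly; in genuinely high-dimensional data, distances concentrate and such clean separations are precisely what becomes hard to find.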







Study of Planning and Re-planning Algorithms for Digital Interactive Storytelling


Efstathios Goudoulakis


e.goudoulakis@2008.ljmu.ac.uk



Digital Interactive Storytelling (DIS) is a relatively new field of interactive computer entertainment which aims at creating interactive applications capable of generating consistent, emergent and rich narratives.


Planning involves generating a sequence of actions from a current known state to a goal state. Planning systems are the most widely used techniques to generate the sequences of actions for DIS; however, there have not been many novel solutions in DIS research with respect to developing new algorithms or heuristics. Not only has no DIS-dedicated planning algorithm been proposed, but when general-purpose Artificial Intelligence (AI) planning algorithms are used, the justification of the algorithm's choice is usually inadequate and incomplete.
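The general-purpose planning described above can be illustrated with a minimal breadth-first forward-search planner over STRIPS-style actions. The story-world actions and facts below are invented examples, not taken from the author's framework.

```python
from collections import deque

# Each action: name -> (preconditions, add effects, delete effects).
ACTIONS = {
    "pick_up_key": ({"at_door", "key_on_floor"}, {"has_key"}, {"key_on_floor"}),
    "unlock_door": ({"at_door", "has_key"}, {"door_open"}, set()),
    "enter_room":  ({"door_open"}, {"in_room"}, {"at_door"}),
}

def plan(state, goal):
    """Breadth-first search over world states; returns a shortest
    action sequence reaching the goal, or None if unreachable."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        s, path = frontier.popleft()
        if goal <= s:                          # all goal facts hold
            return path
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= s:                       # action is applicable
                nxt = frozenset((s - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

print(plan({"at_door", "key_on_floor"}, {"in_room"}))
# ['pick_up_key', 'unlock_door', 'enter_room']
```

Dedicated DIS planners would replace the blind search with narrative-aware heuristics; this sketch only shows the state-to-goal action-sequencing that all such systems share.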


Furthermore, even if a plan is thoroughly tested and well-constructed, its value decays as changing circumstances, resources, information, or objectives render the original course of action inappropriate. When a plan fails, a new plan must be constructed, either from scratch or by revising a previous plan. There is a significant gap in the development and use of re-planning methods in existing DIS systems, compared to the solutions proposed in theoretical models and industrial applications, as no specific re-planning algorithm has been proposed to deal specifically with DIS.


In our research, we investigate the field of DIS, along with the fields of AI planning, re-planning algorithms and Multi-Agent Systems (MAS), to exploit their potential for DIS. We propose to contribute to the assessment and consolidation of existing AI planning solutions for DIS and to create a novel framework providing new solutions for DIS using multi-agent systems, including the design and implementation of new AI planning and re-planning algorithms, which will be evaluated for their suitability to the features of DIS solutions. The implementation of such a framework will provide a more robust, flexible and efficient solution for a large class of DIS. In this presentation we will review the work completed so far, which has been published, and introduce a detailed proposal of the DIS framework design and scenarios.


Genetic algorithms for trust aware service collaborations

Hermina Duratovic-Field

h.duratovic@2009.ljmu.ac.uk



Trust is used by society in every decision made. To trust someone or something, we leave ourselves vulnerable, but we can share positive outcomes from the situation. When trusting, we carefully assess other people's intentions, capabilities and motives, in the context of the situation and task at hand, to decide whether a particular person or service is worth trusting.

In service collaborations, many individual services co-operate in order to provide an end product for the end users. As each service is independent of the others, it is important that each component is judged in terms of how well it performed in the collaboration and how trustworthy it is. However, when collaborations are evaluated, this is not the case, and a global trust score is given. This means that should one service have faltered, the bad scoring will be reflected on all, and vice versa. Because of this, there is a need to be able to evaluate each service component individually, and then to form collaborations based only on components we believe will adhere to our trust, security and performance requirements. We have formally defined this problem as follows: given a list of functional requirements, for each of them a list of adequate components, and a cost function expressing the trust cost of using each component, select the way of producing the composition that minimises the total trust cost of the collaboration. Furthermore, the trust cost of a component should reflect an individual trust rating based on the requirements.

To solve the problem given above, we have proposed a framework that takes into account the previous interactions of a service component. As each component can take part in many different collaborations performing different tasks, our framework will only take the collaboration trust scores where it performed that or a similar task, and from those collaboration trust scores we will derive an individual trust score for the particular task. Furthermore, a grouping genetic algorithm will then be used to form a service collaboration that meets the users' requirements in terms of performance, security and trust.
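The optimisation problem defined above can be sketched with a simple genetic algorithm: one component is selected per functional requirement so as to minimise the total trust cost. The costs below are illustrative, and the authors' grouping GA and trust derivation are more elaborate.

```python
import random

# Trust cost of each candidate component, per functional requirement
# (illustrative values; requirement 1 has only two candidates).
COSTS = [[0.9, 0.2, 0.5],
         [0.4, 0.8],
         [0.3, 0.6, 0.1]]

def fitness(genome):
    """Total trust cost of picking component genome[r] for requirement r."""
    return sum(COSTS[r][c] for r, c in enumerate(genome))

def evolve(pop_size=40, gens=60, mut_rate=0.3, seed=0):
    rng = random.Random(seed)
    rand_genome = lambda: [rng.randrange(len(row)) for row in COSTS]
    pop = [rand_genome() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                 # lower cost is fitter
        elite = pop[: pop_size // 2]          # elitist survivor selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(COSTS))          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:                 # point mutation
                r = rng.randrange(len(COSTS))
                child[r] = rng.randrange(len(COSTS[r]))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 2))
```

For this tiny instance exhaustive search is trivial; the GA only pays off when the number of requirements and candidates makes enumeration infeasible.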


Node Feedback Based TCP Mechanism for Mobile Ad-hoc Network


Farooq Alam



F.Alam@2004.ljmu.ac.uk


A mobile ad-hoc network is an autonomous system of mobile nodes establishing a network in the absence of any fixed infrastructure. Due to potentially high mobility, mobile ad-hoc networks have presented new challenges, requiring special consideration arising from the unique characteristics of the wireless medium and the dynamic nature of the network topology. Because of this unique network formation, routing in a mobile ad-hoc network is a challenging issue. Efforts have been ongoing to transform TCP so that it could support the routing function in an ad-hoc network.

This research has discovered that most of the TCP-based routing solutions for mobile ad-hoc networks have not been successful in addressing the problem in full. Taking TCP-based routing as the main problem, this research has proposed a novel routing solution called Node Feedback Based TCP (NFBTCP) as a routing scheme for mobile ad-hoc networks. The scheme has been developed in Java and evaluated in SWANS.


In the light of the simulation experiments, it could be seen that NFBTCP performed well in all simulation environments. It can be confirmed that NFBTCP has proven itself a fully functional and operational scheme for mobile ad-hoc networks, and thus should be seen as a new TCP-based solution for them. A higher number of route requests and route replies, representing networking activity, was observed as the number of mobile nodes increased. In addition to the message activity, good numbers of routes were added by the end of each simulation cycle. It is quite understandable that the more routes available for data transfer in a mobile ad-hoc network, the better; such additions to the available routes could directly impact overall throughput. Lastly, nodes in mobile ad-hoc networks suffer from limited resources, which makes conservation of those resources an important issue. NFBTCP has shown impressive performance by conserving available memory in most of the experiments. We believe NFBTCP offers a complete and effective TCP-based routing solution for mobile ad-hoc networks.



Autonomic negotiation mechanism to support Service Level Agreements (SLA) management for cloud computing


Faisal Alsrheed


F.S.Alsrheed@2010.ljmu.ac.uk



Cloud computing has proliferated in recent years. This has increased the need for negotiable Service Level Agreements (SLA) between cloud service providers and customers.

However, much of the research carried out in negotiation is focused on theoretical aspects of negotiation protocol and strategy; the practical application and actual deployment of automated negotiation mechanisms has lagged far behind. Thus, there is a need for a novel autonomic negotiation and monitoring mechanism to support Service Level Agreements (SLA) management for cloud computing-based applications.


This research work seeks to promote and discover novel methods for the automatic negotiation and monitoring of cloud computing applications. This presentation will provide an overview of cloud computing negotiation and SLA. Then, related work and open research questions will be discussed, and we will show the progress made to date. Finally, we will detail the future work to be completed in order to reach our goal.






Developing a Statistical Methodology for Improved Identification of Geographical Areas at Risk of Accidental Dwelling Fires

Emma Higgins

e.higgins@ljmu.ac.uk


This paper outlines recent research completed in partnership between Liverpool John Moores University and Merseyside Fire and Rescue Service. The aim of the research was to investigate ways to implement a statistical methodology into the corporate GIS system that could be used to enhance the identification of areas most at risk from accidental dwelling fire. Further to this, the toolkit developed was expanded to include a second strand of research that looked into ways of integrating a bespoke customer segmentation methodology, developed using local geographic and demographic data, to further support the identification of risks and needs.







Peer-to-Peer Mobile Services for Capturing Human Digital Memories

Chelsea Dobbins

C.M.Dobbins@2006.ljmu.ac.uk



Mobile computing and the increasing trend of sharing content ubiquitously have enabled users to create and share memories instantly. Access to different data sources, such as location, movement, and physiology, has helped to create a data-rich society where new and enhanced memories will form part of everyday life.


Peer-to-Peer (P2P) systems have also increased in popularity over the years, due to their ad hoc and decentralized nature. Mobile devices are becoming “smarter” and are increasingly becoming part of P2P systems, opening up a whole new dimension for capturing, sharing and interacting with enhanced human digital memories.


Creating memories using this data will require original and novel platforms which automatically gather data sources from the ubiquitous ad-hoc services prevalent within the environments we occupy. In this way, memories created in the same location and time are not necessarily similar: it depends on the data sources that are accessible. DigMem, our initial prototype, is being developed to gather distributed mobile services. DigMem captures data from mobile devices for the purpose of creating human digital memories.




Novel Simulation Framework for State Management in Large Scale
Distributed Systems

Chris Dennett


C.R.Dennett@2006.ljmu.ac.uk


This project aimed to develop a novel simulation framework and interface to support distributed state management intended for use in large-scale distributed systems such as distributed network environments or networked multiplayer games. The simulation framework is an interface to the popular ns-3 simulator, a discrete-event network simulator for Internet systems, with many visualisation features to enable rapid network prototyping and debugging of network applications. The state management framework is intended to pave an easy route to multiplayer game creation while managing bandwidth requirements and providing marshalling, authentication and state propagation through native language features and those provided by external libraries. The development of the framework and its usage in various scenarios and test cases is proposed to illustrate the features of the simulator implementation and interface, most notably the single-machine multi-threaded execution and visualisation aspects, alongside the capability of simulating different network topologies with varying error models, with discussion of the mechanisms used for decoupling the wall clock in the Java virtual machine and the issues this poses. In this presentation we discuss the results of the project with the aforementioned use cases and their evaluation.



An Active Approach to Live Digital Forensic Investigation

Brett Lempereur

b.lempereur@2006.ljmu.ac.uk



As the complexity and volume of data involved in the average case continue to rise, there is a widespread need for improved digital forensic investigation techniques. Existing codes of practice state that machines should immediately be powered down and an analysis conducted only on sound duplications of their offline media. However, with the increasing scale of digital forensic investigations, there is a need for approaches capable of reducing the quantities of offline data forensic examiners are required to search. Meanwhile, as anti-forensic and encryption techniques evolve, there is an increasing need to capture relevant information from a machine before powering it off. Live digital forensics offers an approach capable of capturing the operational information that would be discarded in a traditional offline approach. However, there is significant resistance to live digital forensics among practitioners and researchers. This is compounded by a lack of techniques capable of answering definitive questions about the efficacy and accuracy of existing methods.


We believe that these objections are not only counterproductive, but also hold digital forensics to a standard not seen in other forensic or investigatory disciplines. Numerous approaches to live forensic evidence acquisition have been proposed in the literature, but relatively little attention has been paid to the problem of identifying how their effects, and their improvements over other techniques, can be evaluated and quantified. Further, we recognise that historical records of operational state are more useful to examiners than a static image captured during seizure. System monitoring provides a useful pre-emptive solution to the problem of gathering information about how a system behaves at runtime for post-hoc analysis by system administrators and forensic experts.


Our goals are thus twofold. We aim to increase the acceptance of live digital forensic methods by providing an apparatus with which researchers and developers can conduct repeatable experiments on live digital forensic techniques and software. The results of these experiments highlight the strengths and weaknesses of existing approaches in different scenarios. We then extend the state of the art by developing a distributed trace-based system monitor that permits the correlation of actions within and between hosts in the monitored domain, where users specify policy defining interesting or potentially unsafe events in a system that are, nevertheless, permitted for reasons of usability. These event sequences are then stored and, where appropriate, relayed to system administrators and used to guide traditional digital forensic and out-of-band investigations in the event of an organisational or information security policy breach.


Novel Framework for 3D Embodied Character Animation

Ricardo Duarte

R.L.Duarte@2010.ljmu.ac.uk



Virtual Character Animation provides a vast area of research in which both academia and industry have spent the past 30 years stretching the boundaries of what is considered to be a realistic character. As a research area, it has been the subject of numerous studies, such as in 3D modeling and rendering, which attempt to increase realism, and in virtual character behavior and animation.

The Moving Picture Experts Group (MPEG), in the late 90s and early 2000s, created the MPEG-4 facial animation (FA) specification, in an effort to gather the advances achieved until then and use them to create a standard. The standard's goal is to provide research guidance and a framework for future research work.

In our research project we propose to investigate existing techniques in facial animation, and to discuss their strengths and weaknesses in order to devise novel approaches. Our objectives are to create a new animation framework, Charisma, and to gather a set of techniques that establishes new directions for facial animation. Our framework combines the advances in the MPEG-4 standard with the latest advances in 3D graphic rendering and character animation, in an attempt to achieve a highly realistic and efficient model capable of simulating expressions and voice synchronization in real time. We will focus not only on showing where existing solutions failed to give satisfactory results, but will also contribute to the creation of novel approaches for 3D Embodied Character Animation.

The work completed so far proposes a novel approach to an adaptive LoD algorithm which prioritizes the human face features in the facial mesh, as defined in MPEG-4; it also simplifies animation streams, reducing animation complexity and overhead. Our approach is to apply continuous LoD to an MPEG-4 compliant FA model using an extension of Garland's quadric-based surface simplification algorithm, and to evaluate the framework and discuss the potential gain in performance when we apply our techniques to a high-LoD 3D rigged model. This presentation will introduce the research conducted in the past year, presenting our published results and the future research we will undertake to complete the thesis work.
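As background to the quadric-based simplification mentioned above, the following toy sketch shows the quadric error metric at the heart of Garland-style algorithms: each vertex accumulates a 4x4 quadric Q from its adjacent planes, and the cost of placing the vertex at v is v^T Q v, the sum of squared distances to those planes. The planes and vertices are invented for illustration.

```python
def plane_quadric(a, b, c, d):
    """Fundamental quadric Q = p p^T for the plane ax + by + cz + d = 0
    (with (a, b, c) a unit normal)."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(q1, q2):
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def error(q, v):
    """Quadric error v^T Q v for vertex v in homogeneous form (x, y, z, 1):
    the sum of squared distances to the planes folded into q."""
    h = (*v, 1.0)
    return sum(h[i] * q[i][j] * h[j] for i in range(4) for j in range(4))

# A vertex shared by the planes z = 0 and x = 0.
q = add_quadrics(plane_quadric(0.0, 0.0, 1.0, 0.0),
                 plane_quadric(1.0, 0.0, 0.0, 0.0))
print(error(q, (0.0, 5.0, 0.0)))   # on both planes: 0.0
print(error(q, (2.0, 0.0, 1.0)))   # squared distances 4 + 1 = 5.0
```

During simplification, an edge collapse sums the endpoints' quadrics and places the merged vertex where this error is smallest; a feature-prioritising variant would additionally weight quadrics on important face regions.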


A Readiness Study into E-Government Information Systems Migration to Cloud Computing

Rabea Kurdi

R.Kurdi@2009.ljmu.ac.uk

The fast pace of change and rapidly evolving customer requirements in the use of government services have forced governments to consider improving or developing new methods to deliver their services. Widespread uptake of e-government applications has therefore emerged as a major factor in government interaction with the citizen, and there is wide scope for further adoption in future administrations over the coming years. Moreover, the e-government wave, along with cloud computing, seems to present a significant opportunity to improve government services by providing them over the internet. However, in analysing current research in the field it can be observed that there are limitations on the use of e-government services, that there are many issues facing e-government adoption, and that e-government readiness is a major concern.

The recently completed literature review by the author indicated that there is a gap between practice and theory, identified by the absence of a comprehensive assessment framework for e-government systems and readiness in both the public and private sectors; most of the assessment frameworks reviewed for the study vary in terms of philosophies, objectives, methodologies, approaches, and results. This implies that no one assessment framework is likely to cover all e-government readiness aspects.

To this end, this study aims to develop a comprehensive framework of associated guidelines and tools to support e-government Information Systems (EGIS) readiness, with a specific focus on EGIS migration to the Cloud Computing provisioning model. The proposed framework aims to provide a method to guide the assessment of EGIS migration readiness, including assessing the degree of maturity of a considered e-government system. This work is also intended to complement on-going e-government initiatives in the field, taking the perspective of citizens and officials, as well as staff. The outcomes ought to aid authorities in understanding the key issues that influence the implementation of e-government systems and their institutional readiness, as well as in assessing the migration to cloud computing. The framework developed is therefore currently being tested and validated in a real environment via fieldwork, surveying 600 citizens, 125 staff, and 25 officials in the Kingdom of Saudi Arabia.



Policy Signatures of Viable Autonomous Agents in Highly
Dynamic Environments

Mark Evans

M.Evans@2001.ljmu.ac.uk



Agents within highly dynamic environments often have to make decisions that benefit themselves and, often, others simultaneously. When such individual agents are participants within a collective, the actions of many such entities are ostensibly intended to deliver the goals of the group. This is considered to be determined by the amount of autonomy they each possess, the level of cohesion present within the collective, and the level of shared focus upon the common aim. The environments that such entities occupy are often highly dynamic, and the decisions made by the individual agents and the collective must keep pace with such varying conditions. It is contended that this must be the case if the benefit to the collective is to be maximised, losses minimised and its viability assured.

Telemetry from a real collective of autonomous agents with a common goal, exposed to dynamically varying threat conditions, has been analysed. A test has been made of the hypothesis that a common signature of decision formation (policy) exists for such teleological systems, both real and synthetic. Some early results of the initial work will be presented for consideration.








Multi-fusion Clustering based Cooperative Spectrum Sensing for Cognitive Radio Networks


Ahmed S. B. Kozal


A.S.Kozal@2010.ljmu.ac.uk



Cognitive radio (CR) is a novel approach that aims at improving the utilization of the electromagnetic radio spectrum and easing the congestion of the wireless communication spectrum. In CR systems, secondary users can be coordinated to perform cooperative spectrum sensing (CSS) so as to detect the primary user more accurately. Clustering is an effective approach used in CSS to reduce the cooperation range and the control channel overhead. Most existing CSS approaches, including clustering algorithms, have focused only on sensing performance, whereas an important issue, the power consumption of the clustering CSS algorithm, remains rarely studied: cluster heads which are far away from the fusion centre must consume more power to forward the sensing results to the fusion centre, thus depleting energy very fast. In this presentation, we focus on a new multi-fusion clustering approach for centralized CSS in CRNs which can reduce the power consumption and prolong the network's lifetime.
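The local sensing and fusion steps underlying CSS can be sketched as follows; the energy-detection threshold, signal model and OR-rule fusion are illustrative textbook choices, not the authors' multi-fusion scheme.

```python
import random

def energy_detect(samples, threshold):
    """Local decision: 1 if the average sample energy exceeds the threshold."""
    energy = sum(s * s for s in samples) / len(samples)
    return 1 if energy > threshold else 0

def fuse_or(decisions):
    """OR fusion rule: declare the primary user present if any node detected it."""
    return 1 if any(decisions) else 0

random.seed(0)
noise = lambda n: [random.gauss(0, 1) for _ in range(n)]         # channel idle
signal = lambda n: [2.0 + random.gauss(0, 1) for _ in range(n)]  # primary active

# Two clusters of three secondary users; only cluster 1 hears the primary.
cluster0 = [energy_detect(noise(100), threshold=2.0) for _ in range(3)]
cluster1 = [energy_detect(signal(100), threshold=2.0) for _ in range(3)]
# Cluster heads fuse local decisions; the fusion centre fuses cluster decisions.
print(fuse_or([fuse_or(cluster0), fuse_or(cluster1)]))
```

The two-level fusion is what makes clustering attractive: each user reports only to its nearby cluster head, and only the heads transmit over the long, power-hungry link to the fusion centre.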





Towards Resilient Critical Infrastructure

Andrew Jones


a.jones5@2011.ljmu.ac.uk



Critical infrastructures provide goods and services that are regarded as essential to supporting economic and sociological wellbeing. Typical examples of critical infrastructure include the electric grid, communication systems, water supply, agriculture, and emergency services. Recently, a great deal of attention has been drawn to this area of research due to an increased regularity of attacks and faults. Infrastructures need to be resilient and secure because loss of service would impose a huge impact on the functioning of society and have economic repercussions on affected areas. We aim to gain an understanding of the threats and vulnerabilities of modern critical infrastructures and to propose new areas of research that will ensure the resilience of current and future infrastructure.


Significantly, developments have been made to automate control of these infrastructures and to increase their resilience in the face of extreme circumstances. This has increasingly involved the integration of IT systems to control and monitor nodes. While this has presented new opportunities to manage infrastructures remotely, introducing these control elements to the cyber domain also exposes them to cyber threats. Developments in control and monitoring systems need to be designed to be resilient themselves so that they don't compromise the underlying system.


Research will focus on developing supporting services which enable an infrastructure to provide a high level of availability. Current research in this area is concerned with introducing state awareness at the macro-level, enabling services to detect emerging faults and mitigate their effects before service delivery is compromised. This will improve the utilisation of available resources and, by extension, the resilience of the infrastructure. We will discuss the current state of research in this area and propose new topics for future work.




An Overview of Cloud Computing – Benefits, Threats and Research Directions


Mohd Rizuan Baharon


M.R.Baharon@2011.ljmu.ac.uk


Cloud computing has changed the way people use IT technology to run their businesses. Traditionally, people have invested a lot of money in IT infrastructure such as software and hardware, implemented complex management processes, and hired a huge number of staff for maintenance. With the tremendous benefits of this advent of computing, all of these loads have been taken on by cloud providers, where people just pay for what they use with a minimum requirement of IT facilities such as the Internet and desktops. However, unresolved security threats within the cloud environment have left people reluctant to adopt this technology.

A large scale of research from academic, enterprise and personal perspectives has been actively conducted to overcome those threats. Some of it tries to ensure the integrity and prevent loss of the data stored in cloud storage by implementing strong encryption mechanisms like AES and DES. This form of encrypted data in the cloud is good for storage or archival but is rather costly to process. Thus, a new form of encryption, called Homomorphic Encryption, which enables the ciphertext to be processed in the public cloud without decrypting it, has been proposed (Sengupta, et al., 2011). However, applying this general mechanism to our daily tasks would be far from practical, due to the extremely high complexity of the operations involved (Wang, et al., 2011). On the other hand, Song, et al. (2010) have proposed the concept of trusted data binding, enforcing policy usage on applications over data sets with the aid of trusted hardware. Both software and data are encrypted while at rest (Song, et al., 2010). However, to deal with the data sets, the software needs to be modified. This is in contrast with our research work.

In this research work, we want to concentrate on securing the data while it is processed by applications. The cloud computing environment is similar to standalone computing environments, where data is retrieved and processed by applications. Data may be processed by many different applications, making encrypting a single application in the environment impractical if other applications cannot deal with the encrypted data. Therefore, a mechanism is required so that applications can process the encrypted data without the need to be modified to deal with encryption.
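As background, the additive homomorphism that such schemes rely on can be demonstrated with a toy Paillier-style cryptosystem. The parameters below are tiny and completely insecure, chosen only to show ciphertexts being combined in the "cloud" without decryption; a real deployment would use a large RSA modulus.

```python
import math
import random

p, q = 47, 59                     # toy primes (insecure!)
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)      # Carmichael's function for n = p*q
g = n + 1                         # standard choice of generator

def L(u):                          # L(u) = (u - 1) / n
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m, rng=random.Random(0)):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

def add_encrypted(c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds plaintexts mod n."""
    return (c1 * c2) % n2

a, b = encrypt(123), encrypt(456)
print(decrypt(add_encrypted(a, b)))   # 579, computed without decrypting a or b
```

The scheme is only additively homomorphic; supporting arbitrary computation over ciphertexts (fully homomorphic encryption) is exactly the step whose cost the abstract describes as impractical for daily tasks.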









Cyber Security and Bio-Inspired Network Intrusion Monitoring

Abdulraqeb Alselwi


A wireless internet connection can be of huge benefit to the user, allowing the same connection to be used for several PCs or laptops and therefore providing access to the internet as and when needed. However, wireless network airwaves are more vulnerable to interception, with intruders taking advantage of your connection to gain access to the router. This can have a knock-on effect, as it can slow down your own speeds and also make your wireless device more vulnerable to attacks.


The challenges we face in cyber security, the threats to the way we access the internet, and the rise of everywhere-broadband access and wireless travel computing have led to new security threats. However, this also provides new opportunities and challenges to security services looking to intercept, locate and monitor threats, as the security protection in a wireless network makes it harder to protect a network or discover intrusion in the network. Another challenge is that wireless networks make it easier for intruders to hack by exploiting the access point, which they can use as a legitimate user to steal information or launch DoS attacks.


The bio-inspiration thread has particular relevance for developing monitoring approaches that exploit new cyber concepts such as DNA sequencing, used to recognise which sequence each user's access to the network, or wireless connection, belongs to; this is especially important when the source of the sequence is unknown. The approach builds a long DNA sequence and a threshold value from processed system data, and encodes each observed network connection into a corresponding DNA sequence. The signature sequence is then aligned with the observed sequence to find a similarity degree value and decide whether the connection is an attack or normal.

Beyond this, network intrusion monitoring must be able to identify intrusions with high accuracy while not confusing legitimate actions on a system with intrusive ones. The aim is to monitor all user access to the wireless network through each user's unique DNA sequence and access threshold value, and potentially to locate users within the area covered by the waves travelling around the access point.
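The encode-and-align step described above can be sketched as follows. This is our own minimal illustration, not the talk's actual encoding: the byte-to-nucleotide mapping, the example payloads, and the 0.8 threshold are all assumptions.

```python
import difflib

# Hypothetical encoding: map each byte of a connection record to one
# of the four DNA letters, then align it against a known signature.
def encode(data: bytes) -> str:
    return ''.join("ACGT"[b % 4] for b in data)

signature = encode(b"GET /admin.php?cmd=")    # known attack pattern
observed  = encode(b"GET /admin.php?cmd=ls")  # live connection

# Similarity degree via sequence alignment (ratio in [0, 1]).
similarity = difflib.SequenceMatcher(None, signature, observed).ratio()

THRESHOLD = 0.8  # illustrative threshold value
verdict = "attack" if similarity >= THRESHOLD else "normal"
```

A real system would derive the threshold from processed system data, as the abstract describes, and use a proper alignment algorithm rather than `difflib`.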













Intrusion Detection for Critical Infrastructure Protection

Áine MacDermott
a.mac-dermott@2008.ljmu.ac.uk


Critical Infrastructures are physical or mechanical processes controlled electronically by systems, usually called supervisory control and data acquisition (SCADA) or process control systems (PCSs), composed of computers interconnected by networks. When Critical Infrastructures were first implemented, the security and protection of their management systems was not a primary concern. Society's dependency on these services is having a chilling effect on public consciousness as the mainstream media become aware of their vulnerability.


The need for Critical Infrastructure Protection was particularly highlighted in June 2010 with the discovery of the "Stuxnet" computer worm. Prior to Stuxnet, there was a widespread belief that even if these systems were vulnerable, they were not being actively targeted. This complacency is, however, now accepted to be unrealistic. Effective protection of Critical Infrastructures is crucial, as it is apparent that existing research does not meet the requirements of an emerging interconnected critical infrastructure.


Effective ways of detecting intrusions or attempts at inference could be achieved through effective monitoring of the network using intrusion detection. Intrusion detection techniques are continuously evolving, with the goal of improving the security and protection of networks and computer infrastructures. Prevention is a key functionality, but detection is a must. Despite the promising nature of anomaly-based IDS, several issues still exist with these systems, e.g. low detection efficiency and a high false positive rate.


Critical Infrastructures face a significant threat from the growth in the use of SCADA systems and their integrated networks. The SCADA industry is transitioning from a legacy environment, in which systems were isolated from the Internet and focused on reliability instead of security, to a modern environment where networks are being leveraged to help improve efficiency. Open communication protocols are increasingly used to achieve interoperability, exposing SCADA systems to the same vulnerabilities that threaten general purpose IT systems. A key problem is the cascading effect a failure could have on other infrastructures, as it has a high socioeconomic impact.


In this talk, we will explore the challenges and weaknesses regarding Intrusion Detection for Critical Infrastructure protection, and highlight areas in which present research does not fulfil the protection required for such an evolving infrastructure.

Dialogue Generation for Digital Interactive Storytelling

Amal Fadak
A.Fadak@2011.ljmu.ac.uk


Nowadays, interactive storytelling systems have received considerable interest from the research community, which is exploring conversational systems and interactivity to promote immersiveness and fun, and to improve storytelling content and diversity. Introducing dialogue generation in digital interactive storytelling (DIS) would allow the creation of new story elements and assist discourse between characters and the player within the story world. Generating dialogues in interactive storytelling would change the form of stories from linear to non-linear, branching, dialogue-based storytelling, creating different storylines based on the interaction and discussion between the story characters. The purpose of dialogue generation would be to produce suitable discourse acts involving characters within a context generated by the storytelling system and supported by a conversational system. The context of the dialogue generation will be supported by the story world and character states, the world knowledge base using an ontology and a constructive grammar, and the actions generated by the planner.

In our project we propose to design a new framework for dialogue generation within a digital interactive storytelling system, taking into account novel approaches involving planning algorithms, constructive grammar, machine learning and dynamic story context.

The presentation aims to introduce a review of the state-of-the-art of conversational systems and dialogue generation techniques in DIS, and to propose an initial framework for dialogue generation within interactive storytelling.




Autonomic Computing: Applications of Self-Healing Systems

Mohamed Mousa
M.R.Ahmed-Al-Zawi@2005.ljmu.ac.uk



Self-management systems are the main objective of Autonomic Computing (AC), and they are needed to increase a running system's reliability, stability and performance. This field needs to investigate issues related to complex systems such as: system self-awareness, when and where an error state occurs, knowledge for system stabilization, analysing the problem, and healing plans with different solutions for adaptation without the need for human intervention. This research focuses on self-healing, which is the most important component of Autonomic Computing. Self-healing is a technique that aims to detect, analyse and repair existing faults within the system. All of these phases are accomplished in a real-time system. In this approach, the system is capable of performing a reconfiguration action in order to recover from a permanent fault. Moreover, a self-healing system should have the ability to modify its own behaviour in response to changes within the environment. A recursive neural network has been proposed and used to address the main challenges of self-healing, namely monitoring, interpretation, resolution and adaptation.
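The four phases named above form a closed control loop. The sketch below is our own minimal illustration of such a loop (not the proposed recursive-neural-network design): the metric names, thresholds and healing actions are all assumptions.

```python
# Minimal self-healing loop: monitoring, interpretation, resolution
# and adaptation, with no human intervention.
def monitor(system):            # observe the current state
    return {"cpu": system["cpu"], "alive": system["alive"]}

def interpret(state):           # detect error states
    faults = []
    if not state["alive"]:
        faults.append("crashed")
    if state["cpu"] > 0.95:
        faults.append("overload")
    return faults

def resolve(fault):             # pick a healing plan for each fault
    return {"crashed": "restart", "overload": "throttle"}[fault]

def adapt(system, action):      # apply the reconfiguration action
    if action == "restart":
        system["alive"] = True
    elif action == "throttle":
        system["cpu"] = 0.5
    return system

system = {"cpu": 0.99, "alive": False}
for fault in interpret(monitor(system)):
    system = adapt(system, resolve(fault))
# the system has been restarted and its load throttled
```

In the proposed approach, the interpretation and resolution steps would be learned by the recursive neural network rather than hard-coded as here.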






A Zone-Based Fault Management Architecture for Wireless Sensor Networks

Muhammad Zahid Khan
M.Zahid-Khan@2008.ljmu.ac.uk



Wireless Sensor Networks (WSNs) consist of spatially distributed autonomous devices using sensors to cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants, at different locations. While these systems are in high demand in the military domain for defence and aerospace systems, there is also an increasing focus on them in the civil domain to monitor and protect critical infrastructure (such as bridges and tunnels), the national power grid, and pipeline infrastructure.


These small, low-cost sensor devices can often be unreliable and prone to failure. A sensor node may fail or be blocked due to lack of power supply, physical damage, node crash, or environmental interference. Hence, faults and failures are normal facts of life in any WSN. Therefore, in order to guarantee network quality of service and reliability in terms of fault-tolerance, it is essential for WSNs to be able to detect faults, and to perform something akin to healing and recovering from events that might cause faults or misbehaviour in the network. A set of functions or applications designed specifically for this purpose is called a fault-management platform. Fault management is a very important component concerned with detecting, diagnosing, isolating and resolving faults and failures that occur in the network.

In the light of a comprehensive literature survey, fault management has been identified as one of the core design principles in WSNs; therefore, fault management should be seriously considered in WSN applications.


To address this key design challenge, we have proposed and developed a novel scheme known as the Zone-Based Fault Management Architecture (ZFMA) for WSNs. The key feature of ZFMA is that it takes a more holistic approach to fault management and thereby offers a comprehensive solution for all three phases of fault management, namely fault detection, fault diagnosis and fault recovery. In addition, ZFMA self-organizes sensor nodes into a connected network using a hierarchical clustering approach; it therefore increases network lifetime, minimizes resource utilization and consumes less energy. ZFMA has been developed and implemented in a Java-based simulator. A series of experiments were conducted to monitor the performance of the developed schemes in different network environments, and ZFMA has proven to be an efficient and reliable fault management solution for WSNs. At present, we are finalizing the implementation and evaluation, and expect to submit a final report on the overall project in the near future.
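The fault-detection phase in a zone-based design can be illustrated with a simple heartbeat check. This is our own sketch, not ZFMA's actual mechanism: the class name, member identifiers and timeout value are assumptions.

```python
# Illustrative zone-level fault detection: a zone manager flags
# member nodes whose last heartbeat is older than a timeout.
HEARTBEAT_TIMEOUT = 3.0  # seconds (illustrative value)

class ZoneManager:
    def __init__(self, members):
        # last heartbeat time per node; 0.0 means never heard from
        self.last_seen = {node: 0.0 for node in members}

    def heartbeat(self, node, now):
        self.last_seen[node] = now

    def detect_faults(self, now):
        return [n for n, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT]

zone = ZoneManager(["s1", "s2", "s3"])
zone.heartbeat("s1", now=10.0)
zone.heartbeat("s2", now=10.5)
# s3 never reported; by t=12.0 it has exceeded the timeout
faulty = zone.detect_faults(now=12.0)
```

In a hierarchical clustering scheme, detected faults would then be reported upward for diagnosis and recovery, the other two phases ZFMA covers.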










Online Social Networks As Supporting Evidence: A Digital Forensic Investigation Model and Its Application Design

Norulzahrah Mohd Zainudin
n.mohd-zainudin@2008.ljmu.ac.uk



The growth of online social networks has encouraged new ways of communicating and sharing information, and they are used regularly by millions of people. This growth has also resulted in an increase in their use for significant criminal activities, and perpetrators are becoming increasingly sophisticated in their attempts to use technology to evade detection and perform criminal acts. Hence a systematic model for forensic investigation of online social networks is required in order to obtain optimum results from investigating them. We have reviewed the existing literature on digital forensic investigation models and frameworks; most take quite similar approaches, and some of the models are generic and do not focus on the purpose of the investigation. In addition, there is no standard, consistent model, only sets of procedures and tools, so many digital crime investigations are performed without proper guidelines. Moreover, no model has been built specifically for online social networks, even though digital crimes related to them are growing rapidly. To address these challenges, we have developed a standard model of digital forensic investigation for online social networks in this research. This model incorporates the existing traditional frameworks, allowing us to compile a comprehensive digital forensic investigation model specifically for these networks. In order to implement the model in online social network investigations, we will design an application that can be used to investigate, document and report important information in the networks without having to search manually.




Personalised Healthcare: Computational Data Modelling for the Detection of Alzheimer's

Shamaila Iram
S.Iram@2009.ljmu.ac.uk


Chronic diseases are of long duration but usually slow in progression. They take a long time to appear clinically in the body of the patient, and by the time the patient knows about them, they might be in their later stages or sometimes near their end, when no remedy can be useful. Correct and timely diagnosis of these diseases is therefore crucial to increase the life expectancy of the patient. Our research focuses on the area of Brain Informatics, and we aim to propose an early disease detection system for Alzheimer's. A Decision Tree based classification technique is introduced in this paper, which classifies subjects based on Neurological Disorder, Cognitive Impairment, Retrogenesis, Amyloid Beta Protein (Aβ Protein) and Gait Impairment. Relevant features have been extracted from the signals by applying our technique based on the Discrete Cosine Transform (DCT), because the signals are not stationary. We have selected a database of two groups: one of subjects who have Alzheimer's (with Gait Impairment), and the other of healthy control subjects.
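The DCT feature-extraction step can be sketched as follows. This is an illustration of the standard DCT-II, not necessarily the exact variant or feature-selection rule used in the study; the window values and the choice of keeping the two largest coefficients are assumptions.

```python
import math

# DCT-II of a signal window: project onto cosine basis functions,
# then keep the largest-magnitude coefficients as features.
def dct_ii(signal):
    N = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, x in enumerate(signal))
            for k in range(N)]

window = [1.0, 2.0, 3.0, 4.0]      # illustrative signal window
coeffs = dct_ii(window)
# coeffs[0] is the DC term, i.e. the sum of the window (10.0)
features = sorted(coeffs, key=abs, reverse=True)[:2]
```

Concentrating the signal's energy into a few low-order coefficients is what makes the DCT attractive for non-stationary signals of this kind.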




Real-time System-of-Systems Security Monitoring

Nathan Shone
n.shone@2007.ljmu.ac.uk


Security is a major concern for modern networks; recent news stories such as high profile breaches and theft of sensitive information highlight the extent of the problem faced. Combined with the increase in services now available online, this poses the question of whether current security measures are sufficient and whether or not they can protect newer technologies.


System-of-Systems (SoS) is a relatively new area, which is facing its own security challenges. A SoS is a large-scale complex system, composed of multiple independent systems. SoSs exhibit characteristics such as emergent behaviour, large scale, dynamicity and complexity, which result in system boundaries that are largely unknown. These characteristics make securing them a difficult task, but the greatest threat to a SoS is the component systems themselves.


In a SoS, component systems from different organisations collaborate and pool their resources, thus exposing more of themselves than normal. This fact is open to exploitation; compromised components with direct internal access to other components are high-risk threats and can go undetected by existing security measures. A compromised or rogue component system has the ability to destroy a SoS, steal sensitive data, conduct reconnaissance of the system, sabotage provided services and compromise other components.


Existing similar solutions such as Host Intrusion Detection Systems (HIDS) and Host Intrusion Prevention Systems (HIPS) are inefficient when applied to monitoring SoSs, due to the complexity and dynamicity involved. Solutions such as these primarily target external attacks or malware, whereas the problem arises from internal misbehaviour of components that could jeopardise the SoS. The lack of suitable security monitoring solutions for this problem means that SoS compositions are a high security risk. This emphasises the need for capable security monitoring that can observe a component's internal behaviour, ensuring it does not negatively affect the SoS.


In this presentation, we will examine the area of SoS security monitoring and the challenges faced when attempting to monitor such systems. We will also look at similar existing solutions, as well as outlining the proposed design that could be used to monitor SoS component systems.


Using a Multiplayer Game Interface to Manage Critical Infrastructure Protection

Laura Pla Beltran
L.Pla-Beltran@2010.ljmu.ac.uk



The correct functioning of critical infrastructures is crucial for modern society. These systems provide the backbone to our everyday lives and, therefore, securing and protecting them is a challenging yet vital issue. This project studies the vulnerabilities and protection challenges these systems present, and proposes a solution using a collaborative multi-user game-based visualisation interface. The proposed tool models and represents a complex system in an easy-to-understand, comprehensive manner, enabling all those involved in monitoring the system to detect problems and threats, and make better informed security decisions. The interface will resemble a massively-multiplayer online game (MMOG), in which different users with different roles and access levels will have a personalised view of the system and react to security warnings in order to protect the system. The particular nature of critical infrastructures poses several modelling challenges that we will need to overcome in order to provide an accurate and effective monitoring and protection tool that can aid in their protection.





A Robust Region-Adaptive Digital Image Watermarking System

Chunlin Song
C.L.Song@2004.ljmu.ac.uk



Digital image watermarking techniques have been drawing the attention of researchers and practitioners as a means of protecting copyright in digital images. The technique is a subset of information hiding technologies and works by embedding information into a host image without perceptually altering the appearance of the host image. Despite the progress in digital image watermarking technology, the main objective of the majority of research in this area remains the improvement of the imperceptibility and robustness of the watermark to attacks.


Watermark attacks are often deliberately applied to a watermarked image in order to remove or destroy any watermark signals in the host data. Our research in this area identified a number of different types of watermark attack, including removal attacks, geometry attacks, cryptographic attacks and protocol attacks. We also found that both pixel and transform domain watermarking techniques share a similar level of sensitivity to these attacks.

An experiment conducted to analyse the effects of different attacks on watermarked data led us to conclude that each attack affects the high and low frequency parts of the watermarked image spectrum differently. Furthermore, the findings also showed that the effect of an attack can be alleviated by using a watermark image with a frequency spectrum similar to that of the host image. The results of this experiment led to a hypothesis, which we set out to prove by applying a watermark embedding technique that takes into account all of the above phenomena. We call this technique Region-Adaptive Watermarking.



Region-Adaptive Watermarking is a novel watermark embedding technique in which the watermark data is embedded in different regions of the host image using a combination of DWT and SVD techniques. This technique is derived from the earlier hypothesis that the robustness of a watermarking process can be improved by using watermark data whose frequency spectrum is not too dissimilar to that of the host data. To facilitate this, the technique utilises dual watermarking technologies and embeds parts of the watermark images into selected regions of the host image. Our experiments show that the technique has improved the robustness of the watermark data to image processing attacks as well as geometric attacks, thus validating the earlier hypothesis.
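The frequency-region separation underlying the technique can be illustrated with a one-level Haar DWT. This is our own one-dimensional sketch of the general idea (the actual system applies DWT and SVD to two-dimensional images); the example signal is an assumption.

```python
import math

# One-level Haar DWT: split a signal into a low-frequency
# approximation and high-frequency detail, the two kinds of region
# a region-adaptive embedder treats differently.
def haar_dwt(signal):
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

approx, detail = haar_dwt([4.0, 4.0, 2.0, 0.0])
# the flat first pair yields zero detail (pure low frequency);
# the varying second pair yields a non-zero detail coefficient
```

Matching the watermark's spectrum to the host region's spectrum, as the hypothesis above suggests, amounts to embedding low-frequency watermark content into the approximation band and high-frequency content into the detail band.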


In addition to improving the robustness of the watermark to attacks, we can also show a novel use of the Region-Adaptive Watermarking technique as a means of detecting whether certain types of attack have occurred. This is a unique feature of our watermarking algorithm which separates it from other state-of-the-art watermarking techniques. The watermark detection process uses coefficients derived from the Region-Adaptive Watermarking algorithm in a linear classifier. The experiment conducted to validate this feature shows that on average 94.5% of all watermark attacks can be correctly detected and identified.


USING NETWORK STRUCTURE TO ENHANCE SECURITY AND TRUST OF UBIQUITOUS COMPUTING SERVICES

Behnam Bazli
B.Bazli@ljmu.ac.uk


Developments in Ubiquitous Computing are expected to introduce interactive and programmable living spaces as an integral part of future computing environments. Such a user-oriented environment will require a secure network infrastructure to ensure integrity, interoperability, privacy, fault tolerance and simple development and execution of applications.

Various middleware architectures have been proposed for ubiquitous computing environments as a means of securing the network infrastructure, enabling integrity, privacy and reliability mechanisms as part of the service access interfaces and implementations provided. In a Ubiquitous Computing context, the middleware acts as a cluster of services, providing a distributed layer between applications and the hardware infrastructure to facilitate the integration of components and easy deployment within the ubiquitous environment.

Such solutions exist as research projects, but have only been addressed and implemented within small spaces with limited functionality and services. This work aims to use small-world network infrastructures and their properties to propose a middleware for efficiently coordinating the components in large-scale networks and smart buildings. The solution will use factors such as context awareness, location sensing, the building layout and other metrics and services in order to apply the required security policies.

Unlike random networks, a scale-free network retains its essential characteristics as its size increases. By considering this, and the properties of small-world networks, a suitable infrastructure will be designed as a middleware solution to accommodate agents, entities, service and application domains and security systems within a user-oriented Ubiquitous Computing environment. Moreover, this will be applied to highlight security challenges, as well as ensuring the reliability, privacy, fault tolerance, efficiency and overall effectiveness of the network.

Expanding the network infrastructure would ordinarily require the middleware to support new entities through extensive reprogramming to accommodate the change in network size. By making use of scale-free properties, the proposed infrastructure will allow the middleware to maintain balance by rebinding to equivalent security services at specific locations within the network as the size of the network changes.
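Why a scale-free network keeps its character as it grows can be seen in a toy preferential-attachment model. This is a textbook illustration, not the proposed middleware: the network sizes and the uniform-sampling trick are our own choices.

```python
import random
from collections import Counter

# Preferential attachment: each new node links to an existing node
# with probability proportional to that node's degree, so early
# nodes become hubs and the degree distribution stays heavy-tailed
# however large the network grows.
random.seed(0)

edges = [(0, 1)]        # seed network
endpoints = [0, 1]      # list of all edge endpoints; sampling it
                        # uniformly picks nodes in proportion to degree

for new_node in range(2, 200):
    target = random.choice(endpoints)
    edges.append((new_node, target))
    endpoints += [new_node, target]

degrees = Counter(n for e in edges for n in e)
# a handful of early nodes accumulate most of the links
```

It is this stable hub structure that the proposed middleware exploits when rebinding to equivalent security services as the network changes size.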


The development of an adaptive environment (framework) to assist the teaching, learning and assessment of geography within the Oman secondary education system

Batoul AL-Lawati
B.M.Al-Lawati@2009.ljmu.ac.uk



The radical changes in knowledge- and technology-based societies have required individuals to be knowledgeable about information and communication technologies because of their ubiquitous usage in all aspects of our daily lives. Educational institutes thus need to strengthen their ICT curriculum in every subject area. As the level of technology rises, the acquisition of technological skills will cease to be regarded as anything other than essential. Proficiency in the use of technology will become an indispensable skill at almost all levels of work, and the criterion will be not whether an individual can use technological equipment but the extent to which an individual can solve problems and enhance procedures using the available technology. Using information technologies in the field of social studies, such as geography, contributes to the visualization of abstract phenomena and concepts and increases students' interest in social studies. Geography is a subject that is particularly suitable for teaching through the use of technology, and it offers many opportunities for the development of skills in ICT and other technology areas.

The aim of this research is to explain how technology has enhanced teaching, learning and assessment in education, especially in geography in schools in the Sultanate of Oman. It attempts to develop and design software tools to create an active and effective educational environment that supports the teaching and learning process. We will therefore compare two samples of secondary school students, one studying via the traditional education method and the other studying using software in the classroom, to explore whether the technologies can enhance the teaching and learning of geography.





Application examples of the Fisher information metric

Héctor Ruiz
H.Ruiz@2010.ljmu.ac.uk



The Fisher information metric defines a Riemannian space where distances reflect similarity with respect to a given probability distribution. From a practical point of view, we can use the Fisher metric derived from a given dataset in conjunction with data mining methods to produce results that are informed about the posterior class probabilities in the space of the data. We will illustrate this through two examples: building a proximity network from synthetic data and applying convex nonnegative factorization on real-world brain tumour data.
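A small standalone example (ours, not one of the talk's two applications) makes the "distances reflect similarity" point concrete: for a Bernoulli(p) model the Fisher information is I(p) = 1/(p(1-p)), and the induced Riemannian distance between parameters is d(p1, p2) = 2|arcsin(sqrt(p2)) - arcsin(sqrt(p1))|.

```python
import math

# Fisher information of a Bernoulli(p) distribution.
def fisher_information(p: float) -> float:
    return 1.0 / (p * (1.0 - p))

# Fisher-Rao (geodesic) distance between two Bernoulli parameters.
def fisher_distance(p1: float, p2: float) -> float:
    return 2.0 * abs(math.asin(math.sqrt(p2)) - math.asin(math.sqrt(p1)))

# Equal parameter gaps are not equally "far": near the boundary the
# metric stretches distances, because I(p) blows up as p -> 0 or 1.
d_mid  = fisher_distance(0.40, 0.50)   # gap of 0.10 near the middle
d_edge = fisher_distance(0.01, 0.11)   # gap of 0.10 near the edge
```

The same effect, in higher dimensions, is what lets a data-derived Fisher metric warp the data space so that distances track posterior class probabilities.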






An Entity-based Social P2P Network for Sharing Human Life Digital Memories

Haseeb Ur Rahman
H.Ur-Rahmam@2008.ljmu.ac.uk



People capture the memorable events of their lives in a digital form, called human life digital memories, in order to share them in their social network. The Memory for Life (M4L) system is an effort to allow the proper storage, organization and annotation of a user's data and to generate meaningful information about a person's life, such as their interests, schedule, health and so on, in order to portray their personality. To share human life digital memories, the basic challenges to overcome are: defining a suitable underlying network structure; a searching technique that allows searching amongst appropriate peers for the correct data; and security and data privacy, especially the sharer's control over their personal data, even after it has been shared.

The underlying network structure has a great effect on the performance of a network. Considering the requirements for sharing human life digital memories and the complexity of the underlying network structure, we have proposed a novel Entity-based social peer-to-peer network. It organizes the network in the form of communities of entities and connects peers in a community according to the memory threads in human life digital memories. In this context, an entity could be a person, place, object, interest and so on. A memory thread is the collection of digital memories that belong to an entity, which can be organized according to a criterion. Memory threads are reflected in a similar way in a community, to form Memory Threads-based Communities (MTC).


The idea of Entities reduces the degree of separation between peers because not only human-human links are utilized for connectivity but also human-data-human links, in the form of Entity-Entity links. Similarly, MTCs allow the network to be organized in a meaningful way, which increases its performance. The purpose of using a peer-to-peer network as the underlying network structure is to avoid a single authority that collects personal information, which achieves data privacy to some extent. The simulation results for MTCs have shown a decrease in network overhead and an improved search query success rate. We are currently working on implementing the idea of entities in the peer-to-peer network, based on data obtained from Web-based online social networks, and on implementing our own searching technique in MTCs.







MEDICAL HEALTHCARE PROVISIONING USING WIRELESS SENSOR NETWORKS

Arshad Haroon
A.Haroon@2006.ljmu.ac.uk


Advances in healthcare monitoring and the increased use of modern technologies to provide medical healthcare services have changed the way patients are administered. At the same time, the number of individuals suffering from long-term debilitating conditions as they grow older is increasing. This demand has resulted in greater health and social care monitoring systems.

In recent years, internetworking research has shifted from an exclusive focus on the office to include the home environment. Increased use of network-enabled home medical devices will help us administer our own health and make changes to our diet on the basis of the results these devices provide. A home medical network interconnects various electronic devices within a home environment using different industry interconnectivity standards, such as the Open Services Gateway initiative (OSGi) and Universal Plug and Play (UPnP).

Modern communication and wireless-enabled devices will help us perform healthcare tasks and enable us to manage our lifestyle in order to have a better quality of life. These systems are designed to provide higher speed and accuracy in how data is recorded. In general, a healthcare monitoring system has sensors and actuators that are used to monitor and log data; this data is then transmitted to and stored in a database, which is used to support the diagnosis process carried out by the general physician. Body Area Networks are a relatively new initiative and are primarily being designed for use within the health industry. Their main objective is to enable doctors and other medical staff to safely monitor the health status of patients independent of their location (i.e. at home, shopping, or working).

The aim of the project is to develop a new framework to support networked medical devices and services in environments, such as the home, where we can use sensor technology to improve healthcare provisioning, reduce costs and streamline how and when services are delivered. The scope of this project is to wirelessly incorporate home medical devices, which may be embedded in the environment as well as within and on a person's body. This will provide the basis for a detailed evaluation of medical devices and services and their impact on healthcare in the home environment.










RESEARCH, DESIGN AND DEVELOPMENT REVIEW OF THE CLOUD COMPUTING MANAGEMENT SYSTEM (CCMS)

Glyn Hughes
g.d.hughes@ljmu.ac.uk




Abstract - Due to the vast scale of Cloud Computing systems, management of the numerous physical and virtual components may become unwieldy. Many software packages that have historically been installed on desktops / workstations for years are slowly but surely being converted to Cloud Computing solutions. The problems that are emerging today are only set to worsen as Cloud Computing becomes ever more pervasive. This paper synopsises previous investigatory research concerning these emerging problems. It then continues to describe and review the structure and operation of the Cloud Computing Management System, which utilizes an object mapping declarative language, which in turn utilizes an object oriented system to support key operations.

Keywords - Cloud Management, Hypervisors, Virtualisation Management, Autonomic Systems.







Enhanced Computation Time for Fast Block Matching Algorithm

Zaynab Ahmed
Z.Ahmed1@2009.ljmu.ac.uk

A novel method is proposed to decrease the computational time required to determine the matching macroblock between the reference and current frames, while preserving the resolution of the decompressed videos. This is achieved by stopping the calculation of the sum of absolute differences (SAD) between the pixels in the current macroblock and a macroblock in the reference frame as soon as the partial sum exceeds the smallest previously calculated value.

Experimental results using various video sequences with different search methods (Full Search, Diamond Search and New Three Step Search) showed that the proposed technique reduces the search time of macroblock matching without affecting the resolution. The proposed technique was also applied to the Adaptive Rood Pattern Search technique, with further changes aimed at improving the resolution of the decompressed videos while reducing the processing time required.
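The early-termination idea described above can be sketched in a few lines. This is our own illustration of the general technique, not the authors' implementation; the 2x2 macroblocks are toy data.

```python
# Early-termination SAD: abandon the accumulation for a candidate
# macroblock as soon as the partial sum already exceeds the best
# (smallest) SAD found so far, since it can no longer win.
def sad_with_early_exit(current, candidate, best_so_far):
    total = 0
    for row_c, row_r in zip(current, candidate):
        for a, b in zip(row_c, row_r):
            total += abs(a - b)
            if total >= best_so_far:   # cannot beat the best match
                return None            # stop early, skip candidate
    return total

current   = [[10, 12], [11, 13]]       # macroblock in current frame
cand_good = [[10, 11], [11, 14]]       # SAD = 0 + 1 + 0 + 1 = 2
cand_bad  = [[90, 90], [90, 90]]       # rejected after one pixel

best = float("inf")
for cand in (cand_good, cand_bad):
    sad = sad_with_early_exit(current, cand, best)
    if sad is not None and sad < best:
        best = sad
```

Because the abandoned candidates could never have become the match, the search returns exactly the same macroblock as the exhaustive computation, which is why the resolution is unaffected.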


A Signature Detection Scheme for Distributed Storage

Robert Hegarty
R.C.Hegarty@2006.ljmu.ac.uk


The use of distributed storage platforms in the form of Storage as a Service (SaS) clouds provides users with a convenient and cost-effective way to store and share data. The amount of storage provided is practically limitless, and the pay-as-you-use model is popular as it removes the requirement to invest up front in storage devices, which are perpetually falling in price and are thus bad investments. This flexibility, coupled with the ability to access data from multiple devices, has led to an explosion in the uptake of SaS.

Digital forensics is the process of analysing computers or storage devices for evidence of a breach of policy or illegal activity. Signature detection is an approach often applied in digital forensics to automate the search for target files on a storage device. The conventional approach to signature detection is to image the storage device being examined, making an exact replica of the data stored on it. A signature is created from each file in the image and compared with a signature library, to determine whether any matches can be found.
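The conventional per-file workflow can be sketched as follows. This is a generic illustration of hash-based signature detection, not the authors' scheme; the file names, contents and the choice of SHA-256 are assumptions.

```python
import hashlib

# Conventional signature detection: hash each file in the image and
# look the digest up in a library of known target-file signatures.
def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Library of signatures of known target files.
signature_library = {sha256(b"stolen-document-v1")}

# Files recovered from the imaged storage device.
files = {
    "report.txt":  b"quarterly figures",
    "suspect.bin": b"stolen-document-v1",
}

matches = [name for name, data in files.items()
           if sha256(data) in signature_library]
```

It is exactly this image-everything-then-hash-everything workflow that breaks down at SaS scale, motivating the signature compression and resource allocation techniques described below.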

The scale and distribution of data in SaS is such that existing signature detection techniques are not suited to the task of analysing data on these platforms. To maintain a practical and effective digital forensic capability, a new approach to the detection of target data on such platforms is required.

We have developed techniques to overcome the barriers presented by the scale and distribution of data found in SaS platforms. Our techniques address the challenges associated with scalability, distribution and resource allocation when analysing large-scale distributed data, making signature detection feasible. Through experimentation we validate our techniques for signature compression and resource allocation, illustrating how they can make the analysis of large-scale distributed storage platforms feasible within reasonable temporal and fiscal constraints.