Session 1C – Multimedia Applications


Virtual Time Windows: Applying Cross Reality to Cultural Heritage
CJ Davies, Alan Miller, Colin Allison
School of Computer Science
University of St Andrews
{cjd44,alan.miller,ca}@st-andrews.ac.uk
Abstract—Virtual worlds have proven popular in academia as extensible multi-user 3D virtual environments capable of hosting a wide range of experimental scenarios. One of the products of virtual worlds research is the cross reality paradigm: the fusion of ubiquitous sensor/actuator infrastructure and virtual worlds, facilitating synchronous bidirectional intercommunication between real and virtual environments and allowing each to reflect, influence and merge with the other.
We introduce the ongoing Virtual Time Window project, an application of cross reality to the domain of cultural heritage that promises to further existing alternate reality work in the field by allowing simultaneous exploration of real and virtual environments: visitors to a cultural heritage site can simultaneously explore its virtual reconstruction via a tablet computer. Unlike previous augmented reality and virtual reality projects in the field, the virtual reconstruction is also accessible to persons remote from the real site, affording intriguing interactions between visitors at the site and those exploring it from elsewhere.
I. INTRODUCTION
One application of 3D virtual environments is the simulation of real world locations [1]. Whilst these simulations can be explored by users physically disjoint from the real location that they represent, intriguing possibilities arise when one considers simultaneous exploration of such a simulation and its real world counterpart, particularly if the simulation represents the state of the location at an earlier point in history since which drastic changes may have occurred.
By introducing sensor readings into a simulation, its state can alter in response to real time physical and environmental changes in the state of the real world location: lighting, weather, movement, etc. The result is a simulation that better represents the state of the real world, improving the experience of simultaneous exploration.
Inversely, changes to the state of a simulation can affect the real world via actuators: HVAC systems, door openers, Representation of Sensory Effects (RoSE) devices [2], etc. This feedback facilitates better presentation and easier understanding of digital content, making use of more modalities than simple VDU and sound.
This synchronous combination into a single system of augmented virtuality (affecting a simulation according to information from the real world) and augmented reality (affecting the real world according to information from a simulation) [3] gives rise to a specific mixed reality situation referred to more precisely as cross reality [4].
Fig. 1. Illustration of the VTW concept: simultaneously exploring the present day traces of the cathedral at St Andrews and its virtual reconstruction.
This paradigm will further the adoption of alternate reality technologies in the domain of cultural heritage by allowing interested individuals to simultaneously explore a real world cultural heritage site and a complete virtual reconstruction of it, with each environment linked to the other to better reflect it and provide a seamless exploration experience. This style of interaction between physical and digital content is particularly rewarding when the current state of the site does not represent its former glory, such as where only traces of a once magnificent building remain.
In contrast to previous alternate reality projects within the domain, the use of a virtual environment to host a complete reconstruction means that it can easily be made remotely accessible in a very similar manner as at the site itself. This presents many intriguing opportunities: interaction between on-site and off-site visitors, including the ability for domain experts to impart their knowledge to site visitors even if they themselves are not at the site; continued interaction with the digital content, in a familiar fashion, after visitors have left the site; and increased accessibility of the digital content for persons unable to physically visit the real site for any reason.
We continue in section II by briefly describing the cross reality paradigm, its position in relation to other alternate realities and how it can benefit cultural heritage. The Virtual Time Window project is introduced in section III with a high-level discussion of its design, before exploring the design space and the suitability of available technologies to the project in section IV. Our progress in implementing the project is discussed in section V.
II. CROSS REALITY
Cross reality is the ubiquitous mixed reality situation that arises from the fusion of real-world sensor/actuator infrastructure with virtual environments, such that augmented reality and augmented virtuality manifest simultaneously and facilitate synchronous multi-directional exchange of media and control information between real and virtual environments. Sensors collect and tunnel dense real-world data into virtual environments, where they are interpreted and displayed to dispersed users, whilst interaction of virtual participants simultaneously incarnates into the real world through a plenitude of diverse displays and actuators [5].
The principal features that distinguish cross reality from other alternate realities that computer science has explored are:
1) a shift from single- to bi-directional information flow between real and virtual environments [4]
2) that both environments are complete unto themselves (but are enriched by their ability to mutually reflect, influence and merge into one another) [6]
Because a cross reality system comprises two environments, one augmented reality and the other augmented virtuality, it cannot be placed at a single point on Milgram's virtuality continuum [7] but instead occupies two separate points, one to the left and the other to the right of the centre.
Cross reality expands upon existing alternate reality research in the domain of cultural heritage by increasing the level of interactivity between users, artefacts and sites, and by coupling real world conditions more tightly to those pertaining to the digital content of the site and vice-versa, allowing each to influence the other to a greater degree. In particular, cross reality: allows greater user interaction with digital content; allows conditions within the real world to affect digital content; allows conditions pertaining to digital content to affect the real world; increases the extent and amount of digital content; and allows easier and greater remote access to digital content.
III. THE VIRTUAL TIME WINDOW PROJECT
We have designed and begun implementation of a novel application of the cross reality paradigm to the exploration of cultural heritage sites in the Virtual Time Window (VTW) project, investigating the use of tablet computers to present visitors to a cultural heritage site with a 'window' into a virtual reconstruction of the entire site as it was at an earlier point in time. Visitors view the site in its current state around them and in an earlier state through the window: simultaneous exploration of real and virtual environments. Figure 1 illustrates this concept.
To maintain a natural and unhindered sense of exploration, VTW does not require visitors to manually control navigation within the virtual environment.

Fig. 2. Illustration of how VTW maps a tablet's physical pitch (1) and bearing (2) onto virtual environment camera control.

Changes in a tablet's position within the site are automatically reflected by a corresponding movement within the virtual environment, making use of a combination of location tracking technologies. The direction that a visitor faces is monitored by magnetometer ('digital compass') and the angle that their tablet is held at by accelerometer; this information is reflected by the direction and pitch of the camera within the virtual environment (see figure 2). The resulting style of interaction is similar to using a digital camera to take a photo: the screen on the back of the camera shows what the image will look like when the shutter is released, whilst with VTW the screen on the tablet shows what the site looked like in the past.
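This bearing-and-pitch mapping can be sketched compactly. The following is a minimal illustration in Python (the project itself targets OpenSim viewer software, not this code); the function name, axis convention (x east, y north, z up) and degree-based inputs are our own assumptions.

```python
import math

def look_direction(bearing_deg, pitch_deg):
    """Unit look-at vector for the virtual camera, given a compass
    bearing (degrees clockwise from north) from the magnetometer and
    a pitch (degrees above the horizon) from the accelerometer."""
    b = math.radians(bearing_deg % 360.0)
    p = math.radians(max(-90.0, min(90.0, pitch_deg)))  # clamp pitch
    return (math.cos(p) * math.sin(b),   # x: east component
            math.cos(p) * math.cos(b),   # y: north component
            math.sin(p))                 # z: up component
```

A bearing of 0 with zero pitch yields a camera looking due north along the horizon; raising the tablet raises the virtual camera correspondingly.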
Sensor infrastructure at the site monitors environmental and other physical properties, and these data affect the state of the virtual reconstruction in real-time so that the view through the window better merges with what the user sees around it. Actuators allow interaction of visitors and scripts within the virtual reconstruction to manifest into the site, affording novel methods of explanation and demonstration of points of interest.
The design of VTW (see figure 3) is client-server. The server is a computer responsible for running the virtual environment server software, for distributing content to tablets running compatible virtual environment client software, and for receiving and processing sensor data (with these data affecting the content distributed to the tablets). Clients include visitors to the site itself, who are simultaneously exploring both the real site and the virtual environment with tablets, and remote visitors who are exploring only the virtual environment from elsewhere using their desktop or laptop computers. Clients also include sensors and actuators at the site, comprising both dedicated wired and wireless sensor/actuator infrastructure and mobile sensors and actuators present in the tablets.

Fig. 3. The main functional components and technologies of VTW from a high level of abstraction.
IV. EXPLORATION OF DESIGN SPACE
This section examines some of the constituent components that comprise VTW and surveys the suitability of available platforms according to the specific requirements of the project.
A. Virtual Reconstruction Infrastructure
Whilst the inaugural cross reality projects and the majority that have followed have made use of virtual worlds [8] such as Second Life [9], OpenSimulator [10] and Open Wonderland [11], the paradigm can also be implemented using 'traditional' game and visualisation solutions such as Unity [12] and Unigine [13]; in fact this has already occurred [14]. However virtual worlds, particularly open virtual worlds such as OpenSim, offer researchers many benefits.
Foremost, the accessibility of virtual worlds with regard to content creation makes them a more attractive option for researchers who may not have prior experience with a standalone 3D modelling package, which is a common requirement for working with game and visualisation engines. Virtual worlds provide powerful yet straightforward interfaces for building and personalising avatars, virtual spaces and objects, and while these tools may lack some of the more advanced features and effects of professional modelling packages, resulting in lower photorealism, they allow researchers to create diverse and expansive environments more easily and affordably, both in terms of experience and time required [15]. This allows more to be achieved within a limited budget, as researchers can focus on running experiments and collecting results rather than creating the experimental environment itself.
This ability to create content and shape the virtual environment in an almost infinite number of ways, combined with the inherent open-endedness of virtual worlds and the ability to easily integrate with external data sources through application programming interfaces, streaming in video, sound and Web content, presents researchers with nearly endless possibilities, leaving them unrestricted to develop whatever styles of interaction they choose and to use the environment for whatever they wish [16]. These experiments can be run for extended periods of time, over months or more, at very low cost, thanks to the open licensing that many virtual worlds have [17]. Researchers are free to make use of existing hardware at their institution, and to deploy and administer their own infrastructure in any manner they see fit.
Researchers at St Andrews have used OpenSim for a number of projects, among them the accurate recreation of historical monuments including a Byzantine basilica in the Sparta region of Greece [18] and the cathedral at St Andrews as seen in figure 1. We run a number of OpenSim servers and clients on a variety of different hardware, including small form factor computers that are easily transportable to public exhibition sites.
We are building upon our knowledge and experience by using OpenSim as the virtual reconstruction infrastructure component of VTW. Our physical proximity to the cathedral at St Andrews, which is a cultural heritage site of huge importance, combined with our existing OpenSim reconstruction of it, provides an ideal situation for development and testing of VTW without requiring a lengthy content creation phase.
B. On-Site Sensor/Actuator Infrastructure
Numerous platforms for distributed sensing and actuating exist [19]. Commercially procurable hardware designed specifically for the task, such as wireless sensor network (WSN) solutions including Berkeley motes [19], is one option, whilst general-purpose low-power prototyping and embedded platforms, such as Phidgets and Arduino, have also proven popular for creation of bespoke sensor/actuator devices [20]. Software platforms include those tailored specifically for operation on WSN motes, such as TinyOS [21], and more general purpose operating systems such as Contiki [22], which is designed for running on devices with limited memory footprints and computational capabilities.
In addition to dedicated sensing infrastructure, VTW uses sensors built into the tablet computers. Accelerometer and magnetometer determine tablets' orientations whilst location tracking technologies including GPS determine their position within a site. Other sensors connected to tablets can be used to detect changes in physical and environmental conditions throughout the site, either to bolster data from dedicated sensing infrastructure or to allow VTW to be deployed with no dedicated sensing infrastructure; the latter scenario suits rapid deployment and withdrawal, which is of particular use for temporary installations.
Actuators primarily serve as interfaces to controllable luminaires, fans, animatronics, etc. but also to more 'traditional' output devices such as monitors, projectors and loudspeakers. The tablets themselves provide several output modalities: monitor, sound and vibration at a minimum.
Faced with this heterogeneity, it is critical that VTW integrates with as many different sensor/actuator platforms as possible through the use of interfaces to widely adopted standards. The issue of standards is discussed further in section IV-D.
C. Networking
VTW relies upon networking between its constituent components: on-site clients, remote clients and server. There are several scenarios for the provision of this infrastructure, distinguished primarily by the presence or lack at the site of a suitable Internet connection and a suitable location to situate the server. Note that the server entity can comprise multiple physical computers, dependent upon the scale of the deployment.
The simplest scenario is where the site both has a suitable Internet connection and can appropriately accommodate the server physically. On-site clients communicate directly with the server via wireless networking: tablets via 802.11 and sensor/actuator infrastructure via a low-power wireless protocol such as ZigBee (or alternatively wired Ethernet, though many sites will understandably be averse to unsightly cabling). Remote clients communicate with the server via the Internet.
If the server cannot be located on-site, for example because there is no secure and weatherproof building present, then it must be situated elsewhere, such as a datacentre, and both on-site and off-site clients must access it via the Internet. This does not change anything for off-site clients, whilst the style of communication used by on-site clients depends upon whether a suitable Internet connection is available at the site.
If this is the case, then an on-site gateway communicates with the on-site clients via wireless or wired networking and marshals communications to and from the server via the Internet. This gateway is a much smaller and lower powered device than the server and can be deployed outdoors in a small weatherproof enclosure.
If a suitable Internet connection is not available, then on-site clients must make use of wireless WAN connectivity such as 3G. However, the default behaviour of the OpenSim client/server model is not well suited to deployment over 3G networks. Unlike online games, where content such as textures and multimedia is stored on the client and communication between client and server comprises only gameplay elements such as position updates, OpenSim stores content on the server and this must be downloaded by the client each time it logs into a particular location. This is largely due to the user-generated nature of the content, which means that it is susceptible to frequent change, and it results in high initial bandwidth requirements immediately after a client login [17], [23]. However, once the content has been retrieved, bandwidth requirements drop substantially to levels more typical of online games, which are theoretically attainable via 3G.
We are designing experiments with OpenSim and 3G networks to ascertain the feasibility of such a VTW deployment. This involves modification of an OpenSim client to allow content to be downloaded and cached whilst a suitably fast Internet connection is available, so that when a login is performed via a 3G network the only traffic that needs to be sent is movement updates and similar, with the content being read from the device's local storage. This is a realistic scenario: one can easily imagine that the tablet computers are preloaded with the necessary content before being delivered to the site, where they are then loaned to visitors upon their arrival, in a similar fashion to how audio tours are already implemented at many sites.
Adoption of 4G/LTE will provide another option for on-
site client communication at sites that do not have suitably
fast Internet connections.
D. Standards
No single standard or framework for the development of cross reality systems has been widely adopted, a situation that presents a serious challenge to the greater realisation of the paradigm through projects such as VTW.
The demand for standards that facilitate synchronous bidirectional flow of sensory and control information between sensor/actuator infrastructure and display technologies, whether that display is a virtual world or a traditional two-dimensional visualisation such as a graph, did not, however, originate with cross reality. Sensor/actuator infrastructure was employed long before the adoption of virtual worlds as a research tool, and as such there are numerous frameworks and standards that can contribute toward the creation of a cross reality system: IEEE 1451 [24], Sensor Web Enablement [25], Ubiquitous Sensor Network [4], SenseWeb [26], Global Sensor Network [27], etc.
Previous cross reality projects have either adapted and built upon existing standards and frameworks such as these, or have developed their own bespoke and proprietary solutions. The former approach can unfortunately result in the use of standards in manners for which they were neither intended nor ideally suited, and the latter in the creation of systems that are only compatible with particular sensor/actuator infrastructure or virtual environment platforms, which means that they cannot be easily extended by other interested parties or used as the basis for future projects.
The most promising endeavour toward alleviating this situation is ISO/IEC 23005, 'Information technology - Media context and control'. Better known as MPEG-V, this is the result of three years of work from about 30 EU-based organisations, including Philips Research, to develop a global standard for connecting different virtual worlds and for connecting virtual worlds to the real world [28].
VTW has adopted ISO/IEC 23005 not only to ensure maximum compatibility with currently available and future sensor/actuator infrastructure and virtual worlds, but also to ensure that it can be extended, built upon and integrated into future projects.
V. IMPLEMENTATION OF VTW
In this section we discuss our progress in implementing VTW, justifying our decisions on the particular hardware and software approaches adopted to realise constituent components of the designed functionality.
A. Tablet Computer
Tablet computers are used to display the OpenSim reconstruction and other associated digital content to visitors using virtual world 'viewer' software. Currently such software is only available for x86/x86-64 platforms, which limits which tablet computers can be used for the project. Currently we are using an MSI WindPad 110W 10" tablet computer that sports an AMD Fusion Z-Series APU (dual core x86-64 processor with Radeon HD6250 graphics) [29].
Although popular tablet computers running Apple's iOS and Google's Android operating systems are now available with sufficiently powerful processors and graphics chipsets to produce the 3D graphics that viewers require, they are unfortunately ARM platforms which cannot run viewer software written for x86/x86-64 platforms. We are developing VTW under the assumption that, with the continued adoption of virtual worlds and the ever growing popularity of tablet computers, which are increasing in computational power and decreasing in cost with every new release, viewers written for ARM platforms will be released before the project's completion. This will greatly increase the range of supported hardware upon which VTW can be deployed.
An additional hardware consideration when determining a tablet computer's suitability for VTW is the presence of and/or compatibility with sensor and actuator devices. Most tablet computers have at least GPS, front and rear facing cameras, a tilt sensor and a light sensor built in. With an x86-64 tablet like the WindPad, which sports a standard USB port, the connection and configuration of further devices is a trivial affair.
B. Translating Tablet Location into OpenSim
We have developed an OpenSim region module that translates a real world position specified by a latitude and a longitude into an equivalent position within an OpenSim simulation. The only requirement for this translation is the prior knowledge of a single 'anchor' point within the simulation for which the equivalent latitude and longitude is known. If the simulation spans multiple adjacent OpenSim regions (squares of land 256x256 metres), then only a single anchor in one of these regions is needed for the entire simulation.
The translation is achieved by calculating the distance between the anchor's latitude and longitude and the newly supplied latitude and longitude through spherical trigonometry, specifically the haversine formula, which gives the great-circle distance between two points on the surface of a sphere. This distance is then scaled according to the scale of the simulation and returned as an OpenSim vector from the anchor's position in the simulation. This allows OpenSim avatars to be moved according to readings from a GPS receiver and conceivably has many additional applications.
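The translation can be sketched as follows. The region module itself is C# inside OpenSim; this is an illustrative Python version with names of our own, combining the haversine distance with an initial-bearing calculation to split the offset into east and north components of the returned vector.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def latlon_to_opensim(anchor_lat, anchor_lon, anchor_xy, lat, lon, scale=1.0):
    """Map a latitude/longitude onto an (x, y) simulation position, given
    one 'anchor' point whose real and virtual coordinates are both known."""
    phi1, phi2 = math.radians(anchor_lat), math.radians(lat)
    dphi = math.radians(lat - anchor_lat)
    dlam = math.radians(lon - anchor_lon)

    # Haversine formula: great-circle distance from anchor to target.
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # Initial bearing from anchor to target, used to decompose the
    # distance into east/north offsets for the simulation vector.
    y = math.sin(dlam) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlam))
    bearing = math.atan2(y, x)

    east = dist * math.sin(bearing) * scale
    north = dist * math.cos(bearing) * scale
    return (anchor_xy[0] + east, anchor_xy[1] + north)
```

With a 1:1 scale, a point 0.001 degrees north of the anchor lands roughly 111 metres north of the anchor's simulation position; for a multi-region simulation the resulting vector simply crosses the 256-metre region boundaries.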
For the purposes of VTW, that is moving avatars within an OpenSim reconstruction of a real world cultural heritage site, the accuracy attainable from GPS even in best case scenarios is not sufficient for a satisfactory user experience. Thus we are investigating the suitability of additional technologies to use in conjunction with GPS in order to attain the accuracy that we require.
Options include increasing the accuracy of GPS readings through use of a DGPS base station, or supplementing GPS readings with other location determining technologies such as image recognition, range imaging via stereo cameras, time-of-flight via radio pulses, laser rangefinding, etc.
C. Translating Tablet Bearing and Orientation into OpenSim
We have successfully used an ADXL335 3-axis ±3g accelerometer connected to an Arduino to control the pitch of the camera in an OpenSim viewer. We had originally planned to use the WindPad's own tilt sensor, but were unable to gain sufficient access to it, either in Windows or Linux, to use it for our purposes or even to determine its capabilities. Arduino was chosen as a convenient prototyping platform that would allow us to easily and rapidly evaluate different sensor and actuator devices, including accelerometers.
Translating readings from the accelerometer into OpenSim camera commands was achieved by programming the Arduino to mimic a standard USB joystick, with the readings from the Y axis of the accelerometer smoothed and scaled onto the Y axis of the joystick. This involved replacing the firmware on the Arduino's ATmega16u2 microcontroller, which is normally used as a USB to serial bridge for compatibility with modern personal computers that do not have serial connectors, with alternative firmware that changes the device's behaviour to that of a USB Human Interface Device (HID) with a joystick HID report descriptor [30].
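The smoothing-and-scaling step can be sketched as below. The real logic runs as AVR firmware; this Python version is only illustrative, and the ADC range for ±1g and the smoothing factor are assumed values, not the project's calibration.

```python
# Assumed raw ADC range for the accelerometer's Y axis over -1g..+1g.
ADC_MIN, ADC_MAX = 270, 410
ALPHA = 0.2  # exponential smoothing factor (assumed)

class PitchAxis:
    """Smooth raw accelerometer readings and scale them onto a signed
    8-bit joystick axis, as reported in a HID joystick report."""

    def __init__(self):
        # Start at the mid-scale (level) reading.
        self.smoothed = (ADC_MIN + ADC_MAX) / 2

    def update(self, raw):
        # Exponential moving average suppresses sensor noise and hand jitter.
        self.smoothed += ALPHA * (raw - self.smoothed)
        # Normalise to 0..1, clamping readings outside the expected range.
        t = (self.smoothed - ADC_MIN) / (ADC_MAX - ADC_MIN)
        t = min(1.0, max(0.0, t))
        # Scale onto the joystick's signed 8-bit axis range.
        return round(t * 254 - 127)  # -127 .. +127
```

A level tablet therefore reports a centred axis, while tilting it fully up or down drives the axis smoothly toward +127 or -127, which the viewer already interprets as camera control.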
The major benefit of this approach is that we can use any OpenSim viewer in an unmodified form, as they all feature support for both avatar movement and camera control via USB joystick. However, the viewers' 'flycam' functionality that we are using to directly map the pitch of the accelerometer to the camera's pitch is not ideal for our purposes: the rest position of the flycam changes as the pitch is moved up and down, such that after a small number of pitch updates the flycam no longer returns to horizontal when the accelerometer is returned to horizontal, and the calibration between the two systems is lost. There is unfortunately very little documentation available for the flycam, and as such we are resorting to modifying a viewer to suit the needs of our project.
It may be possible to effect small alterations to the flycam source code in order to make it behave in the manner that we require. However, as the main benefit of the joystick and flycam approach, namely that it works with an unmodified client, is lost as soon as we resort to modifying a client, we are instead reverting to accessing the data from the Arduino via its normal USB to serial bridge and hooking directly into the viewer's camera control methods at a lower level, giving us much greater control and removing the dependency upon the joystick and flycam interface.
We have also begun evaluation of an HMC5883L magnetometer which, in combination with the ADXL335, allows us to determine the bearing (direction) that a tablet is facing and thus which direction the viewer's camera should be facing. The magnetometer must be used in combination with the accelerometer, as otherwise the readings are only correct when the magnetometer is held flat. As the tablet is not going to be held level, the readings from the magnetometer must be compensated using the readings from the accelerometer to gain sufficiently accurate results.
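The standard tilt-compensation technique alluded to here rotates the measured magnetic vector back into the horizontal plane using pitch and roll derived from gravity. The sketch below is that textbook algorithm in Python, not the project's own code, and it assumes the two sensors' axes are aligned, with z pointing down through a level tablet.

```python
import math

def tilt_compensated_heading(mx, my, mz, ax, ay, az):
    """Heading in degrees (0..360, clockwise from magnetic north) from
    magnetometer (mx, my, mz) and accelerometer (ax, ay, az) readings."""
    # Device attitude from the gravity vector.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, ay * math.sin(roll) + az * math.cos(roll))
    # Rotate the magnetic vector back into the horizontal plane.
    bfx = (mx * math.cos(pitch)
           + my * math.sin(pitch) * math.sin(roll)
           + mz * math.sin(pitch) * math.cos(roll))
    bfy = my * math.cos(roll) - mz * math.sin(roll)
    # Heading from the de-tilted horizontal field components.
    return math.degrees(math.atan2(-bfy, bfx)) % 360.0
```

When the device is held flat the attitude terms vanish and this reduces to the naive two-axis compass; when it is tilted, as a visitor raising a tablet inevitably will, the correction keeps the camera bearing accurate.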
VI. CONCLUSION
We have provided a brief introduction to the cross reality paradigm, a situation that arises when augmented reality and augmented virtuality take place in unison: sensor readings from a real world location trigger effects in a virtual environment whilst actions within this virtual environment simultaneously manifest into the real location via displays and actuators.
We have presented the ongoing Virtual Time Window project, which applies cross reality to the domain of cultural heritage, affording simultaneous exploration of a real cultural heritage site and its virtually recreated counterpart using a tablet computer in combination with sensor/actuator infrastructure and the OpenSim virtual world platform. We believe that VTW exemplifies the worth of the cross reality paradigm outside the research lab and the importance of adopting global standards for communication between virtual worlds and between virtual worlds and the real world.
REFERENCES
[1] M. Wright, H. Ekeus, R. Coyne, J. Stewart, P. Travlou, and R. Williams, "Augmented duality: overlapping a metaverse with the real world," in Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology, ser. ACE '08. New York, NY, USA: ACM, 2008, pp. 263-266. [Online]. Available: http://doi.acm.org/10.1145/1501750.1501812
[2] C. Timmerer, "Representation of Sensory Effects: Call for Proposals," 2008. [Online]. Available: http://multimediacommunication.blogspot.com/2008/05/representation-of-sensory-effects-call.html
[3] R. Want, "Through Tinted Eyeglasses," IEEE Pervasive Computing, vol. 8, no. 3, pp. 2-4, Jul. 2009. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5165551
[4] M. Kim, H. J. Gak, and C. S. Pyo, "Practical RFID + sensor convergence toward context-aware X-reality," in Proceedings of the 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human, ser. ICIS '09. New York, NY, USA: ACM, Nov. 2009, pp. 1049-1055. [Online]. Available: http://doi.acm.org/10.1145/1655925.1656115
[5] J. A. Paradiso and J. A. Landay, "Cross-Reality Environments," IEEE Pervasive Computing, vol. 8, no. 3, pp. 14-15, Jul. 2009. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5165555
[6] J. Lifton and J. Paradiso, "Dual Reality: Merging the Real and Virtual," in Proceedings of the First International ICST Conference on Facets of Virtual Environments (FaVE), Jul. 2009. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.147.9419
[7] P. Milgram and H. C. Jr., "A Taxonomy of Real and Virtual World Display Integration," 1999. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.6230
[8] Y. Sivan, "Overview: State of Virtual Worlds Standards in 2009," Journal of Virtual Worlds Research, vol. 2, no. 3, 2009. [Online]. Available: http://journals.tdl.org/jvwr/article/view/671/539
[9] Linden Research Inc., "Virtual Worlds, Avatars, free 3D chat, online meetings - Second Life Official Site." [Online]. Available: http://secondlife.com/
[10] OpenSimulator Project, "OpenSim." [Online]. Available: http://opensimulator.org/wiki/Main_Page
[11] Open Wonderland Foundation, "Open source 3D virtual collaboration toolkit — Open Wonderland." [Online]. Available: http://openwonderland.org/
[12] Unity Technologies, "Unity - Game Engine." [Online]. Available: http://unity3d.com/
[13] Unigine Corp, "Unigine: real-time 3D engine (game, simulation, visualization and VR)." [Online]. Available: http://unigine.com/
[14] MIT, "DoppelLab - Exploring Dense Sensor Network Data Through A Game Engine." [Online]. Available: http://www.media.mit.edu/resenv/doppellab/
[15] A. Hendaoui, M. Limayem, and C. W. Thompson, "3D Social Virtual Worlds: Research Issues and Challenges," IEEE Internet Computing, vol. 12, no. 1, pp. 88-92, 2008. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4428344
[16] S. Warburton, "Second Life in higher education: Assessing the potential for and the barriers to deploying virtual worlds in learning and teaching," British Journal of Educational Technology, vol. 40, no. 3, pp. 414-426, 2009. [Online]. Available: http://blackwell-synergy.com/doi/abs/10.1111/j.1467-8535.2009.00952.x
[17] W. S. Bainbridge, "The scientific research potential of virtual worlds," Science, vol. 317, no. 5837, pp. 472-6, 2007. [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/17656715
[18] K. Getchell, A. Miller, R. Nicoll, R. Sweetman, and C. Allison, "Games Methodologies and Immersive Environments for Virtual Fieldwork," IEEE Transactions on Learning Technologies, vol. 3, no. 4, pp. 281-293, Oct. 2010. [Online]. Available: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5557838
[19] F. Bose and R. Bose, "Sensor Networks Motes, Smart Spaces, and Beyond," IEEE Pervasive Computing, vol. 8, no. 3, pp. 84-90, Jul. 2009. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5165565
[20] R. Faludi, Building Wireless Sensor Networks, 1st ed., B. Jepson, Ed. Sebastopol: O'Reilly Media Inc., 2010. [Online]. Available: http://www.faludi.com/bwsn/
[21] TinyOS Alliance, "TinyOS Home Page." [Online]. Available: http://www.tinyos.net/
[22] A. Dunkels, "The Contiki OS." [Online]. Available: http://www.contiki-os.org/p/about-contiki.html
[23] I. A. Oliver, A. H. D. Miller, and C. Allison, "Virtual worlds, real traffic: interaction and adaptation," p. 305, 2010. [Online]. Available: http://doi.acm.org/10.1145/1730836.1730873
[24] E. Song and K. Lee, "Understanding IEEE 1451-Networked smart transducer interface standard - What is a smart transducer?" IEEE Instrumentation & Measurement Magazine, vol. 11, no. 2, pp. 11-17, Apr. 2008. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4483728
[25] M. Botts, G. Percivall, C. Reed, J. Davidson, S. Nittel, A. Labrinidis, and A. Stefanidis, "GeoSensor Networks," Lecture Notes in Computer Science, vol. 4540, no. 175, pp. 175-190, 2008. [Online]. Available: http://www.springerlink.com/content/ux1224j76264g8j4/
[26] A. Kansal, S. Nath, J. Liu, and F. Zhao, "SenseWeb: An Infrastructure for Shared Sensing," IEEE Multimedia, vol. 14, no. 4, pp. 8-13, Oct. 2007. [Online]. Available: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4354151
[27] K. Aberer, M. Hauswirth, and A. Salehi, "The Global Sensor Networks middleware for efficient and flexible deployment and interconnection of sensor networks," 2006.
[28] J. Gelissen and Y. Sivan, "The Metaverse1 Case: Historical Review of Making One Virtual Worlds Standard (MPEG-V)," Journal of Virtual Worlds Research, vol. 4, no. 3, 2011. [Online]. Available: https://journals.tdl.org/jvwr/article/view/6066
[29] Micro-Star Int'l Co., "MSI Global Notebook - WindPad 110W." [Online]. Available: http://www.msi.com/product/nb/WindPad-
110W.html
[30] D.Camera,“Four Walled Cubicle - LUFA (Formerly MyUSB).”
[Online].Available:http://www.fourwalledcubicle.com/LUFA.php
Pilot Aided Channel Estimation for SISO and
SIMO in DVB-T2
Fatimaalzhra Salman and John Cosmas
School of Engineering and Design
Brunel University
Uxbridge UB8 3PH, UK
Email: Fatimaalzhra.Salman@brunel.ac.uk
Email: John.Cosmas@brunel.ac.uk
Yue Zhang
Department of Computer Science and Technology
University of Bedfordshire
Vicarage St, Luton, Bedfordshire LU1 3HZ
Email: Yue.Zhang@beds.ac.uk
Abstract—In this paper, the development of channel estimator and equaliser functions is explored for SISO and SIMO transmission with 8 different pilot patterns, with the aim of improving the performance of the current DVB-T2 system. We investigate the performance of our system using different channel estimation methods of increasing computational complexity, namely: Linear Interpolation (Step), Linear Interpolation (Line-order0), Linear Interpolation (Line-order1) and Spline Least Squares Best Fit. These channel estimation methods estimate the Channel Frequency Response (CFR) between the pilot locations. The implemented SISO and SIMO system models are based on the Orthogonal Frequency Division Multiplexing (OFDM) system. We provide performance results for SISO and SIMO by producing Bit Error Rate (BER) against Signal to Noise Ratio (SNR) graphs for specific configurations of QAM modulation order, number of sub-carriers, radio environment, pilot pattern and estimation method. It has not previously been known how different pilot patterns influence the performance of a pilot-based channel estimator.
I. INTRODUCTION
Orthogonal frequency division multiplexing (OFDM) is a suitable technique for broadband transmission in multipath fading environments and is implemented in new broadcast standards such as terrestrial digital video broadcasting (DVB-T). Coherent OFDM detection requires information about the channel state, which has to be estimated by the receiver. For this purpose, known pilot symbols are periodically multiplexed into the data. Channel estimation is performed by interpolating the time-frequency pilot grid and exploiting the correlations of the received OFDM signal in the time and frequency domains
[1]. Multiple Input Multiple Output (MIMO) radio transmission diversity technology uses multiple antennas at the transmitter and receiver to improve the quality of transmissions, increasing data throughput and link range without additional bandwidth or increased transmit power [2]. Diversity is applied to radio transmission systems to add further transmission paths that experience different multipath frequency-selective fading, thereby improving the performance of the wireless system.
Diversity in a wireless system provides the receiver with multiple copies of the same transmitted signal. Each of these copies is known as a diversity branch. If these copies experience fades in multipath channels, the probability that all of the branches are affected by fading at the same time decreases as the number of copies increases [3], so there should be at least one branch that is strong enough to be received. Diversity therefore helps to improve performance in terms of error rate and link reliability through channel hardening [4]. Several diversity methods can be utilised within a wireless system; diversity schemes in time, frequency and space may be used to exploit the multipath propagation.
In most cases the number of antennas at the transmitter and receiver is no more than two. The SIMO transmission system has one antenna at the transmitter, and the MISO transmission system has one antenna at the receiver [5]. The DVB-T2 standard only specifies MISO transmission because the TV industry did not want to burden the consumer with the extra cost of a second receive antenna for a MIMO or SIMO transmission system. However, in this paper we show that at the cost of an additional cheap receive diversity box and a cheap indoor antenna, the received SNR can be increased by at least 1 dB and at most 2 dB, depending on the type of channel estimator at the receiver [6]. The rest of this paper is organized as follows. Section II describes the DVB-T2 system architecture. In Section III the transmission modes of the DVB-T2 system are discussed. In Section IV the channel estimation and equalisation functions are explained. In Section V simulation results are presented comparing the performance of different pilot patterns and different channel estimation methods. Section VI gives the final conclusion of this paper.
II. DVB-T2 SYSTEM ARCHITECTURE
The functional block diagram of the physical layer of the DVB-T2 transmission system for SISO is shown in Fig. 1. In the SISO transmission model there is only one transmit antenna and one receive antenna. The first stage of the DVB-T2 system is to generate the data cells, after which the pilot cells are inserted amongst the data cells. The Inverse Fast Fourier Transform (IFFT) transforms the cells' amplitude and phase properties into an equivalent time domain signal. The IFFT process is an effective way to modulate the frequency cells into a time-varying symbol for transmission. The sub-carriers of each cell are orthogonal to each other, thus preventing inter-carrier interference. Guard intervals are inserted between the transmitted symbols to reduce the effect of inter-symbol interference (ISI).
Fig. 1. DVB-T2 SISO Transmission
The receiver of the DVB-T2 SISO transmission system consists of the reverse operation of the transmitter, to obtain the original serial data stream. However, the received signal suffers from attenuation and fading effects produced by the multipath radio channel; as a result the received signal differs from the transmitted signal. To minimise these unwanted effects introduced by the radio channel, a pilot-based channel estimator and equaliser is integrated into the receiver. The Matlab simulator is based on the baseband equivalent system.
Fig. 2. DVB-T2 SIMO Transmission
The functional block diagram of the physical layer of a DVB-T2 SIMO transmission system has the same functions as the SISO system at the transmitter side, as shown in Fig. 2. At the receiver there are two receivers with the same function as the SISO receiver, plus a new module that combines the two channel estimation branches. The following configuration parameters can be set for the T2 standard:
• OFDM modulation with QPSK, 16-QAM, 64-QAM, or 256-QAM constellations.
• OFDM modes are 1k, 2k, 4k, 8k, 16k, and 32k.
• Guard intervals are 1/128, 1/32, 1/16, 19/256, 1/8, 19/128, and 1/4.
• FEC is LDPC and BCH, with rates 1/2, 3/5, 2/3, 3/4, 4/5, and 5/6.
• There are 8 different pilot patterns with a range of different % overheads.
• In the 8k, 16k and 32k modes, a larger part of the standard 8 MHz channel can be used, adding extra capacity with an extended OFDM carrier mode.
• DVB-T2 is specified for 1.7, 5, 6, 7, 8, and 10 MHz channel bandwidth [7].
III. TRANSMISSION MODES OF THE DVB-T2 SYSTEM
In our DVB-T2 simulation model the following four diversity schemes can be selected: Single Input Single Output (SISO), Single Input Multiple Output (SIMO), Multiple Input Single Output (MISO), and Multiple Input Multiple Output (MIMO).
A. SISO Transmission Mode
Single Input, Single Output (SISO) is the standard transmission mode in most wireless communication systems. The DVB-T2 standard utilizes the SISO configuration as it is a very simple and reliable mode of transmission to implement. This mode consists of a single transmitting antenna Tx1, the channel h_11, and a single receiving antenna Rx1. An example of a SISO system is shown in Fig. 3.
Fig. 3. SISO DVB-T2 Theoretical Model
The SISO mode shown above is fairly simple to implement in practice; however, in order to increase capacity and data rate a multi-antenna transmission system can be used. Such improved transmission systems make use of diversity techniques to reduce the effects of multipath fading. The received signal of the SISO system can be expressed with the following equation:

r_11(f) = h_11(f) · α_1(f) + n_11(f)    (1)
where r_11(f) denotes the received symbol in the frequency domain, h_11(f) represents the channel frequency response between the two antennas, α_1(f) is the transmitted signal in the frequency domain, and n_11(f) is the additive white Gaussian noise for that channel. Equalisation calculations based on the DVB-T2 coding scheme:
α̂_1(f) = h*_11(f) · r_11(f)    (2)

h*_11(f) · r_11(f) = (h_11(f) · h*_11(f)) · α_1(f)    (3)
where α̂_1(f) represents the estimate of α_1(f) and "∗" indicates the complex conjugate. The equation above represents the equalisation process, which is used to acquire the corrected sub-carriers. The corrected sub-carriers are obtained by multiplying the received signal by the conjugate of the estimated channel.
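The conjugate-multiplication equalisation in (2) can be sketched in a few lines of Python using complex arithmetic. This is an illustrative sketch only; the symbol and channel values below are invented and are not taken from the paper's Matlab simulator.

```python
# Sketch of per-subcarrier SISO equalisation by conjugate multiplication,
# as in equation (2): alpha_hat(f) = h*(f) . r(f). Values are illustrative.

def equalise_siso(received, channel_estimate):
    """Multiply each received subcarrier by the conjugate of its estimated
    channel; the result is |h|^2 * alpha (plus rotated noise), so the phase
    distortion introduced by the channel is removed."""
    return [h.conjugate() * r for r, h in zip(received, channel_estimate)]

# Noise-free demonstration of equation (1) followed by equation (2):
alpha = [1 + 1j, -1 + 1j, -1 - 1j]          # transmitted symbols
h = [0.8j, 0.5 + 0.5j, 1.0 + 0j]            # per-subcarrier channel
r = [hi * ai for hi, ai in zip(h, alpha)]   # received symbols, eq. (1), n = 0
est = equalise_siso(r, h)
# Each output equals |h|^2 * alpha: same phase as the transmitted symbol.
assert all(abs(e - abs(hi) ** 2 * ai) < 1e-9
           for e, hi, ai in zip(est, h, alpha))
```

Note that the |h|^2 scaling left on each symbol does not affect the phase, which is what carries the QAM decision for these constellations.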
B. SIMO Transmission Mode
Single Input, Multiple Output (SIMO) is a configuration that consists of a single transmit antenna, with a single data stream which feeds two receive antennas via a wireless medium. Fig. 4 is a graphical representation of the transmission of data symbols using the SIMO mode.
Fig. 4. SIMO DVB-T2 Theoretical Model
With the addition of the extra antenna at the receiving side, SIMO offers the option of achieving diversity at the receiver. However, the system becomes more complex as a new dimension is introduced. This technique aids the recovery of the received signal, especially in cases where the SNR is poor due to multipath fading. Receive diversity is applied to the received signal and combining methods can be used to obtain a better error ratio, which subsequently improves Quality of Service (QoS). The equations below represent the received signals within a shared time period (TP) slot:
r_11(f) = h_11(f) · α_1(f) + n_11(f)    (4)

r_12(f) = h_12(f) · α_1(f) + n_21(f)    (5)
Equalisation calculations: to recover the transmitted symbols at the receiver, both channel frequency responses h_11(f) and h_12(f) are estimated, their complex conjugates are multiplied with the corresponding received SIMO transmissions (h_11(f)α_1(f) and h_12(f)α_1(f)), and the results are combined. This can be expressed as:
α̂_1(f) = h*_11(f) · (h_11(f) · α_1(f)) + h*_12(f) · (h_12(f) · α_1(f))    (6)
The equation above represents the equalisation process, which is used to obtain the corrected sub-carriers.
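The two-branch combining of equation (6) can likewise be sketched in Python. This is a hedged illustration with invented values: each branch is conjugate-multiplied by its own channel estimate and the branches are summed, so the noise-free output is (|h11|^2 + |h12|^2)·α.

```python
# Sketch of the SIMO combining in equation (6): conjugate-multiply each
# receive branch by its own channel estimate, then sum the branches.
# All numbers are illustrative.

def combine_simo(r1, r2, h1, h2):
    """Per-subcarrier combining of two receive branches."""
    return [h1i.conjugate() * r1i + h2i.conjugate() * r2i
            for r1i, r2i, h1i, h2i in zip(r1, r2, h1, h2)]

# Noise-free demonstration of equations (4)-(6):
alpha = [1 + 0j, 0 + 1j]
h11 = [0.3 + 0.4j, 0.9 + 0j]
h12 = [1.0 + 0j, 0.1 - 0.2j]
r11 = [h * a for h, a in zip(h11, alpha)]   # eq. (4) with n = 0
r12 = [h * a for h, a in zip(h12, alpha)]   # eq. (5) with n = 0
out = combine_simo(r11, r12, h11, h12)
for o, a, h1, h2 in zip(out, alpha, h11, h12):
    gain = abs(h1) ** 2 + abs(h2) ** 2      # combined branch power
    assert abs(o - gain * a) < 1e-9
```

The combined branch power is never smaller than either branch alone, which is the diversity benefit the text describes: a deep fade on one branch is compensated by the other.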
IV. CHANNEL ESTIMATION AND EQUALISATION FUNCTION
Channel estimation is a fundamental function which all wireless communication systems need in order to compensate for the effects of a multipath channel. To recover the transmitted data efficiently, the frequency response of the channel is estimated from the received signal's pilots. This is achieved by interpolating between the pilot sub-carriers to generate an estimate of the channel's frequency response at the data cells. Different interpolation methods can be employed, providing different performance. In this paper we present four interpolation methods: Linear Interpolation (Step), Linear Interpolation Order 0 (Line0), Linear Interpolation Order 1 (Line1) and Spline Least Squares Best Fit. The channel estimation function estimates the amplitude and phase distortions due to the effects of the transmission channel, allowing compensation of the channel effects to be applied to both the data and pilot cells. Since the parameters of the pilots, such as position, amplitude and phase, are known in advance at the receiver, they can be used to estimate the frequency response of the channel.
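The first step common to all four methods can be sketched as follows: since the transmitted pilot values are known, dividing each received pilot cell by its known value gives the channel frequency response at that carrier. The carrier indices and values here are invented for illustration and are not the DVB-T2 pilot grid.

```python
# Minimal sketch of channel estimation at the pilot cells: dividing each
# received pilot by its known transmitted value yields H(f) at that
# carrier. Indices and values are illustrative, not the DVB-T2 grid.

def estimate_at_pilots(received, pilot_positions, pilot_values):
    """Return {carrier index: estimated H(f)} at the pilot cells."""
    return {k: received[k] / p for k, p in zip(pilot_positions, pilot_values)}

# Demonstration with a known four-carrier frequency response:
true_h = [1 + 0j, 0.5 + 0.5j, 0.2 - 0.1j, 2 + 0j]
pilots = [0, 3]                        # carriers carrying pilots
pilot_vals = [1 + 0j, -1 + 0j]         # known pilot symbols (unit here)
tx = [1 + 0j, 1j, 1j, -1 + 0j]         # pilots at 0 and 3, data elsewhere
rx = [true_h[k] * v for k, v in enumerate(tx)]
est = estimate_at_pilots(rx, pilots, pilot_vals)
assert abs(est[0] - true_h[0]) < 1e-9 and abs(est[3] - true_h[3]) < 1e-9
```

The interpolation methods in the subsections below differ only in how these per-pilot estimates are extended to the data cells between pilots.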
TABLE I
NUMBER OF AVAILABLE DATA CELLS IN ONE NORMAL SYMBOL AND OVERHEAD %

Pilot         PP1    PP2    PP3    PP4    PP5    PP7
Overhead %    8.33   8.33   4.17   4.17   2.08   1.04
2K            1522   1532   1596   1602   1632   1646
Table I shows the overhead percentage for all pilot patterns and the number of data cells for each pilot pattern when the FFT size is 2048. PP1 has more pilot cells with a higher % overhead, whilst PP7 has fewer pilot cells with a lower % overhead.
TABLE II
SIMULATION CONFIGURATION PARAMETERS FOR PP1 AND PP7

Parameter                      Specification
FFT size                       2048
Number of available carriers   1705
Signal constellation           4QAM
Guard interval                 1/4, 1/32
Scattered pilot pattern        PP1, PP7
Channel model                  Rayleigh
Simulation iterations          1000
Radio environment              Rural area
Radio channel type             Rayleigh channel
Table II shows the parameters used in the simulation configuration for SISO and SIMO with PP1 and PP7.
A. Linear Interpolation (Step)
In order to equalise the received signal, a Linear Interpolation (Step) function was created to estimate the phase and amplitude of the data cells based on the effect of the channel on the pilot cells of the received signal. The amplitude and phase of the frequency response of all the data cells between each pair of pilot cells is set to the amplitude and phase of the frequency response of the first pilot cell. The channel model is an 8 MHz bandwidth channel centered at 0 MHz, between -4 MHz and +4 MHz [8]. Fig. 5 shows the uncoded performance of Linear Interpolation (Step) for different pilot patterns. PP7 performs worse than the other pilot patterns because for a given SNR it produces a greater BER. For PP1 to PP5 (Linear Interpolation (Step)) at BER = 0.1 the SNR improvement is 10 dB compared with PP7. All the other pilot patterns (PP1 to PP5) perform similarly.
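The step scheme just described is a zero-order hold across the pilot spacing, and can be sketched as follows (pilot positions and estimates here are invented for illustration):

```python
# Sketch of Linear Interpolation (Step): every data cell between a pair
# of pilots takes the channel estimate of the first (left) pilot, i.e. a
# zero-order hold across the pilot spacing. Indices are illustrative.

def step_interpolate(pilot_estimates, n_carriers):
    """pilot_estimates: {carrier index: complex H}. Returns a list of
    length n_carriers where each carrier holds the estimate of the
    nearest pilot to its left (carriers before the first pilot reuse
    the first pilot's estimate)."""
    positions = sorted(pilot_estimates)
    full = []
    current = pilot_estimates[positions[0]]
    nxt = 0
    for k in range(n_carriers):
        if nxt < len(positions) and k >= positions[nxt]:
            current = pilot_estimates[positions[nxt]]
            nxt += 1
        full.append(current)
    return full

# Pilots at carriers 0 and 4: carriers 0-3 hold the first estimate,
# carriers 4-7 the second.
assert step_interpolate({0: 1 + 0j, 4: 2 + 0j}, 8) \
    == [1 + 0j] * 4 + [2 + 0j] * 4
```

The coarseness of this hold is why the step method suffers most when the pilot spacing is wide (PP7): the channel can change substantially across the span that is held constant.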
Fig. 5. Uncoded performance of Linear interpolation (Step) for different pilot patterns for SISO (BER against SNR)
B. Linear Interpolation (Line)
The Linear Interpolation (Line) method interpolates between two pilots; we have investigated the performance of order-0 and order-1 interpolators. The Line-order-0 interpolator interpolates between pairs of pilots by switching between the amplitude and phase of the two pilot frequency-response estimates at their mid-point, i.e. with a zero-order line [8]. Because the phase wraps around 2π, causing a discontinuity for a phase estimate that straddles 0 and 2π, an unwrap function is used to remove the discontinuity by adding or subtracting 2π to the phases.
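The unwrap step mentioned above can be sketched in pure Python (it mirrors the behaviour of standard unwrap routines): successive phase samples that jump by more than π are shifted by a multiple of 2π so the interpolator sees a continuous phase. The input phases below are illustrative.

```python
# Sketch of phase unwrapping: shift each successive phase by a multiple
# of 2*pi so that consecutive differences stay within pi of zero.
import math

def unwrap(phases):
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # bring difference near zero
        out.append(out[-1] + d)
    return out

# A phase sequence that crosses the 2*pi boundary: the raw jump from
# 3.0 to -3.0 rad is really a small forward step of 2*pi - 6.0 rad.
unwrapped = unwrap([0.0, 3.0, -3.0])
assert abs(unwrapped[2] - (2 * math.pi - 3.0)) < 1e-9
```

Without this step, a linear interpolator straddling the wrap point would sweep the phase the long way round and produce a badly distorted channel estimate.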
Fig. 6. Uncoded performance of Linear interpolation (Line-order0) for different pilot patterns for SISO (BER against SNR)
Fig. 6 shows the uncoded performance of Linear interpolation (Line-order0) for different pilot patterns. PP7 performs worse than the other pilot patterns because for a given SNR it produces a greater BER. For PP1 to PP5 (Line-order0) at BER = 0.1 the SNR improvement is 9 dB compared with PP7. All the other pilot patterns (PP1 to PP5) perform similarly. The Line-order-1 interpolator interpolates between two pairs of pilots with a first-order line [8].
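First-order interpolation between two pilot estimates can be sketched as below. For brevity this interpolates the real and imaginary parts directly; as the text explains, the actual method interpolates amplitude and unwrapped phase. Values are illustrative.

```python
# Sketch of Linear Interpolation (Line-order1): the channel estimate at a
# data carrier k is linearly interpolated between the estimates at the
# two surrounding pilot carriers k0 and k1. (Real/imaginary interpolation
# here for brevity; the paper interpolates amplitude and unwrapped phase.)

def line1_interpolate(k0, h0, k1, h1, k):
    """First-order interpolation of the channel at carrier k, given pilots
    at carriers k0 < k1 holding estimates h0 and h1."""
    t = (k - k0) / (k1 - k0)
    return h0 + t * (h1 - h0)

# Mid-way between pilots at carriers 0 and 4, the estimate is the mean:
assert line1_interpolate(0, 1 + 0j, 4, 3 + 0j, 2) == 2 + 0j
```

Unlike the zero-order variants, this tracks a channel that varies smoothly across the pilot spacing, which is consistent with Fig. 7 showing near-identical performance for all pilot patterns.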
Fig. 7. Uncoded performance of Linear interpolation (Line-order1) for different pilot patterns for SISO (BER against SNR)
Fig. 7 shows the uncoded performance of Linear interpolation (Line-order1) for different pilot patterns. All pilot patterns have similar performance: for a given SNR they produce the same uncoded BER.
C. Spline Least Squares Best Fit
The Spline Least Squares Best Fit method estimates, via the least squares criterion, the parameters of a spline polynomial. The advantage of using a spline channel estimator is that it plots a least-squares best-fit line through the points; this method can obtain the best approximation of the multipath fading in the presence of noise. Its disadvantage is that it has difficulty estimating the channel amplitude under deep fading [8]. Since there are different numbers of pilots for each pilot pattern, spline polynomials of different sizes are used. The main advantages of Spline Least Squares Best Fit are its stability and calculational simplicity [9].
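To illustrate why a least-squares fit smooths noise rather than following it, the sketch below fits a straight line (an order-1 polynomial) through noisy points via the normal equations; this simplified stand-in for the paper's higher-order spline fit uses only invented data and the Python standard library.

```python
# Simplified stand-in for the spline least-squares idea: fit y = a + b*x
# through sample points by minimising the squared error. A straight line
# replaces the paper's higher-order spline purely for illustration.

def least_squares_line(xs, ys):
    """Return (a, b) minimising sum((y - (a + b*x))^2) over the samples."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Points lying exactly on y = 2 + 3x are recovered exactly:
a, b = least_squares_line([0, 1, 2, 3], [2.0, 5.0, 8.0, 11.0])
assert abs(a - 2.0) < 1e-9 and abs(b - 3.0) < 1e-9
```

Because the fit averages over many pilot points, isolated noisy pilots perturb it only slightly; the flip side, as the text notes, is that a smooth fitted curve cannot follow a genuine deep fade.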
TABLE III
SPLINE ORDER AND SPAN FOR SPLINE LEAST SQUARE BEST FIT

PP number   Spline order   Span   Dx   Dy
PP1         14             141    3    4
PP2         14             141    6    2
PP3         14             70     6    4
PP4         14             70     12   2
PP5         14             35     12   4
PP7         8              19     24   4
Table III shows the spline order and span, which are set depending on the separation of pilot-bearing carriers (Dx) and the number of symbols forming one scattered pilot sequence (Dy), both of which are assigned according to which PP number has been selected [10]. Fig. 8 shows the uncoded performance of Spline Least Square Best Fit for different pilot patterns. PP7 performs worse than the other pilot patterns because for a given SNR it produces a greater BER. For PP1 to PP5 (Spline Least Squares Best Fit) at BER = 0.1 the SNR improvement is 11 dB compared with PP7. All the other pilot patterns (PP1 to PP5) perform similarly.
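The overhead percentages in Table I are consistent with the Dx and Dy spacings in Table III, on the usual reading of these DVB-T2 parameters: one cell in every Dx·Dy block of (carrier, symbol) cells carries a scattered pilot, so the overhead is 1/(Dx·Dy). A quick cross-check:

```python
# Cross-check of Table I against Table III: with one scattered pilot per
# Dx*Dy cells, the overhead is 100/(Dx*Dy) percent, reproducing Table I.

def overhead_pct(dx, dy):
    """Scattered-pilot overhead in percent for spacings Dx, Dy."""
    return 100.0 / (dx * dy)

table_iii = {"PP1": (3, 4), "PP2": (6, 2), "PP3": (6, 4),
             "PP4": (12, 2), "PP5": (12, 4), "PP7": (24, 4)}
table_i = {"PP1": 8.33, "PP2": 8.33, "PP3": 4.17,
           "PP4": 4.17, "PP5": 2.08, "PP7": 1.04}

for name, (dx, dy) in table_iii.items():
    assert round(overhead_pct(dx, dy), 2) == table_i[name]
```

This also makes the pairing in the results explicit: PP1/PP2, PP3/PP4 and PP5 trade frequency spacing against time spacing at equal overhead, while PP7 has the sparsest grid.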
Fig. 8. Uncoded performance of Spline Least Square Best Fit for different pilot patterns for SISO (BER against SNR)
Simulation results show that the number of pilot symbols is a trade-off between data throughput and estimation accuracy for pilot-aided channel estimation. However, there is no performance gain between PP1/PP2 with 8.33% overhead, PP3/PP4 with 4.17% overhead and PP5 with 2.08% overhead. Thus PP5 can be chosen as the optimum pilot pattern for the Rural Area environment, maximizing the data throughput.
V. SIMO/SISO SIMULATION AND RESULTS
In this section we investigate the simulated performance of different scattered pilot patterns and different channel estimation functions for SIMO and SISO configurations. BER against SNR is recorded to compare the performance of SISO and SIMO transmissions with different configurations. We assume perfect time and frequency synchronization and have chosen the guard interval to be greater than the maximum delay spread, to avoid inter-symbol interference. The bandwidth used for our DVB-T2 simulations is 8 MHz. The main system parameters for the SISO and SIMO simulations are listed in Table IV.
TABLE IV
SIMULATION CONFIGURATION FOR SIMO AND SISO

Parameter                        Values
Frame Length                     91.426
QAM Modulator Order              4
Number of Subcarriers            2k
Guard Intervals                  1/4, 1/8, 1/16, 1/32
Radio Environment                Rural Area
Pilot Patterns                   1, 2, 3, 4, 5, 7
Radio Channel Type               Rayleigh Channel
Estimator and Equaliser method   Linear Interpolation (Step), Linear Interpolation (Line), Spline Least Square Best Fit
Speed of the mobile              3 km/hour
Figs. 9, 10, 11 and 12 show the BER performance of PP1 for SISO and SIMO for the different channel estimation methods. For Linear interpolation (Step) at BER = 0.05 the SNR improvement is 0.5 dB; for Linear interpolation (Line-order0) at BER = 0.05 the improvement is 1 dB; for Linear interpolation (Line-order1) at BER = 0.05 the improvement is 1 dB; and for Spline Least Squares Best Fit at BER = 0.05 the improvement is 1.5 dB. Thus the performance of SIMO is better than SISO for all estimation methods. At low SNRs the Spline Least Square Best Fit estimator performs slightly better than the Linear Interpolation (Step) and Linear Interpolation (Line order 0, 1) estimators for PP1. At high SNRs the Line estimator performs better than the Spline estimator, because the Line estimator can estimate deep fading whilst the Spline estimator is unable to fit a line down into the deep fades. For PP1, the pilot pattern with more pilot cells and denser spacing (compared with PP7), the four estimator methods perform better than for PP7, because the smaller spacing between pilots allows more accurate estimation of the frequency-selective fading channel. Simulation results show that there is a performance gain of between 0.5 dB and 2 dB to be made by employing SIMO transmission diversity.
Fig. 9. Comparison between SISO and SIMO for Linear interpolation (Step) for PP1
Fig. 10. Comparison between SISO and SIMO for Linear interpolation (Line-order0) for PP1
Fig. 11. Comparison between SISO and SIMO for Linear interpolation (Line-order1) for PP1
Fig. 12. Comparison between SISO and SIMO for Spline Least Square Best Fit for PP1
Figs. 13, 14, 15 and 16 show the BER performance of PP7 for the different estimation methods. The performance of SIMO is better than SISO for PP7 for all estimation methods. For Linear interpolation (Step) at BER = 0.05 the improvement is 7.5 dB; for Linear interpolation (Line-order0) at BER = 0.05 the improvement is 4 dB; for Linear interpolation (Line-order1) at BER = 0.05 the improvement is 1 dB; and for Spline Least Squares Best Fit at BER = 0.05 the improvement is 17.5 dB. At low SNRs the Spline Least Square Best Fit estimator performs slightly better than the Linear Interpolation (Step) and Linear Interpolation (Line order 0, 1) estimators for PP7: the Spline estimator plots a least-squares best-fit line through the pilot points, whilst the Linear Interpolation (Step) estimator follows the noise. At high SNRs the Linear Interpolation (Line order 1) estimator performs better than the Spline estimator, because the Line estimator can estimate deep fading whilst the Spline estimator is unable to fit a line down into the deep fades. Simulation results show that the performance gained by using SIMO is much greater (between 4 and 17.5 dB) when PP7 is used than when PP1 is used.
Fig. 13. Comparison between SISO and SIMO for Linear interpolation (Step) for PP7
Fig. 14. Comparison between SISO and SIMO for Linear interpolation (Line-order0) for PP7
Fig. 15. Comparison between SISO and SIMO for Linear interpolation (Line-order1) for PP7
Fig. 16. Comparison between SISO and SIMO for Spline Least Square Best Fit for PP7
VI. CONCLUSION
In this paper the simulated performance of different scattered pilot patterns and different channel estimation configurations for SISO and SIMO transmissions is investigated. BER against SNR is recorded to compare the performance of the different configurations. Different types of scattered pilot patterns for the DVB-T2 standard are analyzed, evaluated and compared in terms of BER for the Linear interpolation (Step), Linear interpolation (Line order 0), Linear interpolation (Line order 1) and Spline Least Squares Best Fit estimator methods. For the Rural Area environment there is no performance gain between PP1/PP2 with 8.33% overhead, PP3/PP4 with 4.17% overhead and PP5 with 2.08% overhead; thus PP5 can be regarded as the optimum pilot pattern to maximize the data throughput. Simulation results show that for the Rural Area environment and PP1 there is a performance gain of between 0.5 dB and 2 dB from employing SIMO transmission diversity. Simulation results also show that the performance gain for PP7 in SIMO is much greater (between 4 and 17.5 dB) than that for PP1.
REFERENCES
[1] S. Sand, A. Dammann, and G. Auer, "Adaptive Pilot Symbol Aided Channel Estimation for OFDM Systems", German Aerospace Centre (DLR), Institute of Communications and Navigation.
[2] S. Alamouti, "A Simple Transmit Diversity Technique for Wireless Communications", IEEE Journal on Selected Areas in Communications, vol. 16, no. 8, October 1998.
[3] S. Coleri, M. Ergen, A. Puri and A. Bahai, "Channel estimation techniques based on pilot arrangement in OFDM systems", IEEE Transactions on Broadcasting, vol. 59, pp. 223-229, September 2002.
[4] Y. Hwan You, J. Kim and H. Kyu Song, "Pilot-Assisted Fine Frequency Synchronization for OFDM-Based DVB Receivers", IEEE Transactions on Broadcasting, vol. 55, pp. 674-648, September 2009.
[5] H. Ebrahimzad and A. Mohammadi, "Diversity-Multiplexing Tradeoff in MISO/SIMO Systems at Finite SNR", 65th IEEE Vehicular Technology Conference, Dublin, Ireland, April 22-25, 2007.
[6] H. Le, T. Ngoc, C. Ko, "RLS-Based Joint Estimation and Tracking of Channel Response, Sampling, and Carrier Frequency Offset for OFDM", IEEE Transactions on Broadcasting, vol. 55, no. 1, pp. 84-94, March 2009.
[7] C. Alberto Lpez Arranz and C. Enrique Herrero, "Design of a DVB-T2 simulation platform and network optimization with Simulated Annealing", University Politechica De Catalunya, July 2009.
[8] F. Salman, J. Cosmas and Y. Zhang, "Modelling and Performance of a DVB-T2 Channel Estimator and Equaliser for Different Pilot Patterns", conference of BMSB, 2012.
[9] Spline fit, data plot reference manual, September 12, 1996.
[10] Digital Video Broadcasting (DVB): Framing Structure, Channel coding and Modulation for digital terrestrial television broadcasting system (DVB-T2), ETSI EN 302 755 v1.1.1, September 2009.
[11] S. Tomasin and M. Butussi, "Analysis of Interpolated Channel Estimation for Mobile OFDM Systems", IEEE Transactions on Communications, vol. 58, no. 5, May 2010.
An Intelligent TV White Space Management
System
Bo Ye, Anjum Pervez
yebo1986@gmail.com, perveza@lsbu.ac.uk
Faculty of Engineering, Science and the Built Environment
London South Bank University
London SE1 0AA,UK
Maziar Nekovee
maziar.nekovee@bt.com
BT Research,
Polaris 134 Adastral Park,
Martlesham,Suffolk IP5 3RE,UK
Abstract—The trend of moving to digital TV broadcasting is gaining momentum in most parts of the world. The process of Digital Switchover (DSO) will be completed in the UK this year. After the DSO there will be opportunities for the spatially unused licensed portion of the digital TV spectrum, the TV White Space (TVWS), to be utilized by other communicating devices (TVWS devices). For maximum spectrum utilization, a large number of TVWS devices must have access to the TVWS at any given time. Such a high degree of spectrum utilization may give rise to RF interference between the entities operating within the TVWS environment. In this paper we propose a spectrum management system that has the ability to make intelligent decisions for spectrum access. The system is simulated with and without optimisation; the Simulated Annealing (SA) algorithm is used for the optimisation. The results are compared for three cases: without optimisation, and using SA with and without parameter selection. The results show that SA with parameter selection has the best performance in terms of the number of TVWS devices allowed to access the TVWS at any given time.
I. INTRODUCTION
Broadcast TV services operate in the licensed portions of the VHF and UHF radio spectrum. In most countries the regulatory authorities do not allow the use of unlicensed devices in the TV bands, except for remote controls, medical telemetry devices and wireless microphones. In most developed countries TV stations are required by the national regulators to convert from analogue to digital transmission. The process of Digital Switchover (DSO) was completed in the United States in June 2009, and will be completed in the UK by this year. DSO is already completed, underway or being planned in the rest of Europe and many other countries all over the world. After the DSO a significant amount of RF spectrum within the old analogue TV band will become vacant. The vacant portion of RF spectrum will then be reallocated by the regulators to other services through auctions.
In addition, there will be opportunities to utilize the spatially unused licensed portion of the digital TV channels, known as TV White Space (TVWS) or, in the language of the UK regulator, Interleaved Spectrum. A transmitter at a much lower power level would not need a great physical separation from co-channel and adjacent-channel TV stations to avoid causing interference. Low power devices can therefore operate on vacant channels in locations that could not be used by TV stations due to interference planning.
The license holders, referred to as the primary users (PU), may allow non-licensed users, referred to as the secondary users (SU), to access their spectrum for the duration when the spectrum is unused by the PU. It is strongly believed that devices with cognitive capability will be the prime contenders for dynamic spectrum access to TVWS. Before accessing the spectrum, the SU must establish the existence of TVWS. This requires that a cognitive device must be able to sense the presence or absence of the PU; in this way the existence of TVWS may be determined. The sensing features have been extensively investigated and numerous algorithms have been proposed in the literature to test their ability [1]. Several interesting theoretical solutions [2]–[6] have been suggested; however, a practical implementation of a reliable sensing mechanism still seems far away. A technically more feasible approach, instead of sensing, is the use of a geo-location database. Spectrum opportunity detection via a geo-location database appears to be the most promising approach at present. This is a centralised opportunity detection scheme.
To find out what spectrum opportunities are available at a certain time and at a given position, the secondary device makes a channel allocation request to the centralised database. This request message may be sent via any commonly used network, e.g. a wireline link such as ADSL, or a wireless connection. The database then performs the required calculations and analysis to determine the current status and responds with information on the available spectrum opportunities, which may be a list of available channels accompanied by limits on the allowed transmit power on each, QoS, pricing, etc.
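The request/response exchange described above can be sketched as a simple lookup keyed on the device's reported position. All field names, the coarse location key and the channel grants below are invented for illustration; they do not come from any standardised TVWS database protocol.

```python
# Illustrative sketch of a geo-location database exchange: a secondary
# device reports its position, and the database replies with the channels
# it may use and a power limit per channel. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class ChannelGrant:
    channel: int          # UHF channel number
    max_power_dbm: float  # allowed transmit power on that channel

def query_database(lat, lon, available):
    """Stand-in for the database lookup: 'available' maps a coarse
    location key to the channel grants precomputed for that area."""
    key = (round(lat, 1), round(lon, 1))   # coarse location pixel, illustrative
    return available.get(key, [])

# A device near (51.51, -0.09) falls in the (51.5, -0.1) pixel:
db = {(51.5, -0.1): [ChannelGrant(39, 17.0), ChannelGrant(45, 10.0)]}
grants = query_database(51.51, -0.09, db)
assert [g.channel for g in grants] == [39, 45]
assert query_database(0.0, 0.0, db) == []   # unknown area: no grants
```

In a real deployment the response would also carry the QoS and pricing fields mentioned above, and the per-area grants would be recomputed as the protection requirements of the primary users change.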
For maximum spectrum utilization the system must be able to accommodate a large number of secondary users at any given time within a TVWS environment. Furthermore, this high degree of spectrum utilization must be realised without harmful interference between all the entities operating within the TVWS environment. This implies that a highly intelligent spectrum management/channel allocation system needs to be designed.
In this paper, we propose a centralised and highly intelligent TVWS management (ITVWSM) system that will control allocation of unused spectrum to the secondary users in an intelligent way by employing its analytical abilities and power of reasoning. The ITVWSM incorporates two decision mechanisms: an analytical mechanism, which contains mathematical tools and optimisation algorithms, and a reasoning mechanism, which is based on artificial intelligence and self-learning principles.
Our initial work, which is focused on the analytical decision mechanism, and the results achieved so far are presented in this paper.
The paper is organized as follows. In section II, some related work is introduced. In section III, the architecture of the Intelligent TV White Space Management system is introduced. In section IV, the TVWS network model and details of the problem are presented. The decision process is introduced in section V. The development of the optimisation process and all parameters of the optimisation algorithm are explained in section VI. In section VII simulation results are presented, and the process of parameter selection is explained and compared. Conclusions are given in the final section.
II. RELATED WORKS
There have been several proposals for the development of a geo-location database by industrial and other researchers working in this field [7]–[9]. The approaches proposed by BT, Dell, Google, Microsoft and Motorola all have common elements but, at the same time, have significant differences. With the exception of Motorola, all are concentrating on maximising the accuracy of the geo-location database; Motorola has been more concerned with the compactness of the database.
Most of the work is focused on creating a pixel-based database, which separates the landscape into geographic pixels. The belief is that describing a region based on contours cannot be as accurate as a pixel-based solution. However, the final answer is still unclear, as to our knowledge no detailed comparison of the two approaches has been carried out.
While many of the ideas proposed in the literature so far are good, none of the studies have suggested an actual database design with the degree of intelligence that, we believe, will be necessary for this task.
III. INTELLIGENT TV WHITE SPACE MANAGEMENT
We are proposing a new approach for dynamic management of TVWS. The approach is based on the idea of a geo-location database operating in conjunction with intelligent decision engines.
Fig. 1 shows an overview of the proposed Intelligent TV White Space Management (ITVWSM) system. It consists of two main modules: the Data depository module and the Decision & management module.
Data depository (DD) - the function of the DD is to store past and current information. It contains four databases.
The Geo-Location Database (GLD) stores information about the radio environment [3], geographical environment, location, transmission signal, mobility, power supply, and priority of the mobile node.
The White Space Database (WSD) stores information about the availability of spectrum (presence or absence of white spaces) and also updates this information dynamically.
Fig. 1. The architecture of ITVWSM
The Knowledge Database (KD) and Case Database (CD) store domain knowledge and the experience that the system gains.
Decision and Management (DAM) - the functions of the DAM are to make intelligent decisions and manage/verify information. It contains the Decision Engine, Policy Engine and Support Manager.
The Decision Engine (DE) performs the decision process in a number of different ways. The decision may be based on simple sequential logic, referred to as the 'sequential logic decision (SLD)'; on the current knowledge of events, referred to as the 'knowledge based decision (KBD)'; on the history of events (stored in the DD), referred to as the 'case based decision (CBD)'; or on a combination of these, referred to as 'decision by fusion'. In situations where the initial decisions made by the DE do not meet the requirements of the mobile nodes, the DE may move to an even higher level of intelligence. This level may be realised by the use of optimisation [4].
The Support Manager translates the radio environment, verifies the position of the wireless node and ensures the level of security.
The Policy Engine checks the validity of the decisions made by the DE, and ensures that the decisions conform to regulations, standards and specifications. The Policy Engine may also guide the DE to make valid decisions.
IV. TVWS NETWORK MODEL
A simple TVWS network model is shown in Fig. 2. Three Digital TV (DTV) stations are located in a given geographical area. Each DTV transmitter radiates a predefined fixed power and has a predictable coverage area, shown by the thick line surrounding each transmitter. The coverage area is determined by the transmitted power and the DTV receiver sensitivity. There are 32 DTV channels (this assumption is based on the UK digital TV band [10]); however, each station will select n different channels out of the pool of 32 channels to avoid inter-channel interference. The space outside the coverage area of all three stations constitutes a spatial TV White Space. If wireless nodes (referred to as white space devices) operate in the TVWS and, in addition, only use those channels that are not used by any of the three DTV transmitters, the question of interference between DTV signals and the signals transmitted by the WS devices does not arise. Furthermore, if there are m unused DTV channels, where m = 32 − 3n, the question of interference between WS devices does not arise if the number of WS devices is not greater than m.

Fig. 2. The Architecture of TVWS Network
However, if k devices are requesting allocation of a channel, where k > m, some of the channels will have to be reused. It is envisaged that in a TVWS network there will be a very large number of TVWS devices requesting the use of a channel, which implies that several channels will have to be reused all the time. This situation has the potential to give rise to interference between those WS devices which are using the same channels in the TVWS network.
This implies that evaluation of inter-device interference as a function of the number of WS devices in a given TVWS network is one of the most important considerations.
Let us suppose that there are k (k >> m) WS devices in a TVWS network. Each device transmits a predefined average power. The average power transmitted by the jth device propagates a distance d and arrives at the ith device as P_ij(d), which is given by:

P_ij(d) = P_j G_j G_i (λ / (4π d_ij))^2 / L    (1)

where
P_j is the average power transmitted by the jth device,
G_j is the transmitter j antenna gain,
G_i is the receiver i antenna gain,
d_ij is the i-j separation distance in meters,
L is the system loss factor not related to propagation (L ≥ 1),
λ is the wavelength in meters.
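Equation (1) can be evaluated directly; a minimal sketch, with parameter names following the definitions above (the example values in the usage note are illustrative):

```python
from math import pi

def received_power(p_j, g_j, g_i, d_ij, wavelength, system_loss=1.0):
    """Equation (1): free-space power arriving at device i from device j.
    Powers in watts, distance and wavelength in metres, L >= 1."""
    assert system_loss >= 1.0
    return p_j * g_j * g_i * (wavelength / (4 * pi * d_ij)) ** 2 / system_loss
```

For example, `received_power(0.1, 1.0, 1.0, 100.0, 0.5)` gives the power received 100 m from a 0.1 W transmitter at a 0.5 m wavelength, assuming unity antenna gains and no system loss.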
The channel allocation array X is defined as

X = {x_1, x_2, x_3, ..., x_k}

where x_i ∈ {1, 2, 3, ..., m} and i ∈ {1, 2, 3, ..., k}.
The interference from the jth to the ith device, I_ij(x_i, x_j), is then given by

I_ij(x_i, x_j) = P_ij if x_i = x_j; 0 if x_i ≠ x_j    (2)
Equation (2) gives the interference from the jth device to the ith device; however, in a network of k devices some of the remaining k − 2 devices may cause further interference to the ith device. Thus the total interference, I_i(x_i), caused by all the k − 1 devices to the ith device needs to be evaluated. The total interference is given by:

I_i(x_i) = Σ_{j=1}^{k} I_ij(x_i, x_j),  j ∈ {1, 2, 3, ..., k} and j ≠ i    (3)
The overall network interference, I(X), is given by the sum:

I(X) = Σ_{i=1}^{k} I_i(x_i)    (4)
As k will have a different value in different situations, it is more constructive to consider the average network interference, I(X)_ave, produced by k devices in a given WS network:

I(X)_ave = Σ_{i=1}^{k} I_i(x_i) / k    (5)
V. THE DECISION PROCESS
When a request for allocation of a channel is received by the ITVWSM, it consults the appropriate database in its data depository. If k ≤ m, one of the m channels is allocated to the requesting wireless node. If, however, k > m, one of the m channels will have to be reused. This will result in some degree of interference. A decision threshold, DT, is set in the calculator & comparator (C&C) module. If the ratio of the transmit power of the WS device (predefined and fixed for all nodes) to the average network interference is above DT, the channel allocation decision is made by the SLD. If, however, the ratio is below DT, the process of optimisation is invoked. The optimiser module tries to find a reuse channel that produces the minimum value of I(X)_ave. The result is sent back to the C&C module for comparison with DT. If the ratio is above DT, that specific channel is allocated; otherwise channel allocation is denied.
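The C&C flow described above can be sketched as follows. The function name and return values are illustrative assumptions, not part of the ITVWSM specification; `optimiser` stands in for the optimiser module and is assumed to return a candidate channel together with the optimised average interference.

```python
def candc_decision(tx_power, avg_interference, dt, optimiser=None):
    """Sketch of the calculator & comparator (C&C) logic: allocate via the
    SLD when the power-to-average-interference ratio clears the decision
    threshold DT, otherwise invoke the optimiser and re-check the ratio."""
    if avg_interference == 0 or tx_power / avg_interference > dt:
        return ("allocate_sld", None)
    if optimiser is not None:
        channel, optimised_interference = optimiser()
        if tx_power / optimised_interference > dt:
            return ("allocate", channel)
    return ("deny", None)
```

For instance, `candc_decision(1.0, 1.0, 5.0, lambda: (7, 0.1))` fails the SLD check (ratio 1 < 5) but succeeds after optimisation (ratio 10 > 5) and allocates the optimiser's channel.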
VI. DEVELOPMENT OF OPTIMISATION PROCESS
In our application the process of optimisation essentially aims to minimise the average network interference as a function of the reuse channel for a given node at a specified location within the TVWS area. The objective function for the optimisation is

Minimise I(X)_ave = Σ_{i=1}^{k} I_i(x_i) / k    (6)
There are numerous optimisation algorithms in the literature; however, we focused only on those which appeared to have the potential to be employed in our work.
A distributed Simulated Annealing (SA) algorithm has been proposed for minimisation of the total interference from all Access Points (AP) in order to solve the dynamic channel allocation problem in 802.11b/g based HD-WLANs [11]. However, the number of APs considered in that simulation is limited to 50, whereas the number of WS devices in a TVWS network is envisaged to be very large.
Genetic Algorithms (GA) generate solutions to optimisation problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. In [12]–[15], GA is used to solve the channel allocation problem over a search space with given constraints. In [16], the authors proposed a fully distributed and self-managed algorithm to minimise the total interference of all APs in a WLAN.
In [17], the authors proposed a hybrid algorithm that combines GA and SA. The algorithm allows an AP to select the channel that has been least used among its neighbouring APs in the mutation step. In this case the search strategy moves into the domain of local search and tends to be trapped in local minima, failing to find the global minimum.
The above study also reveals that it is often very difficult to choose good initial parameters for GA, as the choice of parameters depends on the specific situation in hand; it is not possible to generalize the selection of good initial parameters. Combining algorithms is also not easy. SA, on the other hand, has the potential to find a near-optimal solution, although it often requires a large number of experiments to determine good initial parameter settings.
The main advantages of SA over GA are that a) SA requires fewer parameters, b) a suitable set of parameters can be determined and c) SA is easier to realize.
SA is motivated by an analogy to annealing in solids. It simulates the cooling of material in a heat bath [18]. In the simulation, the states of the cooling process represent the required solution, the physical energy represents the function to be minimised, and the temperature (T) is the control parameter for the optimisation process. A minimisation problem in SA is solved by simulating a random walk on the set of states and looking for a low-energy state.
In our adaptation of SA, the states are the solutions of channel allocation among WS devices, the physical energy is the interference, and T is the control parameter for the optimisation.
The steps of the simulation are as follows:
1. X ∈ Ω is the current solution. Then a neighbor solution X′ of the current solution X is selected.
2. Equation (7) is used to determine whether X is replaced by X′ or not. T is a temperature controlling the probability of downward steps. I(X′) and I(X) are the interference of solutions X′ and X, and ΔI = I(X′) − I(X) is the difference between them.

X = X′ if ΔI < 0 or r < AcceptProbability(T, ΔI), (r = random(0, 1))    (7)
SA has three main functions: Neighbor Allocation (NA), Temperature Initialization and Acceptance Probability (TIAP), and New Allocation and Temperature Update (NATU).
NA selects a new channel allocation array X′ from the neighbour set of the current allocation X. TIAP initializes the temperature and generates the probability of replacing X by X′ even when the result produced by X′ is worse than that of X.
The functions NA, TIAP and NATU can be implemented using different methods, as explained in the following:
A.Neighbor Allocation (NA)
1.Random Neighbor Allocation (RNA) - One WS node and
the associated channel are randomly selected.
2.Least Interference Neighbor Allocation (LNA) - One
WS node is randomly selected but a channel is chosen that
generates minimal interference.
3. Adaptive Simulated Annealing Neighbour Allocation (ASANA) [18] - A new allocation x′_i ∈ X′ is generated from the current solution x_i ∈ X with the range x_i ∈ [1, m], calculated by the random variable y_i, which is generated from a u_i drawn from the uniform distribution: x′_i = round(x_i + y_i(m − 1)).
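The first two neighbour-allocation methods can be sketched as follows. `interference_of` is an assumed callback that maps an allocation to its average interference; it is a placeholder for equation (5), not part of the paper's notation.

```python
import random

def rna(x, m):
    """Random Neighbor Allocation (RNA): re-assign one randomly chosen
    node to a randomly chosen channel in [1, m]."""
    x2 = list(x)
    i = random.randrange(len(x2))
    x2[i] = random.randint(1, m)
    return x2

def lna(x, m, interference_of):
    """Least Interference Neighbor Allocation (LNA): pick one node at
    random, then choose the channel minimising the resulting interference."""
    x2 = list(x)
    i = random.randrange(len(x2))
    best = min(range(1, m + 1),
               key=lambda ch: interference_of(x2[:i] + [ch] + x2[i + 1:]))
    x2[i] = best
    return x2
```

LNA costs m objective evaluations per move, which is why RNA is often preferred when evaluations are expensive.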
B. Temperature Initialization and Acceptance Probability (TIAP)
1. Constant Temperature and Adaptive Metropolis Acceptance Probability (CTAMTIAP) - T is set to a constant value between 1 and 100. The acceptance probability is calculated by AcceptProbability = e^(−ΔI/T) [18]. In this paper, because ΔI << T, the acceptance probability is very close to 1, which would leave SA trapped in local minima. Hence ΔI is multiplied by 10 until it is greater than 1.
2. Initialization Temperature and Metropolis Acceptance Probability (ITMTIAP) - T is initialized using the acceptance ratio r ∈ (0, 1), defined as the number of accepted transitions divided by the number of proposed transitions, and the acceptance probability is calculated by e^(−ΔI/T).
T_init = r √( Σ_{i=1}^{n} (I_i − (Σ_{i=1}^{n} I_i)/n)^2 / (n − 1) )

n (Steps): the number of steps used to determine the initial temperature;
r > 0 (Ratio): the ratio of the standard deviation.
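The T_init expression above is r times the sample standard deviation of the interference observed over the n preliminary steps, so it can be computed directly (a sketch; the sample values in the example are illustrative):

```python
from statistics import stdev

def initial_temperature(interference_samples, ratio):
    """ITMTIAP initial temperature: T_init = r times the sample standard
    deviation of the interference observed over n preliminary steps."""
    return ratio * stdev(interference_samples)

t0 = initial_temperature([1.0, 2.0, 3.0, 4.0], 2.0)
```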
C. New Allocation and Temperature Updating (NATU)
When generating a new allocation solution, x_i ∈ X is set with a random number within the range [1, m].
1. Fast Annealing cooling schedule (FACS): T_i = T_init / i
2. Exponential cooling schedule (ECS): T_{i+1} = αT_i, α ∈ [0.5, 0.99].
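The pieces above combine into a minimal SA loop. This is a sketch, not the paper's implementation: it uses the RNA neighbour move, either cooling schedule, and an illustrative collision-count objective standing in for I(X)_ave.

```python
import math
import random

def simulated_annealing(x0, m, interference_of, t_init,
                        steps=500, schedule="fast", alpha=0.9):
    """Minimal SA loop for channel allocation (equation (7)): a worse
    neighbour is accepted with probability exp(-dI/T); the temperature
    is cooled by FACS (T_i = T_init / i) or ECS (T_{i+1} = alpha * T_i)."""
    x, best = list(x0), list(x0)
    t = t_init
    for i in range(1, steps + 1):
        x_new = list(x)                      # RNA neighbour move:
        j = random.randrange(len(x_new))     # one random node,
        x_new[j] = random.randint(1, m)      # one random channel
        d_i = interference_of(x_new) - interference_of(x)
        if d_i < 0 or random.random() < math.exp(-d_i / t):
            x = x_new                        # equation (7) acceptance
        if interference_of(x) < interference_of(best):
            best = list(x)
        t = t_init / i if schedule == "fast" else alpha * t
    return best

# demo: 4 devices, 4 channels, objective = number of co-channel pairs
random.seed(1)
collisions = lambda a: sum(a[i] == a[j] for i in range(len(a))
                           for j in range(len(a)) if i != j)
best = simulated_annealing([1, 1, 1, 1], 4, collisions, t_init=1.0)
```

Tracking `best` separately from the current state guarantees the returned allocation is never worse than the starting one, even if the walk wanders uphill late in the run.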
VII. PERFORMANCE EVALUATION
The performance of the TVWS network is evaluated by defining the area, A, of a WS and then randomly deploying k WS devices within the specified area. The average network interference as a function of k is calculated using equation (5) in the case of SLD operation, whereas equation (6) is used when the process of optimisation is invoked. The optimisation is carried out employing the SA algorithm, first without parameter selection and then with parameter selection (the reason for parameter selection has been explained in section VI).
For this evaluation A is assumed to be a square with sides of 1 km, and k ranges from 50 to 500. For each k the simulation is repeated 100 times and the average is plotted.
The results for the SLD operation and optimisation without parameter selection cases are shown in Fig. 3.
Fig. 3. Results of SLD and SA Without Parameter Selection
As has been mentioned in section VI, the three functions (NA, TIAP and NATU) in SA can be implemented using different methods.
Our extensive experimentation confirms that a carefully selected set of initial parameters can produce better results. Only two comparisons are presented here to illustrate this point.
NA, TIAP and NATU are implemented using RNA, CTAMTIAP and FACS respectively in experiment 1, and using RNA, CTAMTIAP and ECS respectively in experiment 2. The results show no difference in their performance (Fig. 4).
NA, TIAP and NATU are implemented using ASANA, ITMTIAP and FACS respectively in experiment 3, and using ASANA, ITMTIAP and ECS respectively in experiment 4. The results in this case show a significant difference in performance (Fig. 5).
For the optimisation with parameter selection case, the best set of initial parameters is first determined. This set is then used to minimise the average network interference. The results are compared with the previous two cases in Fig. 6.
It is clear from Fig. 6 that SA with parameter selection produces the best results compared with SLD and SA without parameter selection. It is also clear that the benefit becomes significant as k increases. For example, if the decision threshold at the ITVWSM is set to correspond to an average network interference of 6 × 10^−21 W, the maximum number of WS devices allowed to access the TVWS is 448, 462 and 500 for SLD, SA without selection and SA with selection respectively.
Fig. 4. Results of Experiments 1 and 2
Fig. 5. Results of Experiments 3 and 4
VIII. CONCLUSION
A new approach has been proposed for TV white space management. The need for a highly intelligent channel management system has been identified and a suitable architecture for such a system, referred to as ITVWSM, has been presented. An overview of the functionality of ITVWSM has been given. One of the most critical functions of ITVWSM is the channel allocation decision process. This function has been tested by applying the proposed philosophy to a simple TVWS network model. It has been shown that the decision process can be improved by the use of suitable optimisation algorithms. The results show that SA with parameter selection gives an improvement in the minimisation of TVWS network interference, and that the improvement becomes more significant as the number of WS devices increases. However, the process of optimisation increases computational complexity and processing time. This aspect is being investigated at present. In future the work will be extended to achieve further improvements in the process of
Fig. 6. Results of SLD, SA With and Without Selection
optimisation and implement KBD and CBD processes.
REFERENCES
[1] J. N. Wang, A. Pervez and M. Nekovee, "A review and verification of detection algorithms for DVB-T signals," Communication Systems Networks and Digital Signal Processing, 7th International Symposium on, pp. 469, 2010.
[2] A. He, J. Gaeddert, K. Bae, T. Newman, J. Reed, L. Morales, and C. Park, "Development of a case-based reasoning cognitive engine for IEEE 802.22 WRAN applications," Mobile Computing and Communications Review, vol. 13, no. 2, 2009.
[3] Y.Zhao,J.Gaeddert,L.Morales,K.Bae,J.Um,and J.Reed,“De-
velopment of Radio Environment Map Enabled Case- and Knowledge-
Based Learning Algorithms for IEEE 802.22 WRAN Cognitive Engines,”
Cognitive Radio Oriented Wireless Networks and Communications,2nd
International Conference on,p.44,2007.
[4] C. Rieser, T. Rondeau, C. W. Bostian, and T. Gallagher, "Cognitive radio testbed: further details and testing of a distributed genetic algorithm based cognitive engine for programmable radios," Military Communications Conference, p. 1437, 2004.
[5] A.He,K.Kyung,T.Newman,J.Gaeddert,K.Kim,L.M.-T.R.Menon,
J.Neel,Y.Zhao,J.Reed,and W.Tranter,“A survey of artificial intelli-
gence for cognitive radios,” Vehicular Technology,IEEE Transactions on,
vol.59,no.4,pp.1578–1592,2010.
[6] A.MacKenzie,J.Reed,P.Athanas,C.Bostian,R.Buehrer,L.DaSilva,
S.Ellingson,Y.Hou,M.Hsiao,J.Park,C.Patterson,S.Raman,and
C.da Silva,“Cognitive radio and networking research at virginia tech,”
Proceedings of the IEEE,vol.97,no.4,pp.660–688,2009.
[7] D.Gurney,G.Buchwald,L.Ecklund,S.L.Kuffner and J.Grosspietsch,
Geo-Location Database Techniques for Incumbent Protection in the TV
White Space.New Frontiers in Dynamic Spectrum Access Networks,
3rd IEEE Symposium on,pp.1,2008.
[8] Ofcom, Response to Ofcom's consultation by Dell, Google and Microsoft. http://stakeholders.ofcom.org.uk/binaries/consultations/cogaccess/responses/Dell_Google_Microsoft.pdf, November 17, 2009.
[9] Ofcom, Response to Ofcom's consultation by BT. http://stakeholders.ofcom.org.uk/binaries/consultations/cogaccess/responses/BT.pdf, November 17, 2009.
[10] UK FREE.TV.http://www.ukfree.tv.
[11] J.Y.Chen,S.Olafsson,X.Y.Gu and Y.Yang,A fast channel allocation
scheme using simulated annealing in scalable WLANs.Broadband
Communications,Networks and Systems,2008.BROADNETS 2008.5th
International Conference on,pp.205.
[12] M.Asvial,B.G.Evans,and R.Tafazolli,Evolutionary genetic DCA for
resource management in mobile satellite systems.Electronics Letters,
vol.38,no.20,pp.1213-1214,2002.
[13] I.E.Kassotakis,M.E.Markaki and A.V.Vasilakos,A hybrid genetic
approach for channel reuse in multiple access telecommunication net-
works.Selected Areas in Communications,IEEE Journal on,vol.18,
no.2,pp.234-243,2000.
[14] S.S.M.Patra,K.Roy,S.Banerjee and D.P.Vidyarthi,Improved
genetic algorithm for channel allocation with channel borrowing in
mobile computing.Mobile Computing,IEEE Transactions on,vol.5,
no.7,pp.884-892,2006.
[15] A.Y.Zomaya and M.Wright,Observations on using genetic-algorithms
for channel allocation in mobile computing.Parallel and Distributed
Systems,IEEE Transactions on,vol.13,no.9,pp.948-962,2002.
[16] D.J.Leith and P.Clifford,A Self-Managed Distributed Channel Se-
lection Algorithm for WLANs.Modeling and Optimization in Mobile,
Ad Hoc and Wireless Networks,4th International Symposium on,pp.1,
2006.
[17] J.Y.Chen,S.Olafsson,and X.Y.Gu,A Biologically Inspired Dynamic
Channel Allocation Technique in 802.11 WLANs with Multiple Access
Points.Personal,Indoor and Mobile Radio Communications,2007.
PIMRC 2007.IEEE 18th International Symposium on,pp.1.
[18] L. Ingber, Very fast simulated re-annealing. Mathematical and Computer Modelling, vol. 12, pp. 967-973, 1989.
3D for Transfer of Spatial Representation Knowledge:
How Users Navigate and Familiarize Themselves with Real World Places Using Virtual Worlds

Luke Okelo
School of Computing & Mathematical Sciences
Liverpool John Moores University, Liverpool, United Kingdom
cmplokel@ljmu.ac.uk

Dr. David England
School of Computing & Mathematical Sciences
Liverpool John Moores University, Liverpool, United Kingdom
D.England@ljmu.ac.uk

Prof. Abdennour El Rhalibi
School of Computing & Mathematical Sciences
Liverpool John Moores University, Liverpool, United Kingdom
A.Elrhalibi@ljmu.ac.uk
Abstract—Our research will evaluate how 3D virtual navigation systems can be used to effectively transfer learning of route and survey spatial knowledge of a real world environment. We propose to evaluate how users of 3 different types of virtual navigation systems navigate and learn the layout of an environment using different representations of the same spatial model, to determine which navigation system's virtual representation is more effective at enabling users to better familiarize themselves with a real-world environment.
Keywords: 3D; Spatial Knowledge; Navigation; Virtual Environment
I. INTRODUCTION
Navigation in 3D consists of knowing the current position and orientation of the user as movement takes place through the virtual environment [1]. It is a common task in all virtual environments and allows users to explore, search, and operate inside the V.E. During 3D navigation, individual users usually engage in travel and wayfinding, the two primary components comprising navigation in 3D. Whereas travel is the main component, involving control of the user's viewpoint motion through a virtual environment, wayfinding corresponds to the cognitive component of navigation, which allows users to locate themselves in the environment and to choose a trajectory for displacement [2].
The role of active navigation as a factor for optimizing virtual-to-real transfer of spatial representations still requires further exploration and better understanding, given the few studies that have been devoted to it and their contradictory results. This is because effective transfer of spatial knowledge from virtual to real environments depends on variables relating to both internal and external factors, e.g. characteristics of the user and of the virtual reality system, learning/recall conditions, etc.
The remainder of this paper is organized as follows: section II sets the context for our research by providing background on how 3D has been used to transfer spatial learning of real world environments and looking at related work. Section III briefly discusses the role of virtual world representations in effectively facilitating transfer of learning from 3D virtual environments to real world environments. Section IV discusses how we intend to implement and evaluate the design of our research. Finally, section V provides a conclusion and discussion of future work.
II. BACKGROUND
It has been asserted that studying and understanding human navigation and motion control is of great importance for understanding how to build effective virtual environment travel interfaces. A virtual world or V.E. can be defined as an online, persistent interactive environment accessible to many users simultaneously [20].
Several studies have shown that spatial learning acquired in a VE is substantially similar to that acquired in real conditions [3], [4], [5] and [6]. It has been argued that much of what is already known about navigation in the physical world is independent of the type of space and therefore can be applied to computer-generated environments; thus real world navigation and environment design principles can be effective in designing virtual worlds which support skilled navigation behavior. Among the attempts to apply real world navigation and environment design principles in designing virtual worlds is the development of a set of generic components, including paths, edges, landmarks, nodes and districts, as usable constructs for cognitive maps of urban environments.
These generic components are based on research stating that humans form cognitive maps of their environments for use during navigation. These cognitive maps aid in their navigation, and incorporate spatial knowledge into mental representations for the user, based on either route knowledge or map knowledge [3]. Whereas route based representations involve remembering a particular route or pathway to follow, map based representations involve knowing aspects of the topography or spatial configuration of the environment [7].
Route based representations have however been
criticized as lacking flexibility compared to map
based representations which can be acquired on the
basis of direct experience with an environment or
by using external representations of the
environment that provide access to spatial
topography.
However, the effectiveness of virtual world representations designed with such generic components still requires further exploration, to establish whether there is any improvement in navigation performance and overall user satisfaction, as well as their specific role in increasing spatial knowledge and promoting learning of spaces for navigation in the physical world environments they represent.
Spatial memory, comprising the topographic representations accumulated by an individual, classically falls within three levels, i.e. landmark knowledge, route knowledge and survey knowledge. Landmark knowledge is used by an individual's spatial memory to note the perceptual features of a topographic representation which can later be easily recalled. This type of knowledge should provide information about the visual details of a specified location in an environment so that the navigator can quickly identify salient features and details to aid in navigation.
Procedural knowledge is acquired by the spatial memory to aid in building connections of routes so as to form a larger, more complex topographic representation of the structure that can be learnt quickly and recalled easily, and survey knowledge provides the spatial memory with configural or topological information, which is the most essential topographic representation required for skillful wayfinding and object location.
Based on past research, we believe there should be a significant difference in how users navigate and learn navigation if they use 3D and virtual worlds to navigate spaces and familiarize themselves with real places, and the results of our experiment should reflect this. Acquiring spatial knowledge at the landmark, route or survey knowledge levels does not necessarily have to occur step-by-step; spatial memory can learn and recall topographic representations of a spatial model through parallel processes in which the brain "switches" between egocentric and allocentric representations during navigation [10].
However, in many situations the process of acquiring spatial knowledge on all 3 levels is not necessary, for example in cases where knowledge of the survey representation is not used. It has been argued that this could be because, whereas survey knowledge allows people to take shortcuts or new paths in an environment, routes may actually be sufficient to reproduce a way through an environment [11].
In its advanced stages of development,
procedural knowledge becomes survey knowledge,
and enables inferences to be made from a geocentric
perspective.
Given the limitations of landmark, survey and route knowledge acquisition, alternative theories have been developed for navigation knowledge acquisition, most notably the graph approach, which gives enacted spatial representations a central role. The starting point of the graph approach is the distinction between a cognitive and a perception-action system, in which the cognitive system underlies the development of internal spatial memory representations based on configuration, and the perception-action system generates spatial memory representations that associate motor responses with views or places [12], [13].
III. EFFECTIVENESS OF VIRTUAL WORLD
REPRESENTATIONS
Regardless of the type of spatial knowledge, mental representations of both real and virtual environments require flexibility so as to enable information to be readily integrated from a variety of perspectives. Ultimately the nature of the virtual representations is strongly contingent upon which aspects of the virtual environment are attended to during encoding [8].
To allow accurate measurement of navigational efficiency and behavior, it has been argued that virtual environments should provide a realistic analogue to real-world navigation, to better represent real world environments, which can be exceptionally detailed and dynamic, containing multitudes of local and global landmarks. Virtual environment design therefore requires designing an environment through which navigation can take place in simplistic terms, with modifiable and carefully controlled cues and scenery [9].
IV. METHODOLOGY
The goal of our research is to provide a method of designing virtual worlds in which skilled navigation behavior facilitates the effective learning and transfer of route and survey spatial knowledge to a real world environment, enabling users to learn to navigate in, and familiarize themselves with, such an environment.
Navigation techniques for first-person travel through immersive 3D virtual environments that are sufficient for an isolated navigation task include:
i. Gaze directed steering techniques which
require a user’s head to be tracked
ii. Pointing technique, which requires a user’s
hands to be tracked
iii. Map-based travel techniques which require a
2D display and a pointer
iv. Grabbing the air technique which requires
pinch gloves
These navigation techniques have been classified according to three basic displacement tasks:
i. The choice of direction or target
ii. The choice of motion speed/acceleration
iii. The choice of entrance conditions
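To make the mapping concrete, the sketch below shows how the three displacement choices appear in a single technique, gaze-directed steering: the tracked gaze vector supplies the direction, a scalar supplies the motion speed, and a held travel button supplies the entrance condition. This is a minimal Python illustration with hypothetical names, not code from any of the cited systems.

```python
import math

def steer_by_gaze(position, gaze, speed, dt, moving):
    """One frame of gaze-directed steering.

    position : (x, y, z) current viewpoint
    gaze     : (x, y, z) tracked head/gaze vector (choice of direction)
    speed    : metres per second (choice of motion speed)
    moving   : True while the user holds the travel button
               (choice of entrance condition)
    """
    if not moving:
        return position
    # Normalise the gaze vector so speed is independent of tracker scale.
    norm = math.sqrt(sum(c * c for c in gaze))
    if norm == 0:
        return position
    direction = tuple(c / norm for c in gaze)
    return tuple(p + d * speed * dt for p, d in zip(position, direction))

# Example: move at 1 m/s along +x for one 0.1 s frame.
pos = steer_by_gaze((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), 1.0, 0.1, True)
print(pos)  # (0.1, 0.0, 0.0)
```

The pointing technique differs only in where the direction vector comes from (the tracked hand rather than the head); the other two displacement choices are handled identically.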
Whereas augmentations such as direction indicators, maps and path restrictions can greatly improve both navigation performance and overall user satisfaction, our method aims at increasing users' familiarization with real world spaces and navigation techniques as navigation performance improves, so as to enhance the navigator's capacity to transfer the real world environment's spatial knowledge after completing the navigation task.
Our implementation is based on three different navigation systems:
i. A 3D implementation that shows the 3D model without requiring the user's input to drive navigation interactively
ii. A 3D implementation in which the user's step-through input drives navigation of the 3D spatial model (i.e. the user navigates interactively)
iii. A 3D implementation that shows both the interactively driven view (the user navigating the 3D spatial model) and the non-interactive view (navigation without user input) on a single display on a smartphone, e.g. an iPhone
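The three systems could be distinguished in software along the following lines. The Python sketch below is illustrative only; `NavMode`, `next_viewpoint` and all parameter names are our own invention, not the study's actual implementation.

```python
from enum import Enum, auto

class NavMode(Enum):
    PASSIVE_FLYTHROUGH = auto()  # system drives the camera, no user input
    INTERACTIVE = auto()         # the user drives navigation step by step
    SPLIT_DISPLAY = auto()       # both views together on a single screen

def next_viewpoint(mode, scripted_path, step, user_input=None):
    """Return the viewpoint(s) to render for the current frame.

    scripted_path : list of viewpoints for the non-interactive flythrough
    user_input    : viewpoint chosen interactively by the user, if any
    """
    if mode is NavMode.PASSIVE_FLYTHROUGH:
        return scripted_path[step % len(scripted_path)]
    if mode is NavMode.INTERACTIVE:
        return user_input
    # SPLIT_DISPLAY: pair the interactive and scripted views for one screen.
    return (user_input, scripted_path[step % len(scripted_path)])
```

Keeping a single dispatch point like this makes it easier to hold everything except the navigation mode constant across the three experimental conditions.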
Each implementation has a different topographic representation of the same spatial model, so that the transfer of spatial knowledge gained through navigating similar real world settings can be compared and evaluated across all three navigation systems. This will enable us to measure accurately how well users can create and maintain an accurate spatial model of an environment after using all three virtual navigation systems to navigate the same environment.
Our measurements also examine the following navigation performance standards in users:
i. Can they recall spatial information about the
navigation environment after navigation has taken
place?
ii. How accurate is the spatial information they
can remember about the path followed?
iii. Can they recreate a representation of the
environment navigated and the path they followed
when asked to form an accurate mental
representation after completing the task?
In addition, our experimental task will measure the user's execution and duration time, and record any errors made in the course of using the navigation systems.
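The timing and error measurements described above could be captured with a simple per-run recorder. The Python sketch below is a hedged illustration; the class and field names are assumptions of ours, not part of the study design.

```python
import time

class NavigationSession:
    """Records execution/duration time and user errors for one task run."""

    def __init__(self, participant, system):
        self.participant = participant
        self.system = system      # which of the three navigation systems
        self.errors = []
        self._start = None
        self.duration = None

    def start(self):
        # monotonic() is unaffected by wall-clock changes during the run
        self._start = time.monotonic()

    def log_error(self, description):
        # e.g. "wrong turn at junction 3" or "left the permitted path"
        self.errors.append(description)

    def finish(self):
        self.duration = time.monotonic() - self._start
        return self.duration

# Usage for one participant on the interactive system
session = NavigationSession("P01", "interactive")
session.start()
session.log_error("wrong turn at junction 3")
session.finish()
```

Each participant would yield one such record per navigation system, giving matched duration and error counts for the cross-system comparison.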
The navigation task involves moving from point A to point B in a real world environment using all three 3D navigation systems. While doing so, users will be required to:
i. Follow as accurately as possible a trajectory
in 3D
ii. Gather information from the environment (i.e. familiarize themselves with specific spatial knowledge such as landmarks, routes, destinations and directions) which can be used to form a cognitive map of the environment.
Through doing this we aim to measure how well the user understood the surrounding environment during 3D navigation, as well as which method was most useful in facilitating familiarization with real world places.
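One simple way to score requirement (i), trajectory-following accuracy, is the mean distance from each point of the followed path to the nearest point of the reference trajectory. The Python sketch below is an illustration under our own assumptions; the function name and the choice of metric are not taken from the study design.

```python
import math

def mean_path_deviation(followed, reference):
    """Mean distance from each sampled point of the followed path to its
    nearest point on the reference trajectory (lower is better)."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return sum(min(dist(f, r) for r in reference)
               for f in followed) / len(followed)

# A path that stays 1 m to the side of a straight reference deviates by 1.0.
reference = [(x, 0.0) for x in range(5)]
followed = [(x, 1.0) for x in range(5)]
print(mean_path_deviation(followed, reference))  # 1.0
```

For dense sampling this nearest-point average is a reasonable proxy; more robust alternatives such as Fréchet distance exist but are unnecessary for a sketch.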
After successful completion of the navigation task we intend to rate the participants on how well they know the path they navigated: how much of the path information and movement within the path can they remember accurately, and is that information accurate enough to generate a representation of an integrated layout when the path information is recalled?
V. CONCLUSION & FUTURE WORK
Based on past research, we expect a significant difference in how users navigate and learn to navigate when they use 3D virtual worlds to explore spaces and familiarize themselves with real places.
Whereas the active learning of virtual-to-real transfer can hinge on the construction of egocentric spatial representations, the role of active navigation itself remains insufficiently understood.
Whether in virtual or real world environments, spatial navigation should facilitate both the ambulatory and non-ambulatory "intelligent" movement which an individual can carry out within a space, whether limited or not [3]. This would then enable skilled navigation behavior learnt in a virtual environment to transfer to real environments via the acquired spatial representations.
More research is required in this area, as currently only two studies have been conducted on the active learning of virtual-to-real transfer, and these have been based on the ability to construct egocentric spatial representations, i.e. the Landmark and Route levels.
It has therefore been difficult to reach specific conclusions on the transfer of spatial learning from a virtual environment to a real environment, because only two levels of spatial knowledge have been considered [14], [15]. The role of active navigation as a factor for optimizing the virtual-to-real transfer of spatial representations thus still needs to be explored further, in relation to the levels of spatial representation knowledge required to facilitate the process.

REFERENCES
[1] Darken, Rudolph P., Sibert, John L., "Navigating Large Virtual Spaces", 1996, 49-72, International Journal of Human-Computer Interaction
[2] Wu, Anna, Zhang, Wei, Zhang, Xiaolong, "Evaluation of Wayfinding Aids in Virtual Environments", 2009, 1-21, International Journal of Human-Computer Interaction
[3] Haydar, Mahmoud, Maidi, Madjid, Roussel, David, Mallem, Malik, "A New Navigation Method for 3D Virtual Environment Exploration", 2009, CP1107, Intelligent Systems and Automation, 2nd Mediterranean Conference
[4] Darken, Rudolph P., Sibert, John L., "A Toolset for Navigation in Virtual Environments", 1993, 157-165, Proceedings of ACM User Interface Software & Technology
[5] Martin, Javier Velasco, "Information Architecture in Virtual Worlds", 2011, (Vol. 37, No. 2), Bulletin of the American Society for Information Science and Technology
[6] Miyake, Yoshihiro, Suzaki, Kenichi, Araya, Shinji, "A Web Page that Provides Map-Based Interfaces for VRML/X3D Content", 2009, (Vol. 92, No. 2), Electronics and Communication in Japan
[7] Park, Andrew J., Calvert, Thomas W., Brantingham, Patricia L., Brantingham, Paul J., "The Use of Virtual and Mixed Reality Environments for Urban Behavioural Studies", 2008, 119-130, (Vol. 6, No. 2), PsychNology Journal
[8] Carelli, Laura, Rusconi, Maria Luisa, Scarabelli,
Chiara, Stampatori, Chiara, Mattioli, Flavia,
Riva, Giuseppe, “The transfer from survey
(map-like) to route representations into Virtual
Reality Mazes: effect of age and cerebral
lesion”, 2011, Journal of NeuroEngineering and
Rehabilitation
[9] Bowman, Doug A., Koller, David, Hodges, Larry F., "Travel in Immersive Virtual Environments: An Evaluation of Viewpoint Motion Control Techniques", 1997, 45-52, Virtual Reality Annual International Symposium
[10] Wallet, Grégory, Sauzéon, Hélène, Pala, Prashant Arvind, Larrue, Florian, Zheng, Xia, N'Kaoua, Bernard, "Virtual/Real Transfer of Spatial Knowledge: Benefit from Visual Fidelity Provided in a Virtual Environment and Impact of Active Navigation", 2011, (Vol. 14, No. 7-8), CyberPsychology, Behavior and Social Networking
[11] Witmer, Bob G., Bailey, John H., Knerr, Bruce W., "Virtual spaces and real world places: transfer of route knowledge", 1996, 413-428, International Journal of Human-Computer Studies
[12] Gardony, Aaron, Brunyé, Tad T., Mahoney, Caroline R., Taylor, Holly A., "Affective States Influence Spatial Cue Utilization during Navigation", 2011, (Vol. 20, No. 3), 223-240, Presence
[13] Boudoin, Pierre, Otmane, Samir, Mallem, Malik, "Design of a 3D Navigation Technique Supporting VR Interaction", 2008, First Mediterranean Conference on Intelligent Systems and Automation (CISA '08)
[14] Jan, Shau-Shiun, Hsu, Li-Ta, Tsai, Wen-Ming, "Development of an Indoor Location Based Service Test Bed and Geographic Information System with a Wireless Sensor Network", 2010, Sensors, ISSN 1424-8220
[15] Ruddle, Roy A., "Generating Trails Automatically To Aid Navigation When You Revisit An Environment", 2008, 562-574, (Vol. 17, No. 6), Presence
[16] Welch, Gregory F., “History: The Use of the
Kalman Filter for Human Motion Tracking in
Virtual Reality”, 2009, 72-91, (Vol. 18, No.1),
Presence.
[17] Zeng, X., Mehdi, Q. H., Gough, N. E., "Implementation of VRML and Java for Story Visualization Tasks", 2004, http://wlv.openrepository.com/wlv/handle/2436/31536

[19] Cammack, Rex G., "Location-based service use: a metaverse investigation", 2010, 53-65, (Vol. 4, No. 1), Journal of Location Based Services
[20] Lee, Bhoram, Bang, Won-Chul, Kim, James D. K., Kim, Chang Yeong, "Orientation Estimation in Mobile Virtual Environments with Inertial Sensors", 2011, (Vol. 75, No. 2), IEEE Transactions on Consumer Electronics