Microsoft Research Technical Report A Semantic SensorMap

Peng Li (lipeng@cs.ubc.ca)
Supervised by Kalyan Basu and Suman Nath




1. Introduction

The SensorMap project [1] of Microsoft Research is a platform for publishing and searching real-time data sources such as thermometers, webcams, traffic sensors, etc. Users can browse and query points of interest based not just on static data, but also on real-time data. For example, a user can ask for the current temperature in a region. In response, SensorMap shows live temperatures collected from thermometers in the region on the map. Furthermore, data is aggregated at different granularities for display; for example, instead of showing hundreds of temperature readings, the average, minimum, and maximum temperatures of the area are shown. Individual publishers can publish data collected from different types of sensors, and they can specify who is able to see their data: just themselves, their friends, or the world. More details of the SensorMap project can be found at [1].


Here is an example of a real-life pre-movie dilemma [1]. It is 6 p.m., and the film you want to see starts at 7 p.m. You have enough time to grab a quick dinner if you do not have to wait for a table. You know the neighborhood around the theater has some good restaurants, but there is no time to waste. So, the most interesting question for you is not where the restaurants are, but where the restaurants with a waiting time of less than 20 minutes are.


The biggest challenge of the SensorMap project is scalability [1]. If we have thousands of sensors and many users, how can we scale efficiently for users to query their points of interest? For example, suppose a user wants to publish some thermometer sensors but unknowingly names them "temperature" sensors. Obviously, the temperature sensor and the thermometer sensor are equivalent in this situation. However, when another user queries all the "thermometer" sensors, the response would not contain the published "temperature" sensors, because the SensorMap database considers the two kinds of sensors to be different based on their names. We could change the SensorMap project to account for this equivalence. Nevertheless, when we have many users and sensors, the equivalence is difficult to maintain by hand; hence we have to find a more intelligent, machine-based approach.

In the rest of this report, we will see more problems that arise when we scale our SensorMap project to a realistic application, and how we use Semantic Web technology to resolve them.


The Semantic Web [2] is an extension of the current Web, which is considered to be the next step in web evolution. It was proposed by the inventor of the World Wide Web, Tim Berners-Lee. In his description, all the information in the Semantic Web is given well-defined meaning, better enabling computers and people to work in cooperation. The data on the Web can be processed directly and indirectly by machines [3]. Based on Tim Berners-Lee's idea, it is about having data as well as documents on the web so that machines can process, transform, assemble, and even act on the data in useful ways [4].


The basic part of the Semantic Web is the representation of knowledge: knowledge about the content of Web resources, and knowledge about the concepts of a domain of discourse and their relationships. For users' queries, the Semantic Web needs to be able to derive new data from the knowledge representation by using the constraints (rules) that users specify.



The Semantic Web technique is an intelligent approach to processing large amounts of data automatically, and it addresses our scalability problem properly. In this project, we would like to incorporate Semantic Web technologies into the current prototype of the SensorMap project. We will describe properties of sensors in the SensorMap in RDF and represent the relationships between different sensor types in an ontology expressed in OWL. By doing this, we will see that the SensorMap can automatically explore the semantic relationships between different sensor types. For example, the ontology can specify that the thermometer and the temperature sensor are equivalent; then someone querying for thermometer data will get data collected from both thermometer and temperature sensors. In addition, we will specify some rules; by using these rules, the inference engine can reason about the existing data and obtain more information that we need, for example, whether the current humidity value is greater than 45%.




2. Related Work

So far, many researchers have proposed using RDBMSs to store and query RDF data using the SPARQL Protocol and RDF Query Language (SPARQL) [5]. Numerous other query languages have been developed, for example, RDQL [6], RDFQL [7], RQL [8], RSQL [9], and so on. These are declarative query languages with quite a few similarities to the Structured Query Language (SQL). Furthermore, [10] proposed using SQL itself to query RDF data by introducing a table function. The main advantage of this SQL-based scheme is that it allows leveraging the rich functionality of SQL and efficiently combining graph queries with queries against traditional database tables.



The vast majority of relational data is stored in conventional SQL data stores. In order to make use of these query languages, we would have to migrate data from conventional SQL data stores to semantic stores. However, in this project, we do not want to change the data stores of the SensorMap project. We have to find another approach to introduce the semantic technique into our project.

Some other research involving semantic technology and databases focuses on semantic query optimization. The idea of semantic query optimization is to use semantic knowledge about the database to transform a query into another query that is semantically equivalent to the original query but can be executed more efficiently [11]. Informally, semantic equivalence means that the transformed query has the same answer as the original query on all databases satisfying the integrity constraints. This optimization technique was first proposed by King [12] and by Hammer and Zdonik [13]. Many current researchers are working on this topic. For example, Xu [14] discusses query optimization for Select-Join-Project queries for relational databases. Jarke et al. [15] deal with semantic query optimization in the context of an optimizing PROLOG front-end to a relational database system. Shenoy and Ozsoyoglu [16] provide a detailed investigation of the use of two important types of integrity constraints in semantic query optimization.


In this project, we try to use a semantic technique to optimize SQL queries based on semantic knowledge and constraints. Unlike semantic query optimization, our optimization will not consider database query performance. Instead, our focus will be on semantic completion. After optimization, the new SQL query will include more complete information that is derived from the knowledge representation or reasoned by the inference engine.



3. Terminology

RDF [17]: The Resource Description Framework (RDF) is a formal method that uses XML for the description of web resources using machine-readable metadata.

Notation 3 [18]: Notation 3 (N3 for short) is a non-XML serialization of RDF models, which is more human-readable than the XML RDF format.

OWL [19]: The Web Ontology Language (OWL) is a language for defining and instantiating Web ontologies. An OWL ontology may include descriptions of classes, along with their related properties and instances [19]. OWL extends RDF by providing additional vocabulary along with a formal semantics.

SWRL [20]: The Semantic Web Rule Language (SWRL) is a formal XML representation of semantic constraints.

Triple [21]: A triple is a 3-tuple. It is used to express a fact in <subject, predicate, object> format. For example, the fact "The sky has the color blue" can be represented by the triple <"The sky", "Has color", "Blue">.
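Since a triple is just a 3-tuple, it maps directly onto a tuple in code. The following minimal sketch (the `facts` set and `match` helper are our own illustration, not part of the SensorMap code) shows how a set of such tuples can already be queried by pattern matching:

```python
# A triple is a plain 3-tuple: <subject, predicate, object>.
facts = {
    ("The sky", "Has color", "Blue"),
    ("Temp1", "temperature_value", "32"),
}

def match(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching the given parts (None = wildcard)."""
    return [(s, p, o) for (s, p, o) in triples
            if subject in (None, s)
            and predicate in (None, p)
            and obj in (None, o)]

print(match(facts, subject="The sky"))   # -> [('The sky', 'Has color', 'Blue')]
```

This wildcard-style lookup is the basic query primitive that triple stores and inference engines build upon.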


4. Current Problem and Our Solution

4.1. Current Model

Currently, the SensorMap project is based on the client - database server model shown in Figure 1. However, when the SensorMap project is scaled to large numbers of users and sensors, we will have some problems. For example, since all users can publish their sensors, they can name their sensors arbitrarily. Let us say a user publishes a new thermometer sensor, and the thermometer sensor is absolutely identical to the temperature sensor. However, other users do not know that the two kinds of sensors are identical. So, when the thermometer sensor and the corresponding record are stored in the database, we cannot query them by using a query like

Select * From Temperature_Sensor Where temperature_value = 22

because the database considers the thermometer sensor and the temperature sensor to be different, although they are identical semantically.




Figure 1: The Database Query Based on the Current Client-Server Model



4.2. Enhanced Model

In Figure 2, we present the enhanced model of the SensorMap project that integrates the semantic technology. In this enhanced model, we can see that queries are not sent to the database directly; instead, they are sent to our platform. In our platform, the queries are semantically optimized, and then the optimized queries are forwarded to the SensorMap database. After this semantic optimization, the new queries will be able to answer users' questions more efficiently by engaging semantic knowledge and constraints. For the example given in section 4.1, the optimized queries will include the thermometer sensor, since we specify in our ontology that the thermometer sensor and the temperature sensor are equivalent.


From the enhanced model, we can also see that when records are obtained from the database, they are not sent to users directly as usual. They are sent to our platform first; after semantic inference, our platform returns the records and metadata to users, where the metadata contains the reasoned results. For example, a record, "Temp5 temperature_value 35", is found in the database. This record is sent to our platform first; the platform then determines whether this record violates a rule such as "a temperature value greater than 30 degrees raises an alert". Our platform will then return an alert and the same record, "Temp5 temperature_value 35", to users. By doing this, our platform will be able to give users more information than just the database records.




Figure 2: Enhanced Model with Our Project Integrated


5. Implementation

Figure 3 gives the structure of our platform. In this report, we divide all the components in the project into two divisions: the dash-lined box represents the database division, with two database-related components, and the solid-lined box contains all the semantic-technique-related components, which form the semantic division. In the rest of this section, we will introduce the two divisions respectively.


Figure 3: The Platform Structure


5.1 Semantic Division

This part consists of the following five components:

1. Authoring tool
2. SWRL/Notation 3 rule translator
3. RDF/OWL parser
4. Inference engine
5. Database storage component


5.1.1 Process Flow

In this semantic division, users first specify the knowledge representation and rules using an authoring tool; the authoring tool then generates the corresponding knowledge representation in OWL format and the corresponding rules in SWRL format. Next, our parser parses the OWL into triples and puts the triples into the inference engine. At the same time, the parser also stores the triples in the database as a backup. The rule translator takes SWRL rules, converts them to Notation 3 rules, and puts the Notation 3 rules into the inference engine. Now the inference engine knows the knowledge representation and the constraints, so it is ready to reason about the incoming SQL queries and records.


5.1.2 Authoring Tool



There are many authoring tools enabling users to build ontologies for the Semantic Web, for example, IsaViz [22]. With these authoring tools, users can create RDF/OWL/SWRL nodes and add datatypes and properties to the nodes as easily as playing with Legos. Users can easily build knowledge representations and rules in RDF/OWL and SWRL format without writing a single line of code.


In this project, we are using an open source authoring tool called Protégé [23] from Stanford University as our OWL/RDF and SWRL editor. With this authoring tool, OWL/RDF and SWRL can be easily created and edited by point-and-click in a What You See Is What You Get (WYSIWYG) way.

We chose Protégé for its ease of use after comparing several authoring tools. Furthermore, it is well documented and supported by an active user group.


5.1.3 RDF/OWL Parser

Our RDF/OWL parser is fully compliant with the W3C RDF/OWL syntax specification. It takes RDF/OWL and generates triples. Furthermore, our parser is also able to generate a DOT file (see Footnote 1) to visualize the RDF/OWL.



Suppose we have the RDF example [24] shown in Figure 4.

Figure 4: The RDF Example (an article, with name and author information)


Our parser will take the RDF file and output the corresponding triples, shown in Figure 5.






Footnote 1: DOT is a plain text graph description language. It is a simple way of describing graphs that both humans and computer programs can use. DOT graphs are typically stored in files that end with the .dot extension.

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:ex="http://example.org/stuff/1.0/">
  <rdf:Description rdf:about="http://www.w3.org/TR/rdf-syntax-grammar"
                   dc:title="RDF/XML Syntax Specification (Revised)">
    <ex:editor>
      <rdf:Description ex:fullName="Dave Beckett">
        <ex:homePage rdf:resource="http://purl.org/net/dajobe/" />
      </rdf:Description>
    </ex:editor>
  </rdf:Description>
</rdf:RDF>



Figure 5: The Triples Output
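To make the parsing step concrete, here is a simplified, standard-library-only sketch of how the RDF/XML of Figure 4 can be reduced to the four triples of Figure 5. It handles only the abbreviated attribute-and-nesting pattern used in this example; the actual parser is fully compliant with the W3C syntax:

```python
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def uri(tag):
    # ElementTree writes namespaced names as "{http://ns/}local";
    # joining the two parts yields the full property URI.
    return tag[1:].replace("}", "")

def parse_description(desc, triples, counter):
    subject = desc.get("{%s}about" % RDF_NS)
    if subject is None:                      # nested node: mint a blank node id
        counter[0] += 1
        subject = "_:genid%d" % counter[0]
    # Non-rdf attributes abbreviate literal-valued properties.
    for name, value in desc.attrib.items():
        if not name.startswith("{%s}" % RDF_NS):
            triples.append((subject, uri(name), value))
    # Child elements are property arcs.
    for prop in desc:
        resource = prop.get("{%s}resource" % RDF_NS)
        if resource is not None:             # object is a URI reference
            triples.append((subject, uri(prop.tag), resource))
        else:                                # object is a nested description
            for nested in prop:
                obj = parse_description(nested, triples, counter)
                triples.append((subject, uri(prop.tag), obj))
    return subject

doc = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:ex="http://example.org/stuff/1.0/">
  <rdf:Description rdf:about="http://www.w3.org/TR/rdf-syntax-grammar"
                   dc:title="RDF/XML Syntax Specification (Revised)">
    <ex:editor>
      <rdf:Description ex:fullName="Dave Beckett">
        <ex:homePage rdf:resource="http://purl.org/net/dajobe/" />
      </rdf:Description>
    </ex:editor>
  </rdf:Description>
</rdf:RDF>"""

triples = []
root = ET.fromstring(doc)
parse_description(root.find("{%s}Description" % RDF_NS), triples, [0])
for t in triples:
    print(t)
```

Running this yields the same four triples as Figure 5, including the generated blank node `_:genid1` for the nested editor description.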


Besides the triples, we also obtain a DOT file, and we can use open source tools, for example, Graphviz (see Footnote 2), to represent the structure of the RDF file as shown in Figure 6.

Figure 6: The Graph for the RDF Example


5.1.4 SWRL/Notation 3 Rule Translator

In our project, the inference engine we chose can only digest Notation 3 rules. However, Protégé can only generate SWRL rules. So, in our project, we wrote an Extensible Stylesheet Language Transformations (XSLT) stylesheet to translate SWRL rules to Notation 3 rules. This enables us to take rules in both formats and makes our system quite broadly applicable.



5.1.5 Inference Engine

Understanding and using the data and knowledge encoded in Semantic Web documents requires an inference engine. In this project, an inference engine called SemWeb [25] takes the reasoning role. SemWeb takes triples as the knowledge representation and Notation 3 rules as constraints, and then reasons based on the users' questions.


5.1.6 Database Storage Component

We store the triples generated by the parser into the inference engine's data structure, MemoryStore, which is the main storage mechanism for small amounts of data. However, this storage is very restricted, and with it alone, switching to other inference engines would not be possible. Therefore, we also store the generated triples in the database as a backup.
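The backup step can be illustrated with a plain three-column table. This sketch uses SQLite for self-containment, whereas the actual component writes SemWeb's triples to the SensorMap SQL Server database:

```python
import sqlite3

# One row per triple: the simplest relational layout for a triple backup.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")

def backup(triples):
    """Persist a batch of (subject, predicate, object) triples."""
    con.executemany("INSERT INTO triples VALUES (?, ?, ?)", triples)
    con.commit()

backup([("Temp1", "temperature_value", "32"),
        ("Temp1", "BelongTo", "Temperature_Sensor")])
rows = con.execute("SELECT * FROM triples WHERE subject = 'Temp1'").fetchall()
print(len(rows))   # -> 2
```

Because the backup is an ordinary table, the triples can later be reloaded into a different inference engine's store if we switch away from MemoryStore.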




Footnote 2: Graphviz is a package of open source tools for drawing graphs specified in DOT language scripts.

The triples output (the content of Figure 5):

<http://www.w3.org/TR/rdf-syntax-grammar> <http://example.org/stuff/1.0/editor> _:genid1.
<http://www.w3.org/TR/rdf-syntax-grammar> <http://purl.org/dc/elements/1.1/title> "RDF/XML Syntax Specification (Revised)".
_:genid1 <http://example.org/stuff/1.0/fullName> "Dave Beckett".
_:genid1 <http://example.org/stuff/1.0/homePage> <http://purl.org/net/dajobe/>.

5.2 Database Division

This part mainly consists of the components that hook up the SensorMap database with the semantic division. As explained in section 5.1.5, the inference engine we use can only digest triples. So, the most important job of this database division is to translate SQL queries and records into triples and then put the triples into the inference engine for reasoning.



Figure 7: The Structure of the Database Division (see Footnote 3)


5.2.1 Process Flow

Figure 7 presents the process flow of this division. The "Process SQL Queries" component accepts SQL queries from users, translates the SQL queries into triples, and puts the triples into the inference engine. The inference engine reasons based on the specified rules and knowledge representation. Finally, the reasoned results (triples) are translated back into SQL queries, and the semantically optimized SQL queries are sent to the SensorMap database.


As for the "Process Data" component, after records are obtained from the SensorMap database, they are not sent to users directly. Instead, the records are forwarded to the "Process Data" component, where they are processed in a flow similar to that of the "Process SQL Queries" component. Finally, the same records, along with the metadata containing the reasoned results, are returned to users. In this project, because we assumed that the records returned by the SensorMap database are already triples, we will not consider the record-to-triple translation.


5.2.2 Translation Illustration

We need a mechanism to translate an SQL query into triple(s), since the inference engine can only digest triples. Furthermore, the translated triples should be semantically correct and should not lose any information needed for our future reasoning. Since the project is still a prototype at this moment, we do not have a general algorithm for this translation, but we will use an example to explain how


Footnote 3: The inference engine is not a part of the Database Division; we put it here only to illustrate the process flow completely.


our project translates an SQL query into triple(s). The approach will be evaluated by several case studies later in this section.


Suppose we have an SQL query as follows:

Select *
From Humidity_Sensor
Where humidity_value = 55%


We first need to find which clauses are important for our future inference and which clauses can be eliminated. Because the "Select" statement is used to show the columns that users want to see, and it changes from time to time depending on users' requests, we will not translate "Select" statements into triples. Empirically, the "From" and "Where" clauses are significant for our future reasoning, so we need to translate them into triples.


For the "Where" clause, we translate "Where humidity_value = 55%" to a triple, "Someone humidity_value 55%", by introducing a new keyword, "Someone". Then, for the "From" clause, we translate "From Humidity_Sensor" to a triple, "Someone BelongTo Humidity_Sensor", by introducing another keyword, "BelongTo". Finally, we obtain two triples from the SQL query, as follows:

Someone humidity_value 55%
Someone BelongTo Humidity_Sensor


Actually, we found that we can combine the two triples into one by moving the object of the second triple into the subject position of the first triple, obtaining the following new triple:

Humidity_Sensor humidity_value 55%

The new triple is probably not very meaningful for humans, but it is processed by machines. So, as long as it can be understood by inference engines, it is a valuable triple.


From the new triple, we can see that it contains all the information of the SQL query except for the operator "=". Suppose users send another query by changing "=" to ">"; this makes no difference to our platform. The reason why we do not consider the operators in queries is that our translation only considers the database schema, such as table names, column names, and so on. So, our translation does not need to consider the operator "=" or the value "55%". However, intuitively, "55%" can be considered as the object of "humidity_value" to build a complete triple, "A sensor has humidity value 55%". So, we still keep "55%" even though it will not be used by the upcoming reasoning.
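The translation described above can be sketched as follows. This sketch handles only the single-table, single-predicate query shape of the example; as noted, a general algorithm is future work:

```python
import re

def sql_to_triple(query):
    """Translate a simple 'Select ... From T Where col op value' query
    into one triple (T, col, value). Only the single-table,
    single-predicate pattern is handled."""
    m = re.match(
        r"Select\s+.+?\s+From\s+(\w+)\s+Where\s+(\w+)\s*[=<>]+\s*(\S+)",
        query, re.IGNORECASE | re.DOTALL)
    if m is None:
        raise ValueError("unsupported query shape: %r" % query)
    table, column, value = m.groups()
    # The operator is dropped: the translation keeps only schema
    # information (table and column names) plus the literal value.
    return (table, column, value)

triple = sql_to_triple(
    "Select * From Humidity_Sensor Where humidity_value = 55%")
print(triple)   # -> ('Humidity_Sensor', 'humidity_value', '55%')
```

Note that "=" and ">" queries indeed produce the same triple, matching the observation above that operators play no role in the translation.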



5.3 Web Service

Finally, we put our project on a Web Service to enable our platform to interact easily with the Microsoft SensorMap project.

Figure 8: Our Project on the Web Service, Offering Five Functions

to
offer five functions


From Figure 8, we see that the Web Service has five interfaces. Besides the "Process SQL Queries", "Process Records", and "Database Storage" interfaces, we also have "Upload Knowledge Representation" and "Upload Rules". These two interfaces are used to upload the OWL knowledge representation and SWRL rules generated by Protégé to our project, since Protégé is not a part of our platform.


6. Scenario and Case Studies

This section presents one scenario and three case studies to show how to use our platform to resolve some realistic problems that we encounter in the SensorMap project. However, the application of our platform is not restricted to these examples. Actually, this platform is applicable to current and future problems of the SensorMap project, as long as users specify the proper knowledge representation and rules.


6.1 Scenario

The knowledge representation:

- Suppose we have three kinds of sensors: temperature sensor, thermometer sensor, and humidity sensor.
- A weather station consists of a temperature sensor and a humidity sensor.
- A temperature sensor and a thermometer are equivalent.
- Suppose we have three kinds of user groups: Microsoft_Users, Redmond_Users, and Redmond_MSR_Users.
- Furthermore, all the groups have the following relationships:
  Redmond_MSR_Users SubGroupOf Redmond_Users
  Redmond_Users SubGroupOf Microsoft_Users

The rules:

- If a sensor is accessed by a group g, it can be accessed from all the subgroups of g.
- A pollen alert should be raised when temperature >= 30 or humidity >= 50%.


6.2 Case Studies

6.2.1 Case Study 1 - SQL Query Process

Suppose we have two tables in the SensorMap database, as shown in Figure 9, and users would like to search for all the temperature sensors whose temperature value equals 28 degrees. The SQL query is "Select * From Temperature_Sensor Where temperature_value = 28".







Figure 9: The Temperature_Sensor Table and the Thermometer_Sensor Table


The query is translated into triples, which are then put into the inference engine. Because we have specified the predicate "thermometer_value == temperature_value" in our knowledge representation, after reasoning, our platform will return two queries to the SensorMap database, as follows:

Select *
From Temperature_Sensor
Where temperature_value = 28

Select *
From Thermometer_Sensor
Where thermometer_value = 28


From the returned queries, we can see that even though users do not know anything about the thermometer table, our project is still able to find the new queries that contain more complete information, based on the semantic knowledge.
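The query expansion of this case study can be sketched as follows, assuming the equivalence is available as a simple lookup table; in the real platform, it is derived from the OWL ontology by the inference engine:

```python
# Sensor-type equivalences as a (table, column) mapping. This is an
# illustrative stand-in for the OWL equivalence axiom in the ontology.
EQUIVALENT = {
    ("Temperature_Sensor", "temperature_value"):
        [("Thermometer_Sensor", "thermometer_value")],
}

def expand(table, column, value):
    """Return the original query plus one query per equivalent sensor type."""
    pairs = [(table, column)] + EQUIVALENT.get((table, column), [])
    return ["Select * From %s Where %s = %s" % (t, c, value)
            for t, c in pairs]

for q in expand("Temperature_Sensor", "temperature_value", 28):
    print(q)
```

A query against an unknown table simply passes through unexpanded, which mirrors the platform's behavior when the ontology states no equivalence.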



6.2.2 Case Study 2 - SQL Query Process

In this case study, suppose we have a table, "Accessed_By_Group", as shown in Figure 10, that lists which sensor can be accessed by which group. In addition, we have some rules and predicates, specified in Figure 11.










Figure 10: The Accessed_By_Group Table









Figure 11: Predicates and Rules


Now, suppose users send the query

Select *
From Accessed_By_Group Where group = Redmond_MSR_Users

to find all the sensors that can be accessed by the "Redmond_MSR_Users" group. After the semantic optimization, we obtain the following SQL query:

Select *
From Accessed_By_Group
Where group = Redmond_MSR_Users OR
      group = Redmond_Users OR
      group = Microsoft_Users

The optimized query is obtained by engaging the logic predicates and the rule we specified. Using this query, the SensorMap database will return more records that are semantically correct.
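The group expansion can be sketched by computing the transitive closure of the SubGroupOf predicates from the scenario:

```python
# SubGroupOf predicates from the scenario, as a child -> parent mapping.
SUBGROUP_OF = {
    "Redmond_MSR_Users": "Redmond_Users",
    "Redmond_Users": "Microsoft_Users",
}

def supergroups(group):
    """The group itself plus all its (transitive) supergroups; a sensor
    granted to any of these is accessible from the original group."""
    chain = [group]
    while chain[-1] in SUBGROUP_OF:
        chain.append(SUBGROUP_OF[chain[-1]])
    return chain

def expand_query(group):
    condition = " OR ".join("group = %s" % g for g in supergroups(group))
    return "Select * From Accessed_By_Group Where " + condition

print(expand_query("Redmond_MSR_Users"))
```

Because the closure is transitive, the Redmond_MSR_Users query picks up both Redmond_Users and Microsoft_Users, exactly as in the optimized query above.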



6.2.3 Case Study 3 - Data Process

Suppose a SensorMap record, "Temp1 temperature_value 32", is sent to our Semantic SensorMap platform. The record is put into the inference engine directly, because it is already in triple format. Based on the predicates and rules specified in the scenario, we know that

thermometer_value == temperature_value
thermometer_value >= 30 => Alert

So, finally, our platform will return the same record to users, plus the metadata that contains the reasoned result, "Alert". The new model of the SensorMap with the data process component is presented in Figure 12.




Figure 12: The New Model of the SensorMap, Returning Records and Reasoned Results to Users
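The data process of this case study can be sketched as follows; the threshold comes from the scenario's pollen-alert rule, and the real platform performs this reasoning in SemWeb with Notation 3 rules rather than hard-coded checks:

```python
# Pollen alert when temperature >= 30, per the scenario's rule.
ALERT_THRESHOLD = 30

def process_record(record):
    """record is a (subject, predicate, value) triple from the database.
    Returns the unchanged record plus metadata with any reasoned result."""
    subject, predicate, value = record
    metadata = []
    # thermometer_value == temperature_value, so both predicates match.
    if predicate in ("temperature_value", "thermometer_value"):
        if float(value) >= ALERT_THRESHOLD:
            metadata.append("Alert")
    return record, metadata

rec, meta = process_record(("Temp1", "temperature_value", "32"))
print(rec, meta)   # -> ('Temp1', 'temperature_value', '32') ['Alert']
```

As in Figure 12, the record itself is passed through untouched; only the metadata carries the inferred "Alert".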


7. Future Work

Currently, our semantic SensorMap platform is still a prototype. The translation of SQL queries into triples is not refined yet and still needs improvement; in our case studies, there are some SQL clauses we did not consider, e.g., Join.

For testing, we are currently just trying out various inputs to make sure that our platform works in the most common cases, but this is not really a systematic approach, and bugs can crop up in the least expected places. So we need to develop a more comprehensive approach to test the correctness of our platform using automated techniques.


8. References

[1] Microsoft Research, SenseWeb Project, http://research.microsoft.com/nec/senseweb/, 2/9/2007
[2] W3C, Semantic Web, http://www.w3.org/2001/sw/, 2/9/2007
[3] Liyang Yu, Introduction to the Semantic Web and Semantic Web Services, Chapman & Hall/CRC, June 14, 2007
[4] Grigoris Antoniou and Frank van Harmelen, A Semantic Web Primer, The MIT Press, April 1, 2004
[5] W3C, SPARQL Query Language for RDF, http://www.w3.org/TR/2004/WD-rdf-sparql-query-20041012/, 2/9/2007
[6] W3C, RDQL - A Query Language for RDF, http://www.w3.org/Submission/2004/SUBM-RDQL-20040109, 2/9/2007
[7] RDFQL Database Command Reference, http://www.intellidimension.com/default.rsp?topic=/pages/rdfgateway/reference/db/default.rsp
[8] G. Karvounarakis, S. Alexaki, V. Christophides, D. Plexousakis, and M. Scholl, RQL: A Declarative Query Language for RDF, WWW2002, May 7-11, 2002, Honolulu, Hawaii, USA
[9] Li Ma, Zhong Su, Yue Pan, Li Zhang, and Tao Liu, RStar: An RDF Storage and Querying System for Enterprise Resource Management, CIKM, pp. 484-491, 2004
[10] E. I. Chong, S. Das, G. Eadon, and J. Srinivasan, An Efficient SQL-based RDF Querying Scheme, Proceedings of the 31st Int. Conf. on Very Large Data Bases, pp. 1216-1227, Aug. 2005
[11] U. S. Chakravarthy, J. Grant, and J. Minker, Logic-based approach to semantic query optimization, ACM Trans. Database Syst., volume 15, number 2, pp. 162-207, Jun. 1990
[12] J. J. King, QUIST: a system for semantic query optimization in relational databases, In Proceedings of the 7th VLDB Conference, pp. 510-517, 1981
[13] M. M. Hammer and S. B. Zdonik, Knowledge Based Query Processing, Proc. of the 6th Int. Conf. on Very Large Data Bases, Sep. 1980
[14] G. D. Xu, Search control in semantic query optimization, Tech. Rep. 83-9, Dept. of Computer Science, Univ. of Massachusetts, Amherst, 1983
[15] M. Jarke, J. Clifford, and Y. Vassiliou, An optimizing Prolog front-end to a relational query system, In Proceedings of the ACM-SIGMOD Conference, pp. 296-306, 1984
[16] S. T. Shenoy and Z. M. Ozsoyoglu, A system for semantic query optimization, In Proceedings of the ACM-SIGMOD Conference, pp. 181-195, 1987
[17] W3C, Resource Description Framework (RDF), http://www.w3.org/RDF/, 2/9/2007
[18] W3C, Notation 3, http://www.w3.org/DesignIssues/Notation3, 2/9/2007
[19] W3C, Web Ontology Language (OWL), http://www.w3.org/2004/OWL/, 2/9/2007
[20] W3C, SWRL: A Semantic Web Rule Language Combining OWL and RuleML, http://www.w3.org/Submission/SWRL/, 2/9/2007
[21] Wikipedia, Triple, http://en.wikipedia.org/wiki/Triple, 2/9/2007
[22] Emmanuel Pietriga, IsaViz: A Visual Authoring Tool for RDF, http://www.w3.org/2001/11/IsaViz/, 2/9/2007
[23] Stanford Medical Informatics, The Protégé Ontology Editor and Knowledge Acquisition System, http://protege.stanford.edu/, 2/9/2007
[24] W3C, RDF/XML Syntax Specification (Revised), http://www.w3.org/TR/rdf-syntax-grammar/, 2/9/2007
[25] Joshua Tauberer, Semantic Web/RDF Library for C#/.NET, http://razor.occams.info/code/semweb/, 2/9/2007