
Dr. Sudharman K. Jayaweera and Amila Kariyapperuma
ECE Department, University of New Mexico

Ankur Sharma
Department of ECE, Indian Institute of Technology, Roorkee

5th July, 2007
Expand Your Engineering Skills (EYES), Summer Internship Program, 2007

Introduction

- Wireless Sensor Networks (WSNs) consist of nodes for sensing:
  - Temperature
  - Pressure
  - Light
  - Magnetometer
  - Infrared
  - Audio/Video, etc.
- Ad hoc WSNs may require inter-sensor communication.


Problem

- Nodes are:
  - of small physical dimensions
  - battery operated
- The major concern is energy consumption.
- Failure of nodes due to energy depletion can lead to:
  - Partitioning of the sensor network
  - Loss of critical information
- The application/system requires that every node know the data of every other node.



Related Work

- Energy-aware routing & efficient information processing [Shah and Rabaey, 2002]
- Local compression & probabilistic estimation schemes [Luo, 2005]
- Distributed compression & adaptive signal processing in sensor networks with a fusion center [Chou, 2003]



Our Approach

[Figure: sensor nodes 1-4, each exchanging i-bit compressed messages with the others.]

Proposed Algorithm

- Sensor j predicts its own reading from its past readings and the readings received from other sensors.
- Depending upon the error between the predicted value and the actual value, sensor j calculates the number of compressed bits i using either:
  - Chebyshev's inequality method
  - Exact error method
  (a hedged sketch of these quantities follows)
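A minimal sketch of the quantities involved, assuming the notation X_j(k) for the actual reading, \hat{X}_j(k) for its prediction, and e_j(k) for the prediction error:

    e_j(k) = X_j(k) - \hat{X}_j(k)

The number of compressed bits i is then chosen from e_j(k) by one of the two methods described on the following slides.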



Code Construction

- A codebook to encode data X into i bits.
- One underlying codebook that is NOT changed among the sensors.
- Supports multiple compression rates.

A Tree-based Codebook

[Figure: binary tree codebook, with branches labeled 0 and 1 at each level.]
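A minimal sketch of how such a tree-structured codebook can be indexed, assuming (this is an assumption, not stated on the slide) that the i transmitted bits are the i least-significant bits of the n-bit root-codebook index:

    # Illustrative sketch, not the authors' code: the i transmitted bits select a
    # sub-codebook (coset) of the 2**n-entry root codebook; its members are
    # spaced 2**i indices apart.
    def subcodebook(n_bits, i_bits, lsb_value):
        """All root-codebook indices whose i least-significant bits equal lsb_value."""
        step = 1 << i_bits
        return list(range(lsb_value, 1 << n_bits, step))

    # Example: 4-bit root codebook, i = 2 transmitted bits, received bits 01.
    print(subcodebook(4, 2, 0b01))   # -> [1, 5, 9, 13]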

Chebyshev's Inequality Method

- To prevent decoding errors with i bits.
- Chebyshev bound for the probability of decoding error.
- Required value of i:
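A hedged reconstruction of the bound, assuming that decoding with i bits succeeds whenever the prediction error magnitude is below half the sub-codebook spacing 2^i \Delta (\Delta is the root quantizer step, \sigma^2 the prediction-error variance, P_e the allowed probability of decoding error):

    Pr( |e| >= 2^{i-1} \Delta )  <=  \sigma^2 / ( 2^{i-1} \Delta )^2

    choose the smallest i with  \sigma^2 / ( 2^{i-1} \Delta )^2  <=  P_e,
    i.e.  i >= 1 + \log_2( \sigma / ( \Delta \sqrt{P_e} ) ).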



Exact Error Method

- To prevent decoding errors using i bits.
- Since the exact error in the prediction of sensor data X is known, the number of bits is chosen accordingly.
- Extra bits are also sent, specifying the number of bits in the message.
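A minimal sketch of one way the bit count can be chosen when the exact prediction error e is known at the encoder, under the same sub-codebook-spacing assumption as above (the function and its form are illustrative):

    # Illustrative sketch: pick the smallest i whose sub-codebook spacing
    # 2**i * delta tolerates the known prediction error e, capped at the full
    # n-bit index; a short header carrying i is sent alongside the payload.
    def bits_needed(e, delta, n):
        i = 1
        while (1 << (i - 1)) * delta <= abs(e) and i < n:
            i += 1
        return i

    print(bits_needed(e=3.7, delta=0.0625, n=12))   # -> 7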

Encoder Sensors

- X is stored as the closest representation among the 2^n values in the root codebook (A/D converter).
- The mapping from X to the bits that specify the sub-codebook at level i is done using the code sequence f(X) (a hedged sketch follows).

Decoder Sensors

- Decoders receive the i-bit value & code sequence f(x).
- They traverse the tree starting from the LSB of the code sequence to find the appropriate sub-codebook, S.
- They calculate the side information Y.
- They decode the side information Y to the closest value in S (a hedged sketch follows).
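A minimal decoder sketch mirroring the encoder above, assuming the side information Y is the decoder's own linear prediction of X:

    # Illustrative decoder: from the received i bits, form the sub-codebook of
    # root-codebook values sharing those i LSBs, then pick the member closest
    # to the side information Y.
    def decode(lsb, i, y, n=12, lo=-128.0, hi=128.0):
        delta = (hi - lo) / (1 << n)
        members = range(lsb, 1 << n, 1 << i)              # coset selected by the i bits
        idx = min(members, key=lambda m: abs(lo + m * delta - y))
        return lo + idx * delta                           # reconstructed reading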




Correlation Tracking

- Linear prediction method:
  - Analytically tractable
  - Optimal when readings can be modeled as i.i.d. Gaussian random variables.
- The first sensor always sends its data compressed w.r.t. its own past data.
- The prediction of X is a linear combination of the available readings (a hedged reconstruction follows).
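A plausible form of the prediction, with the vector notation assumed here rather than taken from the slide:

    \hat{X}_j(k) = \mathbf{a}^T(k) \mathbf{Y}(k)

where a(k) is the filter-coefficient vector and Y(k) stacks the available regressors: past readings of sensor j and the readings already received from the other sensors.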



Least-Squares Parameter Estimation

- The prediction error is the difference between the actual reading and its predicted value.
- Filter coefficients are chosen to minimize the weighted least-squares error.
- The least-squares filter coefficient vector at time k is the weighted least-squares solution (a hedged reconstruction follows).
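A hedged reconstruction using the standard exponentially weighted least-squares solution (matrix symbols are assumptions):

    \mathbf{a}(k) = ( \Phi^T(k) W(k) \Phi(k) )^{-1} \Phi^T(k) W(k) \mathbf{x}(k)

where the rows of \Phi(k) are the regressor vectors Y^T(1), ..., Y^T(k), the vector x(k) = [X(1), ..., X(k)]^T collects the actual readings, and W(k) = diag(\lambda^{k-1}, ..., \lambda, 1) applies a forgetting factor \lambda.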






Recursive Least-Squares (RLS) Algorithm

- Filter coefficient computation is performed adaptively using RLS (a hedged reconstruction of the standard recursion follows).
- For initialization, each sensor sends uncoded data samples.
- In our approach, the reference sensor updates the corresponding coefficients and sends them to all other sensors.
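A hedged reconstruction using the standard exponentially weighted RLS recursion (symbols as in the least-squares sketch above):

    \mathbf{k}(n) = P(n-1) \mathbf{Y}(n) / ( \lambda + \mathbf{Y}^T(n) P(n-1) \mathbf{Y}(n) )
    e(n) = X(n) - \mathbf{a}^T(n-1) \mathbf{Y}(n)
    \mathbf{a}(n) = \mathbf{a}(n-1) + \mathbf{k}(n) e(n)
    P(n) = ( P(n-1) - \mathbf{k}(n) \mathbf{Y}^T(n) P(n-1) ) / \lambda

with P(0) = \epsilon^{-1} I for a small \epsilon > 0 and a(0) = 0.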

Decoding Errors

- No decoding errors occur in the exact error method.
- In Chebyshev's method, the number of encoding bits is chosen to meet a given probability of error and is re-specified after every 100 samples.
- This leads to a few decoding errors, but results in higher compression.

Implementation & Performance

- Simulations were performed on humidity measurement data.
- We assumed a 12-bit A/D converter with a dynamic range of [-128, 128].
- Results were simulated for about 18,000 samples per sensor (90,000 in total).
- Sensor orderings are randomized every 500 samples.
- For RLS training, the first 25 samples of each sensor are transmitted without any compression.
- Coefficients are updated and shared after every 500 samples.
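A worked number for this setup: the root quantizer step is \Delta = (128 - (-128)) / 2^{12} = 256 / 4096 = 0.0625, the value assumed in the earlier encoder and decoder sketches.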

Exact Error Implementation

- With each code sequence, an extra 4 bits specifying the number of bits are also sent.
- Decoding Error = 0
- Average Energy Saving % = 43.34%

Sensor #    Energy Saving %    Decoding Error %
1           45.90              0
2           49.85              0
3           38.52              0
4           40.75              0
5           41.67              0
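A worked example of the header overhead, under the sketch conventions above: if a sample needs i = 7 payload bits, the transmission is 4 + 7 = 11 bits rather than the uncoded 12, a per-sample saving of 1 - 11/12 ≈ 8.3%.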

[Plot: Tolerable Noise vs. Prediction Noise]

Chebyshev's Inequality Method

- Encoding bits are specified every 100 samples.
- Case I: Probability of Error (Pe) = 0.5%
- Average Decoding Error % = 0.07%
- Average Energy Saving % = 45.74%

Sensor #    Energy Saving %    Decoding Error %
1           47.74              0.32
2           53.15              0.00
3           41.08              0.02
4           43.03              0.01
5           43.74              0.00

[Plot: Tolerable Noise vs. Prediction Noise]

Chebyshev's Inequality Method

- Case II: Probability of Error (Pe) = 1.0%
- Average Decoding Error % = 0.13%
- Average Energy Saving % = 49.74%

Sensor #    Energy Saving %    Decoding Error %
1           51.91              0.32
2           57.63              0.27
3           44.92              0.02
4           46.40              0.03
5           47.84              0.00

Chebyshev's Inequality Method

- Case III: Probability of Error (Pe) = 1.5%
- Average Decoding Error % = 2.29%
- Average Energy Saving % = 52.27%

Sensor #    Energy Saving %    Decoding Error %
1           54.30              0.66
2           59.74              7.98
3           47.52              2.17
4           49.61              0.61
5           50.18              0.05

Comparison

Exact Error Method:
- ZERO probability of decoding error
- Compression is low (due to the extra bit information)
- Strict bound
- 'Instantaneous approach'

Chebyshev's Method:
- Probability of decoding error is within a required bound
- Higher compression can be achieved by varying the required probability of error
- Loose bound
- 'Average approach'

Probability of Error vs. Energy Savings

For Temperature Data:
- Exact error method
  - Average energy savings % = 56.66%
  - Average decoding error % = 0
- Chebyshev's method (Pe = 0.01)
  - Average energy savings % = 66.98%
  - Average decoding error % = 0.61%

For Light Data:
- Exact error method
  - Average energy savings % = 33.52%
  - Average decoding error % = 0
- Chebyshev's method (Pe = 0.01)
  - Average energy savings % = 19.29%
  - Average decoding error % = 1.13%


Conclusions

- The energy savings achieved in our simulations are conservative estimates of what can be achieved in practice.
- Further work can be done on:
  - Better predictive models
  - A tighter probability-of-error bound
- The scheme can be integrated with an energy-saving routing algorithm to increase the energy savings.




Thank You!!!!







Queries, please…