B.E. Electronic & Computer Engineering Project Report




Monitoring mental wellbeing using mobile app technology


Ronan D. Browne


B.E. Electronic & Computer Engineering Project Report


4ECE


April 2011











Declaration of Originality


I declare that this thesis is my original work except where stated.


Date: .................................

Signature: .................................






































Abstract



Modern mobile telephones have become integral to a user's social life. This project aims to use a smartphone's unique functionality and ubiquity to monitor a user's social interaction throughout their daily life.

In the latter stages of this project, an attempt will be made to quantify and visualise any data gathered to aid analysis of a subject's social activity.



























Acknowledgements



I would like to thank my supervisors, Liam Kilmartin and Fearghal Morgan, for their guidance during this project. I would also like to thank Han Wang, a postgraduate student who provided help when I encountered problems during the programming section of this project, and Dallan Byrne, a postgraduate student who helped me when it came to analysing the data gathered in this project.

I would also like to thank my family and friends for their patience, advice and guidance during this project.























Table of Contents

Declaration of Originality .................................................... 2
Abstract ...................................................................... 3
Acknowledgements .............................................................. 4
Table of Contents ............................................................. 5
Table of Figures .............................................................. 8
Glossary of terms ............................................................. 9
Chapter 1 - Introduction ..................................................... 10
  1.1 - Introduction to Project .............................................. 10
  1.2 - Project Goals ........................................................ 11
  1.3 - Structure of report .................................................. 12
Chapter 2 - Project Background ............................................... 13
  2.1 - Background ........................................................... 13
  2.2 - Introduction to Android .............................................. 15
    2.2.1 - What is Android? ................................................. 15
    2.2.2 - Why Android? ..................................................... 16
    2.2.3 - Android Implementation and programming ........................... 19
  2.3 - Previous Work using Smartphones ...................................... 20
    2.3.1 - Methods of Sensing ............................................... 21
    2.3.2 - Literature Review ................................................ 22
Chapter 3 - Implementation of Smartphone Application ......................... 28
  3.1 - Structure of application ............................................. 28
    3.1.1 - Application Requirements ......................................... 28
    3.1.2 - Application Design ............................................... 29
  3.2 - Bluetooth Search ..................................................... 30
    3.2.1 - What is Bluetooth? ............................................... 31
    3.2.2 - Why use Bluetooth? ............................................... 32
    3.2.3 - Bluetooth in Android ............................................. 32
  3.3 - Recording communication data ......................................... 34
    3.3.1 - Why collect Communications data? ................................. 34
    3.3.2 - Data to be recorded .............................................. 35
    3.3.3 - How data is collected ............................................ 36
  3.4 - Recording GPS location ............................................... 37
    3.4.1 - What is GPS? ..................................................... 37
    3.4.2 - Why capture GPS data? ............................................ 38
    3.4.3 - How to capture GPS data .......................................... 38
  3.5 - Data classes for storing recorded data ............................... 40
    3.5.1 - Why store recorded data .......................................... 40
    3.5.2 - What is a data class ............................................. 40
  3.6 - Audio Recording ...................................................... 41
    3.6.1 - Why is an audio recording useful? ................................ 41
    3.6.2 - What is an audio recording? ...................................... 41
    3.6.3 - How to take an audio recording in Android ........................ 42
  3.7 - Storing data in memory ............................................... 42
    3.7.1 - Overview of data stored .......................................... 42
    3.7.2 - Why is data stored? .............................................. 43
    3.7.3 - How is data stored? .............................................. 43
  3.8 - Running application in the background ................................ 44
    3.8.1 - Why should the application run in the background? ................ 44
    3.8.2 - What is a background program? .................................... 44
    3.8.3 - How is this accomplished? ........................................ 45
  3.9 - Stabilising/Testing the application .................................. 48
Chapter 4 - Analysis of Data ................................................. 52
  4.1 - Purpose of Analysis .................................................. 52
  4.2 - Text File Analysis ................................................... 52
    4.2.1 - File Format ...................................................... 52
    4.2.2 - Reading data from file and sorting data .......................... 53
    4.2.3 - Call Analysis .................................................... 53
    4.2.4 - SMS Analysis ..................................................... 55
    4.2.5 - Bluetooth Analysis ............................................... 57
  4.3 - Audio Analysis ....................................................... 60
    4.3.1 - Purpose of Analysis .............................................. 60
Chapter 5 - Project Conclusion ............................................... 67
  5.1 - Problems encountered ................................................. 67
  5.2 - Future work .......................................................... 68
References ................................................................... 70



Table of Figures

Figure 2.1 - Samsung i5500 smartphone ........................................ 14
Figure 2.2 - Android Logo .................................................... 15
Table 2.1 - Smartphone Features & Criteria ................................... 17
Figure 3.1 - Application Flowchart ........................................... 30
Figure 3.2 - Bluetooth Search Flowchart ...................................... 33
Figure 3.3 - Testing GUI ..................................................... 49
Figure 4.1 - Sample CallPlot ................................................. 55
Figure 4.2 - SMS Activity Plot ............................................... 57
Figure 4.3 - Bluetooth Devices Histogram ..................................... 58
Figure 4.4 - Bluetooth Devices Encounter Mapping ............................. 59
Figure 4.5 - Audio File Spectrogram .......................................... 61
Figure 4.6 - Graph of Audio File Energy and Spectral Entropy ................. 62
Table 4.1 - Summary of data collected from Audio files ....................... 62
Table 4.2 - Threshold values of speech ....................................... 65
Figure 4.7 - Boxplot of Audio File Energy and Spectral Entropy ............... 65



















Glossary of terms


The following is a glossary of terms used in the project report.

SDK: Software Development Kit, an application which is used to develop software for a given technology

OS: Operating System

IDE: Integrated Development Environment, an application which is used to write computer code

GPS: Global Positioning System



























Chapter 1 - Introduction


In this chapter, a short background to the project, as well as the objectives of the project, will be given. The structure of this report will also be detailed.


1.1 - Introduction to Project


Contemporary smartphone platforms provide an excellent platform for recording a rich set of information concerning the user's physical and social context. Custom software applications can be developed to gather information from a diverse set of "sensing" sources. This information can subsequently be analysed to extract information concerning an individual's activities, interactions with contacts, social networks, etc.

Until now, most information-gathering applications have revolved around the use of GPS and a phone's built-in accelerometer to learn about the user's activities, neglecting the data that can be inferred from surrounding Bluetooth devices and the phone's microphone.

This project developed and tested software for a smartphone to address the issue of "mental wellbeing" of the phone's user. The aim of this project was to quantify how much interaction the user (ostensibly an elderly or disabled person) has with other people throughout the day. This was achieved in a number of ways:

(1) Recording details of voice calls and SMS transmitted and received by the person's phone

(2) Recording GPS information to determine whether they are moving significantly (increasing the likelihood of them interacting with other people)

(3) Examining whether they are physically close to other people by looking for Bluetooth identities on other devices

(4) Analysing the sound being recorded on the handset's microphone (when a call is NOT active) to determine whether it "sounds like" they are talking to someone else



The aim of this project was to develop software for the mobile application (and a back-end database) to record some or all of these types of data.

Fusing and analysing information from this variety of sources allowed inferences to be drawn concerning mental wellbeing (e.g. amongst older populations), and the degree and strength of social contacts and networks. Later stages of this project focused on developing software to access this information automatically from the handset and on providing a means of visualising and analysing the contextual relevance of the individual data sources.


1.2 - Project Goals


This project begins with the purpose of developing a smartphone-based application which will leverage the smartphone's native hardware sensors, including the smartphone microphone, to mine data about the smartphone user's activities.

Only sensors which could give some insight into the user's social interaction will be mined, but not to the extent that it is possible to violate the user's privacy. To this end, only the occurrence of data, and the involved parties, are recorded, and not the content.

The application itself will not require any active input on the part of the user, and will attempt to minimise the effects of the application's activities on the smartphone's battery life.

It is hoped that the data gathered by the application could be used to better understand a user's daily interaction with other people. To this end, a Matlab program will be created which will attempt to quantify a user's social activities during a given period.

Through the Matlab program's analysis of the audio files recorded, it is hoped that a method of determining if the smartphone had recorded speech will be created, as well as an efficient method of programmatically determining the presence of speech in an unknown audio file.


1.3 - Structure of report


This report is divided into chapters, with each chapter representing a section of the project.

Chapter 2, entitled Project Background, will detail the background of modern smartphones, including their development, outline the differences between the three major smartphone OSs currently available, explain why the Android OS was picked for this project, and give a short review of previous research using smartphones.

Chapter 3, entitled Implementation of Smartphone Application, will outline the structure of the final application, the reasoning behind the use of specific sensors, and how the required functionalities were implemented.

Chapter 4, entitled Analysis of Data, will outline the process of analysing the data gathered by the application detailed in Chapter 3.

Chapter 5, entitled Project Conclusion, will outline the limitations of the project, and suggest areas where the work done in this project could be built upon in future.

Appendices are not included in the main body of the report, but are contained on a CD included with the report. This was done as it was felt that it would better aid the reader's understanding to be able to both read and run the code than to read large sections of computer code.









Chapter 2 - Project Background


In this chapter, the background to the mobile phone will be detailed, as well as an introduction to the Android OS, and a review of previous relevant research work on smartphones.


2.1 - Background


For most of the twentieth century, a telephone was a method for two people to communicate over long distances. It was an expensive technology, and limited in that phones were generally confined to the home or business, and so if the recipient was not nearby, the sender would be forced to either leave a message or ring him or her back later.

This changed in the later decades of the twentieth century with the advent of the mobile phone. Although the technology for mobile phones was demonstrated in 1973 by Dr. Martin Cooper of Motorola, the first commercially available mobile phones were only released in 1983 (the Motorola DynaTAC: weighing 793g, standing 25cm high, capable of maintaining a phone call for up to 60 minutes, and of course, no SMS).

Over the next two decades, mobile phones became smaller, more power-efficient, and inexpensive. As they did, they increased in popularity until they became near-ubiquitous. As this was happening, phone manufacturers began to incorporate new technologies into the phones (SMS, infrared data transfer (later replaced by Bluetooth), Java runtime support), even in low-end models.

The latest stage of phone development has been the smartphone. Taking advantage of the falling price of computer processors and memory, manufacturers began to design and release a new generation of phones that more closely resembled computers than the mobile phones that had come before. A picture of a modern smartphone is shown below in Figure 2.1.






Figure 2.1 - Samsung i5500 smartphone





While previous phones only had a rudimentary operating system, and did not have sufficient processing power for user-installed applications to run, these new phones had operating systems and interfaces that resembled laptops, and allowed the development of full-blown applications which could be deployed to them. They also incorporated sensors such as cameras, accelerometers, GPS sensors and Bluetooth, allowed internet access and, most importantly, offered software development kits for developers to allow easy access to these through the native OS.

The SDKs allowed developers easy access to the hardware of the handsets, while the inception of "app stores" allowed applications to be distributed easily, quickly and cheaply through the internet.


As developers quickly scrambled to use this new technology, apps began to appear. Many of these apps simply imitated functionality from conventional computers (eg. PDF/document readers, music players) while others were simply games. However, as time went on, developers started to leverage the phone's unique hardware to infer data about the user for use by their apps.

Often, these would require users to input data manually (eg. a calorie counter would compare information entered against online databases to calculate total calories consumed), while others would work in the background (eg. pedometers would utilise the accelerometer to determine when a footstep had occurred).

However, very few applications would utilise the sensors in combination with each other, i.e. the information gathered would never be analysed in combination with data from the Bluetooth sensor or the data from the accelerometer. Such data, when analysed jointly, could give a more complete view of the phone user's lifestyle, and thus could be of use in many apps which attempt to aid the user by building up a profile of him or her and adjusting their services based on the results.


2.2 - Introduction to Android


In this section, a short explanation of the background of the Android OS, why it was chosen for this project, and the smartphones which use it, will be given.


2.2.1 - What is Android?


Android is an operating system for mobile smartphones developed originally by Android Inc., which was acquired by Google in 2005. The logo for the Android operating system is shown below in Figure 2.2.


Figure 2.2 - Android Logo


The first version of Android was released on September 23rd 2008. Since then there have been several updates for the Android platform, with the latest version, version 2.2 (dubbed "Froyo"), being rolled out on newer devices, a new version ("Gingerbread") recently announced, and a tablet-only version called "Honeycomb" in development.

Android implements all the standard functions of the modern phone (phone calls, SMS) and supports on-phone cameras, Bluetooth, GPS, accelerometers, and internet access (both through the mobile network operator and locally available wireless networks). All Android phones feature a touchscreen, and the interface displayed can be customised with background pictures and apps pinned to the screen, similar to a PC's desktop.


As with other smartphones, software applications (dubbed "apps") can be written for phones running Android. These are distributed through online application marketplaces, with the main distributor being Google's own Android Marketplace.

Currently, apps exist for a wide range of functions such as playing music, email clients (Google's Gmail is accessible through an app called Google Mail), web browsing (multiple apps), social networking (Facebook has developed an app for accessing its site for all smartphone platforms), and news (many organisations have developed apps which can be used to follow any updates they release, eg. MMANews allows users to follow developments from various websites dedicated to Mixed Martial Arts).

Many of these apps merely alter a website's presentation to compensate for the reduced screen size of the handset, but some take advantage of the handset's sensors to aid the user, eg. Shazam, an app that utilises the microphone and signal-processing techniques to identify music on command, and then retrieves the details of the song online. This allows the user to learn the details of songs which he or she may only be hearing for the first time.


2.2.2 - Why Android?


There are currently three major operating systems for smartphones: iOS (for iPhones), BlackBerry OS (for BlackBerry phones) and Android.

For this project, certain hardware and software capabilities were required to implement the desired functions. The standard of the development environment for each platform would also impact the ease of development for the project.

A list of required functionalities was compiled, without which the project would not be able to achieve success on a platform, and a second set of criteria, which were non-essential but preferable, was listed, so as to estimate how easy it would be to develop on a given platform.

The criteria, as well as how well each platform meets them, are listed below in Table 2.1.


Table 2.1 - Smartphone Features & Criteria

Essential Criteria

Bluetooth
  Android: Present. Blackberry: Present. iOS: Present.

Global Positioning System
  Android: Present. Blackberry: Present. iOS: Present.

Allows programs access to the mic
  Android: Yes. iOS: Yes.

Tracking of Calls and SMS
  Android: Yes. Blackberry: Partial (can only follow SMS and calls as they come in; no tracking of sent calls/SMS). iOS: Yes.

Multitasking environment
  Android: Yes. Blackberry: Yes. iOS: No, though certain functions can run in the background.

Non-essential Criteria

Development Software
  Android: Free plugin for the Eclipse IDE. Blackberry: Free plugin for the Eclipse IDE. iOS: Development kit from Apple; the licence costs money and it is only compatible with Mac computers.

Ease of testing
  Android: Emulator packaged with the development software, and applications can be uploaded to an Android phone from the Dev Kit for testing and debugging. Blackberry: Command-line software available for uploading to a handset via USB; an emulator is also integrated into the development package. iOS: Can be deployed to devices connected to a Mac, but a development certificate is required; an emulator is also available.

Quality of official support
  Android: Large quantities of documentation available on the Android developer website; tutorials and sample code also provided; official development support includes video tutorials and a blog which showcases different aspects of the system. Blackberry: Large quantities of documentation available on the developer website; tutorials and sample code also provided. iOS: Large quantities of documentation available on the developer website; tutorials, sample code and video tutorials also provided.

Level of other development support available
  Android: Many unofficial blogs/forums online which offer support from independent developers. Blackberry: Many unofficial blogs/forums online which offer support from independent developers. iOS: Many unofficial blogs/forums online which offer support from independent developers.

Computer language
  Android: Applications written predominantly in Java, with libraries to access OS functionality provided in the SDK; C++ is also supported, though its use is not as widespread. Blackberry: Java, with libraries which allow the developer to access OS functionality. iOS: Objective-C.

Level of use in department
  Android: Several lecturers and postgraduate students have used Android and are available for support. Blackberry: None. iOS: None.

Availability of devices in lab for testing
  Android: The ECE lab contained multiple Android phones for testing an application on. Blackberry: None. iOS: None.


As can be seen above in Table 2.1, Android is the clear choice due to its ability to multitask (which is essential for any application which runs in the background) and its ease of access to memory, which is necessary to access the recorded logs of SMS and calls.

The iPhone's lack of support for Windows OS based development and its reliance on Objective-C (an obscure computer language) both leave it at a disadvantage for this project, but its lack of true multitasking leaves it without any prospect of being used, as the type of application which will be developed cannot be implemented without multitasking.

The BlackBerry is an unsuitable candidate for this project due to the limited access to SMS and call logs. SMS can be detected upon reception using the SDK, but SMS sent cannot be detected, which would inhibit the functionality of the prospective application. A similar restriction is in place on phone calls. This is probably implemented to protect the privacy of users, and to prevent apps eavesdropping on users, but is still a major reason why this project cannot be developed on the BlackBerry platform.


2.2.3 - Android Implementation and programming


Based upon a modified version of the Linux kernel [1], Android was developed and released with the cooperation of the newly formed Open Handset Alliance [2]. The Android Open Source Project (AOSP) is tasked with the maintenance and further development of the Android platform.

Android runs on a wide range of smartphones, from the Samsung Galaxy Europe to the HTC Desire. This "one size fits all" approach is achieved by allowing the manufacturers of the phones to write the driver software for any components they include, very similar to how modern PCs allow components from different manufacturers to be integrated with each other.

The OS is written in a combination of computer languages [4] and uses an SQLite database management system. Apps themselves are written in Java and run on the phone using a virtual machine and JIT (Just In Time) compilation.

However, to develop for Android, one only needs a knowledge of Java. Google have released an SDK (Software Development Kit) which is integrated with the Eclipse IDE (Integrated Development Environment). This SDK features libraries for accessing the various parts of the phone (screen, sensors, memory, etc.) and allows a developer to debug his or her application via an emulator or on the phone itself.

The structure of any Android application consists of standard Java classes and a permissions file (Manifest.xml). UIs (User Interfaces) are defined through an XML-based mark-up language, and can be loaded from the Java classes.
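As an illustrative sketch of the permissions file just described (the package name, labels and class names here are hypothetical, and the permission set is only what the sensing features of this project would suggest, not taken from the project's actual manifest), a manifest might look like:

```xml
<!-- Hypothetical Manifest.xml sketch: declares the app's components
     and the permissions its sensing features would require. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="ie.example.wellbeing">
    <!-- Hardware access the OS must grant before the APIs work -->
    <uses-permission android:name="android.permission.BLUETOOTH"/>
    <uses-permission android:name="android.permission.BLUETOOTH_ADMIN"/>
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
    <uses-permission android:name="android.permission.RECORD_AUDIO"/>
    <application android:label="WellbeingMonitor">
        <activity android:name=".MainActivity"/>  <!-- UI entry point -->
        <service android:name=".SensingService"/> <!-- background sensing -->
    </application>
</manifest>
```

Without the relevant uses-permission entries, calls into the Bluetooth, location or audio APIs are refused at runtime, so the manifest acts as the application's access-control contract with the OS.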


A standard Android application is called an Activity, and contains a UI. Services are also available, and do not contain user interfaces. Android supports threading, but does not allow a worker thread to access the UI, as only the main application thread may interact with the UI. Threads must also not consume too many computer resources, as the UI may hang due to the thread. An Activity that is considered unresponsive by the OS (one that does not react to an input within a given time) is stopped and is considered to have crashed.
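The single-UI-thread rule above can be illustrated with a plain-Java analogy (this is a sketch of the message-queue idea, not the actual Android Handler API): a worker thread never modifies the "UI" directly, but posts its result to a queue that only the main thread drains.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Plain-Java analogy for Android's single-UI-thread rule: the worker
 * thread does the slow job and posts a message; only the main
 * ("UI") thread consumes the queue and touches the display.
 */
public class UiThreadAnalogy {

    static String deliver() throws InterruptedException {
        BlockingQueue<String> uiQueue = new LinkedBlockingQueue<>();
        Thread worker = new Thread(() -> {
            String result = "Bluetooth scan finished"; // stand-in for real work
            uiQueue.add(result);                       // post, never touch UI
        });
        worker.start();
        String message = uiQueue.take(); // main thread is the only consumer
        worker.join();
        return message;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("UI shows: " + deliver());
    }
}
```

Keeping slow work off the main thread in this way is exactly what avoids the "unresponsive Activity" condition described above.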


2.3 - Previous Work using Smartphones


In this section, a review of past work using smartphones will be given, as well as a short summary of academic papers/projects which were studied during the project.



2.3.1 - Methods of Sensing


In this section, a short rundown of previous sensing work using smartphones will be given.

Due to its relatively recent availability, there is a distinct lack of academic research done for the Android platform; indeed, research has centred on the Apple iPhone, as has been the case with the SoundSense project. However, much of the academic research that has taken place with Android has focused on using data gathered from Android phones to infer information about a set of users. Broadly speaking, these projects fall into two categories:

1. Personal sensing: Information about an individual is collected and analysed for the individual. The information is then stored and the inferred data can be seen by the user. The analysis is performed on the phone itself. Examples of such apps include pedometers and calorie counters.

2. Community sensing: Information is collected about a large group of individuals who form a community. This data is then sent off to a server, via the internet, to be analysed. The results of this analysis can be fed back to the users to highlight trends in their community. The analysis must be performed at a server as (a) the gathered data must be collated for analysis and (b) current-generation smartphones lack the processing power to analyse such a large amount of data in a satisfactory time.

Examples of these projects include Galway Traffic [3], which uses data gathered from a large number of users to estimate traffic conditions on selected routes. This data can then be displayed on the project's website, allowing the population to view the estimates and plan their routes accordingly.

Methods of sensing are themselves divided into two categories:




Passive sensing: Passive sensing involves the application using the onboard sensors to gather data about the user, without any continuous input from the user. The data is gathered periodically to minimise the impact on battery life and processor time. The main challenge with passive sensing is knowing when data should be gathered, as there will be long periods when no useful data can be gathered (eg. if recording audio while the phone is in the user's schoolbag, the mic will not be able to detect anything, so any recording will be an unnecessary burden on the phone). Such incidents can be minimised by asking the user to turn off the app, as is the case with Galway Traffic, where the user is supposed to inform the app when he or she is about to start driving, and when he or she is finished.




Active sensing: Active sensing either involves the user manually inputting the data which
the application requires, or sensing "on demand", where the user asks the app to
perform a specific function. Basic apps requiring user input include calorie counters,
while more advanced apps include a barcode scanner which uses the phone's camera to scan
a barcode before searching the internet for its product information.


2.3.2 - Literature Review

The following research papers were reviewed during the course of this project. A short summary
of each paper, as well as how the paper aided this project, is included below.



A Survey of Mobile Phone Sensing (Nicholas D. Lane, Emiliano Miluzzo, Hong Lu, Daniel Peebles,
Tanzeem Choudhury, and Andrew T. Campbell, Dartmouth College) [5]

This paper provides an outline of (1) the current standard sensors in the modern mobile phone,
and how they are used in smartphone apps, (2) an overview of the creation and distribution
networks for these apps, (3) how to broadcast this information through existing channels, such
as social networks, and (4) how collating data from a large number of users can be used to infer
information about an area.


Currently, apps are mainly distributed from central "app stores" which are managed by the
company which makes the smartphone operating system. Such apps are downloaded by the
user, with any charge being at the discretion of the developer.


The largest smartphone OS, Android (which is the subject of this project), has a public SDK
available, so anyone from students to professional programmers can leverage the features of the
smartphone and create apps. Currently, apps leverage the phone sensors to infer data about the
user's Health & Fitness, as well as their Environmental Impact.


Using the data collected from a large pool of users, information sourced from the phone sensors
has been used to predict conditions at the user's location; in the case of the MIT VTrack
project it is used to predict traffic conditions, which the user can then use to find the route
which is least congested.


A major obstacle for such apps, however, is the ability to give the collected data context.
Data mining and machine learning techniques can be used to overcome this.


Another major paradigm in mobile computing is the use of the cloud. As mobile phones
do not have powerful processors relative to traditional computers, any apps that use continuous
sensing must send the raw data collected to a server for processing. This is done to avoid
affecting the user experience of the phone.


This paper was useful to the project, as it discussed the standard sensors found in phones,
outlined standard design paradigms, and showed how data mined from the phone is used by
applications to aid the user.



Mobile Phone Sensing is the next Big Thing (Andrew T. Campbell, Dartmouth College, ACM
MobiOpp 2010 Keynote Address)


This presentation charts the growing interest in mobile computing. Major players such as Intel,
Nokia, Microsoft and Motorola are keen on using the mobile platform as a sensor, and on using
this data in apps. Academics in institutions such as MIT, UCLA, UIUC and UW are utilising the
mobile platform to infer information about human activities from data collected from the user's
mobile phone.


It explains how the mobile phone will become the main platform for sensing innovation over the
next decade, as smartphones can sense a user's surroundings and learn their behaviour
(predominantly their interaction with their peers and their surroundings), with this information
being used to help the user navigate their surroundings and improve their quality of life.

The presentation proposes that eventually, mobile phones will form sensor networks on a
societal scale, with the data collected being used to support applications which work on a
communal, urban and global scale to solve users' problems.


This presentation was invaluable to the project as it showed that phone-based sensing is a
growing research area, and showed how large scale phone applications can grow into an
information network to aid groups of users.



SoundSense: Scalable Sound Sensing for People-Centric Applications on Mobile Phones
(Hong Lu, Wei Pan, Nicholas D. Lane, Tanzeem Choudhury and Andrew T. Campbell) [6]


This paper describes the design and implementation of an application that senses ambient noise
through the microphone on the smartphone. SoundSense is implemented on the Apple iPhone and
represents the first general purpose sound sensing system specifically designed to work on
resource limited phones. SoundSense was designed for scalability and uses a combination of
supervised and unsupervised learning techniques to classify both general sound types (e.g.,
music, voice) and discover novel sound events specific to individual users. It is notable in that
the system runs solely on the mobile phone with no back-end interactions, due to potential user
concerns about privacy.


SoundSense is ideal for continuous sensing in that its use of the microphone as a source
of information cannot be rendered completely useless by the context of the phone (e.g. if a phone
is in the user's pocket, the sound received is of slightly lower quality, but can still be analysed).


SoundSense works by separating detected sound into frames. Frames are analysed to detect their
levels of energy and spectral entropy. Frames which are below a certain energy level, and above
a threshold spectral entropy level, are rejected.


Recognition of sound is done in two stages: coarse category classification, and finer intra-category
classification. Coarse category classification places the frame in one of three categories: Speech,
Music or Ambient sound. Finer intra-category classification is then carried out.


With finer intra-category classification, new sounds are detected using the characteristics
extracted from the frame. Users are asked to classify new sounds for the system upon discovery.
Known sounds are then saved to memory and given a rating of importance based on their
duration and the frequency of their occurrence.


SoundSense was implemented on the iPhone using a mixture of C, C++ and Objective C.
Objective C was necessary to construct a GUI on the phone and to access the hardware of the
phone. Signal processing and classification software was created using both C and C++, as
these languages allow faster processing relative to Objective C.


This paper was useful in giving a practical introduction to signal processing on the mobile
platform, both in terms of design and implementation. The techniques involved in sound
classification, while beyond the scope of this project, could prove useful in future projects which
build upon this one.



Research of Android Smart Phone Surveillance System (Heming Pang, Linying Jiang, Liu Yang,
Kun Yue - Software College, Northeastern University, Shenyang 110004, China) [7]


In this paper, presented at the 2010 International Conference On Computer Design And
Applications (ICCDA 2010), the use of an Android handset as a mobile sensor is explored.


The researchers propose utilising the handset primarily as a method of transmitting, storing and
displaying data received from external sensors. Sensors such as a humidity monitor, a camera
and a smoke monitor were connected to an ARM processor via USB cables. The ARM processor
itself was connected to the handset via a USB cable to form a client/server architecture, with the
handset acting as a client, and the processor acting as a server.


The ARM processor transmits data gathered from the sensors to the handset, which displays the
video recording taken by the camera, and can change its operation depending on the data
gathered from the other external sensors.


This paper was relevant to this project as it showed that an Android handset could be
successfully leveraged as a smart sensor, as well as how the Android system was well suited to a
client-server architecture. While its technical value to the implementation of this project was
limited, as it lacks any details on the analysis of gathered data, it does indicate that multiple
sensors can be used concurrently on the Android OS.



An Integrated Monitoring System for Smartphones (Christopher Miller, Sarah Chasins, Carolyn
Farris, Justin Varner, Curtis Carmony, Christian Poellabauer - a collaboration between multiple
colleges) [8]


The goal of the project presented in this paper was to create a common interface to a
smartphone's hardware, which would service the requirements of any application attempting to
access the smartphone's hardware sensors.


The justification for this project was that if one piece of software managed the phone's sensors,
energy, and by extension battery life, could be saved, as opposed to multiple pieces of software
accessing the same hardware individually and thus wasting valuable battery life.


The most interesting aspect of this project was its attempt to establish a profile of the user's
activities on the phone in order to optimise system performance. Features which were monitored
included use of apps, telecommunications, email and screen use.


The impact of the examined features on phone battery life was monitored in order to show how a
given application's use affects the system battery. Such a tool could prove helpful to developers
in the design of an app, as it would be possible to see how the use of an app affects the battery
life of a smartphone, a concern which is not present when developing on a desktop, or inherent
when using the Android SDK's inbuilt emulator to test software.


This paper was useful to the project as it demonstrated how applications need to be efficient
both in their use of hardware and in their design. Such an insight proved useful when creating a
design which leveraged many of the smartphone's sensors, as any reduction of an application's
impact on the user's experience with the phone is welcome when an app is constantly in use.





Community-Guided Learning: Exploiting Mobile Sensor Users to Model Human Behavior
(Daniel Peebles, Hong Lu, Nicholas D. Lane, Tanzeem Choudhury, Andrew T. Campbell -
Dartmouth College) [9]


The objective of this project was to design a software framework that allows programmer-defined
structures to learn how to mine relevant data from user-defined, and thus unreliable, data
structures.

The classification software designed in this project uses notions of similarity to incorporate data
from multiple users in a flexible manner that neither places excessive emphasis on labels nor
ignores them completely.


Community-Guided Learning (CGL) used existing unsupervised and supervised classifiers to find
groupings of the gathered data that can maximise both the robustness and the performance of a
classifier.


Data structures with the same labels were compared for similarity in their internal structures,
and dissimilar segments within classes of the same label were split into different classes. Using
such groupings, the similarity between internal structures of classes with different labels was
measured.


Through training the classifiers to analyse data based on its structure rather than its label, the
effects of inconsistently labelled data could be minimised.


This paper was useful to this project, as it demonstrated how to classify chaotically defined data
by comparing its structure with data which was well defined and whose characteristics were
known.


While the classifiers used in this paper analysed poorly labelled text data, a problem which would
not occur in this project, as all text data structures here were well defined and did not incorporate
any aspects of machine learning, the audio analysis benefited from the techniques and paradigms
for analysing erratically structured data detailed in this paper.







Chapter 3 - Implementation of Smartphone Application

This chapter will detail the technical aspects of building the application, focusing on both the
overall structure, and the implementation of the individual components of the project [10].


3.1 - Structure of application

This section will detail the requirements the application places on a smartphone to fulfil its
tasks, as well as how it schedules these tasks so as to remain stable.


3.1.1 - Application Requirements

In order to gain a picture of the user's social interaction, it was decided that the sensors on the
smartphone could be leveraged in various ways to gather data about their environment. A list
of potentially useful data about a user's social interaction within their environment was drawn
up, along with how each item could be determined from a smartphone sensor. The list is given
below.




- Is the user (or anyone else) speaking? Possible to determine using the mic.

- Is the user contacting anyone via the smartphone itself? Easily found through
monitoring the SMS and calls routed through the phone.

- Is anyone else nearby? Possible to estimate the number of other people in the user's
environment through monitoring other bluetooth devices around the user.

- Is the user encountering the same people in different locations? Possible to estimate
through monitoring the user's GPS coordinates using the smartphone's GPS sensor, and
geocoding this data into the data gathered about surrounding bluetooth devices.


The application was required to do the following:

- conduct a bluetooth search
- record audio
- track phone calls sent/received
- track SMS sent/received
- log details of all recorded data to memory
- work in the background, and not require any input from the user
- identify the user's GPS location

Data is to be gathered periodically, at different rates for each data category, so as to conserve
battery life.


3.1.2 - Application Design


To minimise the application's use of system resources while maximising the quality of data
gathered, it was decided that a bluetooth search would be performed roughly every twenty
minutes, with six seconds of audio being recorded every ten minutes. Data would be logged to a
file every hour, and details of calls and SMS would be retrieved immediately before this.

In order to avoid over-consumption of resources, one thread would handle these functions,
measuring how long since each function had last been called, and conducting that function if its
waiting time had elapsed.
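The single-thread scheduling described above can be sketched as follows. This is an illustrative outline only, not code from the project application: the interval values match the design (bluetooth every twenty minutes, audio every ten, logging every hour), but the class and method names are invented, and the task bodies are placeholder comments.

```java
// Sketch of the single-thread scheduler: each call to tick() runs any task
// whose waiting time has elapsed since it last ran.
public class SensingScheduler {
    private long lastBluetooth, lastAudio, lastLog;

    // Called periodically with the current time in milliseconds; returns the
    // names of the tasks that were due and run on this tick.
    public java.util.List<String> tick(long now) {
        java.util.List<String> ran = new java.util.ArrayList<>();
        if (now - lastBluetooth >= 20 * 60 * 1000) {
            lastBluetooth = now;
            ran.add("bluetooth"); // would start a bluetooth discovery here
        }
        if (now - lastAudio >= 10 * 60 * 1000) {
            lastAudio = now;
            ran.add("audio"); // would record six seconds of audio here
        }
        if (now - lastLog >= 60 * 60 * 1000) {
            lastLog = now;
            ran.add("log"); // would retrieve SMS/call details and write the log file here
        }
        return ran;
    }

    public static void main(String[] args) {
        SensingScheduler s = new SensingScheduler();
        System.out.println(s.tick(60 * 60 * 1000)); // one hour in: all tasks due
        System.out.println(s.tick(70 * 60 * 1000)); // ten minutes later: audio only
    }
}
```

Keeping all timing decisions in one thread avoids the contention that several independently scheduled threads would create.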


The GPS functionality could be implemented using a Listener class, which is called every time
the GPS location of the phone changes. The last known location is held in memory, and this
information would be written into the data of any bluetooth devices found, as an indicator of
where each had been located.


Audio is recorded for six seconds every ten minutes, both to minimise power consumption and
to avoid causing users to misinterpret the application as one which places them under constant
surveillance. An AudioRecorder object could be used for this.


Information regarding SMS/calls is mined using Content Providers, which are the standard
method of querying the Android memory database.


In order to work in the background, all functions would be implemented in an app component
known as a Service. An Android Service is a piece of software that can be run in the background,
and will not be shut down when the user exits the GUI of the application. A Service can also be
allocated its own process pool, so if it crashes, the application itself will not be affected.


Flowcharts of the application and its tasks are presented below in Figure 3.1.

Figure 3.1 - Application Flowchart




3.2 - Bluetooth Search

In this section, the section of code implementing a search of the surrounding area for Bluetooth
devices will be discussed, in particular its background and the reasoning behind its inclusion.




3.2.1 - What is Bluetooth?

Bluetooth is a wireless communication technology, utilising short-wavelength radio transmission,
which is used to transfer data between devices. It has a maximum range of 100m.


Each device incorporating Bluetooth functionality has its own Bluetooth identity, allowing its
identification in the ad-hoc networks formed each time a connection is made. Each bluetooth
device can connect to several other devices incorporating bluetooth.


Bluetooth provides a secure way to connect and exchange information between devices such as
fax machines, mobile phones, telephones, laptops, personal computers, printers, Global
Positioning System (GPS) receivers, digital cameras, and video game consoles.


Every device has a unique 48-bit address. This address appears when another user scans for
devices.


Bluetooth devices can be discovered by another bluetooth device conducting a bluetooth
discovery. Bluetooth devices have two modes which determine their response to a discovery:
discoverable, and hidden (which causes the device to be invisible to other Bluetooth devices).


Any Bluetooth device in discoverable mode will transmit the following information on demand:

- Device ID
- Device Address
- Device class
- List of services
- Technical information (for example: device features, manufacturer, Bluetooth
  specification used, clock offset)


The most pertinent of these is the device's address. It uniquely identifies a device, and so can be
used to determine how many unique devices are encountered in an area, or during a period of
time, and by extension, how many people are leaving or entering a user's environment.
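Counting unique devices from logged addresses reduces to collapsing duplicates in a set, since the same address always denotes the same device. The following sketch is illustrative only (the class name, method and sample addresses are invented for demonstration, not taken from the project code):

```java
// Estimating the number of distinct devices (and so, roughly, distinct people)
// encountered, by de-duplicating logged 48-bit Bluetooth addresses.
import java.util.HashSet;
import java.util.Set;

public class UniqueDevices {
    static int countUnique(String[] loggedAddresses) {
        Set<String> unique = new HashSet<>();
        for (String address : loggedAddresses) {
            unique.add(address); // re-adding an already-seen address has no effect
        }
        return unique.size();
    }

    public static void main(String[] args) {
        // Three sightings of one device and one of another: two unique devices.
        String[] log = {"00:1A:7D:DA:71:13", "00:1A:7D:DA:71:13",
                        "38:59:F9:26:4C:01", "00:1A:7D:DA:71:13"};
        System.out.println(countUnique(log)); // prints 2
    }
}
```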




3.2.2 - Why use Bluetooth?

As mentioned in the above section, bluetooth can be found in objects which are ubiquitous in
people's lives. It is rare to meet someone without some form of mobile phone, and since most
phones incorporate some bluetooth functionality, they can be used to indicate the presence of
another person.


Most importantly, while a single bluetooth device can only connect with a limited number of
other bluetooth devices, one does not need to form a bluetooth connection to determine another
device's bluetooth ID. This means that there is no danger of linking to too many devices during a
search, and thus a search can be carried out in public places, noting all the available bluetooth
devices and their IDs. If we assume that most people rename their phones, then we can use this
to uniquely identify bluetooth devices, and through their reoccurrences, determine if the owner
of the device is a regular feature of our user's life.


Regularly occurring bluetooth devices could indicate a person who has some relationship to our
user: a co-worker, friend, family member or classmate. As such, the odds of the user interacting
with the owners of reoccurring bluetooth devices are substantial, and this can be a useful
indicator of the amount of interaction the user has with his or her peers, especially when data
gathered from bluetooth devices is cross-referenced against the other data gathered by the
application.


3.2.3 - Bluetooth in Android

A flowchart giving a high-level overview of how a bluetooth search is carried out is shown
below in Figure 3.2. The implementation of the bluetooth search is then discussed afterwards.









Figure 3.2 - Bluetooth Search Flowchart




In Android, access to the bluetooth stack is permitted. In order to discover other bluetooth
capable devices, an application must implement a BroadcastReceiver class.


The application must then assign an intent to the class. This can be done in the back-end java
code, or an intent can be applied through the manifest file. In this project, the intent was applied
in the java code, as it was found that if applied in the manifest, it could not be unregistered,
which could cause problems in other applications, as only a certain number of classes may listen
for a specific intent.


After the BroadcastReceiver is implemented and assigned an Intent, the bluetooth discovery
must be initiated.




Two intents are assigned to this BroadcastReceiver. An intent determines when a
BroadcastReceiver is called by the Operating System. To this end we assign Intents to the
BroadcastReceiver: one for detecting the discovery of a bluetooth device, and another to detect
the end of the discovery period, so we can unregister the BroadcastReceiver from the OS.


When a device is detected, its device address and the location where it was encountered are
noted. The location encountered is captured by the GPS software detailed in a later section.


The device address is recorded as it is a unique identifier for a bluetooth device: a 48-bit
identifier sequence. While the device name could be used, testing has shown that this relies on
the surrounding devices' names having been changed, which is not always the case.


As such, were the application to note the device name instead of the address, it would not
necessarily be able to distinguish between devices, as the standard device name on different
devices would make it appear as if the user was spending an extended period of time with one
person, rather than several people encountered over the course of the day who merely have the
same type of bluetooth device with the same device name.


Fully commented code for conducting a bluetooth search can be seen on the CD provided with
this report.


3.3 - Recording communication data

In this section, the methods of, and reasoning behind, recording the application user's
communication data are outlined and discussed.


3.3.1 - Why collect Communications data?

In attempting to determine the level of a user's social interaction, it is doubtful that there is a
more definite indicator of social interaction which can be determined by a smartphone than
monitoring incoming/outgoing SMS and calls using the phone.




As the details of the communications include names, it is possible to know whether the user is
contacting someone they know. A received SMS's timestamp can be cross-referenced against
sent messages to the same number, and by calculating the average time taken to reply, along
with how often communications pass between the user and this person, it is possible to estimate
how important the relationship is to the user.


3.3.2 - Data to be recorded

On a modern mobile phone, a user can make and receive phone calls. Short Message Services
are also available. Details of incoming/outgoing communications are stored in the phone's
memory for use by applications.


An SMS has the following notable characteristics:

- Message body
- Sender
- Receiver
- Time sent

A call has the following notable characteristics:

- Duration of call
- Caller
- Receiver
- Time started

Other characteristics are available (e.g. SMS message format), but they are beyond the scope of
this project.


This data can be mined programmatically by using a content provider. Content providers allow
apps to access the underlying database used by the Android Operating System.

To do this, a content resolver must be created. This content resolver is then passed the Uniform
Resource Identifier (URI) of the location in memory to be accessed through its query method. It
is also passed a list of the database columns to be retrieved. Optionally, it can be passed
arguments to return specific rows, or only rows which satisfy certain criteria. The order in which
they should be returned (e.g. for messages, descending order of occurrence) can also be
specified.


3.3.3 - How data is collected

A generic example of a method to poll an unspecified URI is shown below.


public void QueryLog(Uri MyURI, String[] projection){

    // MyURI is the location of the memory to be queried
    // projection is a string array containing the names of the
    // columns to be returned

    cr = getContentResolver();
    Cursor c = cr.query(MyURI, projection, null, null, null);
    // Query the log for details

    if (c.moveToFirst()) // Make sure cursor is pointing towards
                         // first element
    {
        do { // For each row returned, the following occurs
            // Some code which polls the returned Cursor's columns
        }
        while (c.moveToNext());
        // Code will repeat as long as Cursor can successfully move to
        // next element in Cursor
    }
    return;
}


Inside the do-while block, code can be inserted to return specific columns of the row that the
cursor is currently pointing towards. These columns are often unique to the URI being queried,
sharing their names with the individual strings in the projection array. As such, there is little
advantage in including them in a generic example.



However, since it is necessary to store the data until such a time as the SMS details can be
written to a file, a data class for containing details of the messages retrieved from the URI must
be created.

The pertinent details of an SMS are the time sent, the correspondent, and the correspondent's
phone number. These are retrieved and stored in a data class.
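Such a data class might look like the following sketch. The class and field names are invented for illustration (the project's actual class is on the accompanying CD); it simply holds the three pertinent SMS fields and exposes them through getters, in the data-class style described in section 3.5.

```java
// A minimal data class for one retrieved SMS: it only represents data,
// with no processing or analysis methods.
public class SmsRecord {
    private final long timeSent;        // timestamp from the SMS log
    private final String correspondent; // contact name
    private final String phoneNumber;   // correspondent's number

    public SmsRecord(long timeSent, String correspondent, String phoneNumber) {
        this.timeSent = timeSent;
        this.correspondent = correspondent;
        this.phoneNumber = phoneNumber;
    }

    public long getTimeSent() { return timeSent; }
    public String getCorrespondent() { return correspondent; }
    public String getPhoneNumber() { return phoneNumber; }

    public static void main(String[] args) {
        SmsRecord r = new SmsRecord(1_300_000_000_000L, "Alice", "+353871234567");
        System.out.println(r.getCorrespondent() + " " + r.getPhoneNumber());
    }
}
```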


For polling the call logs, the same method can be used, but the data which must be retrieved is
different. The pertinent details of a call are the correspondents, the time started, and its duration.
These are mined and stored in a data class. Fully commented code for this section can be seen
on the CD provided with this report.


3.4 - Recording GPS location

In this section, the background to GPS, the reasoning why it is used in the project, and its
integration into the project application are detailed.


3.4.1 - What is GPS?

The Global Positioning System (GPS) is a space-based global navigation satellite system
(GNSS) that provides reliable location and time information in all weather, at all times, anywhere
on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites.
It is maintained by the United States government and is freely accessible by anyone with a GPS
receiver.


A GPS receiver calculates its position by precisely timing the signals sent by GPS satellites high
above the Earth. Each satellite continually transmits messages that include:

- the time the message was transmitted
- precise orbital information (the ephemeris)
- the general system health and rough orbits of all GPS satellites (the almanac)

The receiver uses the messages it receives to determine the transit time of each message and
computes the distance to each satellite. These distances, along with the satellites' locations, are
used, possibly with the aid of trilateration depending on which algorithm is used, to compute the
position of the receiver. This position is then displayed, perhaps with a moving map display or
as latitude and longitude; elevation information may be included. Many GPS units show derived
information such as direction and speed, calculated from position changes.
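The trilateration idea mentioned above can be illustrated in two dimensions. This is a simplified sketch only: real GPS solves a three-dimensional problem that also estimates the receiver clock bias, and the class name, anchor positions and distances below are invented for demonstration.

```java
// 2D trilateration: given three anchor points and distances to an unknown
// point, subtracting the circle equations pairwise yields two linear
// equations, solved here by Cramer's rule.
public class Trilateration2D {
    static double[] locate(double[] x, double[] y, double[] d) {
        double a11 = 2 * (x[1] - x[0]), a12 = 2 * (y[1] - y[0]);
        double a21 = 2 * (x[2] - x[0]), a22 = 2 * (y[2] - y[0]);
        double b1 = d[0] * d[0] - d[1] * d[1]
                  + x[1] * x[1] - x[0] * x[0] + y[1] * y[1] - y[0] * y[0];
        double b2 = d[0] * d[0] - d[2] * d[2]
                  + x[2] * x[2] - x[0] * x[0] + y[2] * y[2] - y[0] * y[0];
        double det = a11 * a22 - a12 * a21;
        return new double[] { (b1 * a22 - b2 * a12) / det,
                              (a11 * b2 - a21 * b1) / det };
    }

    public static void main(String[] args) {
        // Anchors at (0,0), (10,0), (0,10); distances measured from point (3,4).
        double[] x = {0, 10, 0}, y = {0, 0, 10};
        double[] d = {5.0, Math.sqrt(49 + 16), Math.sqrt(9 + 36)};
        double[] p = locate(x, y, d);
        System.out.println(Math.round(p[0] * 100) / 100.0 + " "
                         + Math.round(p[1] * 100) / 100.0); // prints 3.0 4.0
    }
}
```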


Many modern smartphones incorporate a GPS device. The Android OS allows applications both
to actively poll the GPS device for the current GPS location of the phone, and to set a listener
class which detects when the user's GPS location has changed.


3.4.2 - Why capture GPS data?

Through GPS, the movement patterns of the user can be inferred. However, this in itself is of no
interest to an application which attempts to build up a profile of their social interactions.

If, however, the GPS location were to be cross-referenced with data on who is around the
user, the regularity of being in the presence of another person could be analysed within a new
dimension: are they always being spotted in the same location, or is the user meeting them in
different areas?


To this end, the GPS data is encoded into the Bluetooth Search method outlined previously,
allowing examination of where the user is encountering people: a person who is spotted in
multiple areas is more likely to be an important fixture in the user's life than someone who is
only encountered in the same area.

This is because people who are encountered in different areas are more likely to be part of an
arranged encounter (e.g. meeting friends at the cinema), or to share certain traits with the user
(e.g. fellow student, co-worker), all of which indicate an elevated level of importance in the
user's life.


3.4.3 - How to capture GPS data

In order to detect a GPS change, a class which implements the LocationListener interface is
created. Skeleton code for such a class is outlined below.


public class SampleListener implements LocationListener
{
    @Override
    public void onLocationChanged(Location loc)
    {
        // On location change detected, this method is called
    }

    @Override
    public void onProviderDisabled(String provider)
    {
        // If the user turns off GPS, this method is called
    }

    @Override
    public void onProviderEnabled(String provider)
    {
        // If the user turns on GPS, this method is called
    }

    @Override
    public void onStatusChanged(String provider, int status, Bundle extras)
    {
    }
}



However, this listener class must still be assigned to the
LocationManager

in order for its
methods to be called. A method for assigning the
SampleListener

class is sho
wn below.



public void SetGPSListener(){
    LocationManager MyLocManager =
        (LocationManager) getSystemService(Context.LOCATION_SERVICE);
    SampleListener MyLocListener = new SampleListener();
    MyLocManager.requestLocationUpdates(
        LocationManager.GPS_PROVIDER, 0, 0, MyLocListener);
}


It should be noted that in the application code, MyLocManager is a global variable, and is only
instantiated in this method for the purposes of demonstration.




In the application code, whenever the listener class detects a change in GPS location, it updates
the variable containing the last known GPS coordinates and exits. This variable is a data class
containing only the longitude and latitude of the smartphone.

3.5 - Data classes for storing recorded data

In this section, the data structures used to store gathered data until it is written to file are
discussed, as well as the reasoning behind using them.


3.5.1 - Why store recorded data?

As the project application gathers data in real time, it is not feasible to write the data gathered to
a file as it comes in (due to the computational cost of constantly opening and closing the
storage file).


To solve this, gathered data is stored in data classes until such time as it can be written to a
file. The instantiated data classes are added to an ArrayList of data classes, allowing scalability
of gathered data.


3.5.2 - What is a data class?

A data class is a java class which represents information. It has no methods which do not pertain
to getting or setting its internal data (i.e. its only use is to represent data; it does not process or
analyse the data it contains).


Ideally, classes contain
private variables

and
public methods
. With data classes, the methods are
used to access the variables. This is to

prevent other applications from changing the data within
the class without being subjected to the rules of the class (eg, in a class representing a fraction,
the denominator must never be equal to zero, so this check would be put into the method for
chang
ing the denominator).
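The fraction example above can be sketched as follows; this is an illustrative data class, not code taken from the project.

```java
// Illustrative data class for a fraction: private fields, public accessors,
// with the class's rule (denominator never zero) enforced in the setter.
public class Fraction {
    private int numerator;
    private int denominator;

    public Fraction(int numerator, int denominator) {
        this.numerator = numerator;
        setDenominator(denominator);
    }

    public int getNumerator()   { return numerator; }
    public int getDenominator() { return denominator; }

    public void setNumerator(int numerator) {
        this.numerator = numerator;
    }

    public void setDenominator(int denominator) {
        if (denominator == 0) {
            throw new IllegalArgumentException("denominator must not be zero");
        }
        this.denominator = denominator;
    }
}
```

Because the fields are private, the zero check in setDenominator cannot be bypassed by outside code, which is exactly the protection the text describes.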




3.6 - Audio Recording

In this section, the reasons for the inclusion, and the methods of implementing, audio recording are detailed.


3.6.1 - Why is an audio recording useful?

An audio recording of a user's audio environment can provide a wealth of information about their context.

Through audio analysis, we can determine how much background noise is in the user's environment (e.g. music, passing cars, etc.), and its absence could indicate a lack of social activity (if the user were sleeping, or in a library, for example).

On a more advanced level, it would be possible to apply digital signal processing techniques to determine if a conversation is occurring in a user's environment. Standard voice processing techniques such as the Zero Crossing Count (ZCC), or calculating the energy of a sample, could be used to determine if a conversation is occurring (through the ZCC), while a high-energy threshold could be used to filter out background noise and irrelevant samples (as samples below a certain energy are unlikely to be attributable to the user).
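As a sketch of the two measures mentioned, the zero crossing count and the energy of a sampled frame can be computed as follows. This is illustrative code only, not taken from the project's analysis.

```java
// Illustrative computation of two standard voice-processing measures:
// the zero crossing count (ZCC) and the energy of a sampled frame.
public class FrameFeatures {

    // ZCC: the number of sign changes between consecutive samples.
    public static int zeroCrossingCount(double[] frame) {
        int count = 0;
        for (int i = 1; i < frame.length; i++) {
            if ((frame[i - 1] >= 0) != (frame[i] >= 0)) {
                count++;
            }
        }
        return count;
    }

    // Energy: the sum of squared sample values over the frame.
    public static double energy(double[] frame) {
        double sum = 0.0;
        for (double s : frame) {
            sum += s * s;
        }
        return sum;
    }
}
```

A voiced-speech detector of the kind described would compare these per-frame values against thresholds: low energy suggests silence or distant background noise, while the ZCC helps separate voiced speech from broadband noise.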


3.6.2 - What is an audio recording?

An audio recording is a recording of the noise that the smartphone can sense through its inbuilt microphone.

A smartphone is ideal to carry out this sort of passive audio sensing, as the microphones on phones are designed to pick up nearby sounds while not transmitting sounds that are far enough away to be background noise to the user.

Also, as the microphone is designed primarily to collect human speech, waves of frequency significantly greater than that of human speech are automatically filtered out by the hardware present in the microphone. This makes the audio recordings made by a smartphone suitable for analysis as soon as they are made, as sound components whose characteristics differ significantly from human speech do not have to be filtered out programmatically before analysis.


3.6.3 - How to take an audio recording in Android

The Android SDK allows applications to access the microphone through the AudioRecorder class.

There are two methods in this code: a method responsible for creating the audio recorder, and a method for starting and stopping the recording of audio. In the project, the audio recorder is started, allowed to run for a finite duration of time, and then stopped, and the recording is written to memory.

To facilitate testing, an audio player object is created so that recordings can be played back immediately to gauge their quality.

Fully commented code explaining how to take an audio recording, as well as how to play the audio back, can be seen on the CD.


3.7 - Storing data in memory

In this section, the reasons for, and the methods of, writing data to memory are outlined.

3.7.1 - Overview of data stored

Data gathered by the application is periodically written to a .txt file. This data can then be taken off the phone (either by sending the file over the Internet, or by connecting the phone to a computer via USB).




3.7.2 - Why is data stored?

Due to the finite amount of internal storage on a smartphone (140MB on the phone used in the development of this project, a Samsung i5500), micro SD memory cards are placed in smartphones. These memory cards range in size from 1GB to 32GB.

This allows data which is not regularly processed to be placed on the SD card until it is needed (rather like how RAM and a hard drive interact on a modern PC).

As the data gathered by the application is not processed on the phone, the application has no use for files which it has written, and can store them in the external memory on the SD card.

This also allows for ease of transfer of data to a PC for remote analysis, as a PC can interact with a micro SD card directly.

The data gathered is written into a .txt file, both for ease of debugging (i.e. a human-readable format allows easy detection of errors in data) and for ease of analysis (MatLab can read data in from .txt files).


3.7.3 - How is data stored?

Data can be written to a file in Java using the BufferedWriter class. In this implementation, a new .txt file is created every time data is to be written, and a serial number, which is a function of the current time, is appended onto the generic filename to create a unique filename.

The ArrayLists which store the data after it is gathered are mined for their contents, which are written to the .txt file. The times at which the sampling period started and finished are also written to the .txt file.
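A minimal sketch of this write-out step, using BufferedWriter and a timestamp-based filename, is shown below. The "data" prefix and the one-record-per-line format are assumptions for illustration, not the project's actual values.

```java
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;

// Illustrative write-out step: create a uniquely named .txt file whose
// name is a function of the current time, then write the buffered records.
// The "data" filename prefix and record format are assumptions.
public class DataWriter {

    public static File writeRecords(List<String> records, File directory)
            throws IOException {
        // A serial number derived from the current time makes the name unique.
        String filename = "data" + System.currentTimeMillis() + ".txt";
        File outFile = new File(directory, filename);
        try (BufferedWriter writer = new BufferedWriter(new FileWriter(outFile))) {
            for (String record : records) {
                writer.write(record);
                writer.newLine();
            }
        }
        return outFile;
    }
}
```

Because each sampling period produces its own file, no file ever has to be reopened for appending, and partial writes from one period cannot corrupt another.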






3.8 - Running the application in the background

In this section, the reasoning behind, and the implementation of, running a background process in Android will be discussed.

3.8.1 - Why should the application run in the background?

As this project relies on passive sensing, it does not require a GUI for the user to interact with. As such, an application of type Activity would be unsuitable, as it would require the user to constantly have a blank GUI onscreen. The user would then be unable to interact with the phone while the application worked, which is obviously out of the question.

To solve this problem, the application must be moved to the background, so that it can serve its purpose without impinging on the user's experience of the phone. In the background, its tasks can be completed without the user's input, and the user need not even be aware that the application is doing anything at all.

3.8.2 - What is a background program?

Background programs are programs which do not have a user interface, and which perform their tasks without any active participation by the user.

These programs run in parallel to any activities which the computer (or in this case, smartphone) is involved in. To this end, a background process requires a multitasking OS, which is the only way that different programs can be run in parallel on the same machine.

With the Android OS, the standard definition of an application, an Activity, is a foreground activity. An activity which runs in the foreground has all of its ongoing processing stopped when the GUI is closed down, and the developer can choose to save data through lifecycle callback methods (such as onPause()) if they so wish.

However, for applications which conduct background processing, a special type of class, called a Service, is declared. A Service runs in the background, without accessing the screen of the phone. A Service has full access to the smartphone hardware, similar to an Activity, but must use the ContentResolver query function to access the phone's database, instead of the managedQuery function that an Activity can use.


3.8.3 - How is this accomplished?

In order for the application to work in the background, it must extend the Service class. After this, it is very similar to a class which extends the Activity class, as the standard methods for creation, initialisation, closing and deletion of an Activity class can also be found in the Service class.

As running an application in the background involves a class declaration, and including an entire class's code in one section would be both confusing and unwieldy, only the skeleton code for creating and starting a Service class is included here, as well as how to start it from an Activity, and code to start it from the moment the phone is powered on.

The skeleton code for a class of type Service is presented below.


public class MyService extends Service {

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        // Your code here
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        // Your code here
    }

    @Override
    public void onStart(Intent intent, int startid) {
        Toast.makeText(getApplicationContext(),
                "Service started!",
                Toast.LENGTH_SHORT).show();
    }
}


The following service declaration must be included in the application's manifest.xml file.

<service android:enabled="true" android:name=".MyService" />



In order to start this class from an Activity class, the following code is used.

Intent MyIntent = new Intent(this, MyService.class);
startService(MyIntent);

To start the class from bootup (as the class will not otherwise be started by the OS at bootup), a BroadcastReceiver class was created, which detects bootup and starts the Service class.

The BroadcastReceiver is internal to the Service class.

The code for such a class is shown below.



public class StartUp extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        StartIntent(context);
    }

    private void StartIntent(Context context) {
        // A BroadcastReceiver is not itself a Context, so the Context
        // passed to onReceive() is used to build and start the Intent.
        Intent MyIntent = new Intent(context, MyService.class);
        context.startService(MyIntent);
    }
}


The following is added to the manifest.xml file to cause the BroadcastReceiver to be called upon bootup.

<receiver android:name=".StartUp">
    <intent-filter>
        <action android:name="android.intent.action.BOOT_COMPLETED" />
    </intent-filter>
</receiver>

Note that the application must also hold the RECEIVE_BOOT_COMPLETED permission for this broadcast to be delivered:

<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />



However, this in itself will not cause the application components to be called regularly, it merely