Multimodal Interfaces (MMI): The Papier-Mâché Toolkit

Document, Image and Voice Analysis Research Group (DIVA)
Department of Informatics (DIUF), Faculty of Science
University of Fribourg, Switzerland

Authors: Pedro de Almeida, Dominique Guinard, Martin Eric Ritz
Report Type: Project Documentation
Date: 08.05.2006
Table of contents

1 Introduction
2 The Papier-Mâché Toolkit
  2.1 The notion of "Papier-Mâché"
  2.2 Installation
  2.3 Functions
  2.4 Possibilities and borders of the toolkit
  2.5 Integration into larger systems
3 Project
  3.1 Scenario
  3.2 Required Components
    3.2.1 Software
    3.2.2 Hardware
  3.3 Installation procedures

1 Introduction

The paper is dead, long live the paper! This slogan has been heard ever more frequently in recent years. Everyone talked about the paperless office: thanks to technological progress, paper would shortly disappear from desks and offices and make way for a more rational way of working. Nevertheless, if one believes the statistics, paper use does not decrease but increases. In our digital age we are using more and more paper. The paperless office is a myth.

However, the paper-saturated office is not a failing of digital technology. Rather, it is a validation of our expertise with the physical world. Accordingly, researchers explore how to better integrate the physical and electronic worlds by building physical interfaces. Tangible user interfaces (TUIs) augment the physical world by combining everyday physical objects with digital information. Currently, only a few technology experts can build these user interfaces, because programmers are responsible for acquiring and abstracting physical input. To simplify the development of tangible interfaces, Papier-Mâché, an open-source toolkit, was developed.
2 The Papier-Mâché Toolkit

2.1 The notion of "Papier-Mâché"

The expression has two meanings:

On the one hand, papier-mâché is a technique for creating forms by mixing wet paper pulp with glue or paste. The crafted object becomes solid when the paste dries. In spite of the French name, papier-mâché actually originated with the inventors of paper itself, the Chinese.

On the other hand, Papier-Mâché stands for a toolkit for building tangible interfaces using computer vision, electronic tags, and barcodes. But what do tangible user interfaces have to do with a handicraft technique?

The name probably refers to the toolkit's characteristic of joining several individual elements simply into a whole. Moreover, during their practical experiments the developers used scraps of paper with different symbols, which served as control units for the optical recognition.
2.2 Installation

For the installation instructions we fall back on the information given by Stanford University:

A. Before installing the Papier-Mâché Toolkit:

1. Install Java 2 Platform Standard Edition 5.0 [1], the Java Media Framework (JMF) 2.1.1e [2], and Java Advanced Imaging (JAI) 1.1.2 [3].
2. If you would like to use RFID, install the Phidgets SDK (PHIDGET.msi) [4].
3. Install a CVS client. Because of its integrated CVS support, the Stanford HCI Group recommends NetBeans 5.0 [5] as an IDE.
4. If you'd like to use a TWAIN source (for a digital still camera or scanner), install Java TWAIN [6] (note: this software component is not required for our project).

B. Once these components are installed, download Papier-Mâché from its SourceForge CVS repository [7]. To access the Papier-Mâché SourceForge repository from Eclipse:

1. Open Eclipse -> Windows -> Open Perspective -> CVS Repository.
2. Create a new CVS repository:
   Host: cvs.sourceforge.net
   Repository Path: /cvsroot/papier-mache
   Enter "anonymous" as the username, with Connection Type: pserver.

C. Set the Papier-Mâché project in Eclipse to be J2SE 5.0 compliant by going to Project -> Properties -> Java Compiler and setting the Compiler compliance level to 5.0.

D. Camera support in Papier-Mâché uses the Java Media Framework. To use a camera with JMF, first run the JMFRegistry application.

E. RFID support in Papier-Mâché uses Bridge2Java. To use RFID in Papier-Mâché, add the lib folder of Papier-Mâché to your path (e.g. C:\dev\papier-mache\lib).

[1] http://java.sun.com/j2se/1.5.0/download.jsp
[2] http://java.sun.com/products/java-media/jmf/2.1.1/download.html
[3] http://java.sun.com/products/java-media/jai/downloads/download-1_1_2.html
[4] http://www.phidgets.com/modules.php?op=modload&name=Downloads&file=index&req=viewdownload&cid=1
[5] http://www.netbeans.info/downloads/download.php?type=5.0
[6] http://www.gnome.sk/Twain/jtp.html
[7] http://sourceforge.net/projects/papier-mache
2.3 Functions

The Papier-Mâché toolkit aims at providing toolkit-level support for physical input. Handling physical input at the toolkit level enables developers to build tangible user interfaces quickly, or to adapt the underlying sensing technologies with a small expenditure of time. A developer therefore has two main tasks: declaring the input that he wants to process, and associating it with application behaviour. Moreover, Papier-Mâché, as an open-source project, enables the integration of further sensing technologies.
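The two tasks named above — declaring an input type and associating it with application behaviour — can be sketched as a small association map. This is a hypothetical miniature, not the toolkit's actual API: the names InputType, AssociationMap, associate, and dispatch are invented for illustration.

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of "declare input, associate behaviour".
// InputType and AssociationMap are invented names, not Papier-Mâché API.
public class AssociationMapSketch {

    enum InputType { VISION, RFID, BARCODE }

    static class AssociationMap {
        private final Map<InputType, Consumer<String>> behaviours =
                new EnumMap<>(InputType.class);

        // Declare an input type and associate it with application behaviour.
        void associate(InputType type, Consumer<String> behaviour) {
            behaviours.put(type, behaviour);
        }

        // Called by the (simulated) input layer when an object is sensed.
        String dispatch(InputType type, String objectId) {
            Consumer<String> behaviour = behaviours.get(type);
            if (behaviour == null) {
                return "ignored:" + objectId;
            }
            behaviour.accept(objectId);
            return "handled:" + objectId;
        }
    }

    public static void main(String[] args) {
        AssociationMap map = new AssociationMap();
        map.associate(InputType.RFID, id -> System.out.println("check out book " + id));
        System.out.println(map.dispatch(InputType.RFID, "tag-42"));
        System.out.println(map.dispatch(InputType.VISION, "blob-1"));
    }
}
```

The point of the sketch is only the division of labour: the application declares what it cares about, and the input layer, not the application, drives the dispatch.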
In its current form, Papier-Mâché supports computer vision, electronic tags (e.g. RFID tags), and barcode input (including 2D variants). Vision is the most flexible and powerful of these technologies. The toolkit makes use of the Java Media Framework (JMF) and Java Advanced Imaging (JAI) APIs. JMF supports any camera with a standard driver, from simple webcams to high-quality 1394 cameras.
Papier-Mâché represents physical objects as Phobs. The input layer receives sensor input, interprets it, and generates the Phobs. Phobs contain an array of data elements (such as an RFID tag) and an array of properties (e.g. location). The toolkit provides programmers with a monitoring window which displays the current input objects, the image input and processing, and the behaviors being created or invoked with the association map.
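As a rough illustration of that description, a Phob-like object might bundle data elements and named properties as below. The field layout is guessed from the paragraph above, not taken from the toolkit's source; all names are invented.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Guessed, simplified shape of a Phob: a list of data elements
// (e.g. an RFID tag ID) plus named properties (e.g. location).
// Not the toolkit's real class; all names here are invented.
public class PhobSketch {

    private final List<String> dataElements = new ArrayList<>();
    private final Map<String, Object> properties = new LinkedHashMap<>();

    public void addDataElement(String element) {
        dataElements.add(element);
    }

    public void setProperty(String name, Object value) {
        properties.put(name, value);
    }

    public List<String> dataElements() {
        return dataElements;
    }

    public Object property(String name) {
        return properties.get(name);
    }

    public static void main(String[] args) {
        PhobSketch phob = new PhobSketch();
        phob.addDataElement("rfid:0x5A3C");                  // data element: tag ID
        phob.setProperty("location", new int[] {320, 240});  // property: x/y position
        System.out.println(phob.dataElements());
    }
}
```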
The programmer is only responsible for selecting input types. He needs neither to discover the attached input devices nor to establish a connection to them; these tasks are executed without his effort. Once he has selected an input device, Papier-Mâché generates events representing all state changes of the corresponding sensor. Event types are the same for all the different technologies; providing high-level events facilitates technology portability.
Events can be filtered using EventFilters. A filter listens to input events, does its work according to predetermined criteria, and passes relevant events on to the VisionListeners registered with the filter. Most filters pass along events that meet certain classification criteria. Currently there are three implemented classifiers:

- The MeanColorClassifier passes along events about objects whose color is within distance ε of a given color.
- The ROIClassifier passes along events about objects in a particular region of interest of the camera view.
- The SizeClassifier passes along events for objects whose size is within a Euclidean distance ε of an ideal size.
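The idea shared by these classifiers — pass an event on to registered listeners only when a measured value lies within ε of an ideal — can be sketched as follows, using the SizeClassifier's criterion. The class, method, and listener names are invented for illustration and are not the toolkit's actual signatures.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a SizeClassifier-style event filter: it passes along
// events whose size is within epsilon of an ideal size to its
// registered listeners. Names and interfaces are invented.
public class SizeClassifierSketch {

    interface SizeListener {
        void objectSeen(double size);
    }

    private final double idealSize;
    private final double epsilon;
    private final List<SizeListener> listeners = new ArrayList<>();

    public SizeClassifierSketch(double idealSize, double epsilon) {
        this.idealSize = idealSize;
        this.epsilon = epsilon;
    }

    public void addListener(SizeListener l) {
        listeners.add(l);
    }

    // Returns true when the event was passed along to the listeners.
    public boolean offer(double size) {
        if (Math.abs(size - idealSize) > epsilon) {
            return false; // filtered out: too far from the ideal size
        }
        for (SizeListener l : listeners) {
            l.objectSeen(size);
        }
        return true;
    }

    public static void main(String[] args) {
        SizeClassifierSketch filter = new SizeClassifierSketch(100.0, 10.0);
        filter.addListener(size -> System.out.println("matched, size=" + size));
        filter.offer(95.0); // within epsilon: passed along
        filter.offer(50.0); // filtered out
    }
}
```

The MeanColorClassifier and ROIClassifier would follow the same pattern with a colour-distance or region test in place of the size test.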
While all technologies have the same events, each technology provides different types of information about the physical objects it senses:

- RFID provides both the tag ID and the reader ID. Applications can use either or both of these to determine application behaviour.
- Vision provides the size, location, orientation, bounding box, and mean colour of an object.
- Barcodes provide the ID, the type (EAN, PDF417, or CyberCode), and a reference to the barcode image, which allows vision information such as location and orientation to be extracted.
Generating RFID events requires minimal processing. When a tag is placed on a reader, Papier-Mâché generates a phobAdded event. Each following sensing of this tag generates a phobUpdated event. If a tag's presence isn't reported within a certain period of time, Papier-Mâché concludes that the tag has been removed and consequently generates a phobRemoved event.
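This added/updated/removed lifecycle can be mimicked with a small presence tracker. The sketch below uses an explicit logical clock (time steps) instead of wall-clock time, and all class and method names are invented; in the real toolkit this logic is driven by the RFID input layer.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Toy presence tracker mimicking the phobAdded / phobUpdated /
// phobRemoved lifecycle described in the text. Uses logical time
// steps rather than wall-clock time; all names here are invented.
public class TagTrackerSketch {

    private final long timeout; // steps without a report until "removed"
    private final Map<String, Long> lastSeen = new HashMap<>();

    public TagTrackerSketch(long timeout) {
        this.timeout = timeout;
    }

    // A tag was sensed at the given time step; returns the event kind.
    public String sense(String tagId, long now) {
        boolean known = lastSeen.containsKey(tagId);
        lastSeen.put(tagId, now);
        return known ? "phobUpdated" : "phobAdded";
    }

    // Advance time: any tag not reported within the timeout is removed.
    // Returns how many tags were removed (each would fire phobRemoved).
    public int expire(long now) {
        int removed = 0;
        Iterator<Map.Entry<String, Long>> it = lastSeen.entrySet().iterator();
        while (it.hasNext()) {
            if (now - it.next().getValue() > timeout) {
                it.remove();
                removed++;
            }
        }
        return removed;
    }
}
```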
Generating vision events requires much more interpretation of the input. Image analysis in Papier-Mâché has three phases:

1) Camera calibration: Calibration takes place on application start-up. Perspective correction is used; it is an effective and efficient method. More precise, but also more computationally expensive, methods exist. The JAI library provides perspective warp as a primitive.
2) Image segmentation: This step partitions an image into objects and background. For this, Papier-Mâché contains edge detection.

3) Creating events: For each object a VisionEvent is created. At each time step, the ImageSourceManager fires a visionAdded(VisionEvent) for new objects, a visionUpdate(VisionEvent) for previously seen objects, and a visionRemove(VisionEvent) for objects no longer visible.
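Phase 3 amounts to diffing the set of visible objects between time steps. A minimal sketch of such a dispatcher follows; the three event-name strings mirror visionAdded/visionUpdate/visionRemove from the text, while the class and method names around them are invented.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal sketch of the per-time-step dispatch in phase 3: compare the
// current set of visible object IDs with the previous one and emit
// added/update/remove events. Only the three event-name strings come
// from the text; the rest is invented for illustration.
public class VisionDispatchSketch {

    private Set<String> previous = new HashSet<>();

    // Returns the events fired for one time step, e.g. "visionAdded:a".
    public List<String> step(Set<String> current) {
        List<String> events = new ArrayList<>();
        for (String id : current) {
            events.add(previous.contains(id)
                    ? "visionUpdate:" + id   // previously seen object
                    : "visionAdded:" + id);  // new this time step
        }
        for (String id : previous) {
            if (!current.contains(id)) {
                events.add("visionRemove:" + id); // no longer visible
            }
        }
        previous = new HashSet<>(current);
        return events;
    }
}
```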
When a camera is not available, or when the programmer wants to guarantee a concrete input stream for testing purposes, Papier-Mâché provides a so-called WOz (Wizard of Oz) mode, in which it can simulate camera input with a directory of user-supplied images.
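A Wizard-of-Oz image source of this kind can be approximated by cycling over the image files of a user-supplied directory. The sketch below only selects and cycles file names (no frame decoding), and the class name and methods are invented, not the toolkit's WOz implementation.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy Wizard-of-Oz image source: instead of a camera, cycle through
// the image files of a user-supplied directory. Only file selection
// is shown (no decoding); all names here are invented.
public class WozImageSourceSketch {

    private final List<String> frames = new ArrayList<>();
    private int next = 0;

    // Build from a list of file names, keeping only image files.
    public WozImageSourceSketch(List<String> fileNames) {
        for (String name : fileNames) {
            String lower = name.toLowerCase();
            if (lower.endsWith(".jpg") || lower.endsWith(".png")) {
                frames.add(name);
            }
        }
        Collections.sort(frames); // deterministic playback order
    }

    // Convenience: take the file names from a user-supplied directory.
    public static WozImageSourceSketch fromDirectory(File dir) {
        List<String> names = new ArrayList<>();
        File[] files = dir.listFiles();
        if (files != null) {
            for (File f : files) {
                names.add(f.getName());
            }
        }
        return new WozImageSourceSketch(names);
    }

    public boolean hasFrames() {
        return !frames.isEmpty();
    }

    // Next "camera frame" name: wraps around so input never runs out.
    public String nextFrame() {
        String frame = frames.get(next);
        next = (next + 1) % frames.size();
        return frame;
    }
}
```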
2.4 Possibilities and borders of the toolkit

Within Papier-Mâché, the processing is, not surprisingly, bound by the image-processing computations. On an ordinary computer, Papier-Mâché runs at interactive rates. The developers of the toolkit report that during their tests on a dual Pentium 4 running Windows XP, it ran at 5.5 fps at a processor load of 30%. This performance is more than sufficient for forms and associative applications, and sufficient for topological and spatial applications with discrete events. Where tangible input provides continuous, interactive control, current performance may be acceptable. These performance numbers should be considered lower bounds on performance, as the image-processing code is entirely unoptimized.
2.5 Integration into larger systems

Papier-Mâché enables programmers to write an application knowing no more than Java. Moreover, Papier-Mâché is designed so that it can be used together with other toolkits. In an article, the developers gave twenty-four examples of existing tangible user interfaces employing paper and other "everyday" objects. They can be divided into four categories: spatial, topological, and associative user interfaces, and forms. In all cases the Papier-Mâché toolkit could be useful. This
shows clearly that Papier-Mâché can be used for a multitude of applications, even though it currently supports only computer vision, electronic tag, and barcode input.
3 Project

The Service Counter System is an environment with the capability of providing counter-management functionality through a tangible interface. This environment is based on user identification and on both optical and radio-frequency object recognition. Object motions enable the user to interact with the counter and, therefore, execute actions.
3.1 Scenario

The Library Assistant:

- Detect and identify the user
- Identify the book
- Choose an action
[Figure: system setup showing the digital camera, the interactive board, and the RFID sensor]
3.2 Required Components

During the development of the application we used the following components:

3.2.1 Software

- SUN Java 2 SDK, Standard Edition (J2SE SDK), version 1.5.0 Update 6
- NetBeans 5.0 [8]
3.2.2 Hardware

- An interactive board
- A digital camera
- An RFID sensor
- RFID transponders
- A PC station
3.3 Installation procedures

- Create a new project under NetBeans.
- Import the Service Counter System project (located on the CD-ROM).
- Connect the digital camera and the RFID sensor.
- Run the file « ServiceCounterSystem.java » as a Java application under NetBeans.

[8] http://www.netbeans.org/