
University of South Australia
School of Computer and Information Science

Master of Computer and Information Science

Minor Thesis

Modifying 3D Objects in Augmented Reality Environments

Student: QUANG NHAT LE
Student ID: 100109823
Supervisor: Dr. Christian Sandor

Contents

ACKNOWLEDGEMENT
DECLARATION
ABSTRACT
I. INTRODUCTION
    1. Motivation
    1.1. Research Question
    1.2. Contributions
II. REQUIREMENT ANALYSIS
    1. System Interactivity Model and Scenario
    2. Users
    3. Technical Analysis
    4. Summary of Requirements
III. BACKGROUND
    1. Augmented Reality Environments
    2. MR Platform
    3. Haptic Devices
    4. Related Work
    4.1. Collision Detection
    4.2. Deformation Methods
    4.3. Other Object Manipulation Research and Programs
IV. SYSTEM DESIGN
    1. System Configuration
    2. System Architecture
    3. System Mechanism
    3.1. 3D Model
    3.2. Painting
    3.3. Deformation
    3.4. Subdivision
    3.5. OpenCL
V. IMPLEMENTATION
    1. Manipulation Algorithm's Flowchart
    2. Moving Direction
    3. Intersection and Updating Vertex Position
    3.1. Checking for Sphere
    3.2. Checking for Cylinder
VI. CONCLUSIONS
    1. Results
    2. Summary
    3. Future Work
REFERENCES






LIST OF FIGURES

Figure 1: Paint on a shoe in AR at TEDx Adelaide 2010
Figure 2: Interactivity Model
Figure 3: Relationship between coordinates
Figure 4: X3D in AR Program
Figure 5: An AR scene (Courtesy of ECRC)
Figure 6: Mechanism of a See-Through Head-Mounted Display
Figure 7: A CANON Head-Mounted Display (Courtesy of Magic Vision Lab)
Figure 8: AR Environment
Figure 9: A marker in the program
Figure 10: Tracking by marker in MR Platform
Figure 11: User paints on the virtual cup using a haptic device
Figure 12: PHANTOM OMNI
Figure 13: SPIDAR string-based haptic device
Figure 14: Building a Bounding Volume Hierarchy
Figure 15: Spatial Division Hierarchical Structure
Figure 16: Bump shape modified by various tools (Courtesy of Noble and Clapworthy)
Figure 17: Discretizing a function in x in [0..1]
Figure 18: Polyhedral form of the function u(x,y) = 1 - x^2 - y
Figure 19: Working space
Figure 20: System Architecture
Figure 21: Component Diagram
Figure 22: A decomposition of a 3D cube model
Figure 23: Class diagram of the data structure for storing a 3D model
Figure 24: Texture mapping
Figure 25: Painting effect
Figure 26: The stylus deforms the object's surface
Figure 27: Updating the position of a vertex
Figure 28: Big polygons created when updating vertex positions
Figure 29: Subdivision by midpoints
Figure 30: People solve a puzzle in parallel
Figure 31: People solve a puzzle in sequence
Figure 32: Class diagram of the Model3D structure
Figure 33: Structure of CPU and GPU
Figure 34: Benchmarks of GPU and CPU (Courtesy of NVIDIA)
Figure 35: Manipulation Algorithm's Flowchart
Figure 36: Moving direction at different points on the stylus
Figure 37: Benchmark for the Paint function
Figure 38: Benchmark for the Deformation function
Figure 39: Paint on a shoe in AR at TEDx Adelaide 2010
Figure 40: Deforming a 3D model of a bunny


ACKNOWLEDGEMENT

It is a pleasure to thank those who made this thesis possible.

Firstly, I would like to thank my supervisor Dr. Christian Sandor for his help, support and patience during the last year. I thank him for giving me the chance to work in the Magic Vision Lab, where I had a great opportunity to improve my knowledge.

Secondly, I am very grateful to the members of the Magic Vision Lab for their support, especially Ulli, Arindam and Peter. Their advice was very helpful for my thesis. I also greatly appreciate Donald's demonstration of OpenCL. I am really proud to be a member of the Magic Vision Lab. Thank you all for giving me the chance to work with you.

Above all, I would like to thank my family. They gave me the opportunity to come here and study in Australia. Especially my wife Hong, who was a great support at all times.

Without your help, this thesis would not have been possible. Best wishes to you all!

Quang Le
10/07/2011



DECLARATION

I, Quang Nhat Le, certify that all the work presented in this thesis is my own work. It has not been submitted, either in whole or in part, to any college or university before. Wherever the contributions of others are involved, every effort has been made to indicate this clearly, with due reference to the literature and acknowledgement of collaborative research and discussions.

Quang Nhat Le
10/07/2011


ABSTRACT

Model manipulation is a major task in the field of industrial design. The traditional method, in which models are manipulated physically, carries a high risk of producing wrong or broken models. This wastes the designer's time and money on repairing and rebuilding a good model. There has been research on manipulating objects virtually to overcome the limitations of the traditional method. However, virtual manipulation is hard to use and requires a long time to learn.

In this thesis, we introduce a new system for object manipulation in an Augmented Reality environment. The user benefits from the advantages of virtual manipulation, such as safe manipulation and savings in time and money. Also, with the enhancement of the augmented reality environment, the user can interact with the system more naturally, making it easier to use than a purely virtual environment.

We propose a simpler mathematical approach to 3D object manipulation than other existing methods. The core manipulation algorithms in this system were implemented in OpenCL to utilize the power of parallel computing. Also, the system was designed as an extension of X3D in order to help other developers re-use it effectively.

A demonstration of one major part of this system, the painting function, was shown at TEDx Adelaide in 2010. The demo attracted a lot of attention and good feedback from the audience. It can be seen that this study, manipulating objects in an augmented reality environment, is a promising direction that is worth further investigation.

I. INTRODUCTION

1. Motivation

In the field of industrial design, bad or broken models are always a primary concern for designers. A model can get corrupted for various reasons, such as a wrong color choice or bad manipulation. Once a model is corrupted, a fresh new model is required to accomplish the task. This means extra money and time must be spent to rebuild the model to the state just before the previous model was corrupted.

The current state of the art allows designers to design and manipulate three-dimensional (3D) models through virtual manipulation. There is a set of modeling programs, such as Blender, ZBrush and Maya, which allow designers to create, modify, save and load models virtually. Using these tools, designers can load a new model in a second instead of spending time and money to build a new one. Moreover, the ability to undo any erroneous manipulation on a virtual 3D object makes the design experience hassle-free and easier than physical manipulation. These functions are the major advantages of virtual manipulation, helping designers overcome almost all the issues of the traditional method. However, virtual manipulation still has some limitations.

First, 3D modeling programs are hard to use and require users to spend months becoming familiar with them. Second, the input method for virtual manipulation, which is mostly based on mouse and keyboard, is a two-dimensional (2D) environment. Although there are techniques for 3D input using mouse and keyboard, such as Arcball (Shoemake K., 1992) and shortcuts, these techniques are still inadequate to provide a comfortable environment for users.

The difficulty of virtual manipulation is one of the main reasons that inspire research on techniques for building simpler and more interactive programs. Users' interaction with 3D objects will be more natural if they get a realistic sense of vision and tactile feedback rather than purely virtual manipulation (Sandor et al., 2007). Recently, Visuo-Haptic Mixed Reality (VHMR) systems that allow users to see and touch virtual objects have been investigated (Sandor et al., 2007). Some of these studies proved the possibility of creating a 3D modeling program. First, the Visionary Painting Application (Sandor et al., 2007) in VHMR allows the user to use a virtual brush and paint on a virtual teacup. Second, the Haptics-Based Deformable Solids system (Kevin et al., 2002) successfully emulated a deformable virtual clay material. Third, the Haptic Pottery Modeling system (Lee et al., 2008) enables users to deform virtual clay and create pottery models using a haptic tool. In particular, the Visuo-Haptics in Augmented Reality (VHAR) framework developed at the Magic Vision Lab (Eck U., 2011) enables developers to create high-fidelity VHAR programs in less time and with less effort, as it simplifies programming phases such as interaction with devices, calibration, and integration. This opens the possibility of creating a better 3D modeling program which allows users to directly interact with the model using a haptic device (for the sense of touch) in a real 3D environment by wearing a head-mounted display (HMD).

One of the main challenges when creating 3D modeling programs in VHAR is real-time manipulation: as the user manipulates the model, the changes should rapidly be updated on the model. This requires an efficient method for collision detection (Lee et al., 2008). In 2008, the Khronos Compute Working Group released a framework for parallel computing, OpenCL (Open Computing Language). OpenCL enables developers to utilize parallel computing on either the Graphics Processing Unit (GPU) or the Central Processing Unit (CPU). This is a potential solution to the challenge of real-time collision detection.

Given some successful implementations and some potential solutions for creating a 3D modeling program in AR, such as the VHAR framework and OpenCL, it is possible to conclude that there is good potential for creating a better 3D modeling program that will overcome the disadvantages of both the physical and the virtual manipulation methods.


1.1. Research Question

- Is it possible to create a program that can paint on a 3D object in an augmented reality environment?

- Is it possible to create a program that can deform a 3D object in an augmented reality environment?

- Can the painting and deforming features be integrated as a part of VHAR so that developers can reuse them to create other AR programs?


1.2. Contributions

This study contributes to the following fields of Computer Graphics and Augmented Reality:

- It is a simpler mathematical approach for simulating deformation compared with prior studies.

- It provides a new method for simulating object deformation by using parallel computing in OpenCL.

- It provides some extended functionality for the VHAR Framework of the Magic Vision Lab (Eck U., 2011) that allows users to create applications rapidly for their studies.

One big achievement of this project is that the painting program was shown at TEDx Adelaide in 2010 and received a lot of good feedback from the audience.

Figure 1: Paint on a shoe in AR at TEDx Adelaide 2010
www.youtube.com/watch?v=U2YE2lHULWA

This is also a good motivation for me to continue with the rest of this research.



II. REQUIREMENT ANALYSIS

This chapter analyses the requirements for this project. First, it proposes an interactivity model of the system and scenarios of using it. Second, it defines the users and categorizes them into specific groups based on their activities. Next, it analyzes the techniques required to fulfill the requirements. The final part is a summary of the requirements.


1. System Interactivity Model and Scenario

The model below is a proposal for a complete 3D Modeling Program in AR. This project does not cover all features of the program, but a part of it.

(Courtesy of Magic Vision Lab)

Figure 2: Interactivity Model

The images above describe how the system works. Basically, the system is used in three steps. In Step 1, a 3D object is created by placing a real object between two Kinects. The two Kinects obtain the depth-map information of the real object and combine it to build a 3D model of the real object. In Step 2, the newly created 3D model is loaded into the AR workspace, where it is located at a specific position. The user wears the head-mounted device to see the object and uses the Phantom Omni haptic device to modify the 3D object; for example, he can paint on or deform the object. In Step 3, after the manipulation is finished, the 3D object is saved to hard disk or printed by a 3D printer.

This project focuses on the second step, which allows the user to paint on and deform a 3D object using a Phantom Omni haptic device.




2. Users

There are two main types of users who will interact with the system.

Designer or Application User: Users of this type treat the system as a black box, without knowledge of the low-level techniques of the system. They use the provided features to create or manipulate 3D objects.

Developer: Users of this type have deeper knowledge of the system than users in the Application User group. They can inherit the provided abilities of the system, such as painting and deforming, at the implementation level to create their own programs or to experiment in other fields of AR.


3. Technical Analysis

According to the proposed system, the user will interact with AR by using a head-mounted display for graphical presentation and a haptic device for manipulating virtual objects. The head-mounted display is responsible for showing the real world merged with virtual objects (Azuma R., 1997). However, there is a challenge in how to render a virtual object at a specific location in the AR scene. This requires a method that strongly registers the relationship between the virtual object and the real scene (Vallino J. and Brown C., 1999). To overcome this challenge, we use MR Platform to match the virtual world and the real world by using markers (Uchijama et al., 2002).

As the user moves the stylus of the haptic device, it collides with the virtual object in the scene. When a collision between the stylus and the object occurs, properties of the object such as shape or color are updated at the intersection point. However, the haptic device and the head-mounted display are separate devices that work independently. Each device works in its local coordinate system without any information about the other's coordinates. Therefore, collaboration between coordinate systems is required to do precise collision detection (Vallino J. and Brown C., 1999). To solve this, we establish a relationship between the two devices' coordinate systems via the marker's coordinate system.

Figure 3: Relationship between coordinates

The figure above shows the relationship between coordinate systems. First, we place a marker at a specific location so that we can identify the transformation from the marker's coordinate system to the haptic device's coordinate system; the haptic device can therefore obtain information about the marker's coordinates. Second, MR Platform retrieves the marker's coordinates from the camera image. Routing between coordinate systems can then be done via the marker's coordinate system after some affine transformations.
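
To make the routing concrete, here is a minimal C++ sketch of composing the two affine transforms. The 4x4 matrix type and the transform names (hapticToMarker, measured once at setup; markerToCamera, delivered per frame by the tracker) are illustrative assumptions, not the actual MR Platform or VHAR API.

#include <array>

// Minimal 4x4 homogeneous transform; MR Platform / VHAR expose their own
// types, so this struct and the transform names below are illustrative only.
struct Mat4 {
    std::array<std::array<float, 4>, 4> m{};

    // Compose two affine transforms: (this * other).
    Mat4 operator*(const Mat4& o) const {
        Mat4 r;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r.m[i][j] += m[i][k] * o.m[k][j];
        return r;
    }
};

struct Vec4 { float x, y, z, w; };

// Apply a transform to a homogeneous point.
Vec4 apply(const Mat4& t, const Vec4& p) {
    return {
        t.m[0][0]*p.x + t.m[0][1]*p.y + t.m[0][2]*p.z + t.m[0][3]*p.w,
        t.m[1][0]*p.x + t.m[1][1]*p.y + t.m[1][2]*p.z + t.m[1][3]*p.w,
        t.m[2][0]*p.x + t.m[2][1]*p.y + t.m[2][2]*p.z + t.m[2][3]*p.w,
        t.m[3][0]*p.x + t.m[3][1]*p.y + t.m[3][2]*p.z + t.m[3][3]*p.w,
    };
}

// hapticToMarker is measured once when the marker is placed;
// markerToCamera is updated every frame by the tracker.
Vec4 stylusInCameraSpace(const Mat4& markerToCamera,
                         const Mat4& hapticToMarker,
                         const Vec4& stylusInHapticSpace) {
    Mat4 hapticToCamera = markerToCamera * hapticToMarker;
    return apply(hapticToCamera, stylusInHapticSpace);
}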

The next requirement of this program concerns real-time manipulation. The program should be able to respond immediately to any modification made to the model. For example, if the user pushes on the model with the haptic stylus, the intersection area should be instantly deformed. To achieve this, we need a good method to detect the collision between the stylus and the virtual model. In the program, the stylus is overlaid by a 3D model of a modification tool, which is a set of polygons. Therefore, mesh-to-mesh collision detection is required. However, the complexity of naive mesh-to-mesh detection is O(n^2) (Backman N., 2010). This requires a huge number of checks even for a simple model. We will use the Open Computing Language (OpenCL), which utilizes parallel computing, in order to reduce the checking time (Khronos Group, 2008).

The last requirement is to make this project reusable. This allows other programmers to reuse the existing functions to create other AR programs. To achieve this, the major functions, such as painting and deformation, should be implemented as libraries so that other programs can import and use them.



Figure 4: X3D in AR Program (diagram components: AR Framework, X3D Nodes, X3D Node Libraries, X3D Files, H3D Launcher, AR Program)

For this project, AR programs are defined in X3D files, a file format for presenting 3D computer graphics developed by the X3D Specification Working Group. The painting and deformation functions are therefore implemented as X3D nodes for reusability. The user defines these nodes in X3D files and loads them with the H3D launcher provided by the VHAR framework (Eck U., 2011).


4. Summary of Requirements

According to the analysis, we can summarize the requirements for the system as follows:

Functionality: The user can use the program to manipulate 3D objects in two ways, painting and deforming the object. This is the major objective of this project.

Precision: Manipulations should be applied correctly. This means the routing between the devices' coordinate systems should be correct to get accurate manipulations. This is the second major objective of this project.

Real-time Operation: The program should be able to work with a normal object (fewer than 10,000 polygons) with real-time responsiveness, specifically at least 25 frames per second.

Reusability: The program should be implemented in a way that other programmers can reuse as much as possible. As an objective of this program, the painting and deforming functions should be implemented as X3D nodes.




III. BACKGROUND

This section provides detailed information that was briefly introduced in the requirement analysis. There are two parts. The first part introduces the background of the techniques that have to be applied to the system. The second part provides a survey of some relevant research.

1. Augmented Reality Environments

Augmented Reality is a variation of Virtual Reality (VR) (Azuma R., 1997). In VR, the user works in a virtual environment without perception of the real world. On the contrary, AR is a synchronization of the real world and virtual objects, as it shows both in the same workspace (Azuma R., 1997). In AR, the user's perception of the real world is augmented with visual information generated by a computer so that the user can interact with that information (Eck U., 2011). This brings a new approach to human-computer interaction from which applications such as medical training, entertainment, and navigation can benefit (Azuma et al., 2001).


Figure 5: An AR scene (Courtesy of ECRC)

The picture above shows a real table and a real phone. There are also virtual chairs and a virtual lamp around and on the table. The user has to wear an optical see-through head-mounted display to be able to see this. The head-mounted display is an AR device used to combine the real and the virtual (Azuma R., 1997).

Figure 6: Mechanism of a See-Through Head-Mounted Display

Figure 7: A CANON Head-Mounted Display (Courtesy of Magic Vision Lab)

First, the camera records video of the real world and transfers it to the Video Compositor. Second, the head tracker tracks the movement of the head to get the head location and sends it to the Scene Generator. Based on the head location, the Scene Generator generates graphic images appropriate to the user's view and sends them to the Video Compositor. Third, the Video Compositor combines the real-world video and the graphic images and sends the result to the monitors. The user therefore sees a mix of real and virtual on the monitors.

The next section discusses MR Platform, which is the tracking system of the project.


2. MR Platform

As described in the AR environment section, the head-mounted device is responsible for recording the real scene and then adding virtual objects in the displays. The real-world scene is recorded by the HMD's camera and transferred to the simulator. The simulator then adds virtual elements to the recorded scene and renders it to the displays shown to the user.



Figure 8: AR Environment (diagram components: Simulator, HMD, Haptic; flows: Visual Information, Tactile Feedback, Human Senses)

A virtual object has to be rendered at a specific point in the workspace. This means the simulator has to know the correct position in the recorded images at which to add the virtual object. However, the recorded data from the HMD, which is just images, does not contain information about any specific location in the real world. Moreover, the user's perspective of the real world recorded by the HMD changes as the user moves his head while working. This causes differences between the frames recorded from the HMD. Therefore, it is difficult for the simulator to localize a position in the real world.

In order to overcome this issue, a tracking method is required. In fact, many tracking methods exist, classified into two groups: marker-based and vision-based (marker-less) tracking (Rolland et al., 2001). In this project, we used a marker-based system, MR Platform, as the tracking method.



Figure 9: A marker in the program

Markers are printed images used to localize the coordinate system for an AR scene. A marker has special figures that can be recognized by the simulator. Using markers, the simulator can extract their features to get the exact pose of the camera relative to them (Kato et al., 2000).

The image above is an example of a marker of MR Platform that was used in this system. In MR Platform, a maximum of one hundred different markers can be used in a program. A number is assigned to each marker to distinguish it from the others. The user can render two different objects by using two different markers.

A marker-based tracking system requires a camera to record the real-world scene and a system of pre-defined markers to be recognized in the scene. Here is the mechanism of a marker-based tracking system.

Figure 10: Tracking by marker in MR Platform

We used markers in MR Platform in order to show virtual objects at the correct location in the workspace. First, the camera records the world scene, including the markers, and transfers the image data to the computer. Second, MR Platform receives that data and extracts marker information from the scene. Once a marker is extracted from the real scene, MR Platform analyzes the marker features to identify the global coordinate system. Third, based on the retrieved coordinates, we draw virtual objects into the scene and display them to the user.


3. Haptic Devices

The interaction between human and computer is mainly based on mouse and keyboard. However, the keyboard is specifically designed for text input, and the mouse was originally designed to be a screen pointer (Subramanian et al., 2005). Although there is research into enhancing the interactivity of mouse and keyboard in 3D environments, they are still inadequate to provide a realistic 3D feeling to users (Hamam et al., 2010). For example, the user cannot feel depth or force feedback, because it is impossible to transfer these kinds of data to mouse and keyboard.

Figure 11: User paints on the virtual cup using a haptic device

Haptics, from a Greek word meaning "to touch", is a technique that provides a high-fidelity human-computer interaction method. Using haptic devices, users can experience tactile feedback in a 3D environment. For example, in the figure above, a user is painting on a virtual cup using a haptic device (Sandor et al., 2007). There are several types of haptic devices. First, the PHANTOM device, built as a robot arm, enables the user to interact with virtual objects using a stylus. Second, the string-based haptic device SPIDAR-G provides natural manipulation of virtual objects. SPIDAR-G allows users not only to touch a virtual object but also to feel its width (Kim et al., 2002).

Figure 12: PHANTOM OMNI

Figure 13: SPIDAR string-based haptic device (Courtesy of Magic Vision Lab)



4. Related Work

This section discusses prior work relevant to this research. The first part looks at methods for collision detection and how to reduce its computational complexity. The second part discusses some deformation methods. The last part introduces some typical object manipulation research and programs in both virtual and AR environments.


4.1. Collision Detection

In fact, virtual objects are images lacking physically constructed elements. If two objects overlap each other, no physical effect occurs. This causes problems in programs that perform high-fidelity physics simulation, such as surgery simulators or 3D modeling programs. Therefore, collision detection is required to overcome this problem. Collision detection is the core technique of various applications that try to simulate the physical behavior of the real world, such as haptic rendering or geometric modeling (Zhang X. and Kim J. Y., 2007).

However, collision detection requires strong computational ability for edge-face intersection checks (Smith et al., 1995). Naively, the complexity of mesh-to-mesh collision detection is O(n^2), where n is the number of polygons involved in the test (Backman N., 2010). With this complexity, checking the collision between two normal models, each made of 1,000 polygons, requires 1,000 * 1,000 = 1,000,000 checks. Moreover, if the program must run in real time, say 25 frames per second, there will be 25,000,000 checks per second. Therefore, there has been much intensive research on the collision detection problem that tries to reduce the complexity of the algorithm (Zhang X. and Kim J. Y., 2007).

There are three main approaches to collision detection: Bounding Volume Hierarchy (BVH), Distance Tracking and Spatial Division (Weipeng A. and Zhibang C., 2010). Bounding volumes are regions, normally boxes or spheres, that approximately cover the shape of an object (Smith et al., 1995).



Figure 14: Building a Bounding Volume Hierarchy

The figure above shows the algorithm for creating a Bounding Volume Hierarchy. First, bounding volumes are calculated for each object. Second, a hierarchical structure is set up based on the bounding volumes. Collision detection is then done between bounding volumes instead of checking edge-face intersections. This method quickly localizes the collision area and avoids checking irrelevant objects (Smith et al., 1995).

The second method, Distance Tracking, operates by monitoring the distances between each pair of objects. A collision detection test is done only if the distance between two objects is smaller than a limit value (Smith et al., 1995).

Figure 15: Spatial Division Hierarchical Structure

The third method for collision detection is the Spatial Division method. The idea of this method is to recursively divide the space into partitions. The division method is defined by some initial definitions such as horizontal or vertical splitting. Then a hierarchical structure is built, based on some alignment method such as axis alignment, to operate the collision detection over the whole spatial subdivision (Jin et al., 2009).


4.2. Deformation Methods

This section will not go too deep into technical matters, as a lot of mathematical knowledge is required. The objective of this section is just to outline some methods that deal with the object deformation problem.

In 1987, Terzopoulos developed a pioneering study on simulating deformable objects (Terzopoulos et al., 1987). His research was a foundation for many later studies in the same field. Recently, research on deformable objects has been conducted for the gaming industry and simulation systems (Muller et al., 2005). There have been many approaches to simulating deformable objects, such as Finite Difference approaches (Terzopoulos et al., 1988), the Finite Element Method (Muller et al., 2002), and NURBS-based Free-Form Deformations (Lamousin and Waggenspack, 1994).

In the Finite Difference approach, Terzopoulos suggested a method to break a deformable object into two parts, one deformable and the other rigid. The locations of the mesh in the rigid part were calculated by a displacement function, while the deformable part was operated independently. However, the deformation did not operate consistently over the whole body of the object, as one rotation matrix was used for the whole model (Terzopoulos and Witkin, 1988).

In 1994, a Free-Form Deformation (FFD) method based on Non-Uniform Rational B-Splines (NURBS) was introduced by Lamousin and Waggenspack. This method provided a real-time deformation effect on simple lattice objects. However, its ability to deform complex objects was not shown in the study (Henry and Waren, 1994). Based on the study of Lamousin and Waggenspack, Noble and Clapworthy proposed a new NURBS-based FFD method in 1999. In this study, the authors introduced three deforming tools to modify a bump's shape by changing the weights of the control points of the NURBS.

Figure 16: Bump shape modified by various tools (Courtesy of Noble and Clapworthy)

This method requires a way to generate a suitable NURBS FFD mesh, which was still under investigation (Noble and Clapworthy, 1999).

One famous method, used in many studies on object deformation, is the Finite Element Method (FEM). FEM is a numerical approach for solving partial differential equations over specific ranges of variables. The idea of this method is to discretize a complex continuous function into approximate functions over smaller value ranges. For example, the figure below shows a function that was partitioned into approximate functions; the red lines represent the approximate functions.

Figure 17: Discretizing a function in x in [0..1]

Figure 18: Polyhedral form of the function u(x,y) = 1 - x^2 - y (Courtesy of http://en.wikipedia.org)

In Muller's study (2004), he used the Finite Element Method to divide a structure into a number of smaller elements that turn the object into a polyhedral form. The global deformation is fragmented into local deformations at the vertices of the polyhedron. Then the deformation vectors at the vertices of the polyhedron are calculated by interpolation functions. This method provided a fast way to estimate the rotational field while the stiffness matrix changes frequently. It also provided the ability to do real-time animation of deformable objects (Muller et al., 2004).




4.3. Other Object Manipulation Research and Programs

This section shows a comparison table of some other relevant applications and research in object manipulation.

The comparison is carried out based on four criteria: Paint, Deform, Haptic, and AR Environment. The Paint, Haptic, and AR Environment criteria have Yes/No values: "Yes" indicates the application supports the corresponding ability and "No" indicates it does not. The value of the Deform column is numeric: 0, 1 and 2 represent "Not Supported", "Point-Based Deformation" and "Mesh-to-Mesh Deformation" respectively.


Application                               Paint   Deform   Haptic   AR Environment
Blender                                   Yes     2        No       No
ZBrush                                    Yes     2        No       No
Muller et al.'s Deformation, 2004         No      2        No       No
Sandor et al.'s Haptic Painting, 2007     Yes     0        Yes      Yes
Lee et al.'s Pottery Haptic, 2008         No      1        Yes      No
McDonell and Quin's Virtual Clay, 2002    No      1        Yes      No
This project                              Yes     2        Yes      Yes

Table 1: 3D Object Manipulation Research




IV. SYSTEM DESIGN

In this chapter, we provide information on how the system is designed, in both physical and logical detail. We also describe the methodologies of the two main functions of the system: painting and deforming 3D objects.


1. System Configuration

This section describes the physical structure of the system, including the hardware devices and how they are physically organized.

As discussed in the Requirement Analysis section, several devices take part in the system:

- For AR visualization, we used a CANON head-mounted device.

- For tracking, we used MR Platform and markers to set a common coordinate system into which the other devices' coordinates are integrated.

- For tactile interaction with the system, we used the Phantom Omni haptic device. We placed the Phantom Omni device in the middle of the workspace, surrounded by a number of markers. The reason we used more than one marker is to avoid losing the tracker: when the user interacts with the system, his hand or parts of the haptic device can hide a marker.

Figure 19: Working space

The figure above shows how the devices in the working space are organized. The user wears the head-mounted display to see the scene and uses the Phantom to manipulate the virtual object. The Phantom haptic device is placed in between the markers. The markers are carefully measured to localize the global coordinate system for the devices.


2. System Architecture

In the system, there are several components and packages for working with the different devices and for integration.

Figure 20: System Architecture

The Tracking package is responsible for analyzing the recorded images from the camera to extract the information of the world coordinate system. Based on this coordinate system, the graphics engine can add virtual objects at the correct position in the real-world scene.

The Haptic package is responsible for managing various types of haptic devices such as the Novint Falcon, Phantom Omni or SPIDAR. Haptic devices are the main input method of the system. Using haptic devices, the user gets tactile feedback while interacting with the system.

Calibration is a module that reduces the inaccuracy in graphical coordinates between devices. Each device has its own inaccuracy, and this becomes worse when we apply transformations to get all the devices working together.

H3D is a module that loads AR applications. It manages all devices in the system and loads the AR scene from X3D files.

This project is a part of the VHAR Framework of the Magic Vision Lab. It contributes two modules to the VHAR framework: Paint and Deform. The main focus of these modules is to provide the user with the capability of painting on and deforming a 3D object.

Figure 21: Component Diagram

According to the requirement that these modules should be reusable as much as possible, we have implemented these components as extension nodes of X3D. Other developers can reuse these modules by declaring these extension nodes in an X3D file and then running the application. The painting and deforming modules are therefore parts of the X3D package. These components inherit from the base class X3DNode of the X3D package. This helps other developers reuse these components naturally as X3D nodes.

To reduce the bottleneck of the collision detection calculation, we used an external package, OpenCL, to utilize parallel computing. The OpenCL module is executed on the GPU instead of running on the CPU.

3. System Mechanism

This section explains the mechanism of the system. First, we introduce the data structure used for models. Second, we discuss the mechanism of the two modules and provide algorithms for each module.

3.1. 3D Model

In computer graphics, a model is usually created from a set of polygons, and each polygon is determined by a number of vertices. In a 3D environment, a vertex is defined by a set of three values, which are the coordinates of that vertex on the X, Y and Z axes. The graphical data of a model is stored in a specific format that depends on the creating program. There are several well-known formats for storing 3D object information, such as Wavefront (*.obj), 3D Studio (*.3ds) and X3D (*.x3d). Based on a specific format, the program uses a suitable method and data structure to retrieve the graphical data in the file and render the 3D objects.

In this project, we used the IndexedTriangleSet data structure, which is defined in X3D, as the mechanism for storing and loading 3D objects. Basically, an IndexedTriangleSet contains two arrays. The first array is a list of 3-tuples of float values, storing the coordinates of each vertex on the X, Y and Z axes; the index of each vertex is its position in the array, starting from 1. The second array is a list of integer values, storing 3-tuples of vertex indices.

Figure 22: A decomposition of a 3D cube model

As the figure above shows, a cube is created from 12 triangles and 8 vertices. The IndexedTriangleSet structure is defined by two arrays like the following:

- Vertex Array = (-1, 1, -1,  -1, 1, 1,  1, 1, 1,  1, 1, -1,  -1, -1, -1,  -1, -1, 1,  1, -1, 1,  1, -1, -1)

- Vertex Index Array = (1, 5, 6,  1, 2, 6,  1, 5, 4,  4, 5, 8,  1, 2, 4,  2, 3, 4,  3, 4, 7,  4, 7, 8,  2, 3, 7,  2, 6, 7,  5, 6, 8,  6, 7, 8)

To store the model's information in the program, we defined a data structure with three classes: Model3D, Triangle and Point3D. A Model3D has an array of Triangles, and a Triangle has three Point3Ds. The UML class diagram is shown below:

Figure 23: Class diagram of the data structure for storing a 3D model
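
A minimal C++ sketch of this structure, with field names assumed from the class diagram rather than taken from the actual implementation:

#include <vector>

// Minimal sketch of the storage classes; field names are assumed from the
// class diagram and may differ from the actual code.
struct Point3D {
    float x, y, z;
};

struct Triangle {
    Point3D v1, v2, v3;                  // the three corners of the triangle
};

struct Model3D {
    std::vector<Triangle> triangles;     // the model as an array of triangles
};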

3.2. Painting

In 1974, Catmull introduced the texture mapping method for enhancing a 3D object's appearance, such as adding color or texture to the surface of the object (Catmull E., 1974). A polygon has to be registered to a specific region of the texture in order to apply the texture to its surface. Each vertex of a polygon is registered to a 2D coordinate on the texture. The other pixels are then interpolated to get appropriate 2D coordinates based on the registered coordinates of the vertices. The color of a pixel is determined by the color at its location in the texture.

Figure 24: Texture mapping

According to this technique, we implemented the painting effect by changing the color of the texture at the polygons that intersect with the stylus.

Figure 25: Painting effect

When painting, the user chooses a color to paint on the object. Once the stylus touches the surface of the object, we calculate the intersection area to determine which polygons need a color change. Then we change the color of those polygons to the selected color.
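
As an illustration of this color change, the following C++ sketch assumes the texture is a plain RGB byte array and that the intersection test has already produced the texture coordinate (u, v) of the contact point; the Texture type and paintAt function are illustrative, not the actual program code.

#include <cstdint>
#include <vector>

// Illustrative RGB texture; the real program paints into the model's
// texture object, so this type and the function below are assumptions.
struct Texture {
    int width = 0, height = 0;
    std::vector<uint8_t> rgb;   // width * height * 3 bytes
};

// Paint a filled circle of the selected color around the texel (u, v)
// that corresponds to the stylus contact point.
void paintAt(Texture& tex, float u, float v, int brushRadius,
             uint8_t r, uint8_t g, uint8_t b) {
    int cx = static_cast<int>(u * (tex.width  - 1));
    int cy = static_cast<int>(v * (tex.height - 1));
    for (int y = cy - brushRadius; y <= cy + brushRadius; ++y) {
        for (int x = cx - brushRadius; x <= cx + brushRadius; ++x) {
            if (x < 0 || y < 0 || x >= tex.width || y >= tex.height) continue;
            int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy > brushRadius * brushRadius) continue;
            uint8_t* p = &tex.rgb[(y * tex.width + x) * 3];
            p[0] = r; p[1] = g; p[2] = b;   // write the selected color
        }
    }
}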

3.3. Deformation

The user pushes the stylus on the surface of the virtual object to deform it. The deformation shape is determined by the intersection area between the pen and the object. Not only the stylus' tip can distort the object; the body of the pen can also make a distortion. If the body collides with the object, the area of collision is also distorted. The figure below shows various poses of the stylus interacting with the object.

Figure 26: The stylus deforms the object's surface

As described in the figure, the deformation effect is achieved by updating the positions of the vertices that collide with the stylus.

Figure 27: Updating the position of a vertex

The figure above describes how to update the position of a vertex. The new position of a vertex depends on the moving direction of the stylus. The algorithm for updating a vertex's position is as follows:


Let P_i be an element of the array of vertices P[]; let (A, d) denote a vector starting from A with direction d; let V be the moving direction of the stylus.

For every P_i in P[]:
    If (P_i collides with the Stylus) Then:
        Create a vector V_i = (P_i, V)
        Find P' = V_i ∩ Stylus
        Set P_i = P'
    End If


3.4. Subdivision

As the pen deforms the virtual object, the positions of vertices are updated. During updating, some polygons become bigger, as shown in the figure below.

Figure 28: Big polygons created when updating vertex positions

This makes the deformation rough rather than smooth. To overcome this, the big polygons have to be broken into smaller polygons to make the deformation smoother. We applied a subdivision method to each such triangle by selecting the midpoints of every edge of that triangle. We then connect these midpoints to break the triangle into 4 smaller triangles.

Figure 29: Subdivision by midpoints

A threshold is used to determine whether a triangle needs to be subdivided. First, we check every polygon of the model for subdivision. The model is stored as an IndexedTriangleSet, so every polygon of the model is a triangle and the model can be considered an array of triangles. Second, we check the lengths of the 3 edges of each triangle against a threshold, which is set to half the radius of the stylus. Third, if one of the edge lengths exceeds the threshold, we subdivide that triangle into 4 smaller triangles. Last, we remove the current triangle from the array and add the 4 smaller triangles to the array. The pseudocode for subdivision is as follows:


Let T_i be an element of the array T[]; let ∂ be the threshold; let e_i be the length of the i-th edge; let v_i be the position of the i-th vertex.

For every T_i in T[]:
    If (T_i.e1 > ∂) OR (T_i.e2 > ∂) OR (T_i.e3 > ∂) Then:
        m1 = (T_i.v1 + T_i.v2) / 2;
        m2 = (T_i.v1 + T_i.v3) / 2;
        m3 = (T_i.v2 + T_i.v3) / 2;
        Create new triangles:
            ST1 = {T_i.v1, m1, m2};
            ST2 = {m1, T_i.v2, m3};
            ST3 = {m1, m3, m2};
            ST4 = {m2, m3, T_i.v3};
        Remove T_i from T[];
        Add ST1, ST2, ST3, ST4 to T[];
    End If

Subdivision algorithm
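
A compact C++ sketch of one pass of this midpoint subdivision, building on the Model3D structure sketched in Section 3.1 (the helper names are illustrative):

#include <cmath>
#include <vector>

// Assumes the Point3D / Triangle / Model3D sketch from Section 3.1.
static Point3D midpoint(const Point3D& a, const Point3D& b) {
    return { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 };
}

static float length(const Point3D& a, const Point3D& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// One subdivision pass: every triangle with an edge longer than the
// threshold is replaced by 4 smaller triangles built from edge midpoints.
void subdivide(Model3D& model, float threshold) {
    std::vector<Triangle> out;
    out.reserve(model.triangles.size());
    for (const Triangle& t : model.triangles) {
        if (length(t.v1, t.v2) > threshold ||
            length(t.v1, t.v3) > threshold ||
            length(t.v2, t.v3) > threshold) {
            Point3D m1 = midpoint(t.v1, t.v2);
            Point3D m2 = midpoint(t.v1, t.v3);
            Point3D m3 = midpoint(t.v2, t.v3);
            out.push_back({t.v1, m1, m2});
            out.push_back({m1, t.v2, m3});
            out.push_back({m1, m3, m2});
            out.push_back({m2, m3, t.v3});
        } else {
            out.push_back(t);   // small enough, keep as is
        }
    }
    model.triangles = std::move(out);
}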

The next section discusses the OpenCL framework, which is responsible for parallel computing.



3.5. OpenCL

The objective of this section is to provide a background on parallel computing. We then show that this system can apply parallel computing to the collision detection check. Next, we discuss the capabilities of the CPU and GPU in parallel computing. Last, we introduce OpenCL, a framework for parallel computing on various platforms.

3.5.1. Parallel Computing

Traditionally, computer applications have been programmed as serial computations. When executing, a program is a series of instructions processed one by one by the CPU. After an instruction is finished, the next is processed (Blaise and Livermore, 2007).

In contrast, parallel computing makes it possible to process multiple instructions at a time. By breaking a program into many independent parts and running these parts simultaneously, the computing time is reduced significantly. According to Amdahl's law, the maximum speed-up from parallel programming instead of serial is S = 1/α, where α is the fraction of running time spent in non-parallelizable parts (Amdahl G., 1967).

This can easily be seen via the example of a puzzle game. A puzzle could be solved 10 times faster if 10 people work on it simultaneously instead of solving it sequentially.

Figure 30: People solve a puzzle in parallel

Figure 31: People solve a puzzle in sequence



However, not all programs can be parallelized. The possibility of parallelization depends on the dependency between elements of the algorithm. Bernstein (1966) gave the following conditions for checking the dependency of two processes P_i and P_j. The processes are independent if:

    I_i ∩ O_j = {}
    I_j ∩ O_i = {}
    O_i ∩ O_j = {}        ({} stands for the empty set)

where I_i and O_i are the input and output of process P_i, and I_j and O_j are the input and output of process P_j.

Next, we assess the possibility of using parallel computing for collision detection, as collision detection is the main bottleneck of this program. Basically, manipulation happens when the Phantom stylus interacts with the model. This means collision detection should be checked between every polygon of the stylus pen and all polygons of the model.

Applying the dependency conditions above, let P_i and P_j be the processes that check the collision between polygons i and j of the object and the stylus. The input is the polygons of the model. As described in the 3D Model section, an object is an array of independent polygons, and each polygon has its own vertices, independent of any other polygon's vertices. The output of each process is a Boolean value indicating whether a collision happens for a particular polygon of the model.

Figure 32: Class diagram of the Model3D structure

According to the dependency conditions, we can conclude that it is possible to do the collision detection for this project using parallel computing.


3.5.2. CPU vs. GPU

In comparison with the Central Processing Unit (CPU), the Graphics Processing Unit (GPU) is designed for highly intensive parallel computing (NVIDIA). The figure below shows the architectures of the CPU and GPU. The green blocks are Arithmetic Logic Units (ALUs); all arithmetic and logical operations are processed by the ALUs. The yellow blocks are Control (Controller) blocks; controllers are responsible for driving data between cache and ALU or between ALU and ALU. Lastly, the caches, which are the intermediate memory for the CPU, are represented by orange blocks.

In the GPU's architecture, more transistors are devoted to ALUs than in the CPU. This enables the GPU to execute more processes simultaneously than the CPU, making it better than the CPU at parallel computing.


Figure 33: Structure of CPU and GPU

The graphs below show some benchmarks comparing the capabilities of the CPU and GPU. The first is a benchmark of floating-point operations per second, and the second is the memory bandwidth that can be processed in a second. In the graphs, green lines describe the capability of the GPU and blue lines the CPU's. As we can see, the capability of the GPU in both graphs is much higher than the CPU's.

Figure 34: Benchmarks of GPU and CPU, floating-point operations per second and memory bandwidth (Courtesy of NVIDIA)

However, the GPU is not always better than the CPU. The GPU is not good for algorithms that require data sharing between threads, as it does not have controllers between ALUs and caches. For example, there is no straightforward way to do recursion on the GPU.

3.5.3. OpenCL

In 2008, the Khronos Compute Working Group introduced a framework for parallel computing called the Open Computing Language (OpenCL). OpenCL enables developers to create hundreds of threads that work simultaneously to solve a problem (NVIDIA).

OpenCL provides the capability to do parallel computing on either the CPU or the GPU, as the user requires. Basically, OpenCL syntax and language are similar to the C++ language. A program written in OpenCL is not a stand-alone program; it is a module, called a kernel, that is loaded into a main C++ (host) program at runtime. After it finishes running, the kernel returns values to be read back by the host program.

Here is an example of OpenCL code to calculate the sum of two matrices:


__kernel void Sum(                       // kernel function name
    __global const float* a,             // input matrix a, float values
    __global const float* b,             // input matrix b, float values
    int nElements,                       // number of elements
    __global float* c)                   // output matrix c = a + b
{
    int nIndex = get_global_id(0);       // get this thread's ID

    // Check the range of the array so there is no out-of-range access.
    if (nIndex < nElements)
    {
        c[nIndex] = a[nIndex] + b[nIndex];   // compute one element of c
    }
}


When the function is executed, hundreds of threads are created. Each thread has a thread ID that can be retrieved by the built-in command get_global_id(0). Each element of matrix c is calculated in an individual thread. Therefore, if there are 10 threads running simultaneously, the running time can be up to 10 times faster.
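
For completeness, here is a minimal host-side sketch (error checks omitted) of how such a kernel can be built and launched through the standard OpenCL C API; the thesis does not show its host code, so everything here is illustrative:

#include <CL/cl.h>
#include <vector>
#include <cstdio>

int main() {
    const int n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // Pick the first GPU device and set up a context and command queue.
    cl_platform_id platform;  clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, nullptr);

    // Build the Sum kernel from source at runtime.
    const char* source =
        "__kernel void Sum(__global const float* a, __global const float* b,"
        "                  int nElements, __global float* c) {"
        "    int i = get_global_id(0);"
        "    if (i < nElements) c[i] = a[i] + b[i];"
        "}";
    cl_program prog = clCreateProgramWithSource(ctx, 1, &source, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "Sum", nullptr);

    // Copy the inputs to device buffers and bind the kernel arguments.
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), a.data(), nullptr);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), b.data(), nullptr);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float),
                               nullptr, nullptr);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(int), &n);
    clSetKernelArg(kernel, 3, sizeof(cl_mem), &dc);

    // Launch one work-item per element, then read the result back.
    size_t global = n;
    clEnqueueNDRangeKernel(q, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, n * sizeof(float), c.data(), 0, nullptr, nullptr);
    printf("c[0] = %f\n", c[0]);   // expect 3.0
    return 0;
}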



V. IMPLEMENTATION

This chapter goes into the details of the implementation of the system: algorithms and mathematical explanations. In the mathematical explanations, I used LaTeX to edit the formulas and then copied them into this document as images, so the font differs from the default font.

1. Manipulation Algorithm’s Flowchart


Figure 35: Manipulation Algorithm’s Flowchart

The algorithm runs on two computational platforms, CPU and GPU, and can be stated as the following steps (a structural code sketch follows the list):

Step 1: Load a 3D model into a data structure.

Step 2: Get the Pen’s (Stylus’) position.

Step 3: Start collision detection by invoking the OpenCL module. The input for the OpenCL module is the pen’s position and the polygons of the model.

Step 4: OpenCL creates threads to check and update the positions of vertices.

Step 5: The CPU process retrieves data from OpenCL, including the updated vertices and the polygons that need to be subdivided.

Step 6: The CPU redraws the model based on the retrieved data using OpenGL.

Step 7: Go to step 2.
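The C++ skeleton below shows how these steps fit together in a frame loop. Every type and function name in it (Model3D, HapticPen, CollisionKernel, and so on) is a placeholder of our own, not the project’s actual API; the bodies are stubbed so the structure compiles.

#include <vector>

struct Pose { float position[3]; float orientation[4]; };

struct CollisionResult {
    std::vector<int> updatedVertices;      // indices of moved vertices
    std::vector<int> polygonsToSubdivide;  // polygons flagged by the kernel
};

struct Model3D {
    void applyUpdates(const std::vector<int>&) {}
    void subdivide(const std::vector<int>&) {}
};

struct HapticPen { Pose getStylusPose() { return Pose{}; } };

struct CollisionKernel {
    void run(const Pose&, const Model3D&) {}        // steps 3-4
    CollisionResult readResults() { return CollisionResult{}; } // step 5
};

void renderWithOpenGL(const Model3D&) {}            // step 6

int main() {
    Model3D model;          // step 1: 3D model loaded into a data structure
    HapticPen haptic;
    CollisionKernel kernel; // the OpenCL module
    for (;;) {                                      // step 7: repeat from step 2
        Pose pen = haptic.getStylusPose();          // step 2
        kernel.run(pen, model);                     // steps 3-4: pen pose and
                                                    // polygons go to the kernel
        CollisionResult r = kernel.readResults();   // step 5
        model.applyUpdates(r.updatedVertices);
        model.subdivide(r.polygonsToSubdivide);
        renderWithOpenGL(model);                    // step 6: redraw with OpenGL
    }
}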





2. Moving Direction

The updated positions of vertices when the user deforms the object are determined by the moving direction of the stylus. Moving directions differ at different positions on the stylus; each position on the stylus is affected by its own moving direction. To calculate the moving direction for a specific point on the pen, we calculate the transition of the stylus’ poses in two adjacent frames.


Figure 36: Moving direction at different points on the stylus

The pose of the stylus is determined by the core of the pen. The core is an abstract line that connects the centers of the top and the bottom of the stylus. The body of the stylus is a cylinder; a cylinder is a set of circles having the same radius whose centers lie on the core line. To identify the moving direction that affects a point, we have to identify the circle that contains that point. The moving direction equals the shift of that circle’s center position in two adjacent frames.

As the figure shows, the moving direction that affects M is the vector H₁H₂. The calculation for identifying H₁H₂, with symbols as in the figure above, is as follows:
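The formulas themselves were inserted as images; as a sketch (our own reconstruction and notation, not the original derivation), write $T_i$ and $B_i$ for the centers of the top and bottom of the core in frame $i$. The fractional height of M’s circle along the core is fixed on the stylus, so

\[
s = \frac{(M - B_1)\cdot(T_1 - B_1)}{\lVert T_1 - B_1 \rVert^2},
\qquad
H_i = B_i + s\,(T_i - B_i),
\qquad
\overrightarrow{H_1 H_2} = H_2 - H_1 .
\]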



3. Intersection and Updating Vertex Position

The stylus of the haptic device is overlaid by a virtual pen. The shape of the virtual pen is a combination of a cylinder and half of a sphere: the body of the pen is a cylinder and the tip is a half-sphere.

Each time we check for a collision of the pen with a vertex, it is required to check both the body and the tip of the pen against the vertex, so we would need two checks per vertex. We applied an angle pre-check method to determine exactly which part of the pen could collide with the vertex. The pre-check is based on the angle created by the core vector of the pen and the vector from the tip’s center to the vertex, where the core vector of the pen is an abstract vector from the bottom of the pen to the tip’s center. If the created angle is greater than 90 degrees, we check against the cylinder; otherwise, we check against the sphere.

The mathematical explanation of the pre-check method is as follows:
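The formulas appear as images; a sketch of the test in our own symbols: let $\vec{u} = C_{\mathrm{tip}} - C_{\mathrm{bottom}}$ be the core vector and $\vec{w} = A - C_{\mathrm{tip}}$ the vector from the tip’s center to the vertex $A$. Then

\[
\cos\theta = \frac{\vec{u}\cdot\vec{w}}{\lVert\vec{u}\rVert\,\lVert\vec{w}\rVert},
\]

and since $\theta > 90^\circ$ exactly when $\vec{u}\cdot\vec{w} < 0$, the sign of the dot product alone selects the primitive: negative means check the cylinder, otherwise check the sphere.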







3.1. Checking for Sphere

We need to identify the intersection point I of the pen tip and the moving vector at vertex A.
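The derivation is given as an image; in outline (our notation, a sketch rather than the exact formulas), let $C$ be the center of the tip’s sphere, $r$ its radius, and $\vec{d}$ the moving direction at $A$. The intersection $I = A + t\,\vec{d}$ satisfies

\[
\lVert A + t\,\vec{d} - C \rVert^2 = r^2
\;\Longrightarrow\;
\lVert\vec{d}\rVert^2\, t^2 + 2\,\big(\vec{d}\cdot(A - C)\big)\, t + \lVert A - C \rVert^2 - r^2 = 0 ,
\]

and we take the non-negative root of this quadratic so that the vertex is pushed outward onto the surface of the tip.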





After the intersection point I is identified, we update A to the position of I.



3.2. Checking for Cylinder

We need to identify the intersection point I of the pen body and the moving vector at vertex
M.
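As with the sphere case, only a sketch in our own notation: let $B$ be a point on the core axis, $\hat{u}$ the unit axis direction, and $r$ the cylinder radius. Only the components perpendicular to the axis matter:

\[
\vec{p} = (M - B) - \big((M - B)\cdot\hat{u}\big)\hat{u},
\qquad
\vec{q} = \vec{d} - \big(\vec{d}\cdot\hat{u}\big)\hat{u},
\]

and solving $\lVert \vec{p} + t\,\vec{q} \rVert^2 = r^2$ for the non-negative root $t$ gives $I = M + t\,\vec{d}$.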












After the intersection point I is identified, we update M to the position of I.





VI. CONCLUSIONS

1. Results

This system was implemented on a desktop computer with the following hardware configuration:

CPU            Pentium(R) Dual Core 2.60 GHz, core speed 1800 MHz, FSB 800 MHz
RAM            DDR2 4 GB, 400 MHz
Graphics Card  NVIDIA GTX 260, 896 MB

Table 2: System Specification

We tested the system on two functions, painting and deforming, with several 3D models. In both functions, we used two criteria to evaluate the capacity of the system. The first criterion is the number of polygons of the tested model. The second is the number of frames the system could render per second (FPS). We rated the capability of the system in each test at one of three levels: Good, Average, and Bad. A Good level is given if the FPS is greater than or equal to 24; if the FPS is in the range of 12 to 23, the result is an Average level; a Bad level is for FPS less than 12.
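Expressed as code, the rating scale is simply (a hypothetical helper of our own, not part of the system itself):

// Hypothetical helper restating the rating thresholds used in the tests.
enum class Level { Good, Average, Bad };

Level rate(int fps)
{
    if (fps >= 24) return Level::Good;     // Good: 24 FPS or more
    if (fps >= 12) return Level::Average;  // Average: 12-23 FPS
    return Level::Bad;                     // Bad: below 12 FPS
}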

The models used to test the painting function were: Bunny, Cow, Car, and a generated plane. The generated plane is produced by the computer, and its number of polygons can be set by the user for testing purposes.


Figure 37: Benchmark for Paint Function

Model                Bunny   Cow    Car     Generated Plane
Number of Polygons   814     3148   10047   50000
Frames per second    68      54     31      15



According to the results, the paint function ran quite well with most models; there was only one Average level, for the 50,000-polygon plane.

For the deformation function, we used the same models as in the painting test, except that the generated plane had only 20,000 polygons. Below is the benchmark of the deformation test.



Figure 38: Benchmark for Deformation Function

Model                Bunny   Cow    Car     Generated Plane
Number of Polygons   814     3148   10047   20000
Frames per second    54      28     15      8


In this test, we got two Good results, one Average, and one Bad. We found the cause of the bottleneck: we had to calculate the normal vector of every polygon each time the model was updated, because we had not implemented an efficient method that updates normal vectors only for the polygons that changed. When we removed the normal-vector calculation as an experiment, the results were much better. A sketch of the kind of incremental update we have in mind follows.
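The sketch below is our own illustration of such an incremental update, assuming a mesh stored as vertex and polygon arrays plus a per-frame list of dirty polygons; none of these names come from the project’s code.

#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct Polygon { int v0, v1, v2; Vec3 normal; };

// Only the polygons listed in dirtyPolygons are touched, so the cost is
// proportional to the size of the edit, not the size of the model.
void updateNormals(std::vector<Polygon>& polys,
                   const std::vector<Vec3>& vertices,
                   const std::vector<int>& dirtyPolygons)
{
    for (int i : dirtyPolygons) {
        Polygon& p = polys[i];
        Vec3 n = cross(sub(vertices[p.v1], vertices[p.v0]),
                       sub(vertices[p.v2], vertices[p.v0]));
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; } // normalize
        p.normal = n;
    }
}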


2. Summary

In summary, we have successfully met almost all the requirements of this project: we proved the feasibility of building an application that can directly manipulate 3D objects in an Augmented Reality environment. Using the system, we can benefit from the advantages of AR applications, such as ease of use, tactile feedback, and realistic 3D interaction, and we can paint or deform a 3D model directly.




Figure 39: Painting on a shoe in AR at TEDx Adelaide 2010

www.youtube.com/watch?v=U2YE2lHULWA


In 2010, we successfully integrated the paint function into the AR environment. The system was shown at TEDx Adelaide and received a lot of positive feedback from the audience.



Figure 40: Deforming a 3D model of a bunny


However, we did not have enough time to integrate the deformation part into the X3D package as an X3D node. The deformation part was only completed as an OpenGL application, implemented to prove the deformation theory of this project. We continue to work on integrating this function into the AR environment.


3. Future work

There are some problems with the system that still need to be solved. First, the deformation function is not yet integrated into the AR environment. Second, the subdivision method is not robust in some particular cases; for example, it is not effective for narrow triangles. Lastly, we hope to apply a better method for collision checking, as the complexity is currently O(n²). If we apply more advanced methods, such as nearest-neighbour search or BSP trees, we could significantly reduce the complexity of the current algorithm.



REFERENCES

An, W & Cai, Z 2010, 'Collision Detection Technology Based on Bounding Box of Virtual Reality', paper presented at the E-Product E-Service and E-Entertainment (ICEEE), 2010 International Conference on, 7-9 Nov. 2010.

Azuma, RT 1997, 'A survey of Augmented Reality', Presence: Teleoperators and Virtual Environments, vol. 6, no. 4, pp. 355-385.

Bernstein, AJ 1966, 'Analysis of Programs for Parallel Processing', Electronic Computers, IEEE Transactions on, vol. EC-15, no. 5, pp. 757-763.

Cutler, B, Dorsey, J, McMillan, L, Müller, M & Jagnow, R 2002, A procedural approach to authoring solid models, ACM, San Antonio, Texas, pp. 302-311.

Gibson, SFF 1996, '3D ChainMail: a Fast Algorithm for Deforming Volumetric Objects', December 1996.

Hamam, A, Georganas, ND & El Saddik, A 2010, 'Effect of haptics on the Quality of Experience', paper presented at the Haptic Audio-Visual Environments and Games (HAVE), 2010 IEEE International Symposium on, 16-17 Oct. 2010.

Hanjun, J, Zhiliang, L, Tianzhen, W & Yanxia, W 2009, 'The Research of Collision Detection Algorithm Based on Spatial Subdivision', paper presented at the Computer Engineering and Technology, 2009. ICCET '09. International Conference on, 22-24 Jan. 2009.

Horan, B, Najdovski, Z, Nahavandi, S & Tunstel, E 2008, '3D Virtual Haptic Cone for Intuitive Vehicle Motion Control', paper presented at the 3D User Interfaces, 2008. 3DUI 2008. IEEE Symposium on, 8-9 March 2008.

Kato, H, Billinghurst, M, Poupyrev, I, Imamoto, K & Tachibana, K 2000, 'Virtual object manipulation on a table-top AR environment', paper presented at the Augmented Reality, 2000. (ISAR 2000). Proceedings. IEEE and ACM International Symposium on, 2000.

Khronos 2008, OpenCL Programming Guide for the CUDA Architecture, NVIDIA.

Kim, S, Hasegawa, S, Koike, Y & Sato, M 2002, 'Tension based 7-DOF force feedback device: SPIDAR-G', paper presented at the Virtual Reality, 2002. Proceedings. IEEE, 2002.

Lamousin, HJ & Waggenspack, NN, Jr. 1994, 'NURBS-based free-form deformations', Computer Graphics and Applications, IEEE, vol. 14, no. 6, pp. 59-65.

Liu, P, Georganas, ND & Roth, G 2006, 'Handling Rapid Interference Detection of Progressive Meshes Using Active Bounding Trees', Journal of Graphics, GPU, and Game Tools, vol. 11, no. 4, pp. 17-37.

Müller, M, Dorsey, J, McMillan, L, Jagnow, R & Cutler, B 2002, 'Stable Real-Time Deformations', paper presented at the Proceedings of ACM SIGGRAPH Symposium on Computer Animation.

McDonnell, KT & Qin, H 2008, 'PB-FFD: A Point-Based Technique for Free-Form Deformation', Journal of Graphics, GPU, & Game Tools, vol. 12, no. 3, pp. 25-41.

Muller, M, Heidelberger, B, Teschner, M & Gross, M 2005, Meshless deformations based on shape matching, ACM, Los Angeles, California, pp. 471-478.

Noble, RA & Clapworthy, GJ 1999, 'Direct manipulation of surfaces using NURBS-based free-form deformations', paper presented at the Information Visualization, 1999. Proceedings. 1999 IEEE International Conference on, 1999.

Page, F & Guibault, F 2003, 'Collision detection algorithm for NURBS surfaces in interactive applications', paper presented at the Electrical and Computer Engineering, 2003. IEEE CCECE 2003. Canadian Conference on, 4-7 May 2003.

Rolland, JP, Baillot, Y & Goon, AA 2001, 'A survey of tracking technology for Virtual Environments'.

Sandor, C & Klinker, G 2005, 'A rapid prototyping software infrastructure for user interfaces in ubiquitous augmented reality', Personal Ubiquitous Comput., vol. 9, no. 3, pp. 169-185.

Sandor, C, Uchiyama, TKS & Yamamoto, H 2007, Exploring Visuo-Haptic Mixed Reality, Human Machine Perception Laboratory, Tokyo.

Smith, A, Kitamura, Y, Takemura, H & Kishino, F 1995, 'A simple and efficient method for accurate collision detection among deformable polyhedral objects in arbitrary motion', paper presented at the Virtual Reality Annual International Symposium, 1995. Proceedings., 11-15 Mar 1995.

Terzopoulos, D, Platt, J, Barr, A & Fleischer, K 1987, Elastically deformable models, ACM, pp. 205-214.

Uchiyama, S, Takemoto, K, Satoh, K, Yamamoto, H & Tamura, H 2002, 'MR Platform: a basic body on which mixed reality applications are built', paper presented at the Mixed and Augmented Reality, 2002. ISMAR 2002. Proceedings. International Symposium on, 2002.

Vallino, J & Brown, C 1999, 'Haptics in augmented reality', paper presented at the Multimedia Computing and Systems, 1999. IEEE International Conference on, Jul 1999.

Wen, Q 2004, 'A prototype of video see-through mixed reality interactive system', paper presented at the 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004. Proceedings. 2nd International Symposium on, 6-9 Sept. 2004.

Xinyu, Z & Kim, YJ 2007, 'Interactive Collision Detection for Deformable Models Using Streaming AABBs', Visualization and Computer Graphics, IEEE Transactions on, vol. 13, no. 2, pp. 318-329.

Yanlin, L, Ping, G, Hasegawa, S & Sato, M 2004, 'An interactive molecular visualization system for education in immersive multi-projection virtual environment', paper presented at the Image and Graphics, 2004. Proceedings. Third International Conference on, 18-20 Dec. 2004.