State of the art of VR modelling




the human factors network for the process industries









Report (D7.4):




State of the art of VR modelling




Author:

Stefan Stüring, Alberto Trasi (IFF)










































Contents

Preface
1 Introduction
2 Human Performance in Virtual Worlds
3 Virtual Reality Hardware
3.1 Visual Display
3.1.1 “Immersive” display
3.1.2 Spatially Immersive Display
3.1.3 Virtual Model Display
3.2 Acoustic sources
3.3 Input devices (manipulation)
3.4 Tracking of user location
4 Virtual Reality Software
5 Cost of Virtual Reality
6 Virtual Reality applications
7 Conclusion
Bibliography


































Preface

In the document that describes the aims of the PRISM project we can read:


...

The overall objective for the PRISM Human Factors Thematic Network is:

“the improvement of safety in the European process industries through raising awareness of, and sharing experience in, the application of human factors approaches and stimulate their development and improvement to address industry-relevant problems in batch and continuous process industries.”

The aim of the Thematic Network will be to create an extensive forum within which industry, universities, research centres and practitioners can collaborate to improve the flow of fundamental knowledge and practical experience in human factors and identify areas for improvement by collaborative effort.

Deep in detail, the main objective of this work-package is to select a methodology, for identification and control of potential failures associated with Human Factors, suitable for process industry, and to improve it using both an interdisciplinary approach and VIRTUAL REALITY techniques.

...



Again in the “Description of Work” of this project we can find the following definition of Human Factors:

“environmental, organisational and job factors, and human and individual characteristics which influence behaviour at work in a way which can affect health and safety”


No definition of Virtual Reality is reported in that document, and this could cause misunderstandings in the next steps of the project. Because of this lack of a definition, we would like to provide the partners of this Network with an explanation of what Virtual Reality is (or can be considered to be) and then to report on the state of the art in VR modelling.




This report is the natural conclusion of the presentation that took place at the Politecnico di Milano on the 10th of September 2001. In this report we try to give only the essential information about VR.

There are several excellent books that address all the different aspects of VR, from beginner to expert level, and we list some of them in the bibliography.

A great deal of information about VR (articles and demos) is available on the internet; in particular, the contents of the first three chapters of this report summarise materials available online.




1 Introduction

Virtual Reality (VR) hit the headlines in the mid-1980s, and spawned a series of conferences, exhibitions, television programmes, and philosophical debates about the meaning of reality. During the 1990s the terms Virtual Environments (VE) and synthetic environments also emerged.

Today we have virtual universities, virtual offices, virtual laboratories, virtual exhibitions, virtual wind tunnels, virtual actors, virtual studios, virtual museums, virtual doctors, virtual X, virtual Y, virtual ...

The term Virtual Reality (VR) is used by many different people and currently has many meanings. Early VR systems were described as a computer technology that enabled a user to look through a special display called a Head-Mounted Display (HMD) and, instead of seeing the normal world, see a computer-generated world.

To some people VR is a specific collection of technologies, that is an HMD, a glove input device and audio. However, the general concept goes well beyond that: VR is much more than immersive systems working with an HMD. One possible definition is:

“Virtual Reality is a way for humans to visualise, manipulate and interact with computers and extremely complex data.”

["Silicon Mirage: The Art and Science of Virtual Reality", S. Aukstakalnis & D. Blatner, Peach Pit Press, 1992]

According to this definition we must admit that, for some people and for some purposes, even “Microsoft Office” is a form of Virtual Reality! Another definition that is generally accepted is:

“Virtual Reality is about creating acceptable substitutes for real objects or environments, and is not really about constructing imaginary worlds that are indistinguishable from the real world.”

["Essential Virtual Reality fast", J. Vince, Springer Verlag, 1998]

The problem with that definition lies in the word “acceptable”, because of the subjectivity that this word implies, but the second part clarifies what is not Virtual Reality.

Basically, and this is the definition that we prefer:

“VR is about using computers to create images of 3D scenes with which one can navigate and interact.”



To navigate implies the ability to move around and explore the features of a 3D scene such as a building, a hangar, a plant and so on.

To interact implies the ability to select and move objects in a scene, such as a chair, a wrench, a pipe and so on.

It is worth pointing out from the outset that in order to navigate and interact we require real-time graphics, which implies fast computers.

We recognise from the adopted definition of VR that navigation and interaction are important features of a VR system. Since personal computer systems are capable of displaying real-time images of 3D environments that can be navigated and that support interaction, some people prefer to make a distinction between Immersive VR and non-Immersive VR.

To dictate what VR is or is not is a difficult task, and it is strictly related to the definition that we decide to follow. VR technology will undergo big developments during the next few years, and what we currently accept as VR could disappear and be replaced by something equally revolutionary.



2 Human Performance in Virtual Worlds

Computer speed and functionality, image processing, synthetic sound, and tracking mechanisms have been joined together to provide realistic, “acceptable” virtual worlds.

A fundamental advance still required for VEs to be effective is to determine how to maximise the efficiency of human task performance in virtual worlds. In many cases, the task will be to obtain and understand information portrayed in the virtual environment. Maximising the efficiency of the information conveyed in VEs will require developing a set of guiding design principles that enable intuitive and efficient interaction so that users can readily access and comprehend data. It is difficult to gauge the importance of the various human-factors issues requiring attention. It is clear that if humans cannot perform efficiently in virtual environments (thereby compromising the effectiveness of the human virtual environment interaction or the transfer of training), then further pursuit of this technology may be fruitless. In order to determine the effectiveness of a VE, a means of assessing human performance efficiency in virtual worlds is first required. This is easier said than done.

Factors contributing to human performance in VEs predictably include the navigational complexity of the VE and the degree of presence provided by the virtual world.


[Figure: human performance in a VE increases with navigational easiness and with the degree of sense of presence; immersion increases the sense of presence.]


If individuals cannot effectively navigate in VEs, then their ability to perform required tasks will be severely limited.

The degree of presence experienced by an individual may influence human performance. Presence is a factor of both the vividness of an experience and the level of interaction. It is commonly considered that operation of a VE system that provides a high degree of presence is likely to be better accomplished than one where such perceptions are not present. Little or no systematic research is available, however, to substantiate this assumption. This may be due to the lack of systematic methods for evaluating and defining the presence requirements for different applications.

Apart from supplying the user with stereoscopic images, the HMD immersed the user in the virtual world by preventing them from seeing the real world. Immersion increased the sensation of presence within the virtual world, and for some people immersion distinguished VR systems from other types of real-time computer graphics systems. For this community, a VR system had to provide a user with a 'first-person' view of the virtual world. Looking at a workstation screen was not virtual reality: it was just fast computer graphics!

In order for designers to be able to maximise human efficiency in VEs, it is essential to obtain an understanding of the design constraints imposed by human sensory and motor physiology. Without a foundation of knowledge in these areas, there is a chance that VE systems will not be compatible with their users. VE design requirements and constraints should thus be developed by taking into consideration the abilities and limitations of human sensory and motor physiology. The physiological and perceptual issues that directly impact the design of VEs include visual perception, auditory perception, and haptic and kinaesthetic perception.

One important aspect that will directly influence how effectively humans can function in virtual worlds is the nature of the tasks being performed [Stanney, 1995]. In determining the nature of user tasks it can be said that some tasks may be uniquely suited to virtual representation while others may simply be impractical. Understanding the relationship between real-world task characteristics and their corresponding virtual task characteristics is key in determining how well a task is suited for VEs.

A key question is, then: which task characteristics determine whether a particular task is appropriate for a VE? Some of the most frequently cited objective measures of task performance are task completion time, task error rate, and task learning time [Hix and Hartson, 1993]. Thus it seems reasonable to address characteristics which have significant effects on these measures. One approach is to look at task characteristics which describe who is performing the task and where the task is being performed, as well as characteristics inherent in the basic components of tasks.



3 Virtual Reality Hardware

Virtual Reality relies on specialized hardware to present information to users. Because of the complexity of human perception, the hardware associated with VE presentation has been specialized to render a single facet of human senses (especially visual perception, auditory perception, and haptic and kinaesthetic perception).

Although the interface components enable the rendering of different, separate sensory information, they share common characteristics such as the following:

- dimension rendering
- spatial resolution
- refresh and update rates
- intensity
- range
- bandwidth
- number of users supported
- "naturalness" of design and interaction (body-centered interaction)
- size, weight, comfort, and mobility
- portability
- cost

3.1 Visual Display

Visual displays come in several different forms, including head-mounted displays, CAVEs™, counterbalanced displays, and virtual workbenches. Most display types are perfectly suited for some tasks, sufficient for some other tasks, and ill-suited, impossible, or intractable for others. It is possible to distinguish three kinds of VE display:

- Head Mounted Display (HMD) and Cathode Ray Tube (CRT) based display
- Spatially Immersive Display (SID)
- Virtual Model Display (VMD)

One way to determine which is the best kind of display for a specific task is through the type of presence the task and system intend to convey.

Full immersion requires an enveloping display, so that all external (outside the VE) sights and sounds are excluded. Users become immersed in VE-generated information only. This is typically achieved through the use of an HMD.



Self-presence is the perception that, from the user's perspective, “I am here”. Immersion is not required to achieve self-presence. Peripheral motion cues, location cues, and field of view contribute to self-presence, as typically experienced through spatially immersive displays (SIDs) such as CAVEs™ and domes.

Object presence can be thought of as the degree to which users believe an object is present. Object presence is the perception that, from the user's perspective, “It is there”. A good 3D perspective and head tracking are necessary for rich object presence, typically provided through the use of a Virtual Model Display.

3.1.1 “Immersive” display

An HMD is an advanced stereoscopic system in which separate small displays are placed in front of each eye, with special optics to focus and stretch the perceived field of view. An HMD requires a position tracker in addition to the helmet.

HMDs are best-suited for single, autonomous user activity. Each user wears a separate display, which must provide a unique perspective depending upon user location, orientation, activity, and so on. In a multi-user setting, each HMD may also need to present all other users, with accurate location, orientation, and so on. Coordination of displays among a large number of users may be too computationally intensive, resulting in severe latency problems and, in effect, rendering the system useless. Tasks that require multiple users to occupy the same physical space are ill-suited for HMDs, as users contend for physical floor and room space without the ability to see each other. On the other hand, scenarios involving several remote users may be better off using HMDs. In this fashion, users are able to occupy the same virtual space without having to rely on sharing the same physical space. Coordination of displays among users over a network in real time is not trivial.



In general, HMDs are well-suited for applications where complete visual immersion or absence of distraction is required. HMDs are usually tethered by video (and audio) cabling, limiting user mobility to the cable length and support mechanism. To reduce the user fatigue associated with HMD size and weight, it is possible to install the display on an armature for support and tracking (binocular omni-oriented monitor, BOOM™).

3.1.2 Spatially Immersive Display

Spatially Immersive Displays (SIDs) provide a balance between immersion and spatial object rendering by generating stereoscopic images on physical surfaces viewed by users through liquid crystal display shutter glasses. Typically the surfaces envelop the user to some degree, creating a sense of immersion. However, shutter glasses are necessarily transparent, so that users see anyone or anything which may also be present inside and outside the computer-generated environment. The spatial quality of 3D images experienced by users of SIDs is far superior to that available through HMDs. Thus, SIDs are well-suited for spatially rich applications such as environmental walk-throughs and flight simulations. SIDs are typically considered well-suited for multi-user tasks and collaboration, but they are not well-suited for multi-user VEs that require separate images per user. The most common example of SIDs is the CAVE™. Images generated in a CAVE™ are presented on some combination of adjacent walls, floor, and ceiling of what can be thought of as a simple cube. By sheer magnitude of the display surfaces, it provides sufficient but not complete immersion. For example, in some CAVEs™, images are projected only onto three walls and the floor.

Stereo vision is accomplished by creating two different images of the world, one for each eye. The images are computed with the viewpoints offset by the equivalent distance between the eyes. There are a large number of technologies for presenting these two images. The images can be placed side-by-side and the viewer asked (or assisted) to cross their eyes. The images can be projected through differently polarised filters, with corresponding filters placed in front of the eyes. The two images can be displayed sequentially, and shutter glasses are then used to shut off alternate eyes in synchronisation with the display. When the brain receives the images in rapid enough succession, it fuses the images into a single scene and perceives depth.
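To make the viewpoint offset concrete, the following minimal sketch (illustrative only; the interocular distance value and the function name are our own assumptions, not taken from any particular VR toolkit) derives the two eye positions used to render the left and right images from a single head position:

```python
import numpy as np

# Illustrative only: derive two eye positions from one head position.
# A typical interocular distance is about 0.065 m (assumed value).
IOD = 0.065

def eye_positions(head_pos, right_axis, iod=IOD):
    """Return (left_eye, right_eye) positions offset along the lateral axis."""
    head_pos = np.asarray(head_pos, dtype=float)
    right_axis = np.asarray(right_axis, dtype=float)
    right_axis = right_axis / np.linalg.norm(right_axis)
    half = 0.5 * iod
    return head_pos - half * right_axis, head_pos + half * right_axis

left, right = eye_positions(head_pos=[0.0, 1.7, 0.0], right_axis=[1.0, 0.0, 0.0])
print(left, right)  # the two viewpoints used to render the left/right images
```

In a real system the two viewpoints would feed two render passes whose images are then presented through one of the techniques listed above.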

3.1.3 Virtual Model Display

Virtual Model Displays (VMDs) are a third class of display types providing three-dimensional visualization without complete immersion. In essence, VMDs are capable of generating virtual worlds where the effect is limited to the volume of space roughly equivalent to just inside and outside the display surface. The resulting lack of complete immersion is one of the major distinctions between VMDs and SIDs. A limited form of immersion can be created by VMDs which have very large, upright display surfaces. Another distinction is the fact that, as the name states, virtual model displays are particularly well suited for providing exocentric views of virtual models. VMDs provide excellent object presence, supporting the notion that “it is there”. These distinctions in turn suggest the types of applications and interactions best suited for VMDs. The major distinctions among specific instances of VMDs are size, dimension and pitch or tilt of the display. VMDs are well-suited for local collaboration, since multiple users can participate using the single display, and for model prototyping or other tasks requiring manipulation of some external model. Stereo vision is accomplished in the same way as in SID systems.

3.2 Acoustic sources

Studies have shown that aural feedback effectively improves user performance of tasks such as three-dimensional target acquisition and shape perception in single-user desktop VEs [Mereu and Kazman, 1996]. Thus, as the push for more useful VEs continues, researchers aim to develop more sophisticated virtual acoustic presentation. An advantage of acoustic presentation is that it increases the user's spatial awareness.

Studies showed that distance estimation via aural cues alone is very difficult, but when aural cues are used in conjunction with visual tasks, target errors are reduced and task completion times are significantly lower than those for a sound-only environment.

While the use of acoustic presentation in VEs appears helpful, it may not be necessary in all situations. As with other modes of communication, it is important to understand the difference between audio as necessarily inherent in functionality (voicemail, music browser, etc.) and audio as a complement to other sensory functionality. Given the temporal, non-persistent nature of audio, aural information must be presented in a meaningful, timely, and useful manner. The following list reports some circumstances in which acoustic presentation is desirable [Cohen and Wenzel, 1995]:



- when the origin of the message is itself a sound (voice, music)
- when other channels are overburdened (simultaneous presentation)
- when the message is simple and short (status report)
- when the message addresses temporal events ("Your process is finished")
- when warnings are sent, or when the message prompts for immediate action
- when continuously changing (dynamic) information is presented (location, metric, or count-down)
- when speech channels are fully employed (virtual teleconferencing and collaboration)
- when a verbal response is required (compatibility of media)
- when illumination or disability limits the use of vision (alarm clock)
- when the receiver moves from one place to another (employing sound as a ubiquitous I/O channel).

3.3 Input devices (manipulation)

The simplest control hardware is a conventional mouse, trackball or joystick. While these are two-dimensional devices, creative programming can use them for 6D controls. Today there are a number of 3- and 6-dimensional mice/trackball/joystick devices being introduced to the market.

Someone who has been asked to describe a VE will typically include two major devices in their response: an HMD and a data glove. No other input device is so closely connected with the perception of VEs. A natural extension of human behavior, gloves not only allow VE users to reach, grab, and touch virtual objects of interest, but also to engage in gesture interaction (e.g., pointing to an object as a means of selection). Here a glove is outfitted with sensors on the fingers as well as an overall position/orientation tracker.

To measure finger position relative to the hand, most gloves are equipped to capture finger-joint position through flex sensors. There are generally two schools of thought on capturing these positions: (1) through optical or electronic channels mounted within the glove, and (2) through mechanical linkage mounted outside the glove (a.k.a. exoskeleton). In either case, to capture the most basic hand and finger positions, gloves typically use two flex sensors per finger (used on the lower two knuckles). More sophisticated designs capture flexion in the distal joint (the finger's outermost knuckle) for more detailed gesturing. Mechanical armatures can be used to provide fast and very accurate tracking and force feedback. Such armatures may look like a desk lamp (for basic position/orientation) or they may be highly complex exoskeletons (for more detailed positions). The drawbacks of mechanical sensors are the encumbrance of the device and its restrictions on motion.

Fakespace's Pinch Glove™ is capable of reliably recognizing basic gestures without the additional cost incurred by sophisticated flex sensors. Each glove contains five electronic sensors (one in each fingertip), designed to be used in pinching combinations. Contact between any two or more digits completes a unique electrical path that is then mapped to an application-specific meaning. Multigen™ has successfully developed an entire language of gestural “pinching” for use in its SmartScene packages. Very natural gestural interaction may be achieved through intuitive pinch mappings. For example, pinching with forefinger and thumb may be used to grab a virtual object, and snapping between middle finger and thumb may be used to initiate an action.
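The principle of mapping digit contacts to actions can be sketched in a few lines of code. This is a minimal illustration only; the gesture set and action names below are invented for the example and do not reflect the actual SmartScene gesture language:

```python
# Minimal sketch: mapping pinch contacts (sets of fingertips in contact)
# to application-specific actions. Gesture and action names are hypothetical.

PINCH_ACTIONS = {
    frozenset({"thumb", "index"}): "grab_object",
    frozenset({"thumb", "middle"}): "trigger_action",
    frozenset({"thumb", "index", "middle"}): "open_menu",
}

def interpret_pinch(contacts):
    """Return the action bound to the given set of fingertips in contact."""
    return PINCH_ACTIONS.get(frozenset(contacts), "no_action")

print(interpret_pinch({"thumb", "index"}))   # -> grab_object
print(interpret_pinch({"index", "middle"}))  # -> no_action
```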

3.4 Tracking of user location

One of the most fundamental pieces of information a VE system must know is the position of users in three-dimensional space. This position is most often given in terms of location (x, y, z) and orientation (heading, pitch, roll). One of the biggest problems for position tracking is latency, that is the time required to make the measurements and pre-process them before input to the simulation engine.

In many applications, more specific user information is used, such as the location and orientation of users' hands, heads, feet, etc., to create more sophisticated interaction. For example, it is possible to track detailed articulated upper-body movements using magnetic trackers placed on users' wrists, elbows, and shoulders. Placed on gloves, helmets, body joints, and in hand-held interaction devices, three-dimensional, six-DOF position trackers are widely used for almost every positioning need, and thus may be considered the backbone of VE interaction. Many types of three-dimensional tracking techniques exist, including magnetic, mechanical, ultrasonic, and optical, as well as sophisticated video-imaging techniques.
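As a small sketch of the data such a tracker delivers (names, units and values are illustrative and not tied to any specific tracking product), a six-DOF sample and a naive latency measurement might look like this:

```python
from dataclasses import dataclass
import time

@dataclass
class Pose6DOF:
    """Location (x, y, z) in metres and orientation (heading, pitch, roll) in degrees."""
    x: float
    y: float
    z: float
    heading: float
    pitch: float
    roll: float

def read_tracker() -> Pose6DOF:
    # Placeholder for a real sensor read; returns a fixed pose for illustration.
    return Pose6DOF(x=0.10, y=1.60, z=-0.40, heading=15.0, pitch=-5.0, roll=0.0)

t0 = time.perf_counter()
pose = read_tracker()
latency_ms = (time.perf_counter() - t0) * 1000.0
print(pose, f"acquisition latency: {latency_ms:.3f} ms")
```

In a real system the latency would also include filtering and transport delays, which is why it is usually measured end to end rather than per call.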




4 Virtual Reality Software

None of the companies providing VR systems (especially the software) can be regarded as already established on the market, except perhaps Division Ltd. Therefore, there is a high risk in relying on the specific formats used by these systems.


Below we describe the characteristics of the VR application we have been developing at the Fraunhofer Institute in the department of Interactive Visualisation and Simulation (IVS).


An essential characteristic of our VR application is a considerable degree of interactivity. In contrast to many other developments, which mainly contain fly-throughs and only a few user interactions, our work is focused on user interaction with the virtual environment.

The technical system must be modelled close to reality in order to attain realistic conditions. This means the model should react as the real equipment does and should also respond to user actions in the same way. Of course, it is necessary to enable the user to perform, in the synthetic environment, all relevant actions that he would have to perform in the real world.

In addition to the geometry of objects, a lot more information needs to be modelled. This refers, for example, to the hierarchy of objects and possible parenting relations, movement constraints, causalities, properties and actions, as well as the dynamic behaviour of objects.


In general, the information needed to model the training environment can be divided into three levels:


Geometry level.
This level includes all nodes of different types (geometry, animation, trigger, level-of-detail switches, ...) as they are common to the scenario structure of most available VR systems. These entities provide the formal base for the implementation of a scenario in a runtime system. Information at this level will be imported from other systems, such as CAD applications, by appropriate converters. This level is normally unknown to engineers, instructors and pedagogues.


Object level.
Based on the information from the geometry level, this level specifies the basic objects that can be utilised in the next level for the definition of training scenarios. Each object comprises a defined set of properties. A realistic behaviour of the system is achieved by modelling causalities between the properties of different objects in case of manipulation. The object level contains all product-specific information that is already defined within the design process. Additionally, it includes characteristics that are determined by natural constraints, e.g. gravity, collision detection to avoid interpenetration of objects, etc. This level is the level of the design engineer. It contains the system-specific and technological know-how.


Instructional level.
Objects defined in the object level can be utilised here for different purposes, from design to training tasks. Design evaluations can be performed in VR before the real implementation of the product, and training tasks can be used to construct lessons. One or more lessons may be necessary to attain a certain training objective.

All three levels mentioned above depend on each other, and each level requires specialists from different areas.
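To make the object level more tangible, here is a minimal sketch (our own illustration, not the data structures actually used in the IVS application; the valve/pipe example is invented) of objects carrying properties and of causality rules that propagate the effect of a manipulation:

```python
# Minimal sketch of the object level: objects with properties, plus causality
# rules that link a manipulated property to the properties of other objects.

class SceneObject:
    def __init__(self, name, **properties):
        self.name = name
        self.properties = dict(properties)

valve = SceneObject("valve_1", open=False)
pipe = SceneObject("pipe_1", flow=0.0)

# Causality: opening the valve produces flow in the connected pipe.
causalities = [
    (lambda: valve.properties["open"], lambda: pipe.properties.update(flow=1.0)),
    (lambda: not valve.properties["open"], lambda: pipe.properties.update(flow=0.0)),
]

def manipulate(obj, prop, value):
    """Apply a user manipulation, then re-evaluate all causality rules."""
    obj.properties[prop] = value
    for condition, effect in causalities:
        if condition():
            effect()

manipulate(valve, "open", True)
print(pipe.properties)  # {'flow': 1.0}
```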

From our point of view, it is important to emphasise the distinction between the levels of content (object level and instructional level) and the level of the runtime system (geometry level). This distinction has been made in order to retain flexibility in terms of the runtime system and the hardware platform. Moreover, this separation allows the structures for representing the content to be focused on the requirements of the specific field of application. Otherwise, one would be forced to adapt to the structures of the runtime system, which have been developed on totally different premises and objectives. Since the information that describes the scenario is processed before the application is running, this division does not affect the performance of the actual simulation.


[Figure: the three modelling levels in increasing order of abstraction. Level 1, Geometry: transformation hierarchy of geometries; events, triggers, links, animations. Level 2, Object: processes (manipulations, actions), status of objects, properties, causalities. Level 3, Instructional: aim of scenario, lessons, evaluation, tasks, questions.]


5 Cost of Virtual Reality

It is not easy to speak about the costs of VR; first of all it is important to define what results we expect from it and why we want it.

As already explained in the chapter dedicated to VR hardware, in order to navigate and interact in an immersive or non-immersive VR session we first of all need a good, fast PC. We all know the cost of a good processor and a good graphics board, and we all know that the pace of technology causes this kind of hardware to become outdated in six months. The only thing that we want to remark is that the faster the PC, the better it is for VR purposes.

Depending on the system we want to build and its aim, we can decide which kind of input-output devices to integrate. It may well be that, for our purposes, a PC is all that we need!

There are then a number of specialised types of hardware that have been developed or used for Virtual Reality applications.


To give an idea of the prices of some devices, look at the list below:

Personal Computer: 3000 - 5000 €
Tracking system: 5000 - 10000 €
Pinch Glove: 2000 - 3000 €
Shutter glasses: 500 - 1000 €
Head Mounted Display: 10000 - 12000 €
Panoramic Screen (Barco): 50000 - 60000 €
...
Development (2-3 months): 25000 - 40000 €

We realise that, in order to obtain a certain degree of immersion and interaction by building a system based on a tracking system, two Pinch Gloves and an HMD, we need to invest (PC excluded) more than 20000 €.
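As a quick illustrative check of that figure (a sketch only, using the price ranges from the list above), the hardware cost of such a configuration can be summed as follows:

```python
# Illustrative check of the figure quoted above.
# Price ranges (low, high) in euros are taken from the list; the PC is excluded.
components = {
    "tracking system": (5000, 10000),
    "pinch gloves (x2)": (2 * 2000, 2 * 3000),
    "head mounted display": (10000, 12000),
}

low = sum(lo for lo, _ in components.values())
high = sum(hi for _, hi in components.values())
print(f"estimated hardware cost: {low} - {high} EUR")  # roughly 19000 - 28000 EUR
```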

To have a complete idea of the costs of VR we cannot forget the cost of developing a virtual "scenario", in terms of the cost of the software and the cost of the man-power. We report this item in the last line of the list.


Is VR expensive, then? Compared to the standard hardware and software that we usually use at work or at home, we must recognise that VR is a serious investment.


When are VR applications convenient? In the design phase of a project, VR can help engineers visualise the behaviour of what they are creating or updating; this saves time and money during the test phase. It is in fact not necessary to develop a real prototype or to begin the real construction and adjust it at a later step.

Besides the time and cost savings during the design and test phases there are other advantages: reduced immobilisation of, and loss by damage to, the real hardware for training purposes; reduced need for assistance from the manufacturer's staff at the user's facility by using network solutions; simplification of the development of training means and tools to match them with the progress made on the real system; and lower risks for the personnel (prevention of accidents).


Compared to the cost of making errors in the design phase or to the cost of damage during the training phase (especially for expensive hardware), VR can be considered cheap and desirable.





6 Virtual Reality applications

The benefits of VEs over physical environments are several. No space is required; they can be very accurate and realistic, animated, illuminated, copied, shared and navigated, and one can interact with them.

VR has applications in visualising structures developed using CAD. The benefits of seeing a product as a true 3D object, and the ability to explore issues of operator visibility, maintenance, manufacture, and physical simulation before anything is built, are immense.

VR has significant benefits in training, especially where it is expensive or dangerous to undertake the training using real systems such as planes, ships, power stations, oil rigs, etc.

VR can be used in surgical training, where a surgeon can practise surgical procedures on virtual organs without endangering the lives of real patients. VR can be used for engineering, education, design, training, entertainment, and many other applications.


In the field of plant layout, today the transition to 3D CAD/CAE environments has been completed to the greatest extent. Upon completion of the plant, the client has not only the plant as such (the product) at his disposal but also, on request, the detailed, accurate image of the plant, developed in the design and implementation process, in the form of a data model. By simulating a control console and by using the simulation model to display all functional features, the control room personnel can be trained under very realistic conditions. In contrast to training on the real object, simulation allows training personnel in dangerous situations without risk.


Today a design engineer makes use of virtual environments when factories are newly built, restructured or expanded. The engineer's planning activities centre on the 3D factory model which, on the basis of results flowing in from integrated factory planning tools, exhibits a dynamic behaviour and, as a result, reflects the changes arising in the planning process. Thus, for example, a plant can be tested under certain load situations by using a material flow simulation. Manufacturing bottlenecks can be identified and plants can be dimensioned in accordance with the desired performance parameters.


The assembly of complex products requires highly qualified personnel and a corresponding outlay for training, so that the job can be performed efficiently and with high quality. To a considerable extent, the time necessary from the conclusion of the design phase up to reaching efficient production of a new product depends on how quickly the personnel can become qualified for the tasks on the new product. Training systems using virtual reality make it possible to train personnel before the real new product, or even prototypes of it, exist. As regards time and cost savings as well as the improvement of quality, this holds great potential, for instance, for companies in the automobile industry. The changeover from one model to its successor could be considerably better organised, i.e. more smoothly and economically, if the assembly workers could be trained in time and appropriately and could familiarise themselves with the new product. The employees would already have acquired extensive knowledge and practical skills before production of the new product begins, because they were already able to train under realistic conditions in a virtual environment.





7 Conclusion

In line with the increased performance of computers, especially concerning 3D graphics, progress is being achieved very rapidly in the field of Virtual Reality. As PCs with high-performance graphics boards are now available at reasonable prices, interactive 3D applications are becoming increasingly attractive to different application areas. However, the progress made concerning the development of highly interactive synthetic environments is not very large. Commercially available VR systems provide comprehensive sets of functionality, but they are multi-purpose applications.

In general, one can observe that existing VR applications provide only very limited interactivity, which does not let the user be a truly active user.

Most systems' architectures and data structures provide only poor support for training applications. Additionally, none of the companies providing VR systems can be regarded as already established on the market, except perhaps Division Ltd. Therefore, there is a high risk in relying on the specific formats used by these systems.






For any questions on VR please contact us or visit our web site:

http://www1.iff.fhg.de/iff/pvt/pvtseiten/ivs/english/indexe.html







Bibliography

Aukstakalnis S., Blatner D. (1992). "Silicon Mirage: The Art and Science of Virtual Reality". Peach Pit Press.

Cohen M., Wenzel E. (1995). "The design of multidimensional sound interfaces". In Virtual Environments and Advanced Interface Design, chapter 8, Oxford University Press.

Gabbard J., Hix D. (1997). "A Taxonomy of Usability Characteristics in Virtual Environments". Deliverable to Office of Naval Research. (http://csgrad.cs.vt.edu/~jgabbard/ve/taxonomy/)

Hix D., Hartson H. (1993). "Developing User Interfaces". John Wiley & Sons, Inc.

Isdale J. (1993). "What Is Virtual Reality? A Homebrew Introduction". (Document available online at: http://sunsite.unc.edu/pub/academic/computer-science/virtual-reality/papers/whatisvr.txt)

Mereu S., Kazman R. (1996). "Audio enhanced 3D interfaces for visually impaired users". In Human Factors in Computing Systems, CHI'96 Conference Proceedings.

Stanney K. M. et al. (1998). "Human Factors Issues in Virtual Environments: A Review of the Literature". Presence, Vol. 7, No. 4, August 1998, 327-351.

Vince J. (1998). "Essential Virtual Reality fast". Springer Verlag. (http://www.essential-series.com/essential_virtualreality_chapter.htm)


The following bibliography is a short list of books and articles related to VR and is not strictly related to this report.

Badler, N. I., Phillips, C. B., and Webber, B. L. (1993). Virtual Humans and Simulated Agents. New York, NY: Oxford University Press.

Barfield, W. and Furness, T. (Eds.). (1995). Virtual Environments and Advanced Interface Design. Oxford, UK: Oxford University Press.

Best, K. (1994). The Idiot's Guide to Virtual World Design. Seattle, WA: Little Star.

Biocca, F. and Levy, M. R. (Eds.). (1995). Communication in the Age of Virtual Reality. Hillsdale, NJ: Lawrence Erlbaum Associates.

Boff, K. R., Kaufman, L. and Thomas, J. P. (Eds.). (1986). Handbook of Human Perception and Human Performance. New York, NY, USA: Wiley.

Burdea, G. (1996). Force & Touch Feedback for Virtual Reality. New York: John Wiley & Sons.

Cotton, B. and Oliver, R. (1993). Understanding Hypermedia: From Multimedia to Virtual Reality. London, UK: Phaidon Press.

Earnshaw, R., Jones, H., and Gigante, M. (1993). Virtual Reality Systems. Academic Press.

Earnshaw, R., Vince, J. and Jones, H. (1995). Virtual Reality Applications. London: Academic Press.

Eddings, J. (1994). How Virtual Reality Works. Emeryville, CA: Ziff-Davis Press.

Hollands, R. (1996). The Virtual Reality Homebrewer's Handbook. New York, NY: John Wiley & Sons.

Iovine, J. (1995). Step into Virtual Reality. Blue Ridge, PA: Tab Books.

Kalawsky, R. (1993). The Science of Virtual Reality and Virtual Environments. Reading, MA: Addison-Wesley.

Langdell, T. (1994). Virtual Reality Beyond Imagination. Indianapolis, IN: Sams Publishing.

Levy, J. R. (1994). Create Your Own Virtual Reality System. New York, NY: McGraw.

Loeffler, C. E. and Anderson, T. (Eds.). (1994). The Virtual Reality Casebook. New York, NY: Van Nostrand Reinhold.

MacDonald, L. and Vince, J. (1993). Interacting with Virtual Environments. New York, NY: John Wiley and Sons, Inc.

McLellan, H. (1994). Virtual Reality: Case Studies in Design for Collaboration & Learning. Westport, CT: Meckler Corp.

Perelman, B. (1993). Virtual Reality. New York, NY: Segue Books.

Rix, J., Haas, S. and Teixeira, J. (Eds.) (1995). Virtual Prototyping: Virtual Environments and the Product Design Process. London, UK: Chapman Hall.

Rothman, P. (1994). Intelligent Agents, Artificial Intelligence & Virtual Reality. Indianapolis, IN: Sams Publishing.

Shafer, R. (1993). Creating Virtual Reality: The Affordable Way to Explore Cyberspace. Indianapolis, IN: Sams Publishing.

Shneiderman, B. (1992). Designing the User Interface: Strategies for Effective Human Interaction (2nd ed.). Reading, MA: Addison-Wesley.

Stampe, D., Eagan, J. and Roehl, B. (1993). Virtual Reality Creations: Create & Program Virtual Worlds on Your PC. Corte Madera, CA: Waite Group Press.

Stuart, R. (1996). The Design of Virtual Environments. McGraw-Hill Series in Visualization. New York, NY: McGraw-Hill.

Warwick, K., Gray, J. and Roberts, D. (Eds.) (1993). Virtual Reality in Engineering. Piscataway, NJ: IEEE.

Wilson, J. R., D'Cruz, M. D., Cobb, S. V. G. and Eastgate, R. M. (1995). Virtual Reality for Industrial Application: Opportunities and Limitations. Nottingham: Nottingham University Press.

Wodaski, R. (1995). Absolute Beginner's Guide to Virtual Reality. Indianapolis, IN: Sams Publishing.