64. 3D Item-to-3D Item or user-to-3D Item collisions shall trigger a signal/event.

[H]
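The signal/event behaviour this requirement calls for can be sketched as a simple publish/subscribe dispatch. The names below (CollisionEvent, EventBus) are illustrative only and are not taken from any particular VR toolkit:

```python
# Minimal sketch of requirement 64: any item-to-item or user-to-item
# collision emits one event to all subscribed handlers.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CollisionEvent:
    source: str   # colliding entity: a 3D Item id or "user"
    target: str   # the 3D Item it collided with

class EventBus:
    def __init__(self) -> None:
        self._handlers: List[Callable[[CollisionEvent], None]] = []

    def subscribe(self, handler: Callable[[CollisionEvent], None]) -> None:
        self._handlers.append(handler)

    def emit(self, event: CollisionEvent) -> None:
        # Deliver the collision to every registered handler in order.
        for handler in self._handlers:
            handler(event)

# Usage: a user-to-3D Item contact raises one event.
bus = EventBus()
log: List[str] = []
bus.subscribe(lambda e: log.append(f"{e.source} hit {e.target}"))
bus.emit(CollisionEvent("user", "lever_01"))
```

A real VR system would generate these events from its physics/collision stage; the sketch only shows the signal/event contract the requirement asks for.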


65. 3D Items shall be rendered in the following ways: solid or wireframe, with or without texture (default), or turned off.

Deliverable N. D1.5_2
Dissemination Level - PU
Contract N. IST-NMP-1-507248-2
10/12/2004
Alenia Spazio S.p.A.

[H]
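The rendering states named in requirement 65 map naturally onto a small enumeration. A minimal sketch, with illustrative type names:

```python
# Sketch of requirement 65: solid or wireframe, with or without
# texture (textured solid is the default), or turned off entirely.
from dataclasses import dataclass
from enum import Enum, auto

class RenderMode(Enum):
    SOLID = auto()
    WIREFRAME = auto()
    OFF = auto()          # the item is not rendered at all

@dataclass
class ItemRenderState:
    mode: RenderMode = RenderMode.SOLID
    textured: bool = True  # default per requirement 65

    def is_visible(self) -> bool:
        return self.mode is not RenderMode.OFF

# Usage: switch an item to untextured wireframe for inspection.
state = ItemRenderState()
state.mode = RenderMode.WIREFRAME
state.textured = False
```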


66. The visual properties of the 3D items shall be derived from measurements taken in reality.

[L]


67. It shall be possible to validate that the visual properties of the 3D items correspond to reality.

[M]


68. A VR system shall support the reflection of the environment in a planar or spherical surface (rear view mirror).

[H]


69. A VR System shall allow the behaviours of 3D Items to react realistically when the user performs a specific gesture (example: a window opens when the user turns the lever).

[L]


4.4.5. Immersive Devices Requirements

70. A VR system shall render images at a minimum of 30 frames per second.

[H]


71. A VR system's rendering stage shall have a delay of less than 33 milliseconds. See also requirement 39.

[H]
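Requirements 70 and 71 are two sides of the same budget: a 30 fps floor gives each frame roughly 1000/30 ≈ 33.3 ms, and the rendering stage alone must stay under 33 ms. A hypothetical per-frame check (constant and function names are illustrative):

```python
# Sketch relating the 30 fps floor (req. 70) to the 33 ms delay bound (req. 71).
TARGET_FPS = 30
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS   # ~33.33 ms total per frame
MAX_RENDER_DELAY_MS = 33.0              # requirement 71

def frame_within_budget(render_ms: float) -> bool:
    """True when the rendering stage meets the 33 ms delay bound."""
    return render_ms < MAX_RENDER_DELAY_MS

def achieved_fps(frame_times_ms: list) -> float:
    """Average frame rate over a sample of measured frame times."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
```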


72. A VR system shall render images at a minimum resolution of 1280 by 1024 pixels.

[H]


73. A VR system shall render 3D items with higher quality* than traditional real-time graphics applications (Gouraud shading and one-layered texture maps).

[H]

*) Means: multipass rendering allowing bump- and lightmaps, and cube-mapped environments simulating BRDF reflectance.


74. A VR system shall provide wireless input devices to the user.

[M]


4.4.6. Interaction Requirements

76. A VR System shall support at least Level 5 of the interaction levels described in ANNEX 1.4.

[H]


77. A VR System shall allow a “first person” ergonomic analysis.

[H]


78. A VR System shall allow commanding via voice recognition/speech input (default: English).


[H]


79. A VR system's input devices (motion tracking) shall have six degrees of freedom.

[H]


80. A VR system's input devices (motion tracking) shall have a spatial resolution of 1 millimetre and an angular resolution of 0.5 degrees.

[M]


81. A VR system's input devices (motion tracking) shall have a transport delay of less than 33 milliseconds.

[H]
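Requirements 79 to 81 together constrain a tracked pose: six degrees of freedom, 1 mm and 0.5 degree resolution, and under 33 ms transport delay. A hypothetical sample record (field and constant names are illustrative, not from any specific tracking SDK):

```python
from dataclasses import dataclass

@dataclass
class TrackedPose:
    # Six degrees of freedom (requirement 79):
    x_mm: float      # position, millimetres
    y_mm: float
    z_mm: float
    roll_deg: float  # orientation, degrees
    pitch_deg: float
    yaw_deg: float
    transport_delay_ms: float  # sensor-to-host latency for this sample

SPATIAL_RES_MM = 1.0    # requirement 80: positional resolution
ANGULAR_RES_DEG = 0.5   # requirement 80: angular resolution
MAX_DELAY_MS = 33.0     # requirement 81: transport delay bound

def sample_acceptable(p: TrackedPose) -> bool:
    """A tracker sample meets requirement 81 when its delay is under 33 ms."""
    return p.transport_delay_ms < MAX_DELAY_MS
```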


82. A VR system shall provide a way for the user to directly interact with the environment by natural means (the user’s hands/fingers in the vehicle interior, looking around with head movements).

[H]


83. A VR system shall allow the user to get haptic and force feedback from interaction with the environment.

[L]


84. A VR system shall allow the user to get sound feedback from the environment.

[M]


94. A VR System shall use standardised VR immersive devices according to TBD Standard.

[M NEW]


4.4.7. Portability Requirements

85. A VR System should allow its GUI to be visualized on a wireless PDA.

[M]


4.4.8. Output Requirements

86. All reports generated by a VR System shall be in an MS Word compatible format.

[H]


87. A VR System shall allow exporting functional algorithms.

[H]


88. A VR System shall allow reporting task instances.

[H]


89. A VR System shall allow extracting the task timeline in an MS-WinProject/Sure-Track compatible format.

[H]



90. A VR System shall allow reporting, in list and graph form, the 3D items/relations present in a 3D Scenario, together with their functions and constraints.

[H]


91. A VR System shall allow exchanging all information with other similar VR installations. A list of changes with respect to the information already present should be reported.

[H]


92. A VR System shall allow recording user and avatar movement in a video format (e.g. AVI, MPEG-2, VRML).

[M]


93. A VR System shall allow recording user and avatar speech in an audio format (e.g. WAV, MP3).

[H]


4.4.9. Help Requirements

75. A VR System shall provide online HELP (on procedures and available commands).

[H]




5. Conclusions

Within this first version of the User Requirements document, a first approach to the end-users’ common requirements was attempted. In more detail, two validation scenarios were selected to serve as guidelines for the next versions of the URD. The requirements set out in this document are common to the two scenarios and are further analysed in the annexes that follow.

This first approach will be further expanded in order to group and synthesize common requirements coming from all INTUITION end-users as well as UF members. We will then be able to perform a vertical analysis and go into depth on more WG-oriented requirements as well.





References

Badler, Norman; Rama Bindganavale; Juliet Bourne; Martha Palmer; Jianping Shi; and William Schuler. (1998). A Parameterized Action Representation for Virtual Human Agents. In Proceedings of the First Workshop on Embodied Conversational Characters, 1-8.

Barfield W, Zeltzer D, Sheridan T, Slater M. (1995). Presence and performance within virtual environments. In: Barfield W, Furness III TA, editors. Virtual environments and advanced interface design. New York: Oxford Press, 473-513.

Bloom, Benjamin S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6):4-16.

Gil, Y. (1992). Acquiring Domain Knowledge for Planning by Experimentation. Ph.D. diss., School of Computer Science, Carnegie Mellon Univ.

Delin, Judy; Anthony Hartley; Cecile Paris; Donia Scott; and Keith Vander Linden. (1994). Expressing Procedural Relationships in Multilingual Instructions. Proceedings of the Seventh International Workshop on Natural Language Generation.

Holmes, W.M. (1991). Intelligent tutoring systems for critical skills development in a real-time environment. Proceedings 1991 Conference on Intelligent Computer-Aided Training.

Huffman, S. B.; and Laird, J. E. (1995). Flexibly instructable agents. Journal of Artificial Intelligence Research, 3:271-324.

Kennedy RS, Lanham DS, Drexler JM, Massey CJ, & Lilienthal MG. (1997) A comparison of cybersickness incidences, symptom profiles, measurement techniques, and suggestions for further research. Presence - Teleoperators and Virtual Environments 6:638-644.

Kennedy RS, Stanney KM. (1996) Postural instability induced by virtual reality exposure: Development of a certification protocol. International Journal of Human-Computer Interaction 8:25-47.

Osborne S., Ware C. (1990) Exploration and virtual camera control in virtual three dimensional environments. In Proceedings of the 1990 Symposium on Interactive 3D Graphics, Special Issue of Computer Graphics, Vol. 13, 175-183.

Rickel, Jeff; and W. Lewis Johnson. (1999). Animated Agents for Procedural Training in Virtual Reality: Perception, Cognition, and Motor Control. Applied Artificial Intelligence 13:343-382.

Rickel, Jeff; and W. Lewis Johnson. (2000). Task-Oriented Collaboration with Embodied Agents in Virtual Worlds. Embodied Conversational Agents. Boston: MIT Press.

Rizzo AA, Buckwalter JG, Neumann U, Kesselman C, Thiebaux M. (1998) Basic issues in the application of virtual reality for the assessment and rehabilitation of cognitive impairments and functional disabilities. CyberPsychology and Behavior 1:59-79.

Rose FD, Attree EA, Johnson DA. (1996). Virtual reality: an assistive technology in neurological rehabilitation. Current Opinion in Neurology 9:461-467.

Rose FD, Johnson DA, Attree EA. (1997). Rehabilitation of the head-injured child: basic research and new technology. Pediatric Rehabilitation 2:3-7.

Regian, J.W. (1986) An assessment procedure for configurational knowledge of large-scale space. Unpublished doctoral dissertation, University of California, Santa Barbara.

Regian, J. W., Shebilske, W., and Monk, J. (1992) A preliminary empirical evaluation of virtual reality as an instructional medium for visual-spatial tasks. Journal of Communication, 42(4), 136-149.

Ruddle R.A., Payne S.J. (1997) Navigating buildings in "desk-top" virtual environments: Experimental investigations using extended navigational experience. Journal of Experimental Psychology 3:143-159.

Sleeman, D. and J. S. Brown, eds. (1982). Intelligent Tutoring Systems. Academic Press.

Stanney KM, Salvendy, Deisinger, Ellis, Ellison, Fogleman, Gallimore, Hettinger, Kennedy RS, Lawson, Maida, Mead, Mon-Williams MA, Newman, Piantanida, Reeves, Riedel, Singer, Stoffregen, Wann JP, Welch RB, Wilson, Witmer BG. (1998) After effects and sense of presence in virtual environments: Formulation of a research and development agenda. Report sponsored by the Life Sciences Division at NASA Headquarters. International Journal of Human-Computer Interaction 10:135-187.

Stoakley R., Conway M. J., and Pausch R. (1995) Virtual reality on a WIM: interactive worlds in miniature. In Proceedings of ACM CHI’95 Conference on Human Factors in Computing Systems, 265-272.

Wang, X. (1996). Learning Planning Operators by Observation and Practice. Ph.D. diss., School of Computer Science, Carnegie Mellon Univ.

Wenger, Etienne. (1987). Artificial Intelligence and Tutoring Systems. Los Altos, CA: Morgan Kaufmann.

Wickens CD, Baker P. (1995). Cognitive issues in virtual reality. In: Barfield W, Furness III TA, editors. Virtual Environments and Advanced Interface Design. New York: Oxford Press, 514-541.

Wilson JR. (1996) Effects of participating in virtual environments: a review of current knowledge. Safety Science 23:39-51.

Young, R. Michael. (1997). Generating Descriptions of Complex Activities. Ph.D. thesis, University of Pittsburgh.


6. ANNEXES

6.1. Training vs. VR Utilisation Theoretical Considerations

6.1.1. Annex n. 1.1: HFE Training vs VR analysis

For training purposes, VR certainly offers a degree of flexibility in presentation greatly exceeding that of other forms of computer-assisted tools. A virtual training environment allows for total control of the presentation by the trainer. For the trainee as well, the ability to customize the virtual environment to the individual opens up a range of new possibilities.

In addition to its versatility, VR offers a number of other features that are thought to make it a particularly effective teaching and training tool. Foremost is the active involvement of the users. Indeed, particularly in immersive VR, participation in the virtual environment is to some extent inescapable (Rose et al., 1997). VR can actually provide benefits that are not only economic, such as savings in staffing, time and equipment, but also psychological, since a well-designed VR simulation can be motivating and enjoyable to the trainee (Barfield and Weghorst, 1993, cited in Barfield et al., 1995): entertaining, enjoyable, or competitive aspects incorporated into the learning situation can increase acceptance and enhance motivation.

Through VR, involving the use of advanced technologies and with the aid of specially designed transducers and sensors, users can interact with displayed images, moving and manipulating virtual objects and performing other actions in a way that produces a feeling of actual presence (immersion) in the simulated environment.

The unique features and flexibility of VR give it extraordinary potential for use in learning and work-related applications. It permits users to experience and interact with a life-like model or environment, in safety and at convenient times, while providing a degree of control over the simulation that is usually not possible in the real-life situation. Trainees can use virtual reality to learn to perform routine tasks without pressure, to learn simple components of more complex tasks, and to react to infrequently occurring situations such as the preferred response to dangerous events. Teaching strategies such as errorless learning are easily implemented, and the ability to make mistakes without negative consequences in the benign, forgiving virtual environment can have positive implications for learning (Rizzo et al., 1998).

Because of its many advantages, VR seems to be an ideal training medium. However, inherent to VR training and education is the assumption that the training that takes place within a virtual environment transfers to the real world. According to Rose et al. (1997), most reports regarding transfer are anecdotal and there has been insufficient effort expended toward demonstrating under what conditions transfer takes place, if at all.

The essence of the virtual environment is that participants can interact with it in a way that allows them to relate the experience to the real world. This is the attribute of "presence", a subjective impression of "being there" that is presumed to enhance the transfer of knowledge and skills (Stanney et al., 1998). The determinants of presence include the extent of sensory information, the quality of the display, the ease of navigation, the ability to modify the environment, and how comfortable the user feels using a computer (Barfield et al., 1995). Given the importance ascribed to achieving presence in virtual environments, it is noteworthy that necessary details concerning these determinants have yet to be identified and quantified. Indeed,

according to Barfield et al. (1995), it is not at all clear when presence is a help and
when it is, in fact, a distraction.

Among the studies that have attempted to examine transfer from computer-generated to real environments, many have focused on flight simulation which, until recently, has been the best known and most sophisticated commercial application of computer-simulated training. In general, simulator training leads to better flight performance, but this depends on the type of task. In terms of spatial learning and navigational rehearsal, for example, Williams and Wickens (1993) concluded that greater realism in navigational rehearsal flights did not lead to better performance in a transfer environment. Thus, if a helicopter pilot rehearsing a dangerous rescue mission wants to learn the specific features of the approaching ground terrain and the rescue site, simply studying a 2-D map can lead to retention and performance that is as good or superior to training in a virtual environment (Wickens and Baker, 1995). It may be that the mental effort allocated towards other aspects of on-line performance takes away from the mental effort that is directed solely toward navigational learning (Wickens and Baker, 1995).

The findings from flight simulation research have also been demonstrated in more general studies of spatial learning. Wickens et al. (1994, cited in Wickens and Baker, 1995) examined how well people understood the shape of a complex 3-D surface. Previous experience with a highly realistic presentation (3-D, stereo) of aspects of the surface was better than a less realistic presentation (2-D, mono) at helping subjects answer questions about it as they were viewing it. However, it didn’t help much for later memory and understanding of the surface shape. Furthermore, the addition of stereo to the 3-D perspective view had no added benefit to later understanding.

A more recent study found that passive, monocular viewing of a complex layout of objects from a single vantage point led to greater ability to reproduce the array than active, binocular exploration of a virtual replica (Arthur et al., 1997). Again, these studies demonstrate that VR visualization is not always an advantage, and richer or more complex representations do not necessarily provide any greater benefit for the long-term transfer of learning.

This is not to say that significant transfer of spatial knowledge from a virtual to the real world does not occur. More research is needed to determine when transfer to real-life situations occurs, and to what extent. Wickens and Baker (1995) examined the cognitive factors influencing VR learning in relation to the substantial body of research on the psychology of learning. They argue that the most successful way to teach concepts is by having VR exposure in conjunction with, and related to, alternative, more abstract representations of the same material.

Thus, teaching physics concepts by VR alone is better than by lecture, but the best results occur when VR training is complemented with presentations of the same phenomenon in a variety of other forms, including verbal descriptions, graphs, and symbols or equations. Wickens and Baker (1995) examined another important point that could have repercussions also in the present work: the VR side effects.

VR Lateral Advantages

One important lateral advantage due to the application of VR to the training activity is the analysis of a job site and its component tasks. The representation of a specific work site as a virtual environment enables the ergonomist to view it from a variety of angles and approaches, providing a greater understanding of how the tasks

are performed. Moreover, it is easy to make changes in the location and orientation of virtual support surfaces and the placement of virtual equipment or work tools.

One can then test the effect these changes will have on both trainee performance and on the usability of the finished product. Applicant suitability for the task can also be assessed during the initial screening of the mission candidate. VR can then be a valuable addition to the screening and interviewing process (Rizzo et al., 1998), enabling the simultaneous, objective measurement of characteristics as diverse as spatial memory, curiosity, ability to stay focused on a goal, learning curves, orientation to unfamiliar settings, reflexes, as well as evaluating the trainees’ performance in simulations of the actual tasks.

Besides the lateral positive advantages of VR utilisation, there are also potential negative effects that could have repercussions on the present VR application. They therefore need to be investigated and addressed.


VR Side Effects

Many users experience physical side effects during and after exposure to virtual environments. Effects noted while using VR include nausea, eyestrain and other ocular disturbances, postural instability, headaches and drowsiness. Effects noted up to 12 hours after using VR include disorientation, flashbacks, and disturbances in hand-eye coordination and balance (Wilson, 1996). Many effects appear to be caused by incongruity between information received from different sensory modalities, and by the lag time between the user’s movement and the resulting change in the virtual display (Kennedy and Stanney, 1996).

These problems are expected to improve with the development of faster workstations and the modification, or elimination, of headsets in immersive VR systems. For example, how can the design and fit of head mounted displays and other tracking devices be improved in order to provide better feedback and minimize user discomfort and irritation? Cognitive factors, particularly those parameters affecting the interaction between the person and the virtual environment, need to be investigated, and the substantial body of knowledge on human perception, cognition, and performance needs to be incorporated into the design and use of the system. Which sensory modalities (visual, auditory, tactile, vibration, force, vestibular) are most useful to the VR participant, and when is the provision of multiple simultaneous sensory feedback more of a hindrance/nuisance than a help/benefit?

Given the wide variety of technologies available, with a considerable range in price/cost, we need to know how realistic virtual environments have to be, and to what degree a sense of presence is required, in order to accomplish the objectives of the virtual experience for training purposes.

Another point that needs to be addressed is the identification of the types of knowledge/skills we need to transfer to the trainees in order to cover the specific training tasks. To achieve this goal, we should identify a cognitive taxonomy that allows decomposition of tasks in terms of requisite knowledge/skill types and, starting from this point, consider a general learning theory that specifies how those knowledge/skill types are acquired by humans.


6.1.2. Annex n.1.2: A Cognitive Taxonomy and Learning Theory

Designers and evaluators of immersive virtual reality systems have many ideas concerning how virtual reality can facilitate learning. However, there is little information concerning which of virtual reality's features provide the most leverage for enhancing understanding, or how to customize those affordances for different learning environments. In part, this reflects the truly complex nature of learning. Features of a learning environment do not act in isolation; other factors such as the concept to be learned, individual characteristics, the learning experience, and the interaction experience all play a role in shaping the learning process and learning outcomes.

Since high-performance VR systems are relatively expensive today, as compared to other platforms for computer-based instruction, it is important to know what types of tasks benefit from VR as an instructional technology, and which tasks are just as well taught in other ways. Data are sparse with regard to the task-specific utility of VR as an instructional medium. Regian, Shebilske and Monk (1992) demonstrated that subjects can learn and then perform two kinds of tasks in a VR.
The tasks studied were:

a) navigation in virtual-spatial environments,

b) procedural console operations tasks on virtual consoles.


Navigational tasks require individuals to learn and then navigate through a large-scale spatial environment. Because configurational knowledge of spatial layout is so important to navigational skill acquisition (Regian, 1986), and because VR supports simulated immersion in a spatial environment, we believe that VR may be an excellent interface for teaching large-scale spatial navigation skills (Regian et al., 1992). Procedural tasks (e.g., Holmes, 1991) require the learning of a series of steps, often contextualized in a small-scale spatial environment.

Because practice is so important to procedural skill acquisition (Anderson, 1982), it has been suggested that VR may provide an excellent interface for training procedural skills (Kreuger, 1991). The Regian et al. (1992) data suggest that it will be possible to design virtual training environments and that trainees will learn to perform tasks in these environments.

But VR alone is merely a display medium for a simulation, which is not the same as an instructional system. The effectiveness of VR as a medium for instruction depends on the pedagogy that is applied in the context of the VR. We then need to postulate a well-supported theory of human learning. This theory shall include a comprehensive taxonomy of the categories of knowledge and skills that support performance and an appropriate theory of how these knowledge/skill types are acquired.

A Cognitive Taxonomy categorizes tasks in terms of representational primitive knowledge and skill types that support human performance. The taxonomy we propose in this context is limited to tasks performed by individuals and includes perceptive, cognitive, and motor task bases along one categorical dimension, and desired task performance levels along the other. The performance dimension defined in the present taxonomy matches the Training Flow established by the International Training Control Board (ITCB) that regulates and addresses the Training rules to be followed by each ISS International Partner in Crew Training.



The levels of Crew Training are three:

1. Basic Training provides to the candidate astronauts and cosmonauts basic knowledge on space technology and science, basic medical skills and basic skills related to their future operational tasks, including those related to the Station systems and operations. It also includes the training of special capabilities, e.g. SCUBA diving. Basic Training is given by each Station partner to its class of candidates who have been recruited together. The training contents will be harmonised amongst the partners according to ITCB guidelines [5] as a prerequisite for the following training phase and has a duration of up to 1 (one) year.

2. Advanced Training provides to Station crew members knowledge and skills related to operations of the Station space elements, payloads, transport vehicles and related interaction with the ground. It builds up on Basic Training and is normally generic in nature and does not focus on increment specific tasks. It is given to international classes of crew members from all the partners and will take place at all partners' facilities to enable crew familiarity with partners' flight elements and operations. Upon successful completion of Advanced Training a crew member is eligible for assignment to a mission. The duration of Advanced Training is approximately 1 (one) year. Training during the advanced phase will be job orientated, concentrating on the tasks and systems knowledge associated with a single job involving single or multiple students [3]. Station crew members will be trained to use all systems and may receive a subset of specialties, i.e. resource and data operations, robotics, navigation, maintenance, inter- and extra-vehicular activities, medical aspects and payload operations for long term on-orbit payloads.


3. Increment Specific Training provides to the assigned Station crew (and to a backup crew if applicable) the knowledge and skills required to perform the planned and contingency onboard tasks of that increment. To enable good crew integration, the crew is trained together as far as possible. The duration of Increment Specific Training is about 1.5 (one and one half) years. The Station crew will train as an assigned unit in the Increment specific time-frame.


The following table summarizes the taxonomy individuated in the TRAIN (Training Research for Automated Instruction; Regian et al., 1992) project, and currently adopted by Armstrong Laboratory, which is well suited to our purposes.




Basic Knowledge:
- Perceptive Basis: Existing General Perceptive Skills
- Cognitive Declarative: Facts (Propositions organized as Associations)
- Cognitive Procedural: Rules (Propositions organized as Productions)
- Motor Basis: Existing General Motor Skills

Basic Performance:
- Perceptive Basis: Emerging Task-Specific Perceptive Skills
- Cognitive Declarative: Concepts (Facts organized as Simple Abstractions)
- Cognitive Procedural: Procedures (Productions organized as Goal-Seeking Systems)
- Motor Basis: Emerging Task-Specific Motor Skills

Advanced Performance:
- Perceptive Basis: Reliable Task-Specific Perceptive Skills
- Cognitive Declarative: Schemata (Facts/Concepts organized as Abstract Explanatory Systems)
- Cognitive Procedural: Skills (Production Systems sufficiently refined for Reliable Application)
- Motor Basis: Reliable Task-Specific Motor Skills

Increment Specific Performance:
- Perceptive Basis: Expert Task-Specific Perceptive Skills
- Cognitive Declarative: Mental Models (Facts/Concepts/Schemata organized as Autonomous Models)
- Cognitive Procedural: Automatic Skills (Production Systems fully refined for Autonomous Execution)
- Motor Basis: Expert Task-Specific Motor Skills


Perceptual, Cognitive, and Motor Bases of Human Task Performance
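Read as data, the taxonomy above is a grid indexed by performance level and task basis. A hypothetical encoding (the dictionary structure is purely an illustrative representation; the cell contents are quoted from the table):

```python
# The TRAIN taxonomy as a lookup table: performance level -> task basis
# -> knowledge/skill type.
TAXONOMY = {
    "Basic Knowledge": {
        "Perceptive Basis": "Existing General Perceptive Skills",
        "Cognitive Declarative": "Facts: Propositions organized as Associations",
        "Cognitive Procedural": "Rules: Propositions organized as Productions",
        "Motor Basis": "Existing General Motor Skills",
    },
    "Basic Performance": {
        "Perceptive Basis": "Emerging Task-Specific Perceptive Skills",
        "Cognitive Declarative": "Concepts: Facts organized as Simple Abstractions",
        "Cognitive Procedural": "Procedures: Productions organized as Goal-Seeking Systems",
        "Motor Basis": "Emerging Task-Specific Motor Skills",
    },
    "Advanced Performance": {
        "Perceptive Basis": "Reliable Task-Specific Perceptive Skills",
        "Cognitive Declarative": "Schemata: Facts/Concepts organized as Abstract Explanatory Systems",
        "Cognitive Procedural": "Skills: Production Systems sufficiently refined for Reliable Application",
        "Motor Basis": "Reliable Task-Specific Motor Skills",
    },
    "Increment Specific Performance": {
        "Perceptive Basis": "Expert Task-Specific Perceptive Skills",
        "Cognitive Declarative": "Mental Models: Facts/Concepts/Schemata organized as Autonomous Models",
        "Cognitive Procedural": "Automatic Skills: Production Systems fully refined for Autonomous Execution",
        "Motor Basis": "Expert Task-Specific Motor Skills",
    },
}

def skill_type(level: str, basis: str) -> str:
    """Look up the knowledge/skill type for one cell of the taxonomy."""
    return TAXONOMY[level][basis]
```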


The purpose of this taxonomy is to describe the performance bases of any individually-performed task at a level of analysis that supports:

1. predictions about instructional approaches and VR media characteristics that will optimise task acquisition and transfer,

2. generalizations about instructional approaches and VR media characteristics that will work across tasks.


The approach reflects a generic cognitive (information processing) approach to learning and performance, but recognizes that all tasks involve one or more sequences of perception, cognition, and action. That is, the person must perceive the situation, decide what to do, and do it, with this sequence possibly repeating until the task is complete. At any stage in learning to perform a task, the learner’s continued progress may be limited by perceptual skill, cognitive-declarative, cognitive-procedural, or motor skill deficits. In order to be able to define, starting from user requirements, the proper VR tools and techniques to adopt for performance optimisation, it is important to recognize and address all trainable factors which limit performance improvements.

While nearly all tasks involve perceptual skills that must be available to reach expert performance, for training purposes it is useful to focus on special perceptual skills that are not acquired outside of training, or for any reason are not available to the individual. We can conceptualise cognition as operations on an internal representation of task-relevant information. These representations may be first-order derivatives that maintain critical information about percepts, second-order representations that combine selected information from spatially or temporally distal percepts, or abstract representations with no perceptual basis.

We focus on the type of representation that is developed in working memory and maintained over time, and the extent to which the operation draws on perceptually available information or information in long term memory. Every task requires some response, implying motor activity. We focus on those motor responses that are required to do the task but are not already available to the individual.

With regard to the tasks described in the user scenarios chapter, all of them require in different ways task-specific perceptual and/or motor skills. The main source of these perceptual, cognitive, and motor knowledge/skill demands is the particular environmental conditions in which the User will perform such tasks. In fact, most of the tasks proposed in the scenarios wouldn’t demand any particular knowledge or skill capability if performed in normal conditions. On the contrary, in a 0G condition, very simple tasks such as pushing a button or turning a knob will involve perceptual, cognitive, and motor knowledge/skills that are not pre-known/experienced by the users.

For this reason the different training scenarios have been decomposed into basic tasks, and each one of them has been analysed in order to identify the different knowledge/skills required for its execution.


6.1.3. Annex n. 1.3: Teaching Versus Performing the Task

Education researchers have long recognized that one-on-one tutoring is a particularly powerful method of instruction (Bloom 1984). Unfortunately, it is a highly expensive one as well. Intelligent tutoring systems (Wenger 1987, Sleeman 1982) attempt to make widespread one-on-one instruction possible by filling in for human instructors and providing some of the same types of interaction. Trainees can watch a tutoring system demonstrate how to perform a task or they can practice the task under the supervision of an automated tutor. Such systems should ideally allow the same kind of interactivity that a human instructor does. A trainee should be able to ask the system to suggest or perform a step when they become stuck and should be able to take over for the system when they feel confident in their ability to finish a task being demonstrated.

However, the need to respond to a potentially large set of user actions means that scripting all of the system's actions is highly impractical at best. A better way to equip such a tutorial system with the necessary knowledge is to provide it with a model of the task, enabling it to plan how to complete the task, to dynamically adapt the plan to new situations, and to provide explanations of these adaptations. Unfortunately, formalizing this knowledge can be both difficult and time consuming. Researchers have explored a number of ways to facilitate providing procedural knowledge to an intelligent agent. Systems have been designed that use higher-level instruction languages to make knowledge engineering quicker and more easily understood (Badler et al. 1998). Alternatively, some systems seek to eliminate traditional programming through instruction (Huffman 1994), example solutions (Wang 1996), or experimentation (Gil 1992). Most of this work focuses on providing agents with the knowledge to perform tasks, as opposed to teaching the tasks. While learning how to do and learning how to teach are similar problems, an agent that is to be an instructor has an extra requirement: it must be able to explain the rationale behind what it learns.

Since astronaut training is mainly based on the transmission of knowledge/skills related to procedural tasks, a very interesting VR-based training method could be the one adopted, for instance, by STEVE (Soar Training Expert for Virtual Environments), an intelligent agent that cohabits virtual environments with students to teach them procedural tasks (Rickel 1999, 2000).

Trainees can interact with the STEVE Virtual Environment by taking actions in a GUI (Graphic User Interface) that change the state of the domain. The same domain simulation can be used by the tutoring system to acquire the domain knowledge needed to tutor students. The domain simulator is responsible for providing an interface (e.g. the same GUI human trainees use) with which the human instructor can take actions within the domain. The simulator is also responsible for providing the learning system with records of actions (action observations) taken by the human expert (or by the intelligent tutoring system). Each action observation consists of a pre-state, the action taken, and the effects of the action. Finally, the Virtual Domain Simulator should be responsible for accepting commands to execute actions in the simulated environment (action commands), so that the same actions can be executed in the VR as performed by the instructor.
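The action-observation record described above (pre-state, action taken, effects of the action) can be sketched as a simple data structure. This is only a minimal illustration; the field names and the example action are hypothetical, not STEVE's actual interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionObservation:
    """One record emitted by the domain simulator: the state before the
    action, the action taken, and the resulting state changes (effects)."""
    pre_state: frozenset   # propositions true before the action
    action: str            # name of the action taken
    effects: frozenset     # propositions made true by the action

# A hypothetical observation of an instructor pressing a power button:
obs = ActionObservation(
    pre_state=frozenset({"power_off"}),
    action="press_power_button",
    effects=frozenset({"power_on"}),
)
```

A stream of such records, produced while the human expert demonstrates the task, is what the learning system would consume to build its model of the procedure.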

There are two main requirements for our procedure representation. First, the representation must be sufficient to choose the next appropriate action while demonstrating a task or watching a trainee and, if asked, to explain the role of that action in completing the task. Second, the representation must be sufficient to allow the system to adapt procedures to unexpected trainee actions. To meet these requirements, we could represent procedures with a procedural plan consisting of a set of steps, a set of end goals, and a set each of ordering constraints and causal links between steps. Each step may either be a primitive action (e.g., press a button) or a composite action (i.e., itself a procedure).

The end goals of the task simply describe a state the environment must be in for the task to be considered complete. Ordering constraints impose binary ordering relations between steps in the procedure. Finally, causal links represent the role of steps in a task; each causal link specifies that one step in the task achieves a precondition for another step in the task (or for termination of the task). For example, pulling out a dipstick achieves the goal of exposing the level indicator, which is a precondition for checking the oil level. Such a representation is not uncommon in the AI (Artificial Intelligence) planning community and has proven effective in a wide variety of research on task-oriented collaboration and on generating procedural instructions (Delin et al. 1994, Young 1997). Used with partial-order planning techniques, this representation is sufficient to choose the next appropriate action while demonstrating a task or watching a trainee, and it also enables the system to explain the role of that action in completing the task (Rickel 1999).
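The plan representation just described (steps, end goals, ordering constraints, causal links) can be sketched in code using the dipstick example. This is a minimal illustration under our own naming, not Rickel's implementation: the step and condition names are invented for the example.

```python
# Procedural plan: steps, end goals, ordering constraints, and causal links
# (producer step -> condition achieved -> consumer step), per the dipstick example.
steps = ["pull_out_dipstick", "check_oil_level"]
end_goals = {"oil_level_known"}
ordering = {("pull_out_dipstick", "check_oil_level")}   # binary before-relations
causal_links = [
    ("pull_out_dipstick", "indicator_exposed", "check_oil_level"),
]

def next_actions(done):
    """Steps whose ordering predecessors are all complete: the candidates a
    partial-order planner may choose next while demonstrating or watching."""
    return [s for s in steps if s not in done
            and all(a in done for (a, b) in ordering if b == s)]

def explain(step):
    """Rationale for a step: which conditions it achieves, and for which
    later step they are preconditions."""
    return [f"{step} achieves '{cond}', a precondition for {consumer}"
            for (producer, cond, consumer) in causal_links if producer == step]

print(next_actions(set()))           # ['pull_out_dipstick']
print(explain("pull_out_dipstick"))
```

The same two functions cover both requirements stated above: `next_actions` selects the appropriate action at any point, and `explain` traverses the causal links to justify it to the trainee.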


6.1.4. Annex n. 1.4: VR Interaction Levels Identification

In order to identify, later on within the VR training application, the functional and performance system requirements, it is helpful to define the different interaction levels achievable between the VR systems and their users.


Level 1: The environment and the equipment on which the User has to execute the different operative steps foreseen by the object of the training session are previously designed and developed during the implementation phase of the VR scenario. The product thus includes only visualisation, and no interaction between the User and the system is provided. The VR environment may also include a mannequin representing an operator during the accomplishment of the selected task. The final result is consequently similar to a computer-based video.

Level 2: The VR environment shows the User the visualisation of the environment and the equipment foreseen as the object of the training session, also offering the capability to accept inputs regarding the observation point, through zoom and observation-angle variation.

Level 3: As level 2, but together with the environment and the equipment included in the scenario there are mannequins representing the operators during the accomplishment of the tasks foreseen by the object of the training session (similar therefore to level 1, but with the capability to move the User's viewpoint).

Level 4: As level 3, but with the capability to accept inputs from the User concerning the variation of the mannequin movements and operations during task execution.

Level 5: The VR environment shows the User, together with the visualisation of the environment and the equipment foreseen by the object of the training session, the visualisation of the results of his limb movements. Such results can be utilised for the interaction between the User and the scenario environment/equipment.

Level 6: Maintaining the characteristics of the two levels above, the User can interact both with the scenario environment/equipment and with the mannequins during the accomplishment of the tasks foreseen by the object of the training session.

Level 7: All the characteristics of the above levels are included. Moreover, there is the capability to simulate 0G conditions, related to the spatial position of both the User and the utilised tools/equipment within the scenario environment.


6.1.5. Annex n. 1.5: Computer Simulation versus VR

The main characteristics that make it possible to define a three-dimensional computer simulation as a VR environment are the following:

1) Interaction: the possibility for the User to control the simulation in some way throughout its execution.

2) Autonomy: the existence in the model of objects containing interaction rules that make the simulation realistic (i.e., to make the animation of an object "believable" in a normal environment without constraints, it should fall under the effect of the law of gravity).

3) Presence: it is fundamental to give the User the sensation of "presence". Presence is defined by Loomis as the perception a person has of being in a certain place. This perception is certainly conscious; it therefore concerns cortical structure activation.


Looking at the interaction levels previously defined and at the above VR definition, we have to consider that the first three interaction levels cannot be considered Virtual Reality; they should instead be defined as three-dimensional computer simulations of an environment.
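The seven interaction levels of Annex 1.4, together with the VR threshold just stated (levels 1-3 are simulation only, so VR proper begins at level 4), can be expressed as a small enumeration. This is a sketch for illustration; the member names are our own shorthand for the level descriptions, not terminology from the deliverable.

```python
from enum import IntEnum

class InteractionLevel(IntEnum):
    """The seven interaction levels defined in Annex 1.4 (names are
    illustrative shorthand for the level descriptions)."""
    PASSIVE_VISUALISATION = 1      # pre-rendered scene, no interaction
    VIEWPOINT_CONTROL = 2          # zoom and observation-angle input
    VIEWPOINT_WITH_MANNEQUINS = 3  # level 2 plus mannequins performing tasks
    MANNEQUIN_CONTROL = 4          # User varies mannequin movements
    LIMB_TRACKING = 5              # User's limb movements drive interaction
    MANNEQUIN_INTERACTION = 6      # interaction with scenario and mannequins
    ZERO_G_SIMULATION = 7          # adds 0G simulation of User and tools

def is_virtual_reality(level: InteractionLevel) -> bool:
    # Per the definition above, levels 1-3 are only 3D computer
    # simulations; Virtual Reality proper starts at level 4.
    return level >= InteractionLevel.MANNEQUIN_CONTROL

print(is_virtual_reality(InteractionLevel.VIEWPOINT_WITH_MANNEQUINS))  # False
print(is_virtual_reality(InteractionLevel.LIMB_TRACKING))              # True
```

Using an ordered enumeration makes requirements such as "reach at least Level 5" (requirement 76) a simple comparison.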

Moreover, since spatial awareness is a very complex and multifactorial process, it is worth looking a bit deeper into this problem, in order to find relations between human spatial awareness and ways to design and realize VR systems more suitable for International Space Station Crew Member training.

We can identify mainly three types of spatial awareness:

1) Awareness or learning of Landmarks. This is the capability to distinguish the characteristics and the orientation of specific objects within an environment.

2) Awareness or learning of the Route. Knowledge based both on the connections between one's own "route" within the environment and on the procedural components of this exploration. Route knowledge is characterized by the navigator's ability to move from position A to position B along a specific learned route. The navigator may not have any knowledge of the relative spatial locations of A or B and is unaware of alternate routes linking A and B. In general, route knowledge is egocentric. While the navigator may be able to move between two points, he/she may not be able to easily traverse the reverse path or take an exocentric view of the situation to draw or otherwise describe the path to another person. A navigator with strong route knowledge has knowledge of a particular learned path through the environment.

3) Awareness or learning of the Survey. Also defined as the learning of the map or configuration of a general environment, based on knowledge of the relations and spatial distances among the landmarks. Survey knowledge stems from an awareness of the space as a whole. Survey knowledge is characterized by the navigator's knowledge of the relationships between different locations and his/her ability to view a location from an exocentric viewpoint. The navigator is able to determine routes between locations in the space which he/she may not have previously traversed, and is able to describe the physical characteristics of the space. A navigator with strong configurational knowledge of an environment has:

a. Knowledge of the structure of the environment.

b. The ability to draw a map of the environment.


c. The ability to find the optimum route between two points within the environment.

d. The ability to estimate distance and direction to a particular landmark or location within the environment.
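The contrast between route and survey knowledge above can be sketched computationally: route knowledge amounts to a stored, one-directional learned path, while survey knowledge amounts to a map (a graph of landmarks) from which routes between never-traversed locations can be derived. The landmark layout below is a hypothetical illustration.

```python
from collections import deque

# Survey knowledge as a map: adjacency between landmarks (hypothetical layout).
survey_map = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

# Route knowledge: only a specific learned path, usable in one direction.
learned_routes = {("A", "D"): ["A", "B", "D"]}

def derive_route(graph, start, goal):
    """With survey knowledge the navigator can compute a route (here via
    breadth-first search) even between locations never traversed before."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# A route the navigator never learned explicitly (C to B) is derivable:
print(derive_route(survey_map, "C", "B"))   # ['C', 'A', 'B']
```

Route knowledge alone would fail here: `("C", "B")` is not in `learned_routes`, yet the map makes the route computable, which is exactly the distinction between points 2 and 3 above.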


Survey awareness can be acquired directly through the study of a map of the general environment, or through repeated exploration of the environment, which allows Route knowledge (the first to be elaborated and stored) to become, after some time, Survey knowledge. Each of these specific components has been analyzed in different experimental VR paradigms.

One of the first studies (before VR technology existed) focused on the distinction between "passive" and "active" navigation. Active navigation means the exploration of a real environment, while passive navigation is a modality of exploration accomplished only through visual observation (using, for instance, a monitor or a screen, and on some occasions an input device such as a mouse or a joystick) without any motor involvement.

Early research in this field revealed an important difference between the two exploration modalities: active navigation generates significant positive effects in spatial memorization tasks, while for landmark localization tasks the performance of the two methods is similar.

When, in the 1990s, the first VR systems entered the market, the general theoretical frame for active and passive navigation became the following:

1) Landmark awareness is focused on the existence and position of distinctive spatial characteristics.

2) Route awareness can include a verbal sequence of directions, as well as the procedural codes for single paths.

3) Survey awareness, as a spatial map, supports the calculation of new routes, orientations and distances in the space.

If the critical difference between passive and active conditions lay in the foundation of route knowledge (which also includes a procedural component), then the elaboration of landmark orientation should not produce any effect on spatial learning between the two groups (active and passive).

More recent VR research has needed to investigate not only this hypothesis (active versus passive), but also another important dichotomy: the difference between large-scale and small-scale experiments.

Large-scale experiments are situations in which the subject has a wide visual field perspective, in the sense that he can turn his head, move, and widely change visual fields, while the small-scale situation presents just the opposite.

This further distinction allowed the discovery of real differences not only between active and passive navigation, but also between real and virtual navigation. Today virtual environments have developed to such a point of sophistication that it is possible to attain an unprecedented sense of immersion in a computer simulation. Indeed, Ruddle (1997) found a close similarity between the mental representations of spatial awareness in real and virtual navigation. With the availability of such systems comes a new way of representing information.

The objective of this work is to apply state-of-the-art virtual reality simulation technologies to investigate issues in ISS Crew Member training, in the framework of their Advanced/Incremental Training.