AICC Management & Processes Subcommittee Activities
Orlando, FL
June 2009
Bruce Perrin
Management & Processes Subcommittee
Charter:
Provide recommendations and guidelines to the Computer-based Training community that identify the attributes of "Good CBT" processes and products.
Topics for this Meeting
1. AICC-sponsored survey
   • Development, fielding, and response
     – What evaluation practices are we using?
     – How will current technologies and approaches impact training?
   • Findings – where do we find a polarization of opinion on a technology or approach?
2. Recommendations on the use of 3D models (e.g., virtual reality, virtual environments) in training
   • Initial taxonomy based on "subject" of 3D content
   • Relevant research
3. Discussion and a request
Survey Development, Fielding, Response
• Introduced (and edited) survey at AICC meeting in San Jose
  – What are our typical evaluation practices?
  – Are there CBT issues where recommendations might help?
• Refined survey over several AICC Executive Committee teleconferences
• Worked with QuestionMark to put survey online
  – Announced on AICC News Blog
  – Publicized on AICC Website
  – Discussed at AICC meetings in Hamburg, Germany & Louisville, KY
  – Hosted by QuestionMark from 5/18/2008 to 11/18/2008
• Thirty-two responses representing approximately 25 organizations
Findings – Fields Represented in Sample

[Bar chart: percent of respondents (0–70%) by training field – Pilot, Maintenance, Cabin Crew, Dispatch, Other Aviation, Regulatory Training, Corporate Training, Academia, Other Non-aviation]

Survey: What are your major fields of training/learning interest (choose one or more)?
Formative & Summative Evaluation Use

[Bar chart: percentage of training systems receiving formative vs. summative evaluation]

Survey:
On what percentage of your training systems do you conduct any formative evaluation, e.g., measurement of training methods/processes, so that needed changes or modifications can be made in the early stages of development?
On what percentage of your training systems do you conduct any type of summative evaluation, e.g., measurement of final training system outcomes or results?

"When the cook tastes the soup, that's formative; when the guests taste the soup, that's summative." (Robert Stake)
Types of Evaluation Criteria Used
• Most training is evaluated
• Reaction measures still predominate
• Compared to national studies:
  – Use of reaction and behavior measures is similar
  – Use of learning measures is slightly higher
  – Use of results measures is slightly lower

[Bar chart: percentage of training systems (0–90%) using each criterion type – Reaction, Learning, Behavior, Results]

Survey: On what percentage of your training systems do you use each of the following types of evaluation criteria?
• Reaction – how much the trainee liked the program or thought it would benefit him/her on the job
• Learning – how much knowledge and skill changed in the training setting
• Behavior – how much behavior changed in the workplace
• Results – how much organizational factors were affected
Opinions on Current Trends/Claims
Survey sought opinions on 9 trends/claims in the industry:
1. The current generation "learns differently" than older adults did when they were that age.
2. Gaming technology for training is applicable across a wide range of training tasks.
3. Nearly everyone can learn effectively from gaming technology.
4. Providing training in a format that is consistent with an individual's "learning style" will significantly increase learning performance.
5. Computerized methods to adjust training content according to performance (e.g., scores on embedded tests, actions taken in a simulation) will significantly increase learning performance.
6. Three-dimensional environments (virtual reality, virtual environments) represent an important extension to current training technologies, i.e., they are effective and applicable in a variety of training.
7. Effective training cannot be built from context-independent, re-usable (sharable) learning objects.
8. The need for maintenance training will subside over time as self-testing equipment and job-aiding technology become better.
9. The disciplined use of meta-data will end up saving the training community substantial costs in development, compared to the cost of developing the meta-data initially, given current technology.
What Are Our Concerns?
• Of most interest (in my opinion) are technologies that elicit polarized beliefs:
  – Almost as many think the statement is true (definitely or probably true) as think that it is false (definitely or probably false)
  – Few people have no opinion (unsure; do not know)
• Examples:
  – The need for maintenance training will subside over time as self-testing equipment and job-aiding technology become better
  – Three-dimensional environments (virtual reality, virtual environments) represent an important extension to current training technologies, i.e., they are effective and applicable in a variety of training
Use of 3D Models in Training
• Why develop recommendations for the use of 3D models in training?
  – Somewhat polarized opinions on utility (40% unsure or do not believe 3D models have a widespread role in training)
  – Considerable interest at AICC meetings and in the training community in general
  – Significant promise: lower development and lifecycle cost; greater throughput; easier distribution
  – Modest research base
Taxonomy and Studies Reviewed
Taxonomy of the use of 3D models in training:
1. 3D content trains a task performed within a single visual scene or across independent scenes
2. 3D content trains parts of a task in separate visual scenes, and knowledge/skill must be integrated across them*
   a. Desktop (includes all 2D cues to depth, e.g., motion parallax, texture, interposition, linear perspective)
   b. Immersive (all of the above plus stereopsis)
3. 3D environment in which training occurs (e.g., Second Life)
*The difference between types 1 & 2 is continuous
Study | Type 1 | Type 2 | Type 3 | Criteria
Buck, Perrin, et al. (1997-2003) – Virtual Maintenance Training | X | X | | Knowledge test and behavior demonstration
Waller (1999) – Individual differences in learning spaces in a VE | | X | | Knowledge test and behavior demonstration
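For teams cataloguing courseware against this taxonomy, the three types can be captured in a small data structure. A minimal sketch in Python; the class and field names are illustrative only and are not part of any AICC specification:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class SceneUse(Enum):
    """The three taxonomy types described above."""
    SINGLE_OR_INDEPENDENT_SCENES = 1  # task within one scene, or across independent scenes
    INTEGRATED_SCENES = 2             # knowledge/skill must be integrated across scenes
    TRAINING_ENVIRONMENT = 3          # 3D world in which training occurs (e.g., Second Life)


class Presentation(Enum):
    """Sub-classification of Type 2 content."""
    DESKTOP = "desktop"      # 2D depth cues only (motion parallax, texture, etc.)
    IMMERSIVE = "immersive"  # 2D depth cues plus stereopsis


@dataclass
class CoursewareRecord:
    title: str
    scene_use: SceneUse
    presentation: Optional[Presentation] = None  # only meaningful for Type 2


# Hypothetical example: a desktop remove-and-install lesson spanning several scenes
lesson = CoursewareRecord("Hydraulic pump R&I", SceneUse.INTEGRATED_SCENES,
                          Presentation.DESKTOP)
```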
Virtual Maintenance Training Research
• Domain: Maintenance training involving re-use of 3D CAD models
• Tasks:
  – Remove & install, in a single visual scene
  – Remove & install, integrated across visual scenes
  – Troubleshooting, in several independent visual scenes
• Interventions:
  – Low- and high-detail desktop
  – Active vs. passive involvement
  – Immersive (head-mounted displays) & desktop
  – Training on a physical mockup provided the control condition
• Method:
  – Over 200 participants (Boeing & US Navy)
  – Several independent replications of effects
  – Criteria: knowledge test, performance accuracy
  – Measured experience with computers, 3-D games, "hands-on" activities
  – Measured spatial visualization aptitude (ETS paper-folding test)
Virtual Maintenance Training Research (cont.)

Training Type | Findings
Single or independent scenes | • Modest drop in overall learning performance compared to control • Similar variability in performance among trainees
Scenes that must be integrated – desktop | • Significant drop in learning performance compared to control • High variability among trainees in the amount learned (variance often 5 times greater or more)
Scenes that must be integrated – immersive | • Extreme drop in learning performance compared to control • Extreme differences among trainees in the amount learned (variance often 10 times greater or more)
[Chart: learning performance (10th percentile, mean, 90th percentile) after hardware vs. 3D-model training, for single/independent scenes and multiple scenes]
Virtual Maintenance Training Research (cont.)
Background factors examined to explain variability in learning:
• Prior experience with tools, repairs – generally faster performance, but effect on VE and hardware training is the same
• Exposure to 3-D computer/video games – no significant effect
• Immersive tendencies – no significant effect
• Extended practice with 3-D interface – no significant effect
• Spatial visualization aptitude (ETS paper-folding test) – see the scatter plot and the correlation sketch below
[Scatter plot: post-training test score (0–40) vs. visualization aptitude (0–20) for hardware-mockup and immersive-VE trainees]
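Where individual scores are available, the aptitude-criterion relationship can be checked directly. A minimal sketch using SciPy; the data below are invented for illustration and do not come from the studies cited here:

```python
import numpy as np
from scipy import stats

# Illustrative data only: ETS paper-folding (aptitude) scores and
# post-training test scores for a VE-trained group.
aptitude = np.array([4, 6, 7, 9, 10, 12, 13, 15, 17, 19])
test_score = np.array([8, 10, 14, 12, 18, 22, 21, 28, 30, 34])

# Pearson correlation between visualization aptitude and the learning criterion
r, p = stats.pearsonr(aptitude, test_score)
print(f"r = {r:.2f}, p = {p:.3f}")
```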
Virtual Maintenance Training Research (cont.)
• Tested hypothesis that extreme variation following 3D model-based training results from lack of visual access:
  – Changed the location of the part being removed
  – Repeated the study
  – Findings consistent with other tasks trained in the visual field
• Interventions that "expanded" visual access also improved learning performance

[Images: original and modified part locations]
Spatial Navigation Training Research
• Domain: Training spatial knowledge in a virtual environment
• Tasks:
  – Pointing, mapping, or navigating the real-world or virtual environment
  – Included both small (room-size) and large (campus-wide) settings
  – In all cases, separate scenes must be integrated to form a "survey map" of the environment (i.e., all Type 2 situations)
• Interventions:
  – Desktop VE
  – Compared to training in the physical environment
• Method:
  – Series of studies on individual differences
  – Experimental-control group, post-test-only design
  – Correlational (latent structural) designs
Spatial Navigation Training Research (cont.)

Study Description | Findings
Maze training in VE (experimental) & physical maze (control) | • Error in pointing to unseen locations more than 18 times greater after VE-based training
Factors correlating with spatial learning in a VE | • Strongest – spatial ability (ETS paper-folding was the primary measure) • Second – practice time and maneuvering speed • Not significantly correlated – verbal ability; computer use; gender; spatial accuracy of real-world learning (measured as pointing, map making, and navigating the Univ. of Washington campus)
Initial DRAFT Recommendations

Type of 3D-Based Training | Recommendations
3D content trains a task performed within a single visual scene or across independent scenes | • Follow standard design, development, and evaluation procedures • Variability in trained performance may be compared against a control
3D content trains parts of a task in separate visual scenes and knowledge/skill must be integrated across them | • Variability in trained performance should be compared against a control, even an untrained group (see the sketch after this table) • Correlation between the criterion and a measure of visualization aptitude should be examined • Immersive environments should be avoided unless validated • Techniques that increase visual access (e.g., transparent a/c skins) should be considered
3D environment in which training may occur | TBD
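The variability comparison in the second row can be run as a simple variance check. A minimal sketch using SciPy's Levene test (chosen here because it is more robust to non-normal scores than a raw F-ratio); all data are invented for illustration:

```python
import numpy as np
from scipy import stats

# Illustrative post-test scores (made-up data): a control group trained on a
# physical mockup vs. a group trained with integrated-scene 3D content.
control = np.array([31, 33, 34, 35, 35, 36, 37, 38])
ve_trained = np.array([12, 18, 24, 29, 33, 36, 39, 41])

# Ratio of sample variances: values well above 1 reflect the inflated
# trainee-to-trainee variability reported earlier (often 5x or more).
ratio = np.var(ve_trained, ddof=1) / np.var(control, ddof=1)

# Levene's test for equality of variances
stat, p = stats.levene(control, ve_trained)
print(f"variance ratio = {ratio:.1f}, Levene W = {stat:.2f}, p = {p:.3f}")
```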
Discussion and a Request
Please forward any published research on the use of 3D models in training that has a learning or behavior measure.
– Studies that show impacts on speed, cost, throughput, etc., without equivalent or better learning/performance are not of interest