Virtual Tutor Application: Report

AI and Robotics

Oct 25, 2013


Virtual Tutor


Submitted to: Dr. Jie Yan, Assistant Professor, Computer Science Department, Bowie State University

Submitted by: Ruth Agada, Research Assistant



Table of Contents

Overview: Animated Pedagogical Agents
Aim of Virtual Tutor
System Analysis, Design and Implementation
    1.1 Hardware Specification
    1.2 Software Specification
    1.3 Design Implementation
Java API
Autodesk 3ds Max
CU Animate



Overview

The development of effective animated pedagogical agents is a topic of rising interest in the computer science community. Studies have shown that effective individual tutoring is the most powerful mode of teaching. These agents are designed to be lifelike autonomous characters that support human learning by creating rich, face-to-face learning interactions. Animated pedagogical agents offer great promise for broadening the bandwidth of tutorial communication and increasing learning environments' ability to engage and motivate students. It is becoming apparent that this generation of learning technologies will have a significant impact on education and training. Successful outcomes of this research will provide a new procedure for developing more engaging and natural dialogs and narrations by pedagogical agents, which are expected to lead to more effective learning outcomes. They will also provide a strong theoretical and empirical foundation as well as pilot data to support subsequent research to be described in an NSF CAREER proposal. Subsequent research will aim to extend the knowledge gained in the proposed experiments and to automate many of the procedures that are implemented manually in the proposed work.





Animated Pedagogical Agents

There is plenty of research being conducted on the development of effective animated pedagogical agents. Animated pedagogical agents are designed to be lifelike autonomous characters that support human learning by creating rich, face-to-face learning interactions [20]. These agents are capable of taking full advantage of verbal and non-verbal communication reserved for human interactions [1]. They have been endowed with human-like qualities to make them more engaging, to make the learning experience more beneficial, and to prevent distracting behaviors (unnatural movements) [20, 21]. They take full advantage of face-to-face interactions to extend and improve intelligent tutoring systems [20].

Studies have shown that effective individual tutoring is the most powerful mode of teaching. However, individual human tutoring for each and every student is logistically and financially impossible, hence the creation and development of intelligent tutoring systems [36] to reach a broader audience. According to Suraweera and Graesser [39, 14], several intelligent tutoring systems have been successfully tested and have shown that this technology does in fact improve learning.

Researchers face a single problem: how to create an effective user interface that provides the user with a believable experience. The idea is to create a system that will use intelligent and fully animated agents to engage its users in natural face-to-face interactions.

To use agents most powerfully, designers can incorporate suggestions from research about agents concerning speech quality, personality or ethnicity of the agent, or the frequency and verbosity of reward. Designers can also incorporate what research says about effective human teachers or therapists into the behavior of their agent [10]. In the development of the Virtual Tutor, both kinds of research were incorporated to make this agent more effective and powerful. The virtual tutor lectures on subject matter chosen by the user; during quizzes the agent provides the student with feedback based on specific errors made during the quiz.


Aim of Virtual Tutor Application

The objectives of the year project are:

To develop a powerful new experimental approach for investigating engaging and effective communication by lifelike animated characters through speech, head movements and facial expression of emotions, and

To conduct experiments to gain new insights about how voice and facial expressions can be combined in optimal ways to enable pedagogical agents to provide more believable and effective communication experiences.



We aim to achieve better comprehension, higher retention rates and an improved learning experience by developing original narratives that contain the six basic emotions: happiness, surprise, fear, anger, sadness and disgust.

System Analysis, Design and Implementation

This section defines the parameters that the software product must follow while interacting with the outside world.

Hardware Specifications

Processor: Intel Core 2 Duo / Dual Core
Hard Disk Required: 3

Software Specifications

Operating System: Windows XP
JDK toolkit: JDK version 1.6.0_18
Other libraries: vht.jar, blt.jar, CSLR.jar


Design Implementation

Software design involves conceiving, planning out and specifying the externally observable characteristics of the software product. The design process includes data design, architectural design and user interface design; these are explained in the following sections. The goal of the design process is to provide a blueprint for implementation, testing and maintenance activities.

Index Files

The index files are text files that list each aspect of the lecture, divided into subsections for ease of reference. The Notepad application was used to create these files. The format for an index file is as follows:

<title of subsection> | <file path>
QUIZ | <file path>

A lecture comprises the title of that lecture and the various subsections included in that lecture. As such, the primary index file notes the course that the student is registered for and, for each course, an outline of topics to be taught that semester. The primary index file also follows the format described above. In addition, it contains several more features to allow the application to differentiate each lecture and course. It is as follows:

<topic1> | <file path>
<topic2> | <file path>

<topic1> | <file path>
<topic2> | <file path>
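The pipe-delimited index format above can be read with a few lines of Java. The following sketch is illustrative only; the class and method names are assumptions, not taken from the actual application.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative parser for the pipe-delimited index format described above.
public class IndexParser {

    // One "<title> | <file path>" entry from an index file.
    public static class Entry {
        public final String title;
        public final String filePath;
        public final boolean isQuiz;

        public Entry(String title, String filePath) {
            this.title = title;
            this.filePath = filePath;
            this.isQuiz = title.equalsIgnoreCase("QUIZ");
        }
    }

    // Parses the lines of an index file, skipping blank lines.
    public static List<Entry> parse(List<String> lines) {
        List<Entry> entries = new ArrayList<>();
        for (String line : lines) {
            if (line.trim().isEmpty()) continue;
            String[] parts = line.split("\\|", 2);   // split on the first '|'
            if (parts.length == 2) {
                entries.add(new Entry(parts[0].trim(), parts[1].trim()));
            }
        }
        return entries;
    }
}
```

Splitting on only the first '|' keeps any '|' characters that might appear inside a file path intact.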

Lecture files

Based on the outline topic the user selected (see the screenshots of the lecture window), the discussion for that section is displayed in a box below the agent while the agent describes the text. If an image is associated with the discussion, another window pops up with the image and corresponding text. The topic file contains several references for the agent to work with as needed. The format for the lecture file is as follows:

<phoneme file (.txt)> | <.wav file>
<discussion text>
LoadImage | <file path>
Explanation | <file path>

In future versions of this application, video files will be added to lectures and the agent will be able to describe the contents of the video.

Quiz files

The quiz file contains questions and answers; the correct answer is indicated with the question, along with whatever comments the instructor has for the user and the option to shuffle both questions and answers. The format is as follows; note that the commands listed below shuffle the questions and answers of the quiz:

Shuffle Answers.
Don't Shuffle Answers.
Shuffle Questions.
Don't Shuffle Questions.
Shuffle These Answers.
Don't Shuffle These Answers.

Question. <question>
Answer. <answer>
Correct answer. <answer>

Adding the keyword "these" shuffles or unshuffles only the current question and answer.
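As a sketch of how these shuffle commands might be interpreted while reading a quiz file (the class and field names below are illustrative assumptions, not the application's actual code), a loader can toggle shuffle flags as it encounters command lines and apply Collections.shuffle once the file has been read:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative interpreter for the quiz shuffle commands described above.
public class QuizLoader {
    public boolean shuffleQuestions = false;
    public boolean shuffleAnswers = false;
    public final List<String> questions = new ArrayList<>();

    // Processes one line of a quiz file; returns true if it was a command.
    public boolean processLine(String line) {
        String t = line.trim();
        switch (t) {
            case "Shuffle Questions.":       shuffleQuestions = true;  return true;
            case "Don't Shuffle Questions.": shuffleQuestions = false; return true;
            case "Shuffle Answers.":         shuffleAnswers = true;    return true;
            case "Don't Shuffle Answers.":   shuffleAnswers = false;   return true;
        }
        if (t.startsWith("Question.")) {
            questions.add(t.substring("Question.".length()).trim());
        }
        return false;
    }

    // Applies the global question-shuffle flag after the file is read.
    public void finish() {
        if (shuffleQuestions) {
            Collections.shuffle(questions);
        }
    }
}
```

Handling the "These" variants would follow the same pattern, but would mark only the current question rather than a global flag.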



Java API

The Java API is a set of classes and interfaces that comes with the JDK. It is a huge collection of library routines that perform basic programming tasks such as looping, displaying GUI forms, and so on.

In the Java API, classes and interfaces are grouped into packages. All these classes are written in the Java programming language and run on the JVM. Java classes are platform independent, but the JVM is not: there is a different download for each operating system.

The Java platform comprises three components:

The Java language,

The JVM (Java Virtual Machine), and

The Java API (Java programming interface).

The Java language defines easy-to-learn syntax and semantics for the Java programming language. Every programmer must understand this syntax and these semantics to program in the Java language.

Types of Java API

There are three types of API available in Java technology.

Official Java Core API

The official core API is part of the JDK download. The three editions of the Java programming language are Java SE, Java ME and Java EE.

Optional Java API

The optional Java API can be downloaded separately. The specification of the API is defined according to the corresponding JSR (Java Specification Request).

Unofficial APIs

These APIs are developed by third parties and can be downloaded from the owners' websites.

Sample Java GUI code:

import java.awt.*;
import java.awt.event.*;
import java.awt.image.ImageObserver;
import java.awt.image.BufferedImage;
import java.net.URL;
import javax.swing.*;

/*
 * The DukeAnim class displays an animated gif with a transparent background.
 */
public class DukeAnim extends JApplet implements ImageObserver {

    private static Image agif, clouds;
    private static int aw, ah, cw;
    private int x;
    private BufferedImage bimg;

    public void init() {
        setBackground(Color.white);
        clouds = getDemoImage("clouds.jpg");
        agif = getDemoImage("duke.running.gif");
        aw = agif.getWidth(this) / 2;
        ah = agif.getHeight(this) / 2;
        cw = clouds.getWidth(this);
    }

    public Image getDemoImage(String name) {
        URL url = DukeAnim.class.getResource(name);
        Image img = getToolkit().getImage(url);
        try {
            MediaTracker tracker = new MediaTracker(this);
            tracker.addImage(img, 0);
            tracker.waitForID(0);
        } catch (Exception e) {}
        return img;
    }

    public void drawDemo(int w, int h, Graphics2D g2) {
        if ((x -= 3) <= -cw) {
            x = w;
        }
        g2.drawImage(clouds, x, 10, cw, h - 20, this);
        g2.drawImage(agif, w / 2 - aw, h / 2 - ah, this);
    }

    public Graphics2D createGraphics2D(int w, int h) {
        Graphics2D g2 = null;
        if (bimg == null || bimg.getWidth() != w || bimg.getHeight() != h) {
            bimg = (BufferedImage) createImage(w, h);
        }
        g2 = bimg.createGraphics();
        g2.setBackground(getBackground());
        g2.clearRect(0, 0, w, h);
        return g2;
    }

    public void paint(Graphics g) {
        Dimension d = getSize();
        Graphics2D g2 = createGraphics2D(d.width, d.height);
        drawDemo(d.width, d.height, g2);
        g2.dispose();
        g.drawImage(bimg, 0, 0, this);
    }

    // overrides imageUpdate to control the animated gif's animation
    public boolean imageUpdate(Image img, int infoflags,
                               int x, int y, int width, int height) {
        if (isShowing() && (infoflags & ALLBITS) != 0)
            repaint();
        if (isShowing() && (infoflags & FRAMEBITS) != 0)
            repaint();
        return isShowing();
    }

    public static void main(String argv[]) {
        final DukeAnim demo = new DukeAnim();
        demo.init();
        JFrame f = new JFrame("Java 2D(TM) Demo");
        f.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) { System.exit(0); }
        });
        f.getContentPane().add("Center", demo);
        f.setSize(new Dimension(400, 300));
        f.setVisible(true);
    }
}


Autodesk 3ds Max, formerly 3D Studio MAX, is a modeling, animation and rendering package developed by Autodesk Media and Entertainment. It has modeling capabilities and a flexible plug-in architecture, and runs on the Microsoft Windows platform. It is used by video game developers, TV commercial studios and architectural visualization studios, as well as for movie effects and movie pre-visualization.

In addition to its modeling and animation tools, the latest version of 3ds Max also features shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle systems, radiosity, normal map creation and rendering, global illumination, a customizable interface, and its own scripting language.


3D Modeling

Autodesk 3ds Max and Autodesk 3ds Max Design software have one of the richest 3D modeling toolsets in the industry:

Efficiently create parametric and organic objects with polygon, spline, and NURBS-based modeling features.

Liberate your creativity with more than 100 advanced polygonal modeling and freeform 3D design tools in the Graphite modeling toolset.

Precisely control the number of faces or points in your object with ProOptimizer technology and reduce a selection's complexity by up to 75 percent without loss of detail.

Articulate minute details and optimize meshes for both interactive manipulation and rendering using subdivision surfaces and polygon smoothing.

Shading & Texturing

Access a vast range of texture painting, mapping, and layering options, while more easily keeping track of your assets within a scene:

Perform creative texture mapping operations, including tiling, mirroring, decal placement, blurring, spline mapping, UV stretching, relaxation, Remove Distortion, Preserve UV, and UV template image export.

Design and edit simple to complex shading hierarchies with the Slate material editor, taking advantage of extensive libraries of textures, images, image swatches, and procedural maps.

Bake each object's material and lighting into new texture maps with the Render to Texture functionality.


Create intelligent, believable characters and high
quality animations by tapping into a
sophisticated tool

Leverage procedural animation and rigging with CAT (Character Animation Toolkit),
biped, and crowd
animation functionality.

Use the Skin modifier and CAT Muscle to help achieve more precise, smoother control
of skeletal deformation as bones move.

ig complex mechanical assemblies and characters with custom skeletons using 3ds Max
bones, inverse kinematics (IK) solvers, and customizable rigging tools.


P a g e

Wire one

and two
way relationships between controllers to create simplified animation

Animate CAT, biped, and 3ds Max objects in layers to tweak dense motion capture data
without compromising underlying keyframes.


Rendering

Achieve stunning image quality in less time with powerful 3D rendering software capabilities:

Create high-fidelity pre-visualizations, animatics, and marketing materials with the innovative, new Quicksilver high-performance renderer.

Quickly set up advanced photorealistic lighting and custom shaders with the mental ray® rendering engine.

Take advantage of idle processors to finish rendering faster with unlimited batch rendering in mental ray.

Visualize and manipulate a given region in both the viewport and Framebuffer with Reveal™ functionality.

Output multiple passes simultaneously from supported rendering software, including high dynamic range (HDR) data from architecture and design materials, for reassembly in 3ds Max® Composite.

3ds Max SDK

The Autodesk 3ds Max SDK (Software Developer Kit) can be used to help extend and implement virtually every aspect of the Autodesk 3ds Max application, including scene geometry, animation controllers, camera effects, and atmospherics. Developers can create new scene components, control the behavior of existing components, and export scene data to custom data formats. They can also leverage a new managed .NET plug-in loader, making it easier to develop plug-ins in C# or other .NET languages. With more than 200 sample plug-in projects, 3ds Max's comprehensive SDK offers both deep and broad access to satisfy even the most demanding production scenarios.

Other notable features include:

Track View: Curve Editor and Dope Sheet.

Spline and 2D modeling tools.

Subdivision surfaces and polygon smoothing.


CU Animate System

Under support from National Science Foundation Information Technology Research and Interagency Education Research grants, additional modalities have been developed to enable conversational interaction with animated agents.

1) Character Animator: The character animation module receives a string of symbols (phonemes, control commands) with start and end times from the TTS server, and produces visible speech, facial expressions, and hand and body gestures in synchrony with the speech.

Our facial animation system, CU Animate, is a toolkit designed for research, development, and real-time rendering of 3-D animated characters. Eight engaging full-body characters and Marge, the dragon shown in Fig. 3, are included with the toolkit. Each character has a fully articulated skeletal structure, with sufficient polygon resolution to produce natural animation in regions where precise movements are required, such as the lips, tongue, and finger joints. Characters produce lifelike visible speech, facial expressions, and gestures. CU Animate provides a GUI for designing arbitrary animation sequences. These sequences can be tagged (as icons representing the expression or movement) and inserted into text strings, so that the character will produce the desired speech and gestures while narrating text or conversing with the user.

Accurate visible speech is produced in CU Animate characters using a novel approach based on motion capture data collected from markers attached to a person's lips and face while the person says words that contain all sequences of phonemes (or the visual configurations of the phonemes, called visemes) in their native language. The motion capture procedure produces a set of 8 points on the lips, each represented by an x, y and z coordinate, captured at 30 frames per second. These sequences are stored as "diviseme" sequences, representing the transition from the middle of one visually similar phoneme class to the middle of another such class. To synthesize a new utterance, we identify the desired phoneme sequence to be produced (exactly as done in TTS synthesis systems), and then locate the corresponding sequences of viseme motion capture frames. Following procedures used to achieve audio diphone TTS synthesis, we concatenate the diviseme sequences, joining them from the middle (most steady-state portion) of one phoneme to the middle of the following phoneme. By mapping the motion capture points from these concatenated sequences to the vertices of the polygons on the lips and face of the model, we can control the movements of the lips of the 3-D model to mimic the movements of the original speaker when producing the divisemes within words. This approach produces natural-looking visible speech, which we are now evaluating relative to videos of human talkers.

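The diviseme concatenation step described above can be sketched in a few lines of Java. This is an illustrative outline under assumed data structures (a map from phoneme pairs to recorded frame sequences), not the actual CU Animate implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of diviseme concatenation: for each adjacent phoneme
// pair in the utterance, look up the recorded lip-point frames for the
// transition and append them. The data structures here are assumptions,
// not CU Animate's actual API.
public class DivisemeSynth {

    // Maps a phoneme pair like "aa-b" to its motion-capture frame sequence.
    // Each frame is one array of lip-point coordinates.
    private final Map<String, List<double[]>> divisemes = new HashMap<>();

    public void addDiviseme(String from, String to, List<double[]> frames) {
        divisemes.put(from + "-" + to, frames);
    }

    // Concatenates the diviseme sequences covering the phoneme string,
    // middle of one phoneme to middle of the next, as in diphone TTS.
    public List<double[]> synthesize(List<String> phonemes) {
        List<double[]> out = new ArrayList<>();
        for (int i = 0; i + 1 < phonemes.size(); i++) {
            String key = phonemes.get(i) + "-" + phonemes.get(i + 1);
            List<double[]> seg = divisemes.get(key);
            if (seg != null) {
                out.addAll(seg);
            }
        }
        return out;
    }
}
```

A real system would also smooth the joins between segments and map the resulting points onto the face mesh, as the text describes.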


Below are screenshots of the look and feel of the Virtual Tutor application. Shown below is what the user sees after starting up the application. The virtual tutor introduces herself, then informs the user of what she will be instructing him/her on and lets them know that there is a quiz to be taken at the end of the instruction.

In this screenshot we have a diagram that is being displayed and Julie explaining the contents of the diagram (shown on the next page).

The next screenshot is of the quiz window. After the lecture is concluded, the user clicks the quiz button circled below.

Once the quiz is loaded, the user can complete it; as they answer questions the application provides them with positive or negative audio feedback. After the quiz is completed, the agent tells the user whether they passed or failed, and a count of how well they performed is displayed (portion in circle).


There is great need for accessible and effective intelligent tutoring systems that can improve learning by children and adults. The proposed work will inform the design of pedagogical agents that can produce more engaging and effective learning experiences.





References

R. Atkinson, "Optimizing Learning From Examples Using Animated Pedagogical Agents," Journal of Educational Psychology, no. 2, p. 416, 2002. [online] Academic Search Premier Database [Accessed: August 11, 2009].

A. L. Baylor, R. Cole, A. Graesser and L. Johnson, "Pedagogical agent research and development: Next steps and future possibilities," in Proceedings of AI-ED (Artificial Intelligence in Education), July 2005.

A. L. Baylor and S. Kim, "Designing nonverbal communication for pedagogical agents: When less is more," Computers in Human Behavior, vol. 25, no. 2, pp. 450-457, 2009.

A. L. Baylor and J. Ryu, "Does the presence of image and animation enhance pedagogical agent persona?" Journal of Educational Computing Research, vol. 28, no. 4, p. 395, 2003.

A. L. Baylor and R. B. Rosenberg-Kima, "Interface agents to alleviate online frustration," in International Conference of the Learning Sciences, Bloomington, Indiana, 2006.

A. L. Baylor, R. B. Rosenberg-Kima and E. A. Plant, "Interface Agents as Social Models: The Impact of Appearance on Females' Attitude toward Engineering," in Conference on Human Factors in Computing Systems (CHI) 2006, Montreal, Canada, 2006.

J. Cassell, Y. Nakano, T. Bickmore, C. Sidner and C. Rich, "Annotating and generating posture from discourse structure in embodied conversational agents," in Workshop on representing, annotating, and evaluating non-verbal and verbal communicative acts to achieve contextual embodied agents, Agents 2001 Conference, Montreal, Quebec, 2001.

R. E. Clark and S. Choi, "Five Design Principles for Experiments on the Effects of Animated Pedagogical Agents," J. Educational Computing Research, vol. 32, no. 3, pp. 209-225, 2005.

R. Cole, J. Y. Ma, B. Pellom, W. Ward and B. Wise, "Accurate Automatic Visible Speech Synthesis of Arbitrary 3D Models Based on Concatenation of Diviseme Motion Capture Data," Computer Animation & Virtual Worlds, vol. 15, no. 5, pp. 485-500, 2004.

R. Cole, S. van Vuuren, B. Pellom, K. Hacioglu, J. Ma, J. Movellan, S. Schwartz, D. Wade-Stein, W. Ward and J. Yan, "Perceptive Animated Interfaces: First Steps Toward a New Paradigm for Human Computer Interaction," Proceedings of the IEEE: Special Issue on Human Computer Interaction, vol. 91, no. 9, pp. 1391-1405, 2003.

M. J. Davidson, "PAULA: A Computer-Based Sign Language Tutor for Hearing Adults," 2006. [Accessed: June 15, 2008].

D. M. Dehn and S. Van Mulken, "The impact of animated interface agents: a review of empirical research," International Journal of Human-Computer Studies, vol. 52, pp. 1-22, 2000.

A. Graesser, K. Wiemer-Hastings, P. Wiemer-Hastings and R. Kreuz, "AutoTutor: A simulation of a human tutor," J. Cognitive Syst. Res., vol. 1, pp. 35-51, 1999.

A. C. Graesser and X. Hu, "Teaching with the Help of Talking Heads," Proceedings of the IEEE International Conference on Advanced Learning Techniques, pp. 460-461, 2001.

A. C. Graesser, K. VanLehn, C. P. Rosé, P. W. Jordan and D. Harter, "Intelligent tutoring systems with conversational dialogue," AI Mag., vol. 22, no. 4, pp. 39-51, 2001.

A. Graesser, M. Jeon and D. Dufty, "Agent Technologies Designed to Facilitate Interactive Knowledge Construction," Discourse Processes, p. 298.

P. M. Greenfield and R. R. Cocking, Interacting with video: Advances in applied developmental psychology, vol. 11, Norwood, NJ: Ablex Publishing Corp., 1996, p. 218.

X. Hu and A. C. Graesser, "Human use regulatory affairs advisor (HURAA): Learning about research ethics with intelligent learning modules," Behavior Research Methods, Instruments, & Computers, no. 2, pp. 241-249, 2004.

W. L. Johnson, "Pedagogical Agents," in Proceedings of the Sixth International Conference on Computers in Education, China, 1998. [online] [Accessed: June 15, 2008].

W. L. Johnson and J. T. Rickel, "Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments," International Journal of Artificial Intelligence in Education, vol. 11, pp. 47-78, 2000.

Y. Kim and A. Baylor, "Pedagogical Agents as Learning Companions: The Role of Agent Competency and Type of Interaction," Educational Technology Research & Development, vol. 54, no. 3, pp. 223-243, 2006.

A. Laureano-Cruces, J. Ramírez-Rodríguez, F. De Arriaga and R. Escarela-Pérez, "Agents control in intelligent learning systems: The case of reactive characteristics," Interactive Learning Environments, no. 2, pp. 95-118.

M. Lee and A. L. Baylor, "Designing Metacognitive Maps for Web-Based Learning," Educational Technology & Society, vol. 9, no. 1, pp. 344-348, 2006.

J. C. Lester, S. A. Converse, S. E. Kahler, S. T. Barlow, B. A. Stone and R. S. Bhogal, "The persona effect: Affective impact of animated pedagogical agents," in Proceedings of CHI '97, pp. 359-366, 1997.

J. C. Lester, B. A. Stone and G. D. Stelling, "Lifelike Pedagogical Agents for Mixed-Initiative Problem Solving in Constructivist Learning Environments," User Modeling and User-Adapted Interaction, vol. 9, pp. 1-44, 1999.

J. C. Lester, J. L. Voerman, S. G. Towns and C. B. Callaway, "Deictic Believability: Coordinated Gesture, Locomotion, and Speech in Lifelike Pedagogical Agents," Applied Artificial Intelligence, vol. 13, no. 4, p. 414, 1999.

M. Louwerse, A. Graesser, L. Shulan and H. H. Mitchell, "Social Cues in Animated Conversational Agents," Applied Cognitive Psychology, vol. 19, pp. 693-704, 2005.

J. Ma, J. Yan and R. Cole, "CU Animate: Tools for Enabling Conversations with Animated Characters," in International Conference on Spoken Language Processing (ICSLP), Denver, 2002.

J. Ma, R. Cole, B. Pellom, W. Ward and B. Wise, "Accurate Automatic Visible Speech Synthesis of Arbitrary 3D Models Based on Concatenation of Di-Viseme Motion Capture Data," Journal of Computer Animation and Virtual Worlds, vol. 15, no. 5, pp. 485-500, 2004.

J. Ma and R. Cole, "Animating Visible Speech and Facial Expressions," Visual Computer, vol. 20, no. 2, p. 86.

V. Mallikarjunan, "Animated Pedagogical Agents for Open Learning Environments," 2003. [online] [Accessed: December 9, 2009].

S. C. Marsella and W. L. Johnson, "An instructor's assistant for team training in dynamic multi-agent virtual worlds," in Proceedings of the Fourth International Conference on Intelligent Tutoring Systems (ITS), no. 1452 in Lecture Notes in Computer Science, pp. 464-473, 1998.

D. W. Massaro, "Symbiotic value of an embodied agent in language learning," in Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS'04), Track 5, vol. 5, 2004.

"Animated 3-D Boosts Deaf Education; 'Andy' The Avatar Interprets By Signing," 2001. [online] Available: [Accessed: April 11, 2008].

A. Nijholt, "Towards the Automatic Generation of Virtual Presenter Agents," Informing Science Journal, vol. 9, pp. 97-110, 2006.

M. A. S. N. Nunes, L. L. Dihl, L. C. de Olivera, C. R. Woszezenki, L. Fraga, C. R. D. Nogueira, D. J. Francisco, G. J. C. Machado and M. G. C. Notargiacomo, "Animated Pedagogical Agent in the Intelligent Virtual Teaching Environment," Interactive Educational Multimedia, vol. 4, pp. 53-61, 2002.

L. C. de Olivera, M. A. S. N. Nunes, L. L. Dihl, C. R. Woszezenki, L. Fraga, C. R. D. Nogueira, D. J. Francisco, G. J. C. Machado and M. G. C. Notargiacomo, "Animated Pedagogical Agent in Teaching Environment," [online] Available: [Accessed: June 30, 2008].

N. K. Person, A. C. Graesser, R. J. Kreuz, V. Pomeroy and the Tutoring Research Group, "Simulating human tutor dialog moves in AutoTutor," International Journal of Artificial Intelligence in Education, in press, 2001.

P. Suraweera and A. Mitrovic, "An Animated Pedagogical Agent for SQL-Tutor," 1999. Available: [Accessed: August 11].

J. Ma, J. Yan and R. Cole, "CU Animate: Tools for enabling conversations with animated characters," presented at the Int. Conf. Spoken Language Processing, Denver, CO, 2002.

Autodesk 3ds Max, "3ds Max core features," 2010. Available: [Accessed: September 26, 2010].