A Reflective Learning Framework for Programming


Lichao Li




A Thesis Submitted for the Degree of Master of Science (Research)


School of Information Technologies

The University of Sydney


April, 2007







Abstract

Learner reflection has been found to be important and valuable in learning, especially in cognitively demanding tasks, such as learning programming. This thesis presents an online learning system, Reflect, which supports learning to solve small problems in this demanding area and emphasizes encouraging learner reflection in the learning process.


Reflect is essentially a learning system that supports problem solving based on synthesis as well as learning from examples. It incorporates a knowledge layer to support several novel reflective elements that promote different types of learner reflection. These elements are embedded in each major part of the system. The knowledge layer enables Reflect to build learner models, which are based on the main learning objectives defined in an ontology by teachers. Student interaction with Reflect provides the evidence used to build a model of the learner’s knowledge of each of the learning goals. This information, when visualized, provides students with informative feedback on their learning progress. The knowledge layer also enables the creation of reflective elements as part of the Reflect task specification and the process of submitting solutions, and provides linkage between the reflective elements and automated grading of students’ solutions.


We conducted both quantitative and qualitative analyses to evaluate the system and our approach to promoting learner reflection in the Reflect system. An investigation of general system usage reveals that, within the context of an authentic learning environment, students used the system as expected, and sixty percent of all students found it helpful for preparing for exams. A study of students’ attitudes to each of the key reflective elements of Reflect was also conducted. The results indicate that most students had favourable opinions of all these elements. Notably, both the higher achieving group and the lower achieving group saw the greatest value in the automatic testing aspects. However, the lower achieving group placed greater value on Reflect’s support for learning from examples. In addition, we conducted a detailed qualitative study of the ways students used several of the reflective elements of Reflect, based on a selected group of students with diverse levels of achievement. This points to differential benefits, in both attitudes and behaviour, for the higher versus the lower achieving groups.


Contents

Chapter 1 Introduction ........................................ 1
1.1 Motivation ................................................ 3
1.2 Context of the Research ................................... 3
1.3 A Definition of Terms ..................................... 4
1.4 Thesis Aims and Research Contribution ..................... 7
1.5 Thesis Content ............................................ 7
Chapter 2 Background .......................................... 9
2.1 Student Self-assessment ................................... 9
2.2 User Modeling in Intelligent Tutoring Systems ............ 10
2.3 Learner Reflection, Scrutable Learner Models and Promoting Learner Reflection in ITS ... 14
2.4 Diagnosis of Students’ Solutions in ITS for Programming .. 23
2.5 Learning from Examples ................................... 26
Chapter 3 System Overview: Learner Experience ................ 28
3.1 Problem Statement and Learning Objectives ................ 30
3.2 The Writing Phase ........................................ 32
3.3 The Read Phase ........................................... 35
3.4 Feedback and Student Profile ............................. 37
Chapter 4 Teacher’s Use of Reflect ........................... 43
4.1 Aggregate Information .................................... 43
4.2 Individual Information ................................... 44
Chapter 5 Author’s View of Reflect ........................... 48
5.1 Defining Learning Objectives ............................. 49
5.2 Creating New Tasks ....................................... 50
5.3 Stage One: Create Task Statement and Associate Learning Concepts ... 51
5.4 Stage Two: Edit Marking Criteria ......................... 53
5.5 Stage Three: Create Example Solutions .................... 54
5.6 Stage Four: Set Up Automatic Student Solution Assessment . 58
Chapter 6 Evaluation ......................................... 62
6.1 Overview of the Evaluations .............................. 63
6.2 General System Usage ..................................... 65
6.3 Survey ................................................... 69
6.4 Student Self-assessment Behaviour Analysis ............... 86
6.5 Summary .................................................. 93
Chapter 7 Conclusions ........................................ 94
7.1 Further Directions ....................................... 98
Bibliography ................................................ 101
Appendix A .................................................. 106
Appendix B .................................................. 112


List of Figures

Figure 1 Skill meter of the LISP tutor ... 15
Figure 2 Course progress visualization of ELM-ART II ... 16
Figure 3 Visualization of the learner model of the SQL-tutor ... 17
Figure 4 Interface of e-KERMIT displaying the student model (The screen is divided into three sections: the top section is the problem statement, the middle section is the working space where students can design entity relationship data models and the bottom section provides feedback to students including the four skillometers) ... 18
Figure 5 A screenshot of ViSMod showing a fragment of a Bayesian student model ... 20
Figure 6 A textual display of the learner model of Mr. Collins and its negotiation process ... 22
Figure 7 Student Homepage ... 29
Figure 8 Learning Objectives and Their Meaning in the Course Glossary ... 30
Figure 9 Problem Statement ... 31
Figure 10 Write First ... 33
Figure 11 Read First ... 33
Figure 12 Self-assessment ... 33
Figure 13 Example Solution ... 36
Figure 14 Comparison of Student’s (Yours) and Teacher’s (Ours) Assessments of an Example and a Measure of the Discrepancy between Them ... 37
Figure 15 Previous Solutions ... 38
Figure 16 SIV Display of the Student Model ... 40
Figure 17 More Useful Statistics ... 40
Figure 18 The HTML Version of the Learner Model Display ... 41
Figure 19 Students’ View of Reflect ... 42
Figure 20 Aggregate Class Information ... 44
Figure 21 Summary of a Student’s Work ... 45
Figure 22 A Student’s Submitted Solution ... 46
Figure 23 A Student’s Assessed Example Solution ... 46
Figure 24 The Author’s Homepage ... 48
Figure 25 Edit Learning Objectives ... 50
Figure 26 New Task Creation ... 50
Figure 27 Stage One of Task Creation ... 52
Figure 28 The Problem Statement in a Popup Window ... 52
Figure 29 Edit Marking Criteria ... 54
Figure 30 Edit Example Solutions ... 54
Figure 31 Create New Example ... 55
Figure 32 Edit Additional Marking Criteria ... 56
Figure 33 Assess Example Solution ... 58
Figure 34 Upload Testing Material ... 59
Figure 35 View Entire Task ... 60
Figure 36 Task Creation Process in Reflect ... 60
Figure 37 Students’ Submissions from Week 2 to Exam ... 67
Figure 38 Students’ Submissions and Exam Marks ... 68
Figure 39 The Number of Students who Explored Their User Profiles ... 71
Figure 40 The Number of Students that Agree/Disagree with Their User Profiles ... 73
Figure 41 Students’ Opinions on the Usefulness of Self-Rating ... 75
Figure 42 Students’ Opinions of Reflect’s Self-assessment Approach ... 77
Figure 43 Students’ Opinions of the Value of Example Solutions ... 79
Figure 44 Students’ Opinions of Reflect’s Approach to Automatic Evaluation of Programming Code ... 81
Figure 45 Comparison of Stronger and Weaker Students’ Answers to Questions 3, 4 and 6 ... 83
Figure 46 Stronger Students’ Answers to Questions 3, 4 and 6 ... 83
Figure 47 Weaker Students’ Answers to Questions 3, 4 and 6 ... 83



List of Tables

Table 1 Number of Tasks Published per Week ... 65
Table 2 Students’ Answers to Survey Question: Have you explored your user profile? ... 70
Table 3 Students’ Answers to Survey Question: Have you explored your user profile? (2) ... 70
Table 4 Students’ Answers to Survey Question: Do you think the user profile matches your own beliefs of your knowledge? ... 72
Table 5 Students’ Answers to Survey Question: Do you think the user profile matches your own beliefs of your knowledge? (2) ... 72
Table 6 Students’ Answers to Survey Question: Do you find self-rating of knowledge helps you realise the relevant things to focus on? ... 74
Table 7 Students’ Answers to Survey Question: Do you find self-rating of knowledge helps you realise the relevant things to focus on? (2) ... 74
Table 8 Students’ Answers to Survey Question: Do you find self-assessment of solutions helps you think about the quality of your answer? ... 76
Table 9 Students’ Answers to Survey Question: Do you find self-assessment of solutions helps you think about the quality of your answer? (2) ... 76
Table 10 Students’ Answers to Survey Question: Did you find that the sample solutions help you learn? ... 78
Table 11 Students’ Answers to Survey Question: Did you find that the sample solutions help you learn? (2) ... 79
Table 12 Students’ Answers to Survey Question: Did you find automated grading helpful? ... 80
Table 13 Students’ Answers to Survey Question: Did you find automated grading helpful? (2) ... 80
Table 14 Students’ Answers to Question 6 and Their Average Exam Marks ... 85
Table 18 Second and Third Marking Criteria of the Shortstrings Task ... 88
Table 19 Fourth, Fifth, Sixth and Eighth Marking Criteria of the Shortstrings Task ... 88
Table 20 Test Cases and the Related Marking Criteria ... 89
Table 21 Conscientious Students’ Submissions to the Shortstrings Task ... 90


Chapter 1

Introduction

This thesis explores methods to improve support for learner reflection in an online learning system that supports learning to solve small problems in a demanding area such as computer programming.


Learning is a process of gaining understanding through the acquisition of knowledge and skills, through study and experience. It is a very complex cognitive process, and many factors contribute to a successful learning experience. It is common knowledge that, in both formal and informal education, there are always learners who learn more efficiently than others. This is, of course, affected by the learners’ prior knowledge of the subject. However, a good learning strategy also contributes to efficient learning. Knowing how to learn is a valuable skill that differentiates expert learners from novice learners (Ertmer & Newby, 1996).


Here is an example to illustrate the difference between a more effective learner and a less effective learner. It draws on the large body of work on what differentiates more effective learners from others, for example (Boud, Keogh, & Walker, 1985; Ertmer & Newby, 1996; Weinstein & Stone, 1993). Two students are preparing for an end of semester history examination. One student starts revision by reading the textbook from chapter one to the end and doing every exercise in the book. A few days before the exam, the student finds himself/herself running out of time. Then he/she starts to read a lot faster, skipping content and exercises at random. The day before the exam, the student finds himself/herself barely remembering anything he/she has read. The other student, however, starts revision by thinking through the following questions:

- What is my goal?
- What do I already know about the subject?
- How much time will it take me to study?
- What strategies work best for me to study for the exam (will it be more efficient to read more example solutions or to solve more problems myself)?


This student returns to these questions repeatedly, at the beginning of each study session, to plan the next stage of study. They also make a strategic, and possibly more effective, revision plan at the start, and they remain aware of the learning goals, what they have learned, how much time is left and the current learning strategy. Although both students work hard for the exam and both are dedicated to their study, the second student has greater control over the process and is more strategic, with greater potential to find the learning experience much more efficient and enjoyable, and seems more likely to achieve a better result in the exam.


In summary, more efficient learners tend to take conscious control of their learning: planning and selecting strategies, monitoring the progress of learning, correcting errors, analyzing the effectiveness of learning strategies, and changing learning strategies and behaviors when needed. On the other hand, less efficient learners often fail to stop to evaluate their comprehension of the material. They generally do not examine the quality of their work or stop to make revisions as they go along. Furthermore, better learners are more aware of when they need to check for errors, why they fail to comprehend, and how they need to redirect their efforts (Ertmer & Newby, 1996).


The type of activity that expert learners engage in is often described as metacognitive. Metacognition (Flavell, 1987) consists of two basic simultaneous processes: monitoring the learning progress, and making changes and adapting learning strategies when one is not doing well. It is about self-reflection, self-responsibility, initiative, goal setting and time management. The capacity to use such metacognitive skills is developed to different degrees in different people. Not all learners apply these activities in their learning; in fact, many learners are not aware of them. The benefit of developing metacognitive activities is that, as learners become more skilled at applying metacognitive strategies in their learning, they gain confidence and become more independent learners, able to pursue their own intellectual needs without the persuasion of a teacher. The role of the teacher is therefore to acknowledge, develop and enhance the metacognitive capabilities of all learners.


There is a wide range of metacognitive activities. In this thesis, we concentrate on helping learners to become aware of the value of reflection and to develop their ability to self-reflect as a means of enhancing their learning of a cognitively demanding synthesis skill, like programming. Self-reflection is an important element of metacognition. It allows learners to be aware of their knowledge state, so they can adapt their learning strategy. We believe that if learners are encouraged, with the right methods, to self-reflect in their learning, they will become more self-reflective and thereby enhance their learning efficiency.


1.1 Motivation

In both the literature and our experience in teaching programming, it is clear that some students are unable to comprehend topics that are considered easy by other students, even though both groups try hard and dedicate an appropriate amount of time to studying these topics. This suggests that the low achieving students may have used unproductive learning strategies. It therefore seems promising to explore the possibility that learning effectiveness can be enhanced by using more efficient learning strategies, and developing the ability to self-reflect is a key factor in becoming an efficient learner.


There is a large body of research evidence suggesting that learning effectiveness can be enhanced when students pay attention to their own learning by reflecting on the state of their knowledge and the learning process (Schön, 1983; Boud, Keogh, & Walker, 1985; Schön, 1987; Pirolli & Recker, 1993). As described by Boud, Keogh, and Walker (1985), learner reflection is a generic term for those intellectual and affective activities in which individuals engage to explore their experiences in order to lead to a new understanding and appreciation. This ability to self-reflect is developed to different degrees in different people. Some learners are able to apply their relevant previous experiences at the right moment, while others cannot. However, if a teacher can actively encourage learners to correctly apply their existing knowledge at the right moment, this may be beneficial for a student who lacks the ability to do so themselves. Moreover, the student will have the opportunity to learn how to self-reflect, thereby becoming more capable and proficient in learning.


Therefore, the main motivation for this research project is to help students enhance their learning efficiency by using a number of methods to promote learner reflection in an online learning system.


1.2 Context of the Research

The research described in this thesis is conducted in the context of programming education. This is mainly because learner reflection is particularly valuable for cognitively demanding subjects. Learning programming, even at the most introductory level, is cognitively challenging (Bonar & Soloway, 1983). Moreover, recent research has found that a majority of students in introductory programming courses do not learn how to program at the expected skill level (Lister, et al., 2004; McCracken, et al., 2001).


For this research project, a system called Reflect has been developed as a prototype designed to teach students programming techniques and to promote learner reflection in the teaching/learning process. There are several earlier implementations of this system (Kay, Li, & Fekete, 2007; Li & Kay, 2005b); these versions were more advanced forms of a student self-assessment tool that enabled students to self-assess their knowledge, monitor their learning progress and judge whether they had achieved the required learning outcomes. These were self-assessing and self-reflective in nature.


Earlier versions of Reflect were called Assess, which began as a tool for supporting the reading of example answers to design tasks. By the time of this thesis, the changes in functionality and software engineering pragmatics meant that the new Reflect system needed a new design and was a re-implementation of the earlier systems. While still retaining the earlier systems’ goals and some of the earlier ideas and features, it introduces many new functionalities and a set of new user interfaces. The most important improvement in the new system is the introduction of a consistent focus on applying reflection throughout use of the system. The system design is centered on the notion of learner reflection. It employs various techniques such as user/learner modeling, student self-evaluation and computer aided assessment. These will be discussed in more detail in later chapters.


Reflect has been used in a second year C programming subject as a learning aid. We gathered valuable feedback from students and conducted several qualitative studies to evaluate our system.


1.3 A Definition of Terms

Descriptions of some terms commonly used in this thesis that may be unfamiliar to the reader are provided here.



User Modeling

A user model is a representation of a set of beliefs about the user, particularly their characteristics, knowledge, beliefs and/or preferences. Early user modelling was often performed by the application system, rather than by a distinct logical entity (Wahlster & Kobsa, 1989). There was often no clear distinction between system components that served user modelling purposes and components that performed other tasks. Information about a user was extracted from normal interaction between the user and the system. This information was used later to adapt the system to the user’s knowledge, characteristics, beliefs and/or preferences. For example, a program can ask whether a user knows about X, and at a later stage decide that, if the user knows about X, he/she would probably want to learn about Y. Such implicit user models are not open to inspection at any level.


Explicit user models, on the other hand, can either be standalone applications dedicated to the task of user modeling, such as a user modeling shell system (Kobsa, 2000), or they can be distinct modules within applications. There are two main types of explicit user models, cognitive and pragmatic (Kay, 1999). Cognitive user modeling attempts to match the way that people actually think and know. It is of much interest to psychologists and educators. An important early example is the knowledge tracing module (Corbett & Anderson, 1995) in the LISP Tutor (Anderson & Reiser, 1985), which is based on the Adaptive Control of Thought (ACT) theory of knowledge acquisition (Anderson, 1983). This type of user model is a well-defined artificial construction of the psychologist-programmer. However, cognitive user modeling is computationally expensive and the model is difficult to construct. Therefore, in many cases, a more pragmatic user modeling approach is simpler and sufficient. The user model in this thesis is a pragmatic one. An explicit user model allows a system to store relevant information about a user and use this accumulated information to adapt to the user’s needs. Without it, decisions about adaptation of the system can be made only on the basis of snapshots of observed learner behaviour. Another major advantage of an explicit user model over an implicit one is that it can be scrutinized, thereby offering potential educational value. This will be discussed in Chapter 2.
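
To make this concrete, here is a minimal sketch of an explicit user model as a distinct, inspectable data structure, written in Python purely for illustration; it is not taken from Reflect or any system discussed in this thesis, and the update rule and its weight are assumptions chosen for simplicity.

    # A minimal sketch of an explicit user model: a distinct structure that
    # stores beliefs about a user's knowledge, separate from application logic.
    from dataclasses import dataclass, field

    @dataclass
    class ExplicitUserModel:
        beliefs: dict = field(default_factory=dict)  # concept -> belief in [0, 1]

        def record_evidence(self, concept, correct, weight=0.2):
            # Nudge the stored belief toward 1.0 on correct evidence, 0.0 otherwise.
            old = self.beliefs.get(concept, 0.5)
            target = 1.0 if correct else 0.0
            self.beliefs[concept] = old + weight * (target - old)

        def scrutinize(self):
            # Because the model is explicit, it can be displayed to the learner.
            return "\n".join(f"{c}: {b:.0%}" for c, b in sorted(self.beliefs.items()))

    model = ExplicitUserModel()
    model.record_evidence("pointers", correct=True)
    model.record_evidence("linked lists", correct=False)
    print(model.scrutinize())

Unlike an implicit model buried in application code, such a structure can be stored, queried and, importantly for this thesis, shown to the learner.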


Learner Reflection

There are many different types of self-knowledge that can facilitate learning. The student could profitably seek to answer questions like these (Kay, 1997):

- What do I know?
- How well do I know a particular aspect, X?
- What do I want to know? Or, do I want to know a particular aspect, Y?
- How can I best learn X?

A good teacher would encourage students to think about these questions, as educational theory suggests that learning efficiency can be enhanced when students pay attention to their own learning by reflecting on the state of their knowledge and the learning process. Learner reflection is a type of metacognitive activity (Flavell, 1971). It is a cognitive mode of consciously thinking, analyzing and learning, and a form of response to the learner’s experience. There is a large body of evidence suggesting that learning effectiveness can be enhanced when learners pay attention to their own learning experiences by reflecting on the state of their knowledge and the learning process (Boud, Keogh, & Walker, 1985; Ertmer & Newby, 1996; Schön, 1983; Schön, 1987).


Schön identified two main types of learner reflection, namely “reflection-in-action” and “reflection-on-action” (Schön, 1983; Schön, 1987), that may occur in any learning experience. Reflection-in-action is the type of reflection that is triggered by breakdowns (i.e. unique situations where learners cannot apply any known theories or techniques) that occur in any learning activity. For example, suppose a novice C programming student is making a first attempt to write code that traverses a linked list. Suppose the student does not know how to determine that the traversal is complete (i.e. when it reaches the end of the linked list): this could cause the student to stop, analyze the situation and think about a solution. It is likely that the student will repeat this process several times when trying different approaches and eventually reach a correct solution. Reflection-on-action refers to the type of thinking and analyzing that takes place after a learning experience has occurred, to gain new knowledge from the process. For instance, continuing from the previous example, after the student reaches a correct solution for the linked list traversal task, he/she can reflect on this particular problem solving experience and learn the best way to solve this type of problem. It has been found that such metacognitive activities can facilitate problem solving, see for example (Davidson, Deuser, & Sternberg, 1994), and researchers argue that students can enhance their learning by becoming aware of their own thinking as they read, write and solve problems (Paris & Winograd, 1990).
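
For concreteness, the breakdown in this example is the terminating condition of the traversal: the loop must stop at the sentinel that marks the end of the list. A minimal sketch follows (in Python rather than the C the example assumes, for consistency with the other sketches in this document):

    # Traversal stops when the end-of-list sentinel (None; NULL in C) is reached.
    class Node:
        def __init__(self, value, next_node=None):
            self.value = value
            self.next = next_node

    def traverse(head):
        current = head
        while current is not None:  # the traversal is complete at the sentinel
            print(current.value)
            current = current.next

    traverse(Node(1, Node(2, Node(3))))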



1.4 Thesis Aims and Research Contribution

The overall objective of this research is to study how to develop learners’ ability to self-reflect, using the Reflect system. While accomplishing this goal, we have made the following research contributions:

1. A new learning environment, which we call Reflect, designed to support learning to solve small problems in a demanding area such as computer programming;
2. Creation of reflective elements as part of the Reflect task specification;
3. Creation of reflective elements as part of the Reflect process of submitting solutions;
4. Linkage between the reflective elements and automated grading;
5. Exploration of the use of an open learner model;
6. A comprehensive set of resources for tutors and teachers to use in reviewing overall class progress as well as the work of individual students;
7. Support for web-based authoring and updating of tasks in Reflect;
8. Within the context of an authentic learning environment, a study of students’ attitudes to each of the key reflective elements of Reflect; and
9. A detailed qualitative study of the ways students use the elements of Reflect, based on forty students with diverse ability.


1.5 Thesis Content

The remainder of the thesis describes the Reflect system, its design and its evaluation. It contains the following chapters:


Chapter 2 provides a background of existing research studies that are most relevant to the thesis. It addresses previous work in the areas of student self-assessment, learner reflection, intelligent teaching systems, learner modelling for reflection, automatic assessment of students’ solutions in programming, and programming education.


Chapter 3 provides a walkthrough of the student’s view of the Reflect system, illustrating how the system can be used by students to learn and especially how the reflective features of the system can support their learning.


Chapter 4 presents the teacher/tutor view of the system. It shows how teachers/tutors can use the
system to monitor students’ learning progress.



Chapter 5 describes Reflect from a task author’s viewpoint. It presents the set of interfaces that are used to create the teaching materials. In particular, it explains how the reflective elements seen in Chapter 3 are created for each problem.


Chapter 6 describes how the Reflect system was used by students during a period of four months, and the various qualitative studies and surveys conducted to gather students’ feedback to evaluate the system. It discusses the experimental procedures and presents observations and results. Most importantly, this chapter describes how each of the research contributions was evaluated.


Chapter 7 outlines possible future improvements to Reflect and presents conclusions.


Chapter 2

Background

This thesis describes the Reflect system, which aims to promote learner reflection. Reflect incorporates five main aspects:

- Student self-assessment;
- User/learner modeling in Intelligent Tutoring Systems (ITS);
- Promoting learner reflection with scrutable learner models in ITS;
- Diagnosis of students’ programming solutions; and
- Learning from examples.

This chapter provides background on each of these, both to put this research work into context and as a basis for describing the design of Reflect in subsequent chapters.


2.1 Student Self-assessment

Student self-assessment is a critical part of this thesis work. Therefore, a brief overview of the topic is important. It has been widely acknowledged that one of the characteristics of efficient learners is that they tend to have a realistic sense of their own strengths and weaknesses, and they use this knowledge to direct their efforts in productive directions (Boud, 1986; Schön, 1983; Schön, 1987). More able students tend to be effective self-assessors. However, in order to develop this skill more widely among learners, explicit attempts need to be made to develop this ability. The term student self-assessment refers to the involvement of students in identifying standards and/or criteria to apply to their work, and making judgments about the extent to which they have met these criteria and standards (Boud, 1991). Self-assessment can be formative, in that it contributes to the learning process and assists learners to direct their energies to areas for improvement. It may also be summative, either in the sense of learners deciding that they have learned as much as they wish in a given area, or in the sense that it may contribute to the grades awarded to learners (Boud & Falchikov, 1989). Researchers argue that by teaching students self-assessment, they can become better self-assessors and better learners (Boud, 1986).



Student self-assessment has been introduced and used in many different situations, and its procedures and effects have been examined. For example, it has been applied in foreign language learning (Oskarsson, 1980), writing (Butler, 1982), engineering education (Cowan, 1988) and college science teaching (in biology, mathematics, physics and chemistry) (Zoller, Tsaparlis, Fatsow, & Lubezky, 1997). A quantitative literature survey by Boud and Falchikov (1989) found that, in most studies, relatively able students were shown to be able to assess themselves in a way which is more or less identical to the way in which they would be assessed by their teachers, if they are asked to rate themselves on a suitable marking scale. Boud and Falchikov (1989) summarised several studies which indicate that students may improve their ability to rate themselves over time or with practice. Thus, the ability to self-evaluate can be developed even for the weaker students. Another study, by Boud and McDonald (2003), found that the introduction of self-assessment practices was well accepted by teachers and by students. Of particular interest and importance is that they found self-assessment training had a significant impact on the performance of those who had been exposed to it. On average, students with self-assessment training outperformed their peers who had been exposed to teaching without such training in all curriculum areas.


This research provides an important foundation for Reflect: it means that there is real value in providing a learning environment that helps learners to self-assess. It establishes that there are substantial potential benefits in explicitly developing students’ ability to self-assess. Our system, Reflect, provides one promising approach to giving learners opportunities for self-assessment, so that these skills can be developed and practiced in order to make them better self-assessors and, in turn, more efficient learners.

Reflect is based on an earlier system, Assess (Li & Kay, 2005a), which was initially built for this purpose, and this remains one of the principal goals of Reflect. We now move to the literature which provides the basis for the knowledge layer that distinguishes Reflect from its predecessor, the Assess system.


2.2 User Modeling in Intelligent Tutoring Systems

It has long been acknowledged that one-to-one tutoring has the potential to be a highly effective, though costly, instruction method (Bloom, 1984). The research aim of intelligent tutoring systems (ITS) is to build machine tutors that are as efficient as a human teacher in one-to-one tutoring conditions, but more affordable and accessible. An early field survey (du Boulay & Sothcott, 1987) identified the following difficulties when designing and implementing a machine tutor that is as efficient as a human tutor:



- Representing teaching goals and constructing a plan to achieve these goals;
- Monitoring the student’s actions and deciding what to do next;
- Recognizing a student’s solution and the errors in it, and providing appropriate help;
- Determining a teaching strategy and executing that strategy as a sequence of short-term teaching tactics;
- Solving the student’s problems and answering unanticipated questions; and
- Building and maintaining a knowledge model for each student.

These are still challenging and unsolved research goals today.


As the Reflect system builds and maintains a user model to monitor each individual user’s knowledge state, and makes it scrutable to promote learner reflection, this section and Section 2.3 review the literature in the areas of user/learner modeling and learner modeling for reflection in the context of ITS. As the Reflect system also evaluates students’ solutions, Section 2.4 describes the different approaches to diagnosing student solutions in various ITSs.


User Modelling in ITS

Human tutors can often provide a tailored teaching style and strategy to an individual student based on what they know about the student. For a machine tutor that tries to imitate human actions, such adaptive instruction requires dynamic modeling of the student’s knowledge state. This is often achieved by the use of explicit user models (see User Modeling in Section 1.3). User models, or learner models as they are often called in ITS, emphasize modeling learners’ knowledge, rather than their characteristics, preferences or goals. They are often at the core of an intelligent tutoring system, as they enable the ITS to provide individualized instruction and a personalised learning experience. An architecture proposed in (Self, 1974) put the learner model together with domain knowledge and a tutoring strategy to form a tripartite architecture, which is now part of the encyclopedia definition of what an ITS is (Self, 1999).


There are many types of learner models in different ITSs. In an authoritative review and overview (Holt, Dubs, Jones, & Greer, 1994), learner models were classified into these broad categories:




- Overlay models (Carr & Goldstein, 1977), in which the learner’s knowledge is modeled strictly as a subset of an expert’s knowledge model (a minimal sketch follows this list);
- Differential models (Burton & Brown, 1979), which are a modification of the overlay model and focus on the difference between the student’s knowledge and the expert’s knowledge; and
- Perturbation and bug models (Kass, 1989), which normally combine the standard overlay model with a representation of the learner’s misconceptions.
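
To make the first of these categories concrete, here is a minimal overlay-model sketch; it is purely illustrative (the concept names are hypothetical, and the representation is not taken from any of the cited systems).

    # Overlay model: the learner's knowledge is represented strictly as a
    # subset of the expert's knowledge model.
    EXPERT_MODEL = {"variables", "loops", "pointers", "linked lists", "recursion"}

    class OverlayModel:
        def __init__(self):
            self.known = set()  # invariant: always a subset of EXPERT_MODEL

        def mark_known(self, concept):
            if concept not in EXPERT_MODEL:
                raise ValueError(f"{concept!r} is not in the expert model")
            self.known.add(concept)

        def gaps(self):
            # What the expert knows that the learner apparently does not yet.
            return EXPERT_MODEL - self.known

    m = OverlayModel()
    m.mark_known("variables")
    m.mark_known("loops")
    print(sorted(m.gaps()))

A differential model would instead focus on the difference between the student’s knowledge and the parts of the expert model the student has had occasion to use, and a perturbation or bug model would add a separate collection of misconceptions alongside this subset.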

It is important to realize that there is no standard version of a learner model. They are always context-determined and content-specific. There are potential benefits in different ITSs sharing information. This is reflected in the IMS LIP (IMS, 2001) specifications, which have the potential to improve the interoperability of learner models across Internet-based teaching systems. In this section, the learner models of two well-known systems are described. Both demonstrate the classical role of the user model in ITS, although one system was developed in the middle 1980s and the other was developed ten years later.


The Lisp Tutor and Knowledge Tracing

The Lisp tutor (Anderson & Reiser, 1985; Corbett & Anderson, 1992b) is an ITS that teaches programming with the LISP programming language. It is an implementation of the ACT theory of knowledge acquisition (Anderson, 1983). The system provides an environment, similar to a text editor, in which the student can do exercises. It is intelligent in that it can recognize student actions and, if the student makes a mistake, it can immediately interrupt and provide intelligent feedback and explanations. It attempts to model each student’s knowledge state, and sometimes misconceptions, using Knowledge Tracing (Corbett & Anderson, 1995). The term describes the process of monitoring and remediating the student’s knowledge as they use the system. The tutor maintains a knowledge model for each student. It monitors each student’s performance in completing exercises and arranges the curriculum to suit the student’s rate of progress based on the information held in the model. It is essentially an overlay model that consists of a list of production rules from the expert model. For each rule, the tutor keeps an estimate of the probability that the student has learnt the rule. An initial estimate of the probability that a student will learn the rule just from reading the text is assigned to each rule when an overlay model is created for each student. This probability is updated every time the student has an opportunity to apply the rule in exercises. The new estimate of whether the student has learnt the rule depends on whether the student responds correctly to exercise questions. A probability value of 95% has been adopted for concluding that the student knows a rule. Only after the student has brought all rules above the 95% criterion does the tutor allow the student to proceed to the next section. This is to ensure mastery learning, a fundamental concept underlying the philosophy of the LISP tutor. Recall that each rule in the learner model is a piece of procedural knowledge; therefore this ensures, with high probability, that the student learns all the required knowledge. The expert model in this system has another important role: it allows the system to solve exercises along with students, and provide assistance as necessary. This will be elaborated in Section 2.4.
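
The update scheme just described can be sketched as follows. This is a reconstruction of the standard knowledge tracing equations (Corbett & Anderson, 1995); the parameter values below are illustrative, not the LISP tutor’s actual estimates.

    P_INIT = 0.3    # p(L0): rule known just from reading the text
    P_LEARN = 0.2   # p(T): chance of learning the rule at each opportunity
    P_SLIP = 0.1    # p(S): wrong answer despite knowing the rule
    P_GUESS = 0.2   # p(G): correct answer despite not knowing the rule
    MASTERY = 0.95  # the LISP tutor's criterion for "knows the rule"

    def update(p_known, correct):
        # Bayesian update of the belief that the rule is known, given one
        # opportunity to apply it in an exercise.
        if correct:
            evidence = p_known * (1 - P_SLIP)
            posterior = evidence / (evidence + (1 - p_known) * P_GUESS)
        else:
            evidence = p_known * P_SLIP
            posterior = evidence / (evidence + (1 - p_known) * (1 - P_GUESS))
        # The student may also learn the rule at this opportunity.
        return posterior + (1 - posterior) * P_LEARN

    p = P_INIT
    for outcome in [True, False, True, True, True]:
        p = update(p, outcome)
    print(f"p(rule learnt) = {p:.2f}, mastered: {p >= MASTERY}")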


ELM-ART II and Curriculum Sequencing

Episodic Learner Model - Adaptive Remote Tutor II (ELM-ART II) (Weber & Brusilovsky, 2001; Weber & Specht, 1997) is an intelligent interactive web-based educational system that teaches LISP programming. It is a combination of an electronic textbook and an ITS. The system has two types of learner models, an overlay model and an episodic learner model (Weber, 1996). The two main features of ELM-ART II, which are curriculum sequencing and interactive problem solving support, are based on these two types of models. Curriculum sequencing describes the order of presentation of new knowledge units and concepts and the corresponding teaching operations (i.e. exercises and tests). This can maximize learning efficiency, as the learner only reads and practices what he/she needs to. ELM-ART II represents knowledge about units to be learned in terms of a conceptual network. Units are organized hierarchically into lessons, sections, subsections, and terminal pages. Each unit is an object containing slots for the text to be presented and for information that can be used to relate units and concepts to each other. For each unit, there are static slots with information on prerequisite concepts, related concepts, and outcomes (these are the concepts that the system assumes to be known when the user has worked through that unit successfully). Units for terminal pages have a tests slot that may contain the description of a group of test items the learner has to perform. When test items have been solved successfully, the system can infer that the user possesses the knowledge about the concepts explained in this unit. Problem pages have a slot for storing a description of a programming problem.

Dynamic slots are stored with the individual learner model that is built up for each user. This user model is updated automatically during each interaction with ELM-ART II. For each page visited during the course, the corresponding unit is marked as visited in the user model. Moreover, when a programming problem is solved correctly, the outcome concepts of this unit are marked as known and an inference process is started. In the inference process, all concepts that are prerequisites to this unit (and recursively all prerequisites to these units) are marked as inferred. Information from the dynamic slots in the user model is used to annotate links individually and to guide the learner optimally through the course. On the interface, the system provides a NEXT button that allows the student to ask the system for the most suitable next step. The design aims to ensure that an optimal learning path can be followed if the student always uses the NEXT button to navigate through the course. ELM-ART II has interactive problem solving support, and this employs another type of user model, as will be described in Section 2.4.
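
The inference step described above can be sketched as follows; the representation and concept names here are illustrative assumptions, not ELM-ART II’s internals.

    # Each concept maps to its prerequisite concepts (hypothetical names).
    PREREQUISITES = {
        "recursion": ["defun", "conditionals"],
        "conditionals": ["expressions"],
        "defun": ["expressions"],
        "expressions": [],
    }

    def solve_unit(outcome_concepts, model):
        # Mark a solved unit's outcome concepts as known, then recursively
        # mark all their prerequisites as inferred.
        for concept in outcome_concepts:
            model[concept] = "known"
            infer_prerequisites(concept, model)

    def infer_prerequisites(concept, model):
        for prereq in PREREQUISITES.get(concept, []):
            if model.get(prereq) not in ("known", "inferred"):
                model[prereq] = "inferred"
            infer_prerequisites(prereq, model)

    model = {}
    solve_unit(["recursion"], model)
    print(model)  # recursion known; defun, conditionals, expressions inferred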


In summary, the examples above illustrate how learner models enable different ITSs to present individualized sequences of learning materials to learners. This is the traditional functionality of the learner model. The mid 1990s saw the emergence of a new research area, named learner modelling for reflection (LeMoRe). Researchers in this field argue that the role of the learner model in an ITS is not restricted to providing a personalized learning experience: learner models can contribute to learning directly by being available for the learner to scrutinize.


2.3 Learner Reflection, Scrutable Learner Models and Promoting Learner Reflection in ITS

The term Learner Reflection is defined in Section 1.3. There are currently two main methods to improve learning by promoting learner reflection in an ITS. One is to make the learner model scrutable. The other widespread method is to support collaboration between learners and the system. Some systems support one of these techniques, while others support both.


Typically, a learner model in an ITS gathers and maintains relevant information regarding the knowledge state of a student. By reflecting on this information, students can compare the system’s beliefs about their knowledge with their own beliefs. In this way, reflection is encouraged, especially if the system’s beliefs and the student’s own beliefs differ. Therefore, making learner models inspectable to students can promote reflection and contribute directly to learning (Bull & Pain, 1995). There are many possible ways to scrutinize the learner model to support reflection. It can be done by simply exposing learners to a simple textual description of the content of the model, to a 2D or 3D graphical representation of the model, or even to a virtual space in which students can interact with abstract representations of their cognitive states (Zapata-Rivera & Greer, 2001). There has been no clear evidence that one particular way of visualization yields more learning value than another, although students do have different preferences (Mabbott & Bull, 2004). Many existing ITSs have implemented, to different levels, inspectable learner models to support learner reflection; see, for example, the reviewed systems in (Bull & Kay, 2006). To demonstrate the diversity of these approaches, several systems are presented here.


Skill Meter of the LISP Tutor

As mentioned earlier in Section 2.2, the LISP tutor keeps, in the student’s model, an estimate of the probability that the student has learnt each concept of the domain knowledge. This estimate is shown in a Skill Meter (Figure 1) to support learner reflection. As shown in the figure, each concept has a progress bar next to it. The shaded area in each bar reveals how much the student understands that particular concept. If the whole meter is shaded, the student has mastered that concept. This is very straightforward for learners to understand.



Figure 1 Skill meter of the LISP tutor¹

¹ Corbett, A. T., & Anderson, J. R. (1995). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction, 4, 253-278.


Course Progress Chart of ELM-ART II

ELM-ART II also visualizes learners’ progress, in a similar way to the LISP tutor’s skill meter. It adopts a traffic light metaphor to annotate hyperlinks pointing to different topics. A screenshot of the overview of the LISP course is displayed in Figure 2. There are three kinds of colored balls next to the links. A green ball means the page is ready to be visited and the concepts taught on the page are ready to be learned. A red ball means that the page is not ready to be visited, since one of its prerequisite concepts is not understood by the user. An orange ball means that the system infers that the user has learned all the concepts contained in that page. This simple metaphor is also very easy to understand.



Figure 2 Course progress visualization of ELM-ART II


SQL-Tutor and Skillometers

SQL-Tutor (Mitrovic & Martin, 2002; Mitrovic & Ohlsson, 1999) is an intelligent educational system aimed at university-level students learning the Structured Query Language (SQL), a database programming language. SQL-Tutor consists of an interface, a pedagogical module and a student modeller. It uses Constraint-Based Modeling (Ohlsson, 1994) to model the knowledge of its students. The system contains definitions of several databases, and a set of problems and the ideal solutions to them. SQL-Tutor contains no problem solver. To check the correctness of the student’s solution, SQL-Tutor compares it to the correct solution, using domain knowledge represented in the form of more than 500 constraints. The student model in SQL-Tutor is implemented as an overlay on the constraint base. As there are more than 500 constraints, it would be difficult to present information about each constraint. Instead, the student model is compressed into a simple structure that resembles the structure of the SELECT statement of SQL. The student is shown six skillometers, which represent the student model in terms of the six clauses of the SELECT statement. For each clause, the tutor finds all the relevant constraints and computes the percentage of constraints that the student has correctly understood, the percentage of constraints the student is currently learning (hence, there is incorrect understanding) and the percentage of constraints that are yet to be covered. These three percentages are visualized as shown in Figure 3.
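
The per-clause computation can be sketched as follows; the data layout and names here are assumptions for illustration, not SQL-Tutor’s implementation.

    from collections import Counter

    CLAUSES = ["SELECT", "FROM", "WHERE", "GROUP BY", "HAVING", "ORDER BY"]

    def skillometers(constraints, overlay):
        # constraints: list of (constraint_id, clause) pairs;
        # overlay: constraint_id -> "correct" or "incorrect" (unseen ids omitted).
        meters = {}
        for clause in CLAUSES:
            ids = [cid for cid, cl in constraints if cl == clause]
            states = Counter(overlay.get(cid, "unseen") for cid in ids)
            total = len(ids) or 1
            meters[clause] = {
                "understood": states["correct"] / total,   # correctly used
                "learning": states["incorrect"] / total,   # incorrect understanding
                "not covered": states["unseen"] / total,   # yet to be covered
            }
        return meters

    constraints = [(1, "SELECT"), (2, "SELECT"), (3, "WHERE"), (4, "WHERE")]
    print(skillometers(constraints, {1: "correct", 3: "incorrect"}))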



Figure 3 Visualization of the learner model of the SQL-tutor

A study in (Mitrovic & Martin, 2002) evaluated the impact of open learner modelling on learning with the SQL-Tutor. It was found that the open model may have improved the performance of the less able students, and that it may have boosted the self-confidence of the more able students. It also suggests that the more able students considered the learner model to be beneficial. These results are encouraging, given that the visualization is very simple.


e-KERMIT

e-KERMIT (extended Knowledge-based Entity Relationship Modelling Intelligent Tutor) (Hartley & Mitrovic, 2002) is an ITS that teaches students conceptual database design and has an open learner model. It provides a problem solving environment in which students can practice database design using the entity relationship data model. Similar to the SQL-Tutor, it also employs Constraint-Based Modeling to model the knowledge of its students. Consequently, due to the nature of the constraint-based modeling technique, there are a large number of constraints in the system, and therefore it would be difficult to provide meaningful information about each one of them. The open student model in e-KERMIT is a summary of the real student model, presented as a hierarchy. The constraints are grouped according to pedagogically important domain categories. A screenshot of the e-KERMIT system is shown in Figure 4. Summary statistics of the student model are shown at the bottom left part of the interface as four skillometers.


Figure 4 Interface of e-KERMIT displaying the student model (The screen is divided into three sections: the top section is the problem statement, the middle section is the working space where students can design entity relationship data models, and the bottom section provides feedback to students, including the four skillometers)

An experiment assessed whether students learn more with an open model, whether they inspect the models, and whether they feel that the open model contributed to their learning. Test subjects were divided into two groups. One group used the system with the open student models and the other group used the system without the open models. Each test subject spent about 110 minutes in the evaluation session. Results showed that the majority of students who examined their models considered them to be useful, but there was no statistically significant difference between the two groups in terms of knowledge gained after the subjects used the system. It was also found that the system interaction benefits the less able students more than the more able students.



ViSMod

The ViSMod (Visualization of Bayesian Student Models) tool (Zapata-Rivera & Greer, 2004) is a visualization of Bayesian student models based on the Bayesian Belief Network (BBN) (Pearl, 1988). It is designed to help students understand how they are modelled in a teaching system and to promote learner reflection. It provides a flexible architecture where students and teachers can create their own views by choosing nodes from the Bayesian model. Using ViSMod, students can understand, explore, inspect, and modify Bayesian student models. ViSMod builds upon earlier work on another tool, VisNet (Visualizing Bayesian Belief Networks), for explaining BBNs in an intuitive manner. In VisNet, the BBN is displayed with the size, proximity and colour of nodes representing strength of relationship, marginal probability and probability propagation. ViSMod improves on this by allowing learners to select nodes of interest on the Bayesian model to visualize. In the tool, each node is a concept that has a score representing the system’s belief about the student’s knowledge of that concept. Different sizes and colours are used to represent different scores. An example is shown in Figure 5. In the figure, each node is a biology concept. Nodes are connected based on the relationships between the concepts in the BBN. Each node is also attached to two “opinion nodes”: one is the system’s opinion of the student’s understanding of the concept and the other is the student’s own opinion. ViSMod allows the student to interact with the learner model (i.e. the BBN) through interfaces that allow him/her to explicitly set their knowledge levels if they disagree with the system (i.e. via the widgets available in the control panel in the lower part of the window). The student is also encouraged to provide an explanation of why they think the concept value is wrong or different.




Figure 5: A screenshot of ViSMod showing a fragment of a Bayesian student model
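
The visual encoding described above can be illustrated with a simple mapping from the system’s belief in a concept to a node’s drawn size and colour. This Python sketch is for illustration only: the thresholds and palette are invented, not taken from ViSMod.

    def node_style(belief):
        """belief: the system's belief (0..1) that the student knows
        the concept. Returns drawing attributes for the node."""
        radius = 20 + 40 * belief        # larger node = stronger belief
        if belief < 0.4:
            colour = "red"               # weakly known concept
        elif belief < 0.7:
            colour = "yellow"            # partially known
        else:
            colour = "green"             # well known
        return {"radius": radius, "fill": colour}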

An exploratory study assessed the interaction between users and the system, focusing especially on the number and quality of the explanations provided by students and on the self-assessment information obtained. It indicated that the interface supported students’ engagement, motivation, and knowledge reflection.


There is a considerable body of other work on learner modelling for reflection, including (Brusilovsky, Sosnovsky, & Shcherbinina, 2004), (Dimitrova, 2003) and (Mazza & Dimitrova, 2004). These systems utilize inspectable learner models to promote learner reflection. Another way to promote learner reflection in an intelligent learning system is to provide various reflective elements in guided collaboration between the learner and the system. Researchers have argued that engaging in reflective activities during interaction, such as explaining, justifying and evaluating problem solutions and the problem solving process, can potentially be productive for learning (Boud, Keogh, & Walker, 1985; Ertmer & Newby, 1996; Schön, 1983; Schön, 1987). Several systems that encourage reflection through system and user dialogue and interactions are presented here.



Mr. Collins

Mr. Collins (COLLaboratively maintained, INSpectable learner model) (Bull & Pain, 1995) is a system that encourages collaboration between users and the system, and whose contents may be jointly negotiated by the student and the system. The way it improves learning through promoting reflection is to have the student defend his/her views to the system by discussing and arguing against the system’s assessment of his/her knowledge and beliefs. The system allows both the student and the system to influence the other through valid argument and to defend their own standpoint. Learner reflection is promoted through this process. In order to support this negotiation process, Mr. Collins maintains two separate confidence measures for a student. The first reflects the student’s own confidence in his/her performance, and is provided by the student. The second confidence measure is the system’s evaluation of the student, which is based on the learner’s actual performance. This information is made available to students by opening the learner model. Mr. Collins provides students with statistical information about each concept they attempted in a textual format, as shown in Figure 6. In the figure, the display indicates that the system is very sure that the student knows the rule for pronoun placement in negative clauses, but the student himself is unsure. The system is unsure that the student knows the rule for the positioning of pronouns in affirmative main clauses, but the student is more confident. The system’s confidence levels are based on a learner’s five most recent attempts to use a concept. Furthermore, if the student disagrees with the system, he/she can perform tests to affect the system’s confidence.
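
A minimal Python sketch of these dual confidence measures follows. The five-attempt window comes from the description above, but the data layout and the confidence bands are assumptions, not Mr. Collins’ code.

    from collections import deque

    class ConceptRecord:
        """Tracks one concept for one student."""

        def __init__(self):
            self.recent = deque(maxlen=5)    # only the five most recent attempts
            self.student_confidence = None   # self-assessment, set by the student

        def record_attempt(self, correct):
            self.recent.append(bool(correct))

        def system_confidence(self):
            """The system's confidence, from recent performance only."""
            if not self.recent:
                return "no evidence yet"
            rate = sum(self.recent) / len(self.recent)
            if rate >= 0.8:
                return "very sure the student knows this"
            if rate >= 0.5:
                return "fairly sure"
            return "unsure"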


A preliminary study was conducted to determine whether learners would actually benefit from the user-system collaboration. The results suggested that learners accepted this approach: they did inspect the system’s knowledge about them, and did suggest changes and argue when they disagreed with the system. The majority of learners in the study also liked the system to challenge them when it disagreed with their actions in the student model.




Figure 6: A textual display of the learner model of Mr. Collins and its negotiation process


LuCy

LuCy (Goodman, Soller, Linton, & Gaimari, 1998) is a module of an intelligent tutoring system, PROPA (Cheikes, 1998), whose purpose is to encourage student reflection and articulation of past actions and future intentions. It is best described as a learning companion. LuCy is intended to provide a broader and more collaborative interaction with the student than a human tutor typically provides. Its purpose is not to coach the student by reporting mistakes, or to reveal solutions to the exercises, but instead to provide a more supportive environment for the student by encouraging reflection and articulation and by discussing future intentions and their consequences. Human tutors often provide expert advice to a student, but the interaction is primarily in one direction, i.e. information is provided from the tutor to the student in response to a student’s need. In PROPA, the student can request a hint or ask a specific question using an inquiry component. In either case, LuCy promotes a dialogue with the student that requires communication in both directions. LuCy’s dialogues encourage the student to reflect on thinking and to evaluate past actions. LuCy does this by prompting the student to explain the reasoning behind actions, and to justify the decisions leading to those actions. For example, when the student asks, “LuCy, do you think my last action was appropriate?”, LuCy asks the student to reflect on why the action was chosen and helps the student understand why it might have been an appropriate or inappropriate action to take at the time. When the student takes an action in PROPA, LuCy sometimes prompts the student to reflect on the reasoning behind this activity by asking questions such as, “Why do you think that is a valid hypothesis?” Although there was no reported evaluation of LuCy, it does contribute some interesting ideas to the design of an ITS.


Intelligent learning systems that either adopt open learner models to promote learner reflection, or encourage reflection by providing guided collaborative interaction between the system and the user, or both, have been shown to be effective. In Reflect, students’ interactions with the system are recorded in their learner models, based on an accretion representation (Kay, 2000). These models always hold the system’s beliefs about how well the students have been performing. Reflect makes this information available to students, who can scrutinise their user models with the Scrutable Inference Viewer (SIV) (Kay & Lum, 2005), thereby promoting reflection-on-action. There are also many other features in Reflect that are designed to promote reflection. They will be described in detail in the next few chapters.


2.4 Diagnosis of Students’ Solutions in ITS for Programming

As listed at the beginning of Section 2.2, some of the challenges for an ITS include recognizing any correct student solution, identifying errors in a solution and providing appropriate help. Of particular difficulty is the identification of errors and underlying misconceptions, and helping students recognize them in terms of the domain knowledge and correct them. There have been several different approaches to evaluating students’ programming solutions. Some of them are reviewed here.


The Model Tracing Approach in the LISP Tutor

Performance modelling, also known as Model Tracing (Corbett & Anderson, 1992a), in the LISP tutor aims to provide individualized problem solving support. The LISP tutor provides continuous interactive problem solving support as the student works on exercises. It interprets each of the student’s actions and follows the student’s step-by-step path through the problem solving practice. When monitoring student actions, it is necessary to have, at all times, a pattern against which students’ actions can be measured. That is, as the student generates a LISP program step by step, the tutor must be able to assess whether each step is on the path to a successful solution. In order to achieve that, the LISP tutor is supplied with over one thousand production rules. Each rule represents one piece of procedural knowledge. The complete set of correct rules for writing a particular piece of code is referred to as the ideal student model or the expert model. The model also includes a set of incorrect rules that reflects misconceptions. These rules allow the tutor to generate solutions. While the student is working, the tutor simultaneously simulates the steps that a correct solution requires by executing the appropriate production rules. In addition, it models errors that students might make at each step on the basis of the incorrect rules. By comparing the student’s step with the possible correct or erroneous steps the tutor generates, the tutor can recognize whether the student is on a right path to a right solution. If not, the tutor can immediately interrupt and correct the student. While this approach can provide immediate and meaningful feedback to students, it has a significant limitation: since it always has predetermined paths to correct solutions, it cannot accept a good solution that it does not recognize, thereby restricting creativity in students’ solutions.
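
The core matching loop of model tracing can be sketched as follows in Python. The Rule representation is invented for illustration and is far simpler than the LISP tutor’s production system of over one thousand rules.

    class Rule:
        """A production rule; next_steps maps a problem state to the set of
        steps the rule can generate. Buggy rules carry a misconception."""

        def __init__(self, name, next_steps, misconception=None):
            self.name = name
            self.next_steps = next_steps
            self.misconception = misconception

    def trace_step(state, student_step, correct_rules, buggy_rules):
        """Classify one student step against the rule sets."""
        for rule in correct_rules:
            if student_step in rule.next_steps(state):
                return ("on_path", rule.name)
        for rule in buggy_rules:
            if student_step in rule.next_steps(state):
                # The tutor can interrupt immediately and address the
                # misconception behind this erroneous step.
                return ("error", rule.misconception)
        return ("unrecognized", None)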


Interactive Problem Solving Support in ELM-ART II

ELM-ART II (Weber & Brusilovsky, 2001; Weber & Specht, 1997) supports example-based programming. That is, it encourages students to re-use the code of previously analyzed examples when solving a new problem. As an important feature, ELM-ART II can predict the student’s way of solving a particular problem and find the most relevant example from a student’s individual learning history. When a student requests help, ELM-ART II selects the most helpful examples the student has already seen, sorts them according to their relevance, and presents them to the student as an ordered list of hypertext links. The most relevant example is always presented first. Students can also ask the system to diagnose the code of the solution in its current state. The system gives feedback by providing a sequence of help messages with increasingly detailed explanations of an error. The sequence starts with a very vague hint on what is wrong and ends with a code-level suggestion of how to correct the error or complete the solution. The student can use this kind of help as many times as required to solve the problem correctly. This ensures that all students will ultimately solve the problem.
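
One plausible way to rank previously seen examples is by the overlap between the concepts an example instantiates and the concepts the current task requires. The Python sketch below illustrates this general idea only; it is not ELM-ART II’s episodic diagnosis.

    def rank_examples(task_concepts, seen_examples):
        """task_concepts: set of concepts the current task exercises.
        seen_examples: list of (example_id, set of concepts it uses).
        Returns example ids, most relevant (largest overlap) first."""
        scored = sorted(seen_examples,
                        key=lambda e: len(task_concepts & e[1]),
                        reverse=True)
        return [eid for eid, concepts in scored
                if task_concepts & concepts]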

Both the individual presentation of examples and the diagnosis of program code are based on the episodic learner model (ELM) (Weber, 1996). ELM is a type of user or learner model that stores knowledge about the user (learner) in terms of a collection of episodes. To construct the learner model, the code produced by a learner is analyzed in terms of the domain knowledge on the one hand and a task description on the other hand. This cognitive diagnosis results in a derivation tree of concepts and rules the learner might have used to solve the problem. These concepts and rules are instantiations of units from the knowledge base. The episodic learner model is made up of these instantiations.

In ELM, only examples from the course materials are pre-analyzed and the resulting explanation structures are stored in the individual case-based learner model. Elements from the explanation structures are stored with respect to their corresponding concepts from the domain knowledge base, so cases are distributed in terms of instances of concepts. These individual cases, or parts of them, can be used for two different purposes. On the one hand, episodic instances can be used during further analyses as shortcuts if the actual code and plan match corresponding patterns in episodic instances. On the other hand, cases can be used by the analogical component to show similar examples and problems for reminding purposes.


Ludwig

The previous systems try to match students’ solutions against previously set up rules or cases. This can potentially limit students’ creativity, which is widely recognized as an important aspect of programming. A more obvious method to evaluate students’ programming solutions, which avoids this problem, is to simply run them. We now describe this approach with an example: Ludwig (Shaffer, 2005) is a web-based assessment system which allows students to edit their programming code in a controlled text editor, offer the code for analysis, and then submit it for grading. In the system, students submit programs to be run against a sample data set and the output is compared to that developed by the instructor. The sample data set may be prepared by the instructor or created by the student. The instructor has to create a correct solution program that is not accessible to students. When the student’s program is run, its output is compared to the instructor’s program’s output; differences are displayed to the student. The student then has the option of changing the program or submitting it for grading. Once the student submits the program, the system will create its own new input data set, re-run the student’s program against the instructor’s program and display the results to the student. This helps to prevent students from submitting programs which are little more than print statements.

The Ludwig system is designed on the assumption that programming is a task-based discipline rather than a knowledge-based one, in contrast to the approach of the ITS systems described earlier. Consequently, the system cannot interpret the results from running students’ code in terms of the domain knowledge. However, it offers small programming tasks for students, as Reflect does, which makes it possible to design test data to evaluate particular elements of the solution.
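
The grading step described above can be pictured as a run-and-diff of the student’s program against the instructor’s hidden reference solution. The following Python sketch makes some assumptions for illustration: both programs are runnable scripts, and the comparison policy (report the first differing line) is invented.

    import subprocess

    def compare_runs(student_prog, instructor_prog, input_data, timeout=10):
        """Run both programs on the same input and report the first difference."""
        def run(prog):
            return subprocess.run(["python", prog], input=input_data,
                                  text=True, capture_output=True,
                                  timeout=timeout).stdout
        expected = run(instructor_prog)   # reference output, hidden from students
        actual = run(student_prog)
        if actual == expected:
            return "outputs match"
        for i, (a, b) in enumerate(zip(actual.splitlines(),
                                       expected.splitlines())):
            if a != b:
                return "line %d differs: got %r, expected %r" % (i + 1, a, b)
        return "outputs differ in length"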


Reflect, as will be shown in the next chapter, runs students’ code as Ludwig does, but can also link the results to components of the domain knowledge. This allows Reflect to tell students not only their errors but also the possible underlying misconceptions. Furthermore, students’ learner models can be updated.


2.5 Learning from Examples

Research has established that learning from examples is of major importance for the initial acquisition of cognitive skills in well-structured domains. For decades, most mathematics and mathematics-related curricula, including programming, have placed a heavy emphasis on conventional problem solving as a learning device. However, there seems to be no clear evidence that conventional problem solving is more effective than other forms of learning. On the contrary, it normally leads to problem solutions, not to schema acquisition (i.e. learning). This is because the cognitive load (i.e. the total amount of mental activity imposed on working memory at an instant in time) required during the problem solving process may be excessive, occupying most of the already very limited processing capacity of the working memory and thus leaving very little for schema acquisition (i.e. learning), even if the problem is solved (Sweller, 1988). This points to the benefits of using more examples in teaching and learning. It has also been found that students prefer learning from examples (LeFevre & Dixon, 1986; Pirolli & Anderson, 1985). However, in order to achieve the desired effect of schema acquisition from studying examples, another cognitive process, namely self-explanation, is required. Generating explanations to oneself (self-explaining) has been shown to improve the acquisition of problem solving skills. It has been found that more competent students generate many self-explanations regarding the novel parts of the example solutions and relate these to principles; by contrast, less competent students do not generate sufficient self-explanations, monitor their learning inaccurately and subsequently rely heavily on examples (Chi, Bassok, Lewis, Reimann, & Glaser, 1989). Systems that place emphasis on this aspect of cognition have been created, with positive results, for example (Aleven, Popescu, & Koedinger, 2001). Another study has shown that when students are explicitly encouraged to self-explain when learning from examples, they will do so and increase their learning (Chi, Leeuw, Chiu, & LaVancher, 1994).

Learning from examples is an important element of the Reflect system. Students are always asked to view example solutions and to rate examples against supplied criteria that emphasize the novel parts of the example. In this way, they are encouraged to generate self-explanations about the example and consequently learn from it.



In conclusion, this chapter presents the background of the main elements of the Reflect system.
The next few chapters will present the design of Reflect with system walkthroughs.


Chapter 3

System Overview: Learner Experience

This thesis describes and evaluates Reflect as a method to promote learner reflection using learner modelling in the context of student self-assessment. This chapter and the next two present Reflect: first with a walkthrough of the system from a learner perspective in this chapter, and then from the teachers’ and authors’ perspectives respectively in the next two chapters. This chapter illustrates how students can use the system to learn and, especially, how the reflective features of the system can support their learning.


The system is called Reflect to emphasize its reflective features. According to (Boyd & Fales, 1983), merely explicitly informing users that they were expected to use the system to reflect on their actions may have helped some users benefit more from the reflective elements