Oxford Brookes - Higher Education Academy


CONTENTS

Introduction
Methodology
Scope and focus of the literature review
Assessment
E-assessment
Assessment feedback
E-feedback (assessment feedback enhanced by technology)
Benefits of e-feedback
ICT tools for e-assessment
Diversity of ICT tools
GradeMark and Electronic Feedback: Exeter University
Intelligent Assessment Technology (IAT): Open University
WebPA: Loughborough University
Formative assessment and e-feedback
MCQ and EVS
Peer and self assessment and feedback
Web based systems for self and peer assessment feedback
Peer feedback and performance
Digital feedback
HEA project
JISC Sounds Good Project, Bob Rotheram (2009)
E-Portfolios
Dissertation supervision
Discussion and Further research
Reference list
























INTRODUCTION

When used effectively, Information and Communication Technology (ICT) can provide a unique learning environment that enhances the different aspects of teaching, learning and assessment. The application of technology in education is especially significant in the current environment, where the Higher Education sector must respond to the challenges of increased student numbers, limited public funding and demands from students who often regard themselves as customers requiring a high-quality learning experience.

Therefore, it is important to review the current literature with regard to the use of technology in teaching and learning in order to share best practices and guide future research. This paper reports on selected aspects of the use of electronic tools in e-feedback (Assessment Feedback Enhanced by Technology).

This report extends previous literature reviews by providing references to 109 sources, and covers the following themes: benefits of e-feedback, ICT tools for e-feedback, formative e-feedback (peer and self-directed learning), digital feedback, feedback in e-portfolios, feedback in dissertation supervision, and areas for further research.


METHODOLOGY

The literature review used a rapid evidence assessment approach (Slavin 2003). This involves establishing criteria for the selection and inclusion of studies to be reviewed, followed by analysis and comparison of the studies included.

Several combinations of search terms (web-based + assessment + feedback; internet + assessment + feedback; feedback + assessment + technology; computer assisted + feedback; e-feedback + assessment) in major educational databases (ERIC, SCOPUS, British Library Integrated Catalogue, Dissertations & Theses) resulted in over 6000 titles. Searches were further limited by date (2000-2010). Using the abstracts as guides, the potential number of articles for initial review was reduced to 200, and then, following further reading and analysis, to 130 papers in total.

The main group of articles excluded was that of studies relating to school education, general e-assessment, and technical discussion of technologies. Excluding these studies was felt to be appropriate given that the aim of the review was to focus on HE, technology and feedback, rather than on e-assessment in general.

The final sources were selected based on journal impact factor, reputation and relevance to the study. Detailed reading of the articles led to a further reduction to the 109 sources referenced in this paper, as the most relevant to this analysis.

This report starts with a brief overview of the literature on e-assessment and assessment feedback to place the literature review on e-feedback in a wider context.







SCOPE AND FOCUS OF THE LITERATURE REVIEW

ASSESSMENT

Recently, we have observed a proliferation of research projects and papers on assessment in response to the current challenges in the HE sector, fuelled by increasing student numbers, reduced resources and the consumerism of HE, where students are more vocal regarding their learning experience.

Assessment is an essential part of the teaching and learning process as it 'defines the actual curriculum' (Ramsden, 1992, p.187), frames student learning, and determines 'what students regard as important' (Brown et al, 1994, p.7). Despite its significance, educators often fail to recognize or apply methods to improve the assessment process. As a result, students' perception of the assessment process in Higher Education (HE), as expressed in the National Student Survey (NSS), is rather negative and seems to be 'the Achilles' heel of quality' (Knight, 2002, p.107).

E-ASSESSMENT

The importance of e-assessment, e-feedback and related issues is reflected in the publication of recent special issues by selected journals.

The special issue on Computer-assisted Assessment (CAA) from the Assessment & Evaluation in Higher Education journal (2009) covers a wide range of topics including: the rationale for making CAA more inclusive for students with special needs (Ball, 2009); sophisticated e-assessment tasks addressing summative and formative assessment purposes (Boyle and Hutchinson, 2009); peer assessment (Davies, 2009); formative feedback enabling students to develop self-directing skills (Nicol, 2009); and the Framework Reference Model for Assessment, FREMA (Wills, 2009).

The British Journal of Educational Technology also devoted a special issue in 2009 to 'E-assessment: Developing new dialogues for the digital age'. In the editorial, Denise Whitelock (2009) argues that it is important to 'construct a pedagogically driven model for e-assessment that can incorporate e-assessment and e-feedback into a holistic dialogic learning framework, which recognises the importance of students reflecting and taking control of their own learning' (p.199). A number of papers highlight the challenges of e-assessment including: extra stress imposed on students taking CAA (Sieber, 2009); the question of to what extent e-assessment enhances student learning (Angus and Watson, 2009); electronic voting systems used for promoting deep learning (Draper, 2009a, 2009b); enhancement of feedback in dissertation supervision (Heinze and Heinze, 2009); and e-portfolios promoting active engagement in student-centred learning groups (Barbera, 2009; Chang and Tseng, 2009).

The International Journal of Technology Enhanced Learning also announced a call for papers for a special issue in 2010 on Technology Enhanced Learning: Personalisation Strategies, Tools and Context Design.




ASSESSMENT FEEDBACK

The Assessment Standards Knowledge exchange centre (ASKe, 2008) argues that one of the key reasons for assessment failing to support learning is lack of engagement and ineffective feedback, as 'action without feedback is completely unproductive for the learner' (Laurillard, 1993, p.61). The main challenges of assessment feedback identified in the literature are: student engagement; limited time and institutional resources; quality and frequency of feedback; understanding and interpretation of feedback; accessibility and legibility of feedback; and the purpose of feedback (Price and O'Donovan, 2006; Handley et al, 2007; Nicol, 2007; Nicol and Macfarlane-Dick, 2006; McDowell et al, 2005; Millar, 2005; Winter and Dye, 2004; Bloxham and Boyd, 2007; Higgins et al, 2002).

Millar (2005) provides an extensive literature review on assessment feedback, outlining conceptual models (Sadler, 1989, 1998; Rust, 2000), student preferences and approaches to feedback, feedback content and communication, the staff perspective, and principles of good feedback (Rust et al, 2003, 2005; Juwah et al, 2004; Gibbs & Simpson, 2004). The most recent principles, not included in the above review, are proposed by Nicol (2007) in Box 1.

Box 1: Ten Principles of Good Assessment and Feedback Practice

Good assessment and feedback practices should:

1. Help clarify what good performance is (goals, criteria, standards). To what extent do students in your course have opportunities to engage actively with goals, criteria and standards, before, during and after an assessment task?

2. Encourage 'time and effort' on challenging learning tasks. To what extent do your assessment tasks encourage regular study in and out of class and deep rather than surface learning?

3. Deliver high quality feedback information that helps learners self-correct. What kind of teacher feedback do you provide, and in what ways does it help students self-assess and self-correct?

4. Encourage positive motivational beliefs and self-esteem. To what extent do your assessments and feedback processes activate your students' motivation to learn and be successful?

5. Encourage interaction and dialogue around learning (peer and teacher-student). What opportunities are there for feedback dialogue (peer and/or tutor-student) around assessment tasks in your course?

6. Facilitate the development of self-assessment and reflection in learning. To what extent are there formal opportunities for reflection, self-assessment or peer assessment in your course?

7. Give learners choice in assessment: content and processes. To what extent do students have choice in the topics, methods, criteria, weighting and/or timing of learning and assessment tasks in your course?

8. Involve students in decision-making about assessment policy and practice. To what extent are students in your course kept informed or engaged in consultations regarding assessment decisions?

9. Support the development of learning communities. To what extent do your assessments and feedback processes help support the development of learning communities?

10. Help teachers adapt teaching to student needs. To what extent do your assessment and feedback processes help inform and shape your teaching?

Source: Nicol, 2007 (see more details at www.reap.ac.uk)

The above literature review can also be extended by Draper's (2009b) recent work exploring the important question: what are learners regulating when given feedback? He points to the multiple, alternative interpretations of feedback events. For example, rational explanations of student failure are presented in Box 2 (p.308):

Box 2: Interpretation of possible reasons for student failure

1. Technical knowledge or method: I did not use the best information or method for the task, but can improve it and do better next time.

2. Effort: I did not leave myself enough time to do it well. (Almost everything we do in life is time limited. If it is important enough, then putting more effort in will get a better result. On the other hand, everyone including students has limited time and must save time from some activities to invest in other ones.)

3. Method of learning about the task: I did not seek the right information to make a good job application; I did not test my paper on the right audience; I should change my revision method for this course; I should have discussed what the criteria really meant before writing the essay.

4. Ability, trait, aptitude: This result tells me about relatively unchangeable traits. I should apply for a different kind of job, or change the course I am studying on.

5. Random: I did the right thing but the process is not deterministic. Another time I will succeed without changing what I do. If it rains when I go for a picnic at a beauty spot, it does not mean either that picnics are bad or that that spot is ugly; not every lottery ticket is a winner; not everyone I ask to fill in a questionnaire for me will agree to.

6. The judgement process was wrong; I was right: Appeal the mark the tutor gave me; find the bug in the compiler, not my program; re-educate my readers; find a different audience.

Source: Draper, 2009b

Draper suggests that the interpretation of feedback based on a single variable will cause frustration about learning. Therefore, when giving feedback, effort should be made to address all the different variables.

In 2009, ASKe established the 'Osney Grange Group' (ASKe, 2009), proposing that current feedback practices in HE are often founded on myths, misconceptions and mistaken assumptions that undermine student learning (see Box 3).

Box 3: The Osney Grange Group proposes the following agenda for change:

1. It needs to be acknowledged that high level and complex learning is best developed when feedback is seen as a relational process that takes place over time, is dialogic, and is integral to learning and teaching.

2. There needs to be recognition that valuable and effective feedback can come from varied sources, but if students do not learn to evaluate their own work they will remain completely dependent upon others. The abilities to self- and peer-review are essential graduate attributes.

3. To facilitate and reinforce these changes there must be a fundamental review of policy and practice to move the focus to feedback as a process rather than a product. Catalysts for change would include revision of resourcing models, quality assurance processes and course structures, together with development of staff and student pedagogic literacies.

4. Widespread reconceptualisation of the role and purpose of feedback is only possible when stakeholders at all levels in Higher Education take responsibility for bringing about integrated change. In support of this reconceptualisation, use must be made of robust, research-informed guiding principles and supporting materials.

5. The Agenda for Change calls on stakeholders to take steps towards bringing about the necessary changes in policy and practice.

Source: ASKe, 2009 (see more resources at http://www.brookes.ac.uk/aske)




E-FEEDBACK (ASSESSMENT FEEDBACK ENHANCED BY TECHNOLOGY)

Since 'student feedback is indeed an important element of e-assessment in that it can offer new forms of teaching and learning dialogues in the digital age' (Whitelock, 2009, p. 202), this report reviews current developments in the area of e-feedback.

Substantial parts of the research in the area of e-feedback explore the application of particular technological advancements in an educational context, and evaluate some pedagogical aspects of using technology in education.

BENEFITS OF E-FEEDBACK

The report on Technology-enabled feedback (HEA, 2008) summarises benefits of e-feedback such as: the legibility of electronic feedback (van den Boom et al, 2004; Guardado and Shi, 2007; Tuzi, 2004); reduction in assignment turnaround time; and efficiency in administration and reduction in paper used (Price and Petre, 1997; Jones and Behrens, 2003; Bridge and Appleyard, 2005).

Other benefits have been recognised in a case study of a web-based course in primary care (Russell, Elton, Swinglehurst and Greenhalgh, 2006). The specific advantages include: the use of hyperlinks and attachments in virtual communication, which enable tutors and students to easily suggest additional relevant resources; copying others into a communication; joint feedback in specific areas; and the 'senior common room' forum for staff where feedback can be discussed, which facilitates team teaching and contributes to the quality of assessment feedback.

Another desirable outcome of using ICT tools in e-assessment is improved efficiency with regard to time and resources. The question is to what extent technology could address the most pressing challenge of quality feedback: time (Linn and Miller, 2005; Heinrich et al, 2009). Arguably, the area where e-tools can make a real impact on efficiency is administration (Heinrich et al, 2009, p.472):

'providing documents, easily accessible to all involved, anytime and anyplace; accepting assignment submissions, managing deadlines, recording submission details, dealing with safe and secure storage; managing the distribution of assignments to markers and facilitating the communication within the marking team; returning marking sheets, commented-on assignments and marks to students; storing and if necessary exporting class lists of marks.'

Using e-tools for these tasks frees up time that can be used for focusing on quality feedback. Participants in the above study saw benefits in (a) using stock comments from a large bank so that comments could be individualised; (b) providing feedback online, as it eliminates the problem of students not being able to read a lecturer's handwriting and allows providing references to resources in the form of links to articles and books; and (c) using electronic marking sheets returned to students by email.
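The 'stock comments from a large bank' idea lends itself to a very simple implementation. The sketch below is illustrative only: the names, criteria and structure are assumptions, not details taken from Heinrich et al (2009); it merely shows how reusable criterion-level comments might be combined with an individualised note.

```python
# Illustrative sketch only: a reusable bank of criterion-level comments that a
# marker can combine with a personalised note. All names and criteria here are
# invented for illustration, not taken from the systems reviewed above.

COMMENT_BANK = {
    ("structure", "weak"): "The argument would be easier to follow with clearer signposting between sections.",
    ("structure", "good"): "The essay is clearly structured and easy to follow.",
    ("referencing", "weak"): "Several claims need supporting citations; see the referencing guide.",
    ("referencing", "good"): "Sources are well chosen and consistently cited.",
}

def build_feedback(student_name, judgements, personal_note=""):
    """Assemble feedback from stock comments plus an individualised note.

    judgements: iterable of (criterion, rating) keys into COMMENT_BANK.
    """
    lines = [f"Feedback for {student_name}:"]
    lines += [f"- {COMMENT_BANK[key]}" for key in judgements]
    if personal_note:
        lines.append(f"- {personal_note}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_feedback(
        "A. Student",
        [("structure", "good"), ("referencing", "weak")],
        personal_note="Your discussion of Sadler (1989) was particularly strong.",
    ))
```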

The evaluation of the WebPA system (Loddington, 2009) also suggests numerous benefits for: (a) the institution (QA, records stored centrally, flexibility and accessibility); (b) academic tutors (saving time and reducing workload, transparency, confidence that the process is fair, reducing the number of complaints); and (c) students (getting timely feedback, opportunity to reflect, enhancing skills such as communication, teamwork, monitoring, rewarding/penalising).




ICT TOOLS FOR E-ASSESSMENT

This section provides an overview of a variety of software and technologies useful for e-feedback, followed by examples of the application of particular systems by practitioners at particular universities.

DIVERSITY OF ICT TOOLS

Grover (2008a, 2008b) argues that effective observation and diagnosis of student learning can be greatly assisted by 21st century technologies and lists five practical tools to help tutors measure student progress: clickers, online quizzes, web-based surveys, digital logs, and spreadsheets.

Other examples of technological advancements are discussed by Fisher and Baird (2006), who provide an overview of mLearning applications used to promote student engagement in teaching and assessment, including Virtual Graffiti, BuddyBuzz, Flickr, and RAMBLE. Quantitative data support their hypothesis that mLearning technologies can provide a platform for active learning, collaboration, and innovation in higher education.

On the other hand, Roland (2006) emphasizes the role of technology in easing the teacher's burden and discusses online technology assessment tools such as: Certiport's Internet and Computing Core Certification; Thomson Learning's Skills Assessment Manager (SAM) Computer Concepts; Learning.com's TechLiteracy Assessment (TLA), etc.

Vendlinski et al (2008) describe a web-based assessment design tool, the Assessment Design and Delivery System (ADDS), that provides teachers with both a structure and the resources required to develop and use quality assessments.

Northcote (2002) summarises various software programmes developed to create on-line environments, such as WebCT, BlackBoard and TopClass, BrainZone (Strassburger, 1997), Question Mark Designer (Pritchett and Zakrzewski, 1996), WebTest (Doughty, 2000) and PsyCall (Buchanan, 1998).

GRADEMARK AND ELECTRONIC FEEDBACK: HEA PROJECT

Jones (2007) evaluated the following ICT tools designed to provide feedback: (a) Electronic Feedback; (b) M2AGICTM; (c) GradeMark (an extension of the Turnitin UK plagiarism detection software). The study addressed students' concerns with feedback: both its formative value and the promptness of its return.

Student feedback was positive: the quantity and quality of feedback was seen as being better, as was their understanding of the feedback and of where the mark came from. Students particularly liked that comments had been personalised and did not appear to be computer-generated. Students also liked peer evaluation, as they could compare their performance with the rest of the class.

The study established that all three tools have the potential to enable tutors to provide students with better quality, personalised feedback. But for successful application of these tools, staff need to: be computer literate; allow time for familiarisation and preparation; and link marks with assessment criteria.

INTELLIGENT ASSESSMENT TECHNOLOGY (IAT): OPEN UNIVERSITY

Another interesting ICT tool is the Intelligent Assessment Technology (IAT) engine developed by the Open University (Jordan and Mitchell, 2009). IAT has been used to author and mark short free-text assessment tasks. The system was designed to provide students with 'instantaneous feedback on constructed response items, to help them to monitor their progress and to encourage dialogue with their tutors' (p. 371). The feedback is specifically tailored and detailed to allow students to improve their incorrect and incomplete responses, and consequently 'close the gap' between their current and desired performance (Sadler, 1989).
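The IAT engine itself relies on linguistic analysis of the response, but the basic idea of returning tailored hints on a short free-text answer can be illustrated with a much cruder sketch. The example below is entirely hypothetical keyword matching, assumed purely for illustration; it is not how the IAT answer matching works.

```python
# Toy illustration of tailored feedback on a short free-text answer.
# The keyword matching is deliberately crude and is NOT the IAT approach,
# which uses proper linguistic analysis of constructed responses.

import re

# Hypothetical question: "Why does a metal spoon feel colder than a wooden one?"
REQUIRED_IDEAS = {
    "conduction": r"\bconduct\w*",
    "heat from the hand": r"\bheat\b.*\b(hand|skin|body)\b|\b(hand|skin|body)\b.*\bheat\b",
}

HINTS = {
    "conduction": "Think about which material transfers thermal energy more readily.",
    "heat from the hand": "Say where the heat is flowing from and to.",
}

def mark_answer(answer: str):
    """Return (complete?, hints for the ideas that are missing from the answer)."""
    missing = [idea for idea, pattern in REQUIRED_IDEAS.items()
               if not re.search(pattern, answer.lower())]
    return (not missing, [HINTS[idea] for idea in missing])

if __name__ == "__main__":
    ok, hints = mark_answer("The metal conducts heat away faster.")
    print(ok)     # False: the answer never says the heat comes from the hand
    print(hints)  # ['Say where the heat is flowing from and to.']
```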

In their paper, Jordan and Mitchell (2009) also compare the IAT with other software for the marking of free-text answers, such as E-rater (Attali and Burstein, 2006), Latent Semantic Analysis (LSA) (Landauer, Laham and Foltz, 2003) and C-rater (Leacock and Chodorow, 2003). In conclusion, they argue that answer matching using the IAT has been shown to be of similar or greater accuracy than specialist human markers.

WEBPA: LOUGHBOROUGH UNIVERSITY

Loddington et al (2009) evaluate the WebPA tool developed at Loughborough University. WebPA allows the tutor to determine the group or team size, the number of groups, the assessment criteria, when and how the assessment is made, and a whole host of other flexible parameters. The process begins with the academic supervisor creating the assessment by proceeding through three distinct areas within the WebPA system: my forms (the questions set by the tutor that students need to answer), my groups, and my assessments; the final simple stage is the linking of the forms and groups to create the assessment. The system is especially valuable for self- and peer evaluation (as discussed further below).

FORMATIVE ASSESSMENT AND E-FEEDBACK


Formative assessment is the single most powerful factor in promoting learning (Black & Wiliam, 1998; Nicol & Macfarlane-Dick, 2006; Sadler, 1989, 1998). A formative e-assessment system has great potential as it could allow students to take a formative test at a convenient time, obtain meaningful feedback instantaneously and evaluate their progress (Wills et al, 2009). Consequently, many practitioners explore the most effective ways of designing formative e-assessment and e-feedback.


For example, Peat and Franklin (2002) reflect on a variety of computer-based formative tools such as weekly quizzes, a mock exam and a self-assessment module. They regard the use of technology in communication with students as a must in the context of a decreasing number of staff per student in Australian HE and increasing expectations of using ICT.

Formative Automated Computer Testing (FACT) software was designed to allow tutors to create their own tests and to provide students with formative feedback. The evaluation of the FACT system suggests the effectiveness of instant feedback in the form of non-adversarial judgement and informative comment about areas for improvement. It is argued that the use of automated testing can improve the IT skills of its users (Hunt, Hughes and Rowe, 2002).

MCQ AND EVS

The Electronic Voting System (EVS) is one of the tools that provides students with instantaneous formative feedback (Draper, 2009a; Cargill and Cutts, 2004; Banks, 2006). Most often the EVS uses multiple-choice questions (MCQ), which are criticised for encouraging surface learning. Indeed, a large scale study of the Hot Potatoes software, which has been used for formative assessment at Queen's University in Canada (Miller, 2009), suggests only moderate effectiveness in student learning. The computer-based assessment system, organised into five formats: JQuiz (MCQ), JCloze (fill-in the blanks), JCross (crossword), JMix (word scramble) and JMatching (matching), required a substantial investment, but unfortunately the potential benefits were not achieved to the full extent.

Draper (2009a) explores how MCQ and EVS could be used to trigger deep learning, and a number of studies suggest that it is more beneficial for student learning to ask students to create MCQs, instead of only answering them (Sharp and Sutherland, 2007; Arthur, 2006; Fellenz, 2004).

Linking MCQ with confidence measurement can also be of value: Gardner-Medwin (2006) has developed a system of confidence-based marking that requires students to indicate a confidence level for each MCQ answer they give.
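As an illustration of how such a scheme works in practice, the sketch below computes confidence-weighted marks. The particular mark table (1/2/3 marks when correct, 0/-2/-6 when wrong) follows the values commonly associated with Gardner-Medwin's scheme, but it should be read as an assumption for illustration rather than a definitive account of his system.

```python
# Minimal sketch of confidence-based marking for MCQs.
# The mark table below (1/2/3 if correct, 0/-2/-6 if wrong) is an assumed,
# illustrative scheme; check Gardner-Medwin (2006) for the actual details.

MARKS = {
    1: (1, 0),   # low confidence: small reward, no penalty
    2: (2, -2),  # medium confidence
    3: (3, -6),  # high confidence: big reward, heavy penalty if wrong
}

def cbm_mark(correct: bool, confidence: int) -> int:
    """Return the mark for one MCQ answer given its confidence level (1-3)."""
    reward, penalty = MARKS[confidence]
    return reward if correct else penalty

def cbm_total(answers) -> int:
    """Sum marks over an iterable of (correct, confidence) pairs."""
    return sum(cbm_mark(correct, conf) for correct, conf in answers)

if __name__ == "__main__":
    # Confident and right, unsure and right, confident but wrong.
    print(cbm_total([(True, 3), (True, 1), (False, 3)]))  # 3 + 1 - 6 = -2
```

The design point is that guessing with high confidence is costly, which is what pushes students towards the reflection on their own certainty that the scheme is meant to encourage.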

The MCQ questions should act as initiators for peer interaction or directly for metacognition, which subsequently leads to conceptual learning (Draper, 2009). As in Mazur's method (1997), the brain-teaser MCQs are designed to create uncertainty to stimulate discussion rather than to give straightforward answers.

Therefore, using EVS and MCQ for peer interaction can promote peer feedback and self-directed learning. It is argued that 'just-in-time teaching' (Novak, Patterson, Gavrin and Christian, 1999), providing tailored and instantaneous feedback, is a way of targeting teaching to students' needs.


PEER AND SELF ASSESSMENT AND FEEDBACK

Peer assessment is a process in which students evaluate the performance or achievement of peers (Topping, Smith, Swanson & Elliot, 2000). This innovative assessment approach aims to empower students and foster active learning. A number of authors discuss the benefits of peer assessment for learning (Pope, 2001; Venables and Summit, 2003; Stefani, 1994; Li and Steckelberg, 2005).

The CAP (computerised assessment by peers) system has evolved from a basic marking tool that replicates traditional peer assessment (Davies 2000), to include anonymous communications between marker and marked (Davies 2003) and the inclusion of menu-driven comments and weightings to take into account the subjectivity of the marker and the automatic creation of a mark for marking (Davies 2005). Throughout the various development stages of this system, the importance of feedback and the quality of comments has been emphasised (Davies 2004, 2006).

Keppell et al (2006) attempted to reinforce the importance of learning-oriented peer assessment within technology-enhanced environments. They argue that to 'enhance peer feedback, principles of learning-oriented assessment need to be embedded into group and collaborative learning settings so that we encourage cooperation, communication and the giving and receiving of feedback' (p.463).

On the other hand, self-assessment and feedback is also recognised by different authors. For instance, Nicol (2007b, 2009) implies that self-assessment is at the heart of formative feedback and is a key component of self-regulation. In the REAP project (Re-engineering Assessment Practice) he investigates how ICT can support the development of learner self-regulation and the development of teacher feedback (see www.reap.ac.uk). In the first case study, students were provided with constructive formative assessment and feedback facilitated with WebCT, and the second case explored the use of EVS and Intelligent Homework Systems. The evaluation of the studies in relation to the seven principles of good feedback provides more insight into the effective use of technology to support student learning.

Challis (2005) also emphasises the importance of self-assessment in assisting students to see their work as ongoing learning. In that context ICT, and in particular online formative assessment, can facilitate immediate feedback to help students evaluate their progress. Therefore, it is important to differentiate between on-line formative assessment designed for student feedback and student benefit, and that which is designed to inform educators.

WEB BASED SYSTEMS FOR SELF AND PEER ASSESSMENT FEEDBACK

The Self and Peer Assessment Resource Kit (SPARK) is a web-based template which aims to improve learning from team assessment tasks and make the assessment fairer for students (Freeman and McKenzie, 2002). SPARK allows accurate assessment of the relative contributions of individual students. It could be used in group work to prevent problems with 'free riders' who do not contribute to the group effort.

Review Stage is another tool, used for a postgraduate module in e-learning, which provides students with a 'second chance' at marking and commenting on their peers' essays, having been able to view the peer comments of other markers (Davies 2009).

The WebPA tool developed at Loughborough University provides students with the opportunity to individually enter their scores for themselves (self-assessment) and their peers (peer assessment) against the prescribed assessment criteria (Loddington, Pond, Wilkinson and Willmot, 2009).
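By way of illustration only, the sketch below shows one generic way a peer-moderation factor could be derived from such self and peer scores, by scaling a shared group mark according to each member's share of the ratings. All names are invented and this is not the actual WebPA algorithm, which is described by Loddington et al (2009).

```python
# Simplified sketch of peer-moderated marking of the kind WebPA supports.
# It scales the group mark by each member's share of the self/peer ratings.
# This is an illustration of the general idea, NOT the actual WebPA algorithm.

from collections import defaultdict

def moderated_marks(ratings, group_mark):
    """ratings: iterable of (assessor, assessee, score) tuples, covering
    self-assessment and peer assessment. Returns assessee -> individual mark."""
    totals = defaultdict(float)
    for _assessor, assessee, score in ratings:
        totals[assessee] += score
    mean_total = sum(totals.values()) / len(totals)
    # A member rated above the group mean gets more than the group mark,
    # a member rated below it gets less.
    return {who: round(group_mark * total / mean_total, 1)
            for who, total in totals.items()}

if __name__ == "__main__":
    ratings = [
        ("ann", "ann", 4), ("ann", "bob", 3), ("ann", "cal", 5),
        ("bob", "ann", 4), ("bob", "bob", 3), ("bob", "cal", 5),
        ("cal", "ann", 4), ("cal", "bob", 2), ("cal", "cal", 5),
    ]
    print(moderated_marks(ratings, group_mark=60))
    # {'ann': 61.7, 'bob': 41.1, 'cal': 77.1} - cal, rated above the mean, gains
```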

Self-assessment has also been used in an interactive computer-assisted learning program for teaching gynaecological surgery (Jha, Widdowson and Duffy, 2002). The programme on CD-ROM includes video clips, voice-over, text, an interactive self-assessment section with MCQ and an anatomy self-test. 75% of the 28 students agreed that feedback from the self-assessment was useful, and 75% of participants agreed that using the CD-ROM with videos had been more useful than attending the gynaecology theatre. The paper suggests that in the context of teaching in the operating theatre, often limited by time and resources and by limited interaction between the operating team and the learners, a technology-enhanced teaching tool with interactive feedback could be a beneficial option.

PEER FEEDBACK AND PERFORMANCE

The important relationship between the quality of peer assessment and the quality of student projects was investigated by a number of studies. The study by Li, Liu and Steckelberg (2009) indicated that, when controlling for the quality of the initial projects, there was a significant relationship between the quality of peer feedback students provided for others and the quality of the students' own final projects. However, no significant relationship was found between the quality of peer feedback students received and the quality of their own final projects (see Figure 1).

Figure 1: Web-based peer assessment support system (p.4)
Source: Li, Liu and Steckelberg (2009)

In another study, 34 undergraduate students utilised both web-based peer and self-assessment to evaluate research proposals. The comparison of the original versions and the revised versions of student proposals indicated a significant improvement in quality (Sung, Chen-Shan Lin, Chi-Lung Lee and Chang, 2003).

DIGITAL FEEDBACK

The literature suggests that digital feedback might be a preferred option for students (Denton et al, 2008; Bridge and Appleyard, 2007), as it enables students to address their overall learning development (Ribchester et al, 2007). It can also provide useful immediate spoken observations on students' practical sessions (Epstein et al, 2002). Different authors argue that audio feedback tends to be more extensive, easier to access, and has more depth (Merry et al, 2007; Gomez, 2008; Rotheram, 2008).


12


From the staff perspective analogue
-
recorded feedback c
an be less stressful and time
consuming for tutors (Nortcliffe and Middleton, 2007). However, the attention to the manner
of speech (tone of voice, emphasis, pauses etc) cannot be ignored (Race, 2008).

Audio feedback is not yet widely used and, despite the potential benefits discussed above, there is a question as to what extent audio feedback supports student learning and academic achievement. Since using technology for formative feedback 'is not a cheap or easy option' (Irons, 2008), it is important to evaluate in depth the impact of audio feedback on student learning.

HEA PROJECT: UNIVERSITY OF EXETER

Rodway-Dyer et al (2008) explore how audio and screen visual feedback can support student learning, evaluating: (a) audio feedback on a written assignment offered to a sample of 73 first year geography undergraduates, and (b) video providing ongoing feedback from laboratory sessions, made available to 180 first-year Biosciences students.

Overall, students in this project did think that audio or screen visual feedback would enable them to improve future performance. But it was not clear whether audio or written feedback would be more effective, or whether students would access audio feedback more regularly than written feedback.

A major benefit of audio and video feedback is solving the problem of illegible handwriting. However, in the case of international students there might be a problem of listening comprehension, and the study does not address this issue.

Interestingly, 76% of students wanted face-to-face feedback from a tutor in addition to other forms of feedback. Thus the effectiveness of technology has to be re-examined in further research.

JISC SOUNDS GOOD PROJECT

Bob Rotheram (2009) tested the hypothesis that using digital feedback can benefit staff and students by: (a) saving assessors' time (speaking the feedback rather than writing it) and (b) providing richer feedback to students (speech is a richer medium than written text).

The responses in the study were mixed, but it is worth noting that a substantial proportion of participants thought that providing audio feedback took more time than, or the same time as, written feedback.

The most favourable circumstances for improving the efficiency of audio feedback were identified as: (a) the assessor is comfortable with the technology; (b) the assessor writes or types slowly but records their speech quickly; (c) a substantial amount of feedback is given; and (d) a quick and easy method of delivering the audio file to the student is available (p.10).

Numerous practical and technical recommendations are also discussed in the report.





E-PORTFOLIOS

E-portfolios are often praised for reflection, in-depth learning and criticality (Zubizarreta, 2004). They create an opportunity for collaborative construction of knowledge between students and teacher and between the students themselves (Hunt and Pellegrino, 2002; Woodward and Nanlohy, 2004). However, there are some limitations to the learning in the classic e-portfolio, thus Barbera (2009) proposes an alternative application called the netfolio: a network of student e-portfolios:

'Class student e-portfolios are interconnected in a unique netfolio such that each student assesses their peers' work and at the same time is being assessed. This process creates a chain of co-evaluators, facilitating a mutual and progressive improvement process. Results about teachers' and students' mutual feedback are presented and the benefits of the process in terms of academic achievements.' (p.342)

Barbera starts with an assumption that one of the leading mechanisms in the educational process is the assessment procedure of learning focused on regular qualitative feedback. It is argued that sometimes the feedback process in online education needs to be more explicit than in face-to-face education to have similar educational effects (Rovai, Ponton, Derrick and Davis, 2006). The netfolio seems to be a useful tool in achieving this aim because of the inclusion of peer and co-assessment processes and their consequences (Dochy, Segers & Sluijsmans, 1999; Olina & Sullivan, 2004).

Comparing the classic e-portfolio and the netfolio, she points to the added benefits in the location and product of feedback in netfolios. The satisfaction of the participants in the study is argued to be related to the monitoring and feedback of the class work. 20% of students using the e-portfolio reported feedback as an obstacle to improving their work, compared with only 8% of students using the netfolio. Overall, 87.5% of students using the netfolio were satisfied with class work, compared with 69% of students using the e-portfolio. Students using netfolios also seem to demonstrate a greater perception of improvement (56%) than the students who used e-portfolios (40%).

Table 1: Similarities and differences between e-portfolio and netfolio systems (p. 348)





Barbera postulates that feedback between peers should not be an accidental practice, but an essential, integrated component of the evaluation system. Arguably, the netfolio increases the students' revisions and directs the tutor towards 'observant assessment', where the teacher does not provide more feedback in the development of the netfolio than in the classical e-portfolio but must bring together a more complex network of interactions between the students and intervene when necessary.


DISSERTATION SUPERVISION

Good quality and frequent feedback is a key element of an effective supervision process, and some academics explore using technology to enhance student-supervisor communication. Heinze and Heinze (2009) postulate a combination of technology-enabled communication where face-to-face and electronic formative assessment is used. In addition, the blended e-learning skeleton of conversation model provides a sound theoretical framework that could guide supervisors and students in the supervision process (see Figure 2).

Figure 2: Blended E-Learning skeleton of Conversation
Source: Heinze and Heinze, 2009


Using ICT for formative assessment has also been explored in the context of supervising doctoral students (Crossouard and Pryor, 2009). The formative assessment in this pilot is outlined in Figure 3.




Figure 3: Formative assessment structure
Source: Crossouard and Pryor, 2009, p.381


The evidence from this study suggests that e-feedback allows tutors to contribute productively to students' work by: managing the affective dimensions of feedback; improving writing practices in a disciplinary context; and bringing the tutor's insights very concretely into students' texts. However, problematic was the level of authority attributed to comments, which seemed related to cultural expectations of a tutor's role, students' previous summative assessment experience and the materiality of email texts.


DISCUSSION AND FURTHER RESEARCH


The reviewed literature suggests that a large proportion of studies are small scale projects, which in some cases might call the impact of the studies into question. There is substantial research on using feedback in formative assessments and in peer- and self-regulated assessments. The variety of technologies applied by different universities is described in detail, allowing practitioners to make an informed decision before adopting any of these tools. The numerous benefits of providing e-feedback to students, staff and institutions are also widely discussed. However, the real impact of e-feedback on student learning and the improvement of their performance requires further research.


Some attempts have been made to discuss e-feedback in a more conceptual framework with reference to technological scaffolding (McLoughlin and Luca, 2002; Whitelock, 2006), where structured feedback could help students to be more reflexive, thanks to the responsiveness and immediacy of feedback offered by the technology. However, further research would be recommended in the following areas:


1. Pedagogical aspects of using e-feedback
   a. Conceptual framework
   b. Dimensionality of e-feedback
   c. Relationship with performance

2. Students and staff perception of e-feedback
   a. Improved efficiency
      i. Does e-feedback save tutors' time?
      ii. Is it easier for students to access it?
   b. Improved quality
      i. Is e-feedback more meaningful?
      ii. Can e-feedback offer more in-depth information?

3. Comparison of the impact on student learning between:
   a. Face-to-face feedback
   b. Hand-written feedback
   c. E-feedback

4. Impact of e-feedback on different audiences
   a. International students
   b. Students with special needs





















REFERENCE LIST

Angus, A.D. and Watson, J. (2009) Does regular online testing enhance student learning in the numerical sciences? Robust evidence from a large data set. British Journal of Educational Technology, 40 (2), pp. 255-272.

Arthur, N. (2006) Using student-generated assessment items to enhance teamwork, feedback and the learning process. Synergy: Supporting the Scholarship of Teaching and Learning at the University of Sydney, 24, pp. 21-23.

Assessment Standards Knowledge exchange (ASKe) (2008) Position paper. [Online] Available at: <http://www.brookes.ac.uk/aske/documents/ASKePositionPaper.pdf> (Retrieved 20 October, 2008).

Assessment Standards Knowledge exchange (ASKe) (2009) Osney Grange Group: Agenda for change. [Online] Available at: <http://www.brookes.ac.uk/aske/OGG.html> (Retrieved 20 December, 2009).

Attali, Y. and Burstein, J. (2006) Automated essay scoring with e-rater® v.2. Journal of Technology, Learning and Assessment, 4 (3), pp. 1-30.

Ball, S. (2009) Accessibility in e-assessment. Assessment & Evaluation in Higher Education, 34 (3), pp. 293-303.

Banks, D.A. (2006) Audience response systems in higher education: Applications and cases. Information Science Publishing.

Barbera, E. (2009) Mutual feedback in e-portfolio assessment: an approach to the netfolio. British Journal of Educational Technology, 40 (2), pp. 342-357.

Bloxham, S. and Boyd, P. (2007) Developing effective assessment in higher education: a practical guide. Berkshire: Open University Press.

Boyle, A. and Hutchison, D. (2009) Sophisticated tasks in e-assessment: What are they and what are their benefits? Assessment & Evaluation in Higher Education, 34 (3), pp. 305-319.

Bridge, P. and Appleyard, R. (2005) System failure: A comparison of electronic and paper-based assignment submission, marking, and feedback. British Journal of Educational Technology, 36 (4), pp. 669.

Brown, S., Rust, C. and Gibbs, G. (1994) Strategies for diversifying assessment in higher education. Oxford: Oxford Centre for Staff Development.

Buchanan, T. (1998) Using the world wide web for formative assessment. Journal of Educational Technology Systems, 27 (1), pp. 71-79.

Challis, D. (2005) Committing to quality learning through adaptive online assessment. Assessment & Evaluation in Higher Education, 30 (5), pp. 519-527.

Chang, C.C. and Tseng, K.H. (2009) Use and performances of Web-based portfolio assessment. British Journal of Educational Technology, 40 (2), pp. 358-370.

Crossouard, B. and Pryor, J. (2009) Using email for formative assessment with professional doctorate students. Assessment & Evaluation in Higher Education, 34 (4), pp. 377-388.


Cutts, Q., Carbone, A. and van Haaster, K. (2004) Using an electronic voting system to promote active reflection on coursework feedback. Proceedings of the International Conference on Computers in Education. Citeseer.

Davies, P. (2000) Computerized peer assessment. Innovations in Education & Training International, 37 (4), pp. 346-355.

Davies, P. (2003) Closing the communications loop on the computerized peer-assessment of essays. Association of Learning Technology Journal, 11 (1), pp. 41-54.

Davies, P. (2004) Don't write just mark: the validity of assessing student ability via their computerized peer-marking of an essay rather than their creation of an essay. Association of Learning Technology Journal, 12 (3), pp. 263-279.

Davies, P. (2006) Peer assessment: judging the quality of students' work by comments rather than marks? Innovations in Education & Training International, 43 (1), pp. 69-82.

Davies, P. (2009) Review and reward within the computerised peer-assessment of essays. Assessment & Evaluation in Higher Education, 34 (3), pp. 321-333.

Denton, P., Madden, J., Roberts, M. and Rowe, P. (2008) Students' response to traditional and computer-assisted formative feedback: A comparative case study. British Journal of Educational Technology, 39 (3), pp. 486.

Dochy, F., Segers, M. and Sluijsmans, D. (1999) The use of self-, peer and co-assessment in higher education: a review. Studies in Higher Education, 24 (3), pp. 331-350.

Doughty, G. (2000) Web-based assessment: UK initiatives. University of Western Australia.

Draper, S.W. (2009a) Catalytic assessment: understanding how MCQs and EVS can foster deep learning. British Journal of Educational Technology, 40 (2), pp. 285-293.

Draper, S.W. (2009b) What are learners actually regulating when given feedback? British Journal of Educational Technology, 40 (2), pp. 306-315.

Fellenz, M.R. (2004) Using assessment to support higher level learning: the multiple choice item development assignment. Assessment & Evaluation in Higher Education, 29 (6), pp. 703-729.

Fisher, M. and Baird, D.E. (2006) Making mLearning Work: Utilizing Mobile Technology for Active Exploration, Collaboration, Assessment, and Reflection in Higher Education. Journal of Educational Technology Systems, 35 (1), pp. 3-30.

Freeman, M. and McKenzie, J. (2002) SPARK, a confidential web-based template for self and peer assessment of student teamwork: benefits of evaluating across different subjects. British Journal of Educational Technology, 33 (5), pp. 551-569.

Gardner-Medwin, A.R. (2006) Confidence-based marking: towards deeper learning and better exams. In Innovative assessment in higher education, eds. C. Bryan and K. Clegg, London: Routledge.

Gibbs, G. and Simpson, C. (2004) Measuring the response of students to assessment: the Assessment Experience Questionnaire. In C. Rust (ed.), Improving Student Learning: Theory, Research and Scholarship. Oxford: Oxford Centre for Staff and Learning Development.

Gomez, S. (2008) Making assessment feedback meaningful and rapid. Sixth Annual Conference and Exhibition from Assessment Tomorrow: Innovative Use of Technology to Assess and Support Learning, London, 12-13/3/2008. [Online] Available at: <http://www.e-assess.co.uk/html/speakers.html> (Retrieved 20 September, 2009).

Grover, T.H. (2008a) Part 1: Digital Age Assessment - A Look at Technology Tools that Aid Formative Assessment. Technology & Learning, 28 (8), pp. 28.

Grover, T.H. (2008b) Part 2: Digital Age Assessment - A Look at How Technology Use in Formative Assessments Improves Feedback and Reporting Opportunities. Technology & Learning, 28 (9), pp. 22.

Guardado, M. and Shi, L. (2007) ESL students' experiences of online peer feedback. Computers and Composition, 24 (4), pp. 444-462.

Handley, K., Szwelnik, A., Ujma, D., Lawrence, L., Millar, J. and Price, M. (2007) When less is more: Students' experiences of assessment feedback. Paper presented at the Higher Education Academy.

HEA (2008) Technology, Feedback, Action! Literature Review. Higher Education Academy funded project. [Online] Available at: <http://evidencenet.pbworks.com/Technology,-Feedback,-Action!-Literature-Review> (Retrieved 20 October, 2009).

Heinrich, E., Milne, J., Ramsay, A. and Morrison, D. (2009) Recommendations for the use of e-tools for improvements around assignment marking quality. Assessment & Evaluation in Higher Education, 34 (4), pp. 469-479.

Heinze, A. and Heinze, B. (2009) Blended e-learning skeleton of conversation: Improving formative assessment in undergraduate dissertation supervision. British Journal of Educational Technology, 40 (2), pp. 294-305.

Higgins, R., Hartley, P. and Skelton, A. (2002) The conscientious consumer: reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27 (1), pp. 53-64.

Hunt, E. and Pellegrino, J.W. (2002) Issues, examples, and challenges in formative assessment. New Directions for Teaching and Learning, 2002 (89), pp. 73-85.

Hunt, N., Hughes, J. and Rowe, G. (2002) Formative automated computer testing (FACT). British Journal of Educational Technology, 33 (5), pp. 525-535.

Irons, A. (2007) Enhancing learning through formative assessment and feedback. Routledge.

Jha, V., Widdowson, S. and Duffy, S. (2002) Development and evaluation of an interactive computer-assisted learning program - a novel approach to teaching gynaecological surgery. British Journal of Educational Technology, 33 (3), pp. 323-331.

Jones, D. and Behrens, S. (2003) Online assignment management: An evolutionary tale. Proceedings of HICSS. Citeseer.

Jones, A. (2007) Evaluation of generic, open-source, web based marking tools with regard to their support for the criterion-referenced marking and the generation of student feedback. HEA Report. [Online] Available at: <http://www.heacademy.ac.uk/projects/detail/learningandtechnology/elro/elro_qub_06> (Retrieved 20 September, 2009).

Jordan, S. and Mitchell, T. (2009) e-Assessment for learning? The potential of short-answer free-text questions with tailored feedback. British Journal of Educational Technology, 40 (2), pp. 371-385.


Juwah, C., Macfarlane-Dick, D., Matthew, B., Nicol, D., Ross, D. and Smith, B. (2004) Enhancing student learning through effective formative feedback. Higher Education Academy. [Online] Available at: <https://ltsn.ac.uk/genericcentre> (Retrieved 20 October, 2009).

Keppell, M., Au, E., Ma, A. and Chan, C. (2006) Peer learning and learning-oriented assessment in technology-enhanced environments. Assessment & Evaluation in Higher Education, 31 (4), pp. 453-464.

Knight, P.T. (2002) The Achilles' Heel of Quality: the assessment of student learning. Quality in Higher Education, 8 (1), pp. 107-115.

Landauer, T.K., Laham, D. and Foltz, P.W. (2003) Automated scoring and annotation of essays with the Intelligent Essay Assessor. In Automated essay scoring: a cross-disciplinary perspective, eds. M.D. Shermis and J. Burstein, NJ: Lawrence Erlbaum Associates, Inc., pp. 87-112.

Laurillard, D. (1993) Rethinking university teaching: A framework for the effective use of educational technology. New York: Routledge.

Leacock, C. and Chodorow, M. (2003) C-rater: automated scoring of short-answer questions. Computers and Humanities, 37 (4), pp. 389-405.

Li, L. and Steckelberg, A.L. (2005) Impact of technology-mediated peer assessment on student project quality. Proceedings of Association for Educational Communications and Technology International Conference. AECT, pp. 307-313.

Li, L., Liu, X. and Steckelberg, A.L. (2009) Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology, 40 (2).

Linn, R.L. and Miller, M.D. (2005) Measurement and assessment in teaching. Columbus, OH: Pearson Merrill Prentice Hall.

Loddington, S., Pond, K., Wilkinson, N. and Willmot, P. (2009) A case study of the development of WebPA: An online peer-moderated marking tool. British Journal of Educational Technology, 40 (2), pp. 329-341.

Mazur, E. (1997) Peer instruction. London: Prentice Hall.

McDowell, L., Sambell, K., Bazin, V., Penlington, R., Wakelin, D., Wickes, H. and Smailes, J. (2006) Assessment for Learning: Current Practice Exemplars from the Centre for Excellence in Teaching and Learning in Assessment for Learning. Northumbria University.

McLoughlin, C. and Luca, J. (2002) A learner-centred approach to developing team skills through web-based learning and assessment. British Journal of Educational Technology, 33 (5), pp. 571-582.

Merry, S., Orsmond, P. and Galbraith, D. (2007) Does providing academic feedback to students via mp3 audio files enhance learning? HEA Centre for Bioscience. [Online] Available at: <http://www.bioscience.heacademy.ac.uk/resources/projects/merry.aspx> (Retrieved 20 September, 2009).

Millar, J. (2005) Engaging students with assessment feedback: What works: Literature Review. FDTL5 Project Report. [Online] Available at: <https://mw.brookes.ac.uk/download/attachments/2851502/FDTL5+Engaging+Students+with+Feedback+-+Literature+Review+-+Sept+2005.pdf?version=1> (Retrieved 20 October, 2009).


Miller, T. (2009) Formative computer
-
based assessment in higher education: the effectiveness of
feedback in supporting student learning.
Assessment & Evaluation in Higher Education,
34 (2), pp.
181
-
192.

N
aismith, L., Lonsdale, P., Vavoula, G. and Sharples, M. (2004) Literature review in mobile
technologies and learning.
FutureLab Report,
11.

Nicol, D. (2007) [Online]
Principles of good assessment and feedback: Theory and practice
. Keynote
speech, May
2007. [
Online]
Available at:
<
http://www.reap.ac.uk/reap07/Portals/2/CSL/feast%20of%20case%20studies/Examples_of_assess
ment_design_for_learner_responsibility.pdf
> (
Retrieved 20 September, 2009).

Nicol, D. (2007a) E
-
assessment by design: using multiple
-
choi
ce tests to good effect.
Journal of
Further and Higher Education,
31 (1), pp. 53
-
64.

Nicol, D. (2007b) Laying a foundation for lifelong learning: Case studies of e
-
assessment in large 1st
-
year classes.
British Journal of Educational Technology,
38 (4), pp
. 668
-
678.

Nicol, D. (2009) Assessment for learner self
-
regulation:enhancing achievement in the first year using
learning technolgies.
Assessment & Evaluation in Higher Education,
34 (3), pp. 335
-
352.

Nicol, D.J. and Macfarlane
-
Dick, D. (2006) Formative
assessment and self
-
regulated learning: A
model and seven principles of good feedback practice.
Studies in Higher Education,
31 (2), pp. 199
-
218.

Nortcliffe, A.L. and Middleton, A. (2007) Audio feedback for the iPod Generation.
Proceedings of
Internationa
l Conference on Engineering Education, Coimbra, Portugal, 2007
.
Coimbra, Portugal. 3

7/9/2007
[Online] Available at: <http://
icee2007.dei.uc.pt/proceedings/papers/489.pdf >
(Retrieved 20
September, 2009).

Northcote, M. (2002) Colloquium: Online aseessment:
friend, foe or fix?
British Journal of Educational
Technology,
33 (5), pp. 623
-
625.

Novak, G.M., Patterson, E.T., Gavrin, A. and Christian, W. (1999) Just-In-Time Teaching: Blending Active Learning With Web Technology. New Jersey: Prentice Hall.

Olina, Z. and Sullivan, H.J. (2004) Student self-evaluation, teacher evaluation, and learner performance. Educational Technology Research and Development, 52 (3), pp. 5-22.

Peat, M. and Franklin, S. (2002) Supporting student learning: the use of computer-based formative assessment modules. British Journal of Educational Technology, 33 (5), pp. 515-523.

Pope, N. (2001) An examination of the use of peer rating for formative assessment in the context of the theory of consumption values. Assessment & Evaluation in Higher Education, 26 (3), pp. 235-246.

Price, B. and Petre, M. (1997) Teaching programming through paperless assignments: an empirical evaluation of instructor feedback. Proceedings of the 2nd conference on Integrating technology into computer science education, pp. 94.

Price, M. and O'Donovan, B. (2006) Improving performance through enhancing student understanding of criteria and standards. In: Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education. London: Routledge.

Pritchett, N. and Zakrzewski, S. (1996) Interactive computer assessment of large groups: student responses. Innovations in Education and Training International, 33 (3), pp. 242-247.


Race, P. (2008) Assessment, Learning and Teaching Reflections, for Leeds Metropolitan 'Sounds Good' week. [Online] Available at: <http://www.leedsmet.ac.uk/the_news/alt_reflections/1F9D98B779D84BA3930017E4E833E33B_07Apr08.htm> (Retrieved 20 September, 2009).

Ramsden, P. (1992) Learning to teach in higher education. London: Routledge.

Ribchester, C., France, D. and Wheeler, A. (2007) Podcasting: a tool for enhancing assessment feedback? The 4th Education in a Changing Environment Conference, Salford, 12-14/9/2007. [Online] Available at: <http://chesterrep.openrepository.com/cdr/handle/10034/15074> (Retrieved 20 December, 2009).

Rodway-Dyer, S., Dunne, E. and Newcombe, M. (2008) 0207 Audio and screen visual feedback to support student learning. [Online] Available at: <http://repository.alt.ac.uk/641/1/ALT-C_09_proceedings_090806_web_0207.pdf> (Retrieved 20 December, 2009).

Roland, J. (2006) Measuring Up: Online Technology Assessment Tools Ease the Teacher's Burden and Help Students Learn. Learning & Leading with Technology, 34 (2), pp. 12-17.

Rotheram, B. (2008) Sounds Good: Quicker, better assessment using audio feedback. JISC-funded project, Jan-July 2008. [Online] Available at: <http://www.jisc.ac.uk/whatwedo/programmes/programme_users_and_innovation/soundsgood.aspx> (Retrieved 20 September, 2009).

Rovai, A.P., Ponton, M.K., Derrick, M.G. and Davis, J.M. (2006) Student evaluation of teaching in the virtual and traditional classrooms: A comparative analysis. The Internet and Higher Education, 9 (1), pp. 23-35.

Russell, J., Elton, L., Swinglehurst, D. and Greenhalgh, T. (2006) Using the online environment in assessment for learning: a case study of a web-based course in primary care. Assessment and Evaluation in Higher Education, 31 (4), pp. 465-478.

Rust, C. (2000) An opinion piece: a possible student-centred assessment solution to some of the current problems of modular degree programmes. Active Learning in Higher Education, 1 (2), pp. 126.

Rust, C., O'Donovan, B. and Price, M. (2005) A social constructivist assessment process model: how the research literature shows us this could be best practice. Assessment & Evaluation in Higher Education, 30 (3), pp. 231-240.

Rust, C., Price, M. and O'Donovan, B. (2003) Improving students' learning by developing their understanding of assessment criteria and processes. Assessment & Evaluation in Higher Education, 28 (2), pp. 147-164.

Sadler, D.R. (1989) Formative assessment and the design of instructional systems. Instructional Science, 18, pp. 119-144.

Sadler, D.R. (1998) Formative assessment: revisiting the territory. Assessment in Education, 5 (1), pp. 77-84.

Sharp, A. and Sutherland, A. (2007) Learning Gains… "My (ARS)": The impact of student empowerment using Audience Response Systems Technology on Knowledge Construction, Student Engagement and Assessment. The REAP International Online Conference on Assessment Design for Learner Responsibility, pp. 29.


Sieber, V. (2009) Diagnostic online assessment of basic IT skills in 1st-year undergraduates in the Medical Sciences Division, University of Oxford. British Journal of Educational Technology, 40 (2), pp. 215-226.

Slavin, R. (2003) Reader's Guide to Scientifically Based Research. Educational Leadership, 60 (5), pp. 12-16.

Stefani, L. (1994) Peer, self and tutor assessment: relative reliabilities. Studies in Higher Education, 19 (1), pp. 69-75.

Sung, Y.T., Lin, C.S., Lee, C.L. and Chang, K.E. (2003) Evaluating proposals for experiments: an application of web-based self-assessment and peer assessment. Teaching of Psychology, 30 (4), pp. 331-334.

Topping, K. (2003) Self and peer assessment in school and university: Reliability, validity and utility. Optimizing New Modes of Assessment: In Search of Qualities and Standards, pp. 55-87.

Topping, K.J., Smith, E.F., Swanson, I. and Elliot, A. (2000) Formative peer assessment of academic writing between postgraduate students. Assessment & Evaluation in Higher Education, 25 (2), pp. 149-169.

Tuzi, F. (2004) The impact of e-feedback on the revisions of L2 writers in an academic writing course. Computers and Composition, 21 (2), pp. 217-235.

van den Boom, G., Paas, F., Van Merrienboer, J.J.G. and Van Gog, T. (2004) Reflection prompts and tutor feedback in a web-based learning environment: Effects on students' self-regulated learning competence. Computers in Human Behavior, 20 (4), pp. 551-567.

Venables, A. and Summit, R. (2003) Enhancing scientific essay writing using peer assessment. Innovations in Education and Teaching International, 40 (3), pp. 281-290.

Vendlinski, T.P., Niemi, D., Wang, J. and Monempour, S. (2008) Improving Formative Assessment Practice with Educational Information Technology. CRESST Report 739. National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

Whitelock, D. (2009) Editorial: e-assessment: developing new dialogues for the digital age. British Journal of Educational Technology, 40 (2), pp. 199-202.

Wills, G., Davis, H., Gilbert, L., Hare, J., Howard, Y., Jeyes, S., Millard, D. and Sherratt, R. (2009) Delivery of QTIv2 question types. Assessment & Evaluation in Higher Education, 34 (3), pp. 353-366.

Winter, C. and Dye, V.L. (2004) An investigation into the reasons why students do not collect marked assignments and the accompanying feedback. [Online] Available at: <http://wlv.openrepository.com/wlv/bitstream/2436/3780/1/An%20investigation%20pgs%20133-141.pdf> (Retrieved 20 October, 2009).

Woodward, H. and Nanlohy, P. (2004) Digital portfolios: fact or fashion? Assessment & Evaluation in Higher Education, 29 (2), pp. 227-238.

Zubizarreta, J. (2004) The learning portfolio: Reflective practice for improving student learning. Bolton, MA: Anker Pub. Co.