Automatic Evaluation of Mathematical Open Questions
Eman K. Elsayed
Faculty of Science (Girls), Al-Azhar University, Egypt
emankaram10@azhar.ed.eg
Shaimaa M. Tawfeek
Faculty of Science (Girls), Al-Azhar University, Egypt
Shi-m@hotmail.com
Abstract:
This paper proposes a fuzzy automatic evaluation web method. The proposed method uses the fuzzy concept to evaluate mathematical open questions in e-learning. This question type has a variable number of mathematical operations, and its solution steps are not unique, so the proposed method uses a combination of sets and vectors to generate one multi-dimensional matrix. Each student's answers represent his or her own thinking, knowledge, and cognitive ability in solving problems.
Keywords: e-learning, open question, fuzzy evaluation.
Introduction:
A practical test usually contains multiple types of questions: multiple-choice questions with one correct answer, multiple-choice questions with many correct answers, matching questions, fill-the-gap questions, true/false questions, and open questions. Each answered question type should be introduced to the computer system in a way that allows it to be interpreted, searched, indexed, and processed by the system to support automated test evaluation, giving the author the opportunity to select an advanced evaluation method in the education process.
Nowadays, open question evaluation in e-learning is an important field in computer science. It is based on the answers each student creates and the way of thinking he/she follows; different techniques introduce questions to be evaluated as a true or false value. There are five key requirements to verify when considering the use of automated scoring systems for open-ended questions: automated scores are consistent with the scores from expert human graders; the way automated scores are produced is understandable and substantively meaningful; automated scores are fair; automated scores have been validated against external measures in the same way as is done with human scoring; and the impact of automated scoring on reported scores is understood [1].
In the area of mathematics, the performance of automated scoring systems is typically quite robust when the response format is constrained. The types of mathematics item responses that can be scored by automated systems include mathematical equations or expressions, two-dimensional geometric figures, linear, broken-line or curvilinear plots, bar graphs, and numeric entry. For the more constrained response types, the most notable limitation is that automated scoring assumes computer test delivery and data capture, which in turn may require an equation editor or graphing interface that students can use comfortably.
The proposal in this paper gives a brief description of the automated online evaluation of a certain type of mathematical open question, called the operational mathematical proof question. Our algorithm uses the advantages of a matrix evaluation mechanism, which generates all possible right solutions with scores based on the concept of fuzziness, for the sake of the student's cognitive ability in solving questions.
The paper is organized as follows. Section 2 presents the main concepts, section 3 displays the literature review, section 4 presents the proposed method and its algorithm, section 5 implements the proposed method in a case study, and finally the conclusion and further work are given.
2- Main concept:
Deploying Automated Scoring:
There are a number of ways in which automated scoring might be deployed. The decision about how to deploy is influenced by two aspects of the automated scoring system: measurement characteristics and speed. The measurement characteristics of automated scoring are typically a function of the automated scoring system, the type of task it is applied to, the population, and the purpose for which the system and scores are to be used. In contrast, the speed of response is a combination of the time required for the computer to score the response, the speed of the connection between the administration terminal and the scoring server, and the capacity of the scoring servers available to meet scoring demand at any given time. As such, the response time can be reduced to some extent through greater investment in the hardware infrastructure for scoring.
One way to deploy automated scoring is as the sole score for an item, with no human oversight or intervention. This model is typical when the measurement characteristics of the automated scores are sufficient for the purposes of the assessment. The item types that most lend themselves to this use of automated scores are those for which the expected set of responses is relatively predictable, such as some mathematics items, some types of simulation-based tasks, some short content responses, and some types of spoken responses. Use of automated scoring alone for other task types, including essays, is common in low-stakes assessment but less common for high-stakes assessment. If the speed of scoring is very high for such systems, the item can be used like any other in computerized assessment, including in computerized adaptive tests.
An alternate way to deploy automated scoring is as one of two scores for a response, with the other score being provided by a human rater. This model is common primarily for essay scoring, with some short content responses and some types of spoken responses also potentially lending themselves to this approach. The speed of automated scoring is less of a factor in this model, since the final score would not be determined until after the human score was also obtained.
Matrix (plural matrices): a rectangular arrangement (or array) of numbers, symbols, or expressions, arranged in rows and columns. The individual items in a matrix are called its elements or entries.
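As a small illustration (the numeric values below are chosen only for exposition):

```latex
A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}
```

Here A is a 2 x 3 matrix: it has two rows and three columns, and the entry a_12 = 2 is the element in row 1, column 2.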
3- Literature review:
Human scoring is not the only option for scoring open question tests. Recent advances in artificial intelligence and computing technology, paired with research over several decades, have yielded a number of systems in which computers assign scores to open questions. The growth in the use of automated scoring is due to the ability of such systems to produce scores more quickly and at a lower cost than human scoring. In addition, because automated scoring systems are consistent in how they produce scores, they support longitudinal analysis of performance trends and equity in scoring. Some applications of these systems provide detailed feedback on aspects of performance.
In reference [2] the authors develop Numbas: a new SCORM-2004 compliant, open-source and multi-platform e-assessment and e-learning system focused on rich formative e-assessment and learning. It blends powerful mathematical and statistical functionality based upon proven design principles, using the full capability and resources of the internet. It can be used for all numerate disciplines in education and training, and builds upon and extends successful designs and implementations used for many years in HE, FE and secondary education.
Reference [3] clarifies how to provoke student thinking and deepen conceptual understanding in the mathematics classroom, using eight tips for asking effective questions: anticipate student thinking, link to learning goals, pose open questions, pose questions that actually need to be answered, incorporate verbs that elicit higher levels of Bloom's taxonomy, pose questions that open up the conversation to include others, keep questions neutral, and provide wait time. Questioning is a powerful instructional strategy.
Reference [4], in contrast, concentrates on tasks that call for complex constructed responses not amenable to scoring via exact-matching approaches. In literacy, three task types and approaches to scoring are described. For mathematics, four task categories are examined: (1) those calling for equations or expressions, where the problem has a single mathematically correct answer that can take many different surface forms; (2) those calling for one or more instances from a potentially open-ended set of numeric, symbolic, or graphical responses that meet a given set of conditions; (3) those requiring the student to show the symbolic work leading to the final answer; and (4) those asking for a short text explanation. For various reasons, none of these task classes has been regularly used in consequential assessments; the reference identifies potential uses and challenges around automated scoring for making better-informed planning and implementation decisions.
Reference [5] described a research project that can form the basis of an open platform for testing and evaluating various learning and instruction methods. It describes the relationship between problem solving and mathematical knowledge in an online mathematics community, examining the activities of people and analyzing these activities in context. The paper proposed adding three core features to the software system that underlies PlanetMath, building a problem-solving layer over the encyclopedia layer that comprises the central feature of the current PlanetMath.org. Reference [6] explores how Moodle can enhance students' construction of their mathematics knowledge. The research findings showed that students were able to construct their mathematics knowledge through Moodle in virtual classes or an online learning environment by communicating and receiving help from peers. The students explored their mathematics activities with the Geometer's Sketchpad; they interacted by dragging and animating as much as they wanted.
Despite these efforts, it is still widely recognized that manual scoring is the most reliable method for evaluating any type of open question.
4- Proposed methodology:
Each student has his/her own exam test and interacts with the e-learning system through his/her browser. The student is asked to answer a set of different types of questions in a specific time (determined by the admin). Each question/answer has a specific way of evaluation.
In this section we display the abstract structure of this part of our system, which performs automatic fuzzy scoring of the type of mathematical question called the operational mathematical proof question. The structure is shown in figure [1].

Figure 1: structure of the proposed system
We develop an operational proof question evaluation tool used in e-learning systems. Our proposal uses the matrices concept to merge the vector evaluation technique and the set evaluation technique, i.e.

set evaluation technique +h vector evaluation technique = matrix evaluation technique

where +h means hybrid technique. This update in the main idea of the evaluation algorithm allows the tool to automate the human rater in the scoring. Also, using the matrices concept makes the proposed method suitable for any answers that have ordered steps.
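The contrast between the two combined techniques can be sketched as follows; this is an illustrative Python sketch with assumed example values, not part of the paper's PHP implementation:

```python
def vector_score(student, template):
    """Vector evaluation: a value only counts when it sits at the
    correct position in the ordered template."""
    hits = sum(1 for s, t in zip(student, template) if s == t)
    return hits / len(template)

def set_score(student, template):
    """Set evaluation: a value counts wherever it appears;
    position in the solution is ignored."""
    return len(set(student) & set(template)) / len(set(template))

template = [15, 3, 9]   # teacher's ordered solution steps (example values)
student = [3, 15, 9]    # correct values, first two swapped

print(vector_score(student, template))  # the swap is penalized (~0.33)
print(set_score(student, template))     # the swap is ignored (1.0)
```

The matrix evaluation technique keeps the positional strictness of the vector view per row, while the set view supplies, for each position, the set of values that may legitimately occur there.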
In general, we find that some final solution values depend on the previous existing values, i.e. we cannot reach the final correct answer value if the previous value related to it is wrong. So, while writing created solution values, the steps should be set in the correct order with correct values; in almost all cases the order of each solution step is important. Vector evaluation restricts the position of each value in the answer-set template, while set evaluation is concerned with the number of all possible values without restricting their position in the solution. We combine the two to propose the evaluation matrix, which has a set of array values generating the set of all possible values that have a probability to exist in the solution, concerning the positional importance of each item in the created solution, and evaluating answers both relatively and absolutely. So we can measure the similarity between student answers and a certain row in the generated matrix of solutions.
[Figure 1 components: the teacher supplies the final solution template and the student supplies answers to the open question evaluation tool, which deploys fuzzy scoring and inserts the student's score into the student DB.]
To prevent plagiarism, teachers sometimes require descriptive details for each question's answer; as the number of steps decreases, the score decreases as well. Each item in the evaluation matrix is assigned a specific score: if a correspondence exists between the student's and the teacher's items, the score is gained; otherwise it is not. Summing all those scores gives the student's total score.
The general form of the evaluation matrix, in which each item/element of the teacher template holds one value or a set of values, covering all possible sets of values, is:

A = [a_zj], z = 1, ..., n, j = 1, ..., n

The general form of the student solution, covering all possible sets of created values, is:

S = (s_1, s_2, ..., s_m)

where z references the row number in the evaluation matrix and m is the number of student solution values.
Algorithm steps:
I- Input: teacher's question, teacher's model answers and student's solutions.
II- The processes:
1- Count the mathematical operations (n elements) in the teacher's question.
2- Assign each item in the teacher's row answer solution a specific score.
3- Create an n x n matrix.
4- Insert the teacher's answers/items into this n x n matrix as the first row.
5- Generate the next row: its 1st item is the 2nd item of the previous row; proceed in this way until the last item is reached.
6- Put the student's solution into an array.
7- Compare the solution array with each row in the created matrix.
8- If there is a similarity between the array and a certain row, then calculate the percentage depending on several variables, such as the row number and the teacher's requirements. Otherwise the score is zero.
9- Total score = question mark * percentage.
III- The output is the total score.
Through this algorithm we can generate the set of all possible values within an n x n matrix, which makes the system more accurate, supports descriptive details, and thus supports a run-time system. A question is evaluated either absolutely, relatively, or both absolutely and relatively, depending on whether a correspondence exists between item and item or between item and set of items. The characteristics of vectors and sets are combined together to form the matrix structure model.
5- Case study:
We implemented the proposed algorithm in the PHP web scripting language, with WampServer serving the scripts. PHP enables us to build the evaluation matrix as a multi-dimensional array which contains vectors and sets. There are varieties in the student solution vector length, so using the PHP language is suitable.
We are not restricted to building an n x n matrix with n^2 elements, since each vector created contains its own set. Once we have a mathematical question and its corresponding solution values, the matrix row elements can be generated. The following example shows the implementation of the algorithm.
The system presents the students' interface with an online mathematical question, as shown in figure [2].

Figure 2: Online open mathematical question
Suppose that the open mathematical question is I = (3*5-12)^2, and the detailed solution is: i) 3*5 = 15, ii) 15-12 = 3, iii) 3^2 = 9. We know that some arithmetic operations have higher precedence than others (* and / are executed before + and -).
We have 3 operations/steps for this equation, all of which should be solved in the correct order. If the student solves it as 3^2 = 9, 5^2 = 25, 12^2 = 144 and sums them, then it will be a wrong answer. So the correct values are (15, 3, 9) respectively.
But to be more specific, some students reach the final correct solution by reducing the number of steps/operations required to be solved. We take this point into consideration, so the first row in the evaluation matrix contains the detailed (standard) correct values (15, 3, 9) as a vector of array values. In other problems we may have two or more operations/steps that should be solved with the same precedence, so we represent all possible sets of solutions for each step in its corresponding order. The 2nd row contains fewer correct elements than the 1st one, i.e. the student solution values (15, 9) or (3, 9) represent a vector of array values; on the other hand, we represent those possible items in the evaluation matrix as ({15, 3}, 9), with both arrays combined together. So as we move down along the evaluation matrix, the number of correct values expected in each row is less than the number of values in the previous one. The last row has the final correct solution.
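A row whose entries may be sets, such as ({15, 3}, 9) above, can be matched against a student array as sketched below (an assumed Python illustration, not the paper's PHP code):

```python
def matches(student, row):
    """True when every student value equals, or belongs to, the
    corresponding row entry (entries may be single values or sets)."""
    if len(student) != len(row):
        return False
    return all(s == e or (isinstance(e, set) and s in e)
               for s, e in zip(student, row))

row2 = ({15, 3}, 9)            # 2nd evaluation-matrix row for I = (3*5-12)^2
print(matches((15, 9), row2))  # True  -- shortened solution (15, 9)
print(matches((3, 9), row2))   # True  -- shortened solution (3, 9)
print(matches((12, 9), row2))  # False -- 12 is not an accepted first step
```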
The following part of the PHP code shows how to generate the matrix from the question's model answer.
<html>
<head><h3>enter your model answer here</h3></head>
<body>
<form method="POST" action="">
<table border="2" align='center'>
<tr>
<input name="t1" value="<?php echo htmlspecialchars($_POST['t1'] ?? ''); ?>" type="text"/><br/>
<input name="t2" value="<?php echo htmlspecialchars($_POST['t2'] ?? ''); ?>" type="text"/><br/>
<input name="t3" value="<?php echo htmlspecialchars($_POST['t3'] ?? ''); ?>" type="text"/><br/>
<input name="t4" value="<?php echo htmlspecialchars($_POST['t4'] ?? ''); ?>" type="text"/><br/>
<input type="submit" align='center' value="enter"/>
</tr>
</table>
</form>
<?php
// Collect the teacher's model-answer items; missing fields default to ''.
$c_l = array(
    $_POST['t1'] ?? '',
    $_POST['t2'] ?? '',
    $_POST['t3'] ?? '',
    $_POST['t4'] ?? ''
);
// First matrix row: the full ordered model answer.
print_r($c_l);
echo '<br/>';
// Each subsequent row drops the first item of the previous row
// (algorithm step 5), until only the final answer remains.
while (count($c_l) > 1) {
    array_shift($c_l);
    print_r($c_l);
    echo '<br/>';
}
?>
</body>
</html>
In the comparison between the solution matrix and the student solution, if z = 2, i.e. the student array is like the 2nd row in the evaluation matrix, then for each similar value we give a score as a correct answer; otherwise the score is zero, and the total score is the sum of all scores. We need to notice that each item/element in the teacher template has one value or a set of values, i.e. a_12 = a_11 or a_12, and one of these values may or may not exist in the student array solution. The following are the generated matrix, the student answer array and the score percentage.
Score = 85%
6- Conclusion:
We have developed a computational method that supports evaluation test tools. It is based on generating a matrix for each solution. The proposed method evaluates a certain type of mathematical open question in a fuzzy way; traditionally, the fuzzy way is used only in manual methods. Future work will introduce "HES - Hybrid Evaluation System", an intelligent semantic system to evaluate online open question tests.
Reference:
[1] David M. Williamson, et al.: "Automated Scoring for the Assessment of Common Core Standards", ETS, Pearson and The College Board project, 2010.
[2] Bill Foster, Christian Perfect and Anthony Youd: "A completely client-side approach to e-assessment and e-learning of Mathematics and Statistics", E-learning Unit of the School of Mathematics and Statistics, Newcastle University, 2011.
[3] "Asking Effective Questions", Student Achievement Division, available from www.edu.gov.on.ca/eng/literacynumeracy/inspire
[4] Randy Elliot Bennett: "Automated Scoring of Constructed-Response Literacy and Mathematics Items", 2011.
[5] Joseph Corneli: "Problem solving and mathematical knowledge", 2010.
[6] Krongthong Khairiree: "A Study of Constructivist in Mathematics in Virtual Class with Moodle and the Geometer's Sketchpad", International College, Suan Sunandha Rajabhat University, 2011.
[7] Ikdam Alhami and Izzat Alsmadi: "Automatic Code Homework Grading Based on Concept Extraction", Yarmouk University, Irbid, Jordan, International Journal of Software Engineering and Its Applications, Vol. 5, No. 4, October 2011.
[8] Antonio Robles-Gómez, Llanos Tobarra, Salvador, et al.: "Proposal of an Auto-Evaluation Framework for the Learning of Network Services", Madrid, España, IEEE Global Engineering Education Conference (EDUCON) - "Learning Environments and Ecosystems in Engineering Education", 2011.
[9] Martin Němec, Radoslav Fasuga: "The Automatic Evaluation Testing, Tasks with Graphics Character", Ostrava, Czech Republic, The Third International Conference on Mobile, Hybrid, and On-line Learning, 2011.
[10] Radoslav Fasuga, Michal Radecky: "Automatic evaluation of created tasks in e-learning environment", Ostrava: University of Ostrava, 2010.