Diagnosing Non-Native English Speaker's Natural Language



Richard Fox
Dept. of Mathematics and Computer Science
Northern Kentucky University
Highland Heights, KY 41099
foxr@nku.edu

Mari Bowden
Department of Computer Science
The University of Texas-Pan American
Edinburg, TX 78539
Mariiam58@aol.com



Abstract


Typical grammar checkers use some form of natural language parsing to detect errors, if any. Parsing is a computationally difficult process. Further, grammar checkers perform their analysis with the expectation that errors will not occur in any given sentence, and if a sentence is determined to be ungrammatical, the grammar checker usually seeks a single error. For non-native speakers of English, it should be expected that a given sentence might contain one or possibly several errors. This paper presents the GRADES expert system, a diagnostic program that detects and explains grammatical mistakes. GRADES performs its diagnostic task not through parsing, but through the application of classification and pattern matching rules. This makes the diagnostic process more efficient than in other grammar checkers. GRADES is envisioned as a tool to help non-native English speakers learn to correct their English mistakes, but it is also a demonstration that grammar checking does not have to rely solely on parsing techniques.


Introduction


People who learn English as a second language invariably produce erroneous sentences in spite of knowing the proper grammatical rules of English. Teaching aids are sought to improve their performance. One solution is the grammar checker [8]. Grammar checkers are standard in today's word processor software and are also available as tutorial systems. However, the typical grammar checker does not expect mistakes and, when one is found, anticipates a single error, or an error common to native English speakers. It takes more effort to diagnose the problems of non-native English speakers.


There are many approaches to grammar checking, whether as part of a word processor or a true grammar diagnostic system. Most revolve around the concept of bottom-up parsing with unification. This process requires that the system match the words of a sentence to grammatical roles (or constituents). Grammatical roles are then merged into larger roles. For instance, a determiner, adjective and noun are merged into a noun phrase, an auxiliary verb, verb, and prepositional phrase are merged into a verb phrase, and the noun phrase and verb phrase are merged into a legal sentence. Unfortunately, the English language is rife with ambiguity since so many words can take on multiple grammatical roles. Therefore, bottom-up parsing is a computationally complex process (i.e., an intractable one). Some shortcuts have been applied, such as creating lists of common well-formed phrases and applying partial parsing. Additional processes, such as performing some top-down parsing for disambiguation and also using ill-formed phrase lists, can further reduce the amount of processing, but because these approaches are still based on bottom-up parsing, they remain intractable. A variety of systems have been developed which try these variations [1, 5, 6, 7].
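As a rough illustration of the bottom-up merging described above (and of why role ambiguity makes it expensive), the following is a minimal sketch in Python; the tiny rule set and tag names are invented for illustration and are not taken from any of the cited systems.

# Minimal sketch of bottom-up constituent merging (illustrative rules only).
# Each rule rewrites a sequence of roles into a larger role, e.g. DET ADJ N -> NP.
RULES = [
    (("DET", "ADJ", "N"), "NP"),
    (("DET", "N"), "NP"),
    (("AUX", "V", "PP"), "VP"),
    (("V", "NP"), "VP"),
    (("NP", "VP"), "S"),
]

def bottom_up_parse(roles):
    """Repeatedly merge adjacent roles until no rule applies; True if a sentence results."""
    roles = list(roles)
    changed = True
    while changed:
        changed = False
        for (pattern, parent) in RULES:
            n = len(pattern)
            for i in range(len(roles) - n + 1):
                if tuple(roles[i:i + n]) == pattern:
                    roles[i:i + n] = [parent]   # reduce the matched constituents
                    changed = True
                    break
            if changed:
                break
    return roles == ["S"]

# "the tall boy bought a car" with one role per word; real text is ambiguous,
# so a full parser must explore many alternative role assignments.
print(bottom_up_parse(["DET", "ADJ", "N", "V", "DET", "N"]))  # True

The sketch assumes each word has exactly one role; the intractability noted above arises because, in practice, many role assignments must be explored.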


Here, a different approach is taken. It is noted first of all that grammar checking is in fact a diagnostic process: it is one of identifying the cause of a malformed sentence. Artificial Intelligence (AI) has pioneered many approaches to automated diagnosis. One successful approach has been the Generic Tasks paradigm [4], where several information-processing strategies have been identified that can be united to solve a variety of problems. In the case of grammar checking, two specific tasks, Hierarchical Classification [3] and Hypothesis Matching [2], can be applied to solve the problem. This is the approach taken by the GRADES system. GRADES has successfully and efficiently diagnosed a large variety of non-native English speakers' sentences for their grammatical mistakes. This paper describes the GRADES system, the Generic Task-based approach using Hierarchical Classification and Hypothesis Matching, and offers several examples of the system in action.


Diagnosing Grammatical Mistakes


Most grammar checking systems perform a full or partial parse of the sentence using bottom-up parsing with unification. Another approach is to view grammar checking as a diagnostic task whereby a sentence has a malfunction that must be identified through some AI approach. One such approach is classification, and Hierarchical Classification is one way to solve this problem. Hierarchical Classification is a Generic Task. Generic Tasks are domain- and problem-independent information-processing strategies that have been combined to solve a great variety of problems, including diagnosis. A basic approach to diagnosis is to classify the malfunction. In Hierarchical Classification, the diagnostic problem is one of searching through some taxonomy of malfunctions for the specific cause of the error. A general malfunction in the hierarchy is considered by appealing to specialized knowledge of how to recognize the given malfunction. This knowledge is captured in the form of a hypothesis matcher, a pattern matcher that considers the relevant data of the case, seeking features that are associated with the malfunction. If suitable features are found, the hypothesis matcher responds that the malfunction is relevant or plausible. Classification resumes by considering the malfunction's children in the hierarchy, that is, more specific malfunctions.
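To make this establish-refine traversal concrete, here is a minimal sketch; it is not the GRADES implementation, and the class name and the toy matcher predicates are invented for illustration.

# Minimal establish-refine sketch of Hierarchical Classification (illustrative only).
class ErrorNode:
    def __init__(self, name, matcher, children=()):
        self.name = name          # name of the malfunction category
        self.matcher = matcher    # hypothesis matcher: words -> True if plausible
        self.children = children  # more specific malfunctions

def classify(node, words, findings):
    """If the node's matcher finds its features, record it and refine into its children."""
    if node.matcher(words):
        findings.append(node.name)
        for child in node.children:
            classify(child, words, findings)
    return findings

# Toy taxonomy: a verb-based error refines into a missing-auxiliary check.
taxonomy = ErrorNode(
    "verb-based error",
    matcher=lambda words: "not" in words,                 # crude establish test
    children=(ErrorNode("lack of auxiliary 'do' in negation",
                        matcher=lambda words: "do" not in words and "does" not in words),),
)

print(classify(taxonomy, "he not like coffee".split(), []))
# ['verb-based error', "lack of auxiliary 'do' in negation"]

A category whose matcher fails is simply skipped, so its entire subtree is never examined.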


Applying this to grammar checking is straightforward. First, a hierarchy of grammatical errors is required. This hierarchy consists, at a high level, of the general types of errors. In grammar checking, errors fall into such categories as "verb-based errors" and "noun-based errors." Each of these categories can be broken into more specific types of errors. For instance, one verb-based error might be a situation where the auxiliary verb "to be" was misused, another might be that a modal verb was misused, while another might be a disagreement between the subject and the verb, or the lack of an auxiliary verb. Many of these errors can be further decomposed into more specific categories, such as an incorrect verb form for a given auxiliary verb or an incorrect order of auxiliary verbs.
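One way to picture such a hierarchy is as a nested structure. The fragment below lists only categories named in this paper and is illustrative; it is not the actual GRADES knowledge base.

# Illustrative fragment of a grammatical-error taxonomy (category names from the text).
ERROR_TAXONOMY = {
    "verb-based errors": {
        "misuse of auxiliary 'to be'": {},
        "misuse of a modal verb": {},
        "subject-verb disagreement": {},
        "lack of an auxiliary verb": {},
        "incorrect verb form for a given auxiliary verb": {},
        "incorrect order of auxiliary verbs": {},
    },
    "noun-based errors": {
        "noun number disagreement in the noun phrase": {},
        "lack of a determiner": {},
        "extra determiner": {},
        "modifiers out of order": {},
    },
}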


In order to identify whether a given error category is relevant, a hypothesis matcher is called upon. There are multiple hypothesis matchers, at least one per error category. It is the job of the hypothesis matcher to examine the given sentence and see whether the features it expects to find are present. These features are words of certain grammatical roles with the right modality, number, tense and order. A hypothesis matcher that finds the features it expects alerts the classifier that there is an error of the type associated with that matcher. If there are child errors underneath that error, those are examined in turn. If the hypothesis matcher does not find the features it seeks, the error is assumed to be absent and the next error type is examined.
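As an illustration of what one such matcher might look like, here is a minimal sketch of a subject-verb number agreement test over pre-tagged words; the tagging format and the function name are assumptions made for illustration, not GRADES' internals.

# Illustrative hypothesis matcher: subject-verb number disagreement.
# Each word is assumed to arrive pre-tagged with a role and a number feature,
# standing in for the lexicon lookups GRADES performs.
from typing import NamedTuple, Optional

class Word(NamedTuple):
    text: str
    role: str      # e.g. "subject", "verb", "det"
    number: str    # "singular" or "plural"

def match_subject_verb_disagreement(words) -> Optional[str]:
    """Return an explanation if the subject and main verb disagree in number, else None."""
    subject = next((w for w in words if w.role == "subject"), None)
    verb = next((w for w in words if w.role == "verb"), None)
    if subject and verb and subject.number != verb.number:
        return f"Subject( {subject.text} ) - Main V( {verb.text} ) number disagreement"
    return None

sentence = [Word("boy", "subject", "singular"),
            Word("plan", "verb", "plural"),
            Word("car", "noun", "singular")]
print(match_subject_verb_disagreement(sentence))
# Subject( boy ) - Main V( plan ) number disagreement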


By applying classification and hypothesis matching, three things are gained. First, the hypothesis matcher tests usually entail a single pass through the sentence or a subset of the sentence, so each test can be performed without a computationally difficult parse of the sentence. Second, in identifying whether a given error is present or absent, the explanation of the error is easy to present. For instance, if the error is a lack of an auxiliary verb, then the explanation is essentially that same statement. Finally, by viewing the problem as one of diagnosis instead of parsing, the knowledge needed by the system is easy to identify and more closely matches the goal of the grammar checker: to identify errors.

The next section presents a system built on this approach.


GRADES


GRADES is the GRAmmar Diagnostic Expert System, a program that can detect and explain the grammatical errors of non-native English speakers. It was specifically designed to diagnose errors commonly made by native Japanese adults who are learning English as a second language. GRADES comprises a fairly small lexicon of words (approximately 220) that can easily be expanded. Each word in the lexicon carries a variety of grammatical information about that word. For instance, an auxiliary verb includes its category (have, be, do, modal), form (present, past, etc.) and number (singular, plural). Other grammatical categories contain different attributes. Words in GRADES are categorized in one or more of ten roles.
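A lexicon entry along these lines might be represented as below; the field names and sample entries are illustrative guesses, not GRADES' actual representation.

# Illustrative lexicon entries keyed by surface form (not GRADES' actual data).
LEXICON = {
    "has":  {"roles": ["aux_verb"], "category": "have", "form": "present", "number": "singular"},
    "have": {"roles": ["aux_verb", "verb"], "category": "have", "form": "present", "number": "plural"},
    "boy":  {"roles": ["noun"], "number": "singular"},
    "plan": {"roles": ["verb", "noun"], "form": "present", "number": "plural"},
}

def lookup(word):
    """Return the grammatical information recorded for a word, if it is in the lexicon."""
    return LEXICON.get(word.lower())

print(lookup("has"))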


GRADES is able to diagnose two general classes of grammatical errors: verb-related errors (of which there are thirteen specific types) and noun-related errors (of which there are four specific types). These errors are listed below. Figures 1 and 2 demonstrate the organization of the errors and how they are sought. There are other grammatical errors, but these are the most common types.


GRADES performs its diagnosis by applying pattern matching rules to the sentence, after performing some minor initial parsing. The initial parsing creates a verb list, which is then used by many of the error hypothesis matchers, and requires only a single pass through the sentence. The construction of the verb list uses the lexicon to determine whether a given word can be a verb or auxiliary verb and should therefore be added to the verb list. Unlike the typical bottom-up parser, which is a data-driven approach, GRADES is a goal-driven system that applies pattern matching rules to the structure of the sentence in order to identify any possible error in the sentence.
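A single-pass verb-list construction in this spirit might look like the following sketch; it reuses the illustrative lexicon idea above and is an assumption about the mechanism, not GRADES' code.

# Illustrative single-pass construction of the verb element list.
# Any word whose lexicon entry lists a verb or auxiliary-verb role is collected in order.
LEXICON_ROLES = {
    "may": ["aux_verb"], "have": ["aux_verb", "verb"], "be": ["aux_verb"],
    "enjoy": ["verb"], "plan": ["verb", "noun"], "buy": ["verb"],
}

def build_verb_list(sentence):
    """One pass over the words, keeping those that can act as a verb or auxiliary verb."""
    verbs = []
    for word in sentence.lower().split():
        roles = LEXICON_ROLES.get(word, [])
        if "verb" in roles or "aux_verb" in roles:
            verbs.append(word)
    return verbs

print(build_verb_list("The boy plan buy a car"))   # ['plan', 'buy']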


The following are the specific verb-related errors detected by GRADES:

- Subject-verb disagreement
- Subject-auxiliary verb disagreement
- Verbal elements out of order (auxiliary verbs and main verb)
- Incorrect verb form of the verbal elements (verb form required by the auxiliary verb immediately before)
- Incorrect verbal sub-structure of the verb (verb form when the verb is used right after the main verb, such as "avoid driving" vs. "avoid to drive")
- Lack of the auxiliary verb "do" for negation
- Incorrect choice of the auxiliary verb "be" with an intransitive verb
- Incorrect use of passive voice (the presence of the verb "be" when not needed)
- Lack of the auxiliary verb "be" for the passive voice (the absence of the verb "be" when needed)
- Lack of the main verb
- Incorrect choice of the auxiliary verb "be" in negation*
- Lack of the auxiliary verb "be" for the progressive tense*
- Lack of the auxiliary verb "have" for the present perfect*


Errors denoted with an * have more than one cause, which cannot be determined purely syntactically, and so GRADES offers multiple causes. An example of this type of error is demonstrated in the next section of this paper. Some of the above errors have their own more specific errors. For instance, there are ten legal verb element orderings, so "verbal elements out of order" has its own sub-hierarchy, illustrated in Figure 1. In this figure, it can be seen that if category 2 and category 3 auxiliary verbs appear in a sentence without a category 1 auxiliary verb, then the order must be the category 2 auxiliary verb, followed by the category 3 auxiliary verb, followed by the verb. If a different order appears, then there is an error with the verbal ordering. Category 1 auxiliary verbs are modal verbs (e.g., "can", "will"), category 2 auxiliary verbs are based on "have", category 3 auxiliary verbs are based on "to be" and category 4 auxiliary verbs are based on "do."
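The particular ordering rule just described (category 2 before category 3 before the main verb, when no category 1 modal is present) could be tested along the lines of the sketch below; this is a hedged illustration of one rule, not the full set of ten legal orderings GRADES encodes.

# Illustrative check of one legal ordering: without a modal (category 1),
# a category 2 ("have") auxiliary must precede a category 3 ("be") auxiliary,
# which must precede the main verb.
CATEGORY = {"can": 1, "will": 1, "have": 2, "has": 2, "had": 2,
            "be": 3, "been": 3, "is": 3, "was": 3, "do": 4, "does": 4}

def verbal_order_ok(verb_list):
    """verb_list: verbs/auxiliaries in sentence order; last element is the main verb."""
    aux_cats = [CATEGORY.get(v) for v in verb_list[:-1]]
    if 1 in aux_cats:
        return True            # modal present: a different ordering rule applies (not sketched)
    if 2 in aux_cats and 3 in aux_cats:
        return aux_cats.index(2) < aux_cats.index(3)
    return True

print(verbal_order_ok(["have", "been", "taken"]))   # True: category 2, then 3, then verb
print(verbal_order_ok(["been", "have", "taken"]))   # False: out of order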


Figure 1: Legal Verb and Auxiliary Verb Orderings
Figure 2 illustrates how GRADES performs verb-based error detection. First, the verb element list is composed. Next, errors based on a misuse of "have" are sought. If a version of "have" does not appear in the sentence, this entire portion of the hierarchy is ignored. Next, errors based on a misuse of "to be" are sought, followed by errors based on the misuse of modal verbs and "do." The last category tests various errors that arise if no auxiliary verbs are present. Within each of these classes there are subclasses, so that if a given error is found plausible, its subtypes are examined for more specific errors. If any such error is found, GRADES generates an explanation of the error and continues searching (because sentences might contain more than a single error). In the case of a few errors (marked with an * in the list above), there is more than one possible cause of the error, and so multiple explanations are provided.


Figure 2: Partial Hierarchy of Verb-related Errors
After searching for verb errors, GRADES next concentrates on noun-related errors. The list of possible noun-related errors, given below, is substantially shorter:




- Noun number disagreement in the noun phrase
- Lack of a determiner
- Extra determiner
- Modifiers out of order

Figure 3 illustrates how GRADES searches for the noun-related errors. The noun phrase is examined for each of the four errors. The noun is compared to the adjectives for number agreement. A determiner is sought to see whether there is either a lack of a determiner when one is needed, or an extra determiner. Finally, the determiner and adjectives are compared to make sure that they are in a proper order. Again, if any hypothesis matcher detects an error, an explanation is generated and the search continues.
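A determiner-noun number check of the kind described could look like the sketch below; the determiner table and function name are assumptions made for illustration.

# Illustrative noun-phrase test: determiner-noun number agreement.
# Determiners that force a number are checked against the head noun's number.
DETERMINER_NUMBER = {"a": "singular", "an": "singular", "this": "singular",
                     "these": "plural", "many": "plural"}

def determiner_noun_disagreement(determiner, noun, noun_number):
    """Return an explanation if the determiner's number conflicts with the noun's, else None."""
    required = DETERMINER_NUMBER.get(determiner.lower())
    if required and required != noun_number:
        return f"Determiner( {determiner} ) - Noun( {noun} ) number disagreement"
    return None

print(determiner_noun_disagreement("many", "girl", "singular"))
# Determiner( many ) - Noun( girl ) number disagreement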


Figure 3: Noun-related Tests
Increasing the scope of GRADES is a matter of adding error categories and increasing the lexicon size. Both of these are just a matter of effort, and neither is conceptually difficult. Enlarging the system will not cause its accuracy to decrease, since grammatical errors do not generally interact in ways that make the detection of an error more difficult in a multiple-error case, unlike many diagnostic domains. Errors not currently implemented but identified as future work are two forms of disagreement (incorrect choice of pronoun or tense mismatch), incorrect modification (wrong selection of adverb or adjective, or wrong form of adjective), incorrect use of a pronoun or preposition, lack of or extra article, preposition or subject, and incorrect word order of a pronoun or adverb. Most of these errors are similar to those already implemented, and the effort to include them is merely adding the appropriate portions of the classification hierarchy and hypothesis matchers.


GRADES does place three restrictions on its usage. First, and most importantly, GRADES is not able to accept sentences with more than one predicate (subject clause). Second, similar to the first restriction, inserted words and clauses surrounded by commas or hyphens, and parenthesized words and clauses, are not allowed. These restrictions limit the scope of GRADES in that more complex forms of sentences cannot be handled (at least currently). The reason for these restrictions is to simplify the efforts of GRADES. However, they are reasonable restrictions in that the system is intended for non-native English speakers who are not yet very advanced in their use of English, and so are not yet expected to use such sentences. Third, in order to simplify the process of compiling the verb list, a shortcut was inserted into the system whereby the subject of the sentence is specifically annotated as the subject. This identification helps reduce the ambiguity of a sentence, removing the need for any parsing. While this shortcut would not work in a general-purpose grammar checker, it is another reasonable restriction as the system is intended for use as a teaching tool.
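The paper does not specify how the subject annotation is written, so the sketch below invents a "subj:" prefix purely to illustrate how such a marker removes the need to parse for the subject.

# Illustrative input convention: the user marks the subject explicitly with an invented
# "subj:" prefix; this is not GRADES' actual input format.
def split_subject(annotated_sentence):
    """Return (subject, remaining words) given one word marked with a 'subj:' prefix."""
    subject, rest = None, []
    for token in annotated_sentence.split():
        if token.startswith("subj:"):
            subject = token[len("subj:"):]
        else:
            rest.append(token)
    return subject, rest

print(split_subject("The subj:boy plan buy a car"))
# ('boy', ['The', 'plan', 'buy', 'a', 'car'])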


Examples


What follows are a number of examples of the GRADES system. The examples have been reduced for space considerations in this article. Each example is of the following form: the sentence, GRADES' output, and a brief explanation of what GRADES has done to identify the error(s).


The first example is of the sentence "The boy plan buy a car." GRADES' trace for this sentence is shown in Figure 4. GRADES begins by compiling the verb element list, in this case "plan buy." GRADES then attempts to classify the error. First, GRADES considers verb-based errors. The first test checks whether the verbs are in order. The legal order of verbs is based on the category of each verb (refer back to Figure 1). Here, "plan" and "buy" are found in proper order. Next, GRADES determines whether there are any incorrect verb forms. It finds two main verbs ("plan", "buy"), which is not legal, so one of the verbs must be in an incorrect form; GRADES determines that "buy" should appear as "to buy." GRADES is not done, however, as there could be multiple errors. So, GRADES now checks the third possible error cause, a subject-verb disagreement. It finds such a disagreement in that "the boy" is singular and "plan" is a plural form of the verb. Either the noun or the verb is incorrect, in that both should be singular ("the boy plans") or plural ("the boys plan"). The remaining verb and noun tests check out, so the system has completed its diagnosis of the sentence.


The second example is of a very simple sentence with incorrect grammar, "The book written." GRADES' run-trace is given in Figure 5. As with the first example, the verb element list is compiled; it is merely "written", so most of the possible errors can be skipped (without an auxiliary verb, there are few verb-related errors possible). However, since there are no auxiliary verbs, GRADES determines that the form of the verb is incorrect. And since there are no auxiliary verbs, GRADES further determines that the verb must be in a passive form. While these are reported as two separate errors in the sentence, they are in fact related, so that the solution to the second error will also solve the first.
Test 1: Checking verb elements out of order
  Done. 2 Verbs in order
Test 2: Checking incorrect verb forms
  verb form of main verb and no auxiliary verbs Done. OK
  second verb Done.
  Error: 'to' missing before 2nd Verb( buy )
Test 3: Checking Subject-Verb disagreement Done.
  Error: Subject( boy ) - Main V( plan ) Number disagreement
Test 4: Checking noun (boy) number disagreement Done. Number in agreement
Test 5: Checking modifiers out of order Done. OK Modifier(s) in order
Test 6: Checking noun (car) number disagreement Done. Number in agreement
Test 7: Checking extra determiner Done. OK No extra determiner found

Figure 4: GRADES run-trace for "The boy plan buy a car."

Step 1: Checking verb elements out of order
  Done. 1 Verb in order
Step 2: Checking incorrect verb forms
  verb form of main verb and no auxiliary verbs Done.
  Error: Main verb( written ) incorrect form
Step 3: Checking lack of auxiliary verb (be) for passive Done.
  Error: Main verb( written ) must be the passive voice with auxiliary verb( is/was ) for the passive
Step 4: Checking noun (book) number disagreement Done. Number in agreement
Step 5: Checking modifiers out of order Done. OK Modifier(s) in order

Figure 5: GRADES run-trace for "The book written."

Step 1: Checking verb elements out of order
  Done. 3 Aux Vs, 1 Verb in order
Step 2: Checking incorrect verb forms
  verb form after modal Done. OK. present perfect form Done.
  Error: Auxiliary verb( be ) incorrect form
  progressive/passive form Done.
  Error: Main verb( enjoy ) incorrect form after verb-be
Step 3: Checking Subject-Aux V disagreement
  Done. Subject-Aux V in agreement
Step 4: Checking lack of auxiliary verb (be) for passive Done. OK
  Passive (be) not missing.
Step 5: Checking noun (girl) number disagreement Done.
  Error: Determiner( many ) - Noun( girl ) Number disagreement
Step 6: Checking modifiers out of order Done. OK Modifier(s) in order
Step 7: Checking noun (book) number disagreement Done.
  Error: Determiner( these ) - Noun( book ) Number disagreement
Step 8: Checking extra determiner Done.
  Error: 2 determiners( these and my ) cannot be used together
Step 9: Checking modifiers out of order Done. OK Modifier(s) in order

Figure 6: GRADES run-trace for "Many girl may have be enjoy to read my these book."



















The final example for this paper is the sentence "Many girl may have be enjoy to read my these book." GRADES' run-trace is shown in Figure 6. The verb element list is compiled as "may have be enjoy" and processing continues with GRADES searching for both verb-related and noun-related errors. GRADES determines five errors in the sentence, some from each general category. The first errors are that the verbs are incorrect: the auxiliary verb "be" is the wrong form of "to be" and the main verb "enjoy" is the wrong form. Next, the noun "girl" is found to disagree with the determiner "many", as one denotes singular and the other plural. A similar error is found in the noun phrase at the end of the sentence, where "these" and "book" do not agree. GRADES detects a final error in that both "my" and "these" are determiners for "book", and the noun can only have one.























Conclusion


GRADES is a diagnostic expert system built to identify the cause of grammatical errors. It is envisioned as a tool for people who learn English as a second language. Unlike grammar checkers that rely on bottom-up parsing and shortcut mechanisms, GRADES performs diagnosis through a classification process whereby an error category is considered and pattern matching rules are used to determine whether it is plausible. If found plausible, the error category is refined into more detail by considering that error's sub-causes. Categories found implausible by their pattern matching rules are discarded. If an error is detected, an explanation is automatically generated to help the user learn why the sentence was ungrammatical. GRADES has a small lexicon but is able to detect many of the various grammatical errors in English, as long as the sentence contains no complex, compound or inserted clauses or words. GRADES has been successfully tested on a large number of sentences. It is not in current use as a teaching tool, but it could be in the future.


References


[1] Allen, J. 1987. Natural Language Understanding. Menlo Park, CA: Benjamin/Cummings.

[2] Bylander, T., Johnson, T., and Goel, A. 1991. Structured Matching: A task-specific technique for making decisions. Knowledge Acquisition, 3(1):1-20.

[3] Bylander, T. and Mittal, S. 1986. CSRL: A language for classificatory problem solving and uncertainty handling. AI Magazine, 7(3):66-77.

[4] Chandrasekaran, B. 1988. Generic Tasks in Knowledge-Based Reasoning: High-Level Building Blocks for Expert System Design. IEEE Expert, 1(3):23-30.

[5] DeSmedt, William H. 1995. Herr Kommissar: An ICALL conversation simulator for intermediate German. In Intelligent Language Tutors: Theory Shaping Technology, ed. V. Melissa Holland, Jonathan D. Kaplan and Michelle R. Sams, pp. 153-174. New Jersey: Lawrence Erlbaum Associates.

[6] Loritz, Donald. 1995. GPARS: A suite of grammar assessment systems. In Intelligent Language Tutors: Theory Shaping Technology, ed. V. Melissa Holland, Jonathan D. Kaplan and Michelle R. Sams, pp. 121-133. New Jersey: Lawrence Erlbaum Associates.

[7] Rich, Elaine and Kevin Knight. 1991. Artificial Intelligence, second edition. New York: McGraw-Hill.

[8] Wong, C.J. 1996. Computer grammar checker and teaching ESL writing. The 9th Annual Midlands Conference on Language and Literature. http://www.coe.missouri.edu/~cjw/portofolio/grammar-checker.htm