NATURAL LANGUAGE TRANSLATION




By Benjamin J. Van Someren



Abstract


Natural languages have always been complicated and riddled with ambiguity and uncertainty, which can cause great confusion. It is for this reason that over seventy years of research have been put into the study of machine translation, or more specifically into how computers can translate one natural language into another. However, there is still no acceptable replacement for human translators. It is only by inspecting the relevant popular theories in the field of natural language processing that one can understand the barriers which have prevented science fiction dreams from becoming a reality. Such theories are based on the syntax and semantics of natural languages and have roots deep within the fields of statistics, linguistics, and artificial intelligence. In this paper I will discuss the benefits and shortcomings of these theories, as well as how they relate to the specified fields.





Introduction


One of the first desired uses of computers was to translate one natural language into another. A natural language is defined as any language that is not methodically designed by humans but is formed naturally by human use over numerous generations. This ability first began to be implemented in the 1940s during World War II, when the Allies commissioned a team of engineers to create a machine (computer) that would allow them to translate German into English. Since then, numerous algorithms and techniques have been theorized and implemented to solve this problem, drawing primarily on the fields of artificial intelligence (AI), linguistics, and statistics. While there is currently no optimal solution to the problem, numerous satisfactory solutions have been implemented. In this paper I will discuss how these solutions work and what the future holds for this field of study. [1]


Natural language translation, sometimes referred to as machine translation, is defined as the translating of one natural language into another natural language. Because of the nature of this topic, it involves a great deal of natural language processing. Natural language processing is defined as allowing a machine to understand the meaning of a natural language and is a sub-category of AI. However, the most important part of natural language translation is linguistics, which is the scientific study of a language and its structure. Since natural languages are not designed but simply formed, it is extremely difficult to translate them completely and correctly. [3]





Direct Translation


One of the first techniques used to solve the problem of machine translation was direct translation. This approach uses a dictionary: a collection of words from one language paired with the possible words that they translate to with a similar meaning. So if we have the word “cigars” and want to know what word would be used to describe it in German, we would look it up in our dictionary and see that the German word would be “zigarren.” [6]
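As a minimal sketch of this idea (the dictionary entries below are invented for illustration, not drawn from a real lexicon), direct translation reduces to a lookup table that may return several candidate words:

```python
# A minimal sketch of direct translation: a bilingual dictionary mapping each
# source word to its candidate translations. Entries are illustrative only.
en_to_de = {
    "cigars": ["zigarren"],
    "book": ["buch"],
    "bat": ["fledermaus", "schlaeger"],  # ambiguous words yield several candidates
}

def direct_translate(sentence):
    """Translate word by word, keeping unknown words (e.g. names) unchanged."""
    out = []
    for word in sentence.lower().split():
        candidates = en_to_de.get(word, [word])  # fall back to the word itself
        out.append(candidates[0])                # naive choice: first candidate
    return " ".join(out)

print(direct_translate("Peter read the book"))  # "peter" passes through unchanged
```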


Now, there might be multiple words that have the same meaning in the language that you are trying to translate to. In order to decide which word to use, linguists created a corpus for each language being translated. A corpus is a large collection of documents in one language that can be used to gather statistics about the language and determine the statistically best option to use. The predominant statistical formula used to analyze these corpora is Bayes' theorem. With the use of Bayes' theorem and the statistics gained from the corpus, we are able to determine the most common word or words that we should translate to.
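As a rough sketch of how Bayes' theorem breaks this tie (the corpus and likelihood numbers below are invented for illustration), we can choose the candidate t maximizing P(t | s) ∝ P(s | t) · P(t), where P(t) comes from counts over the target-language corpus:

```python
from collections import Counter

# Toy German corpus (invented); a real corpus would hold millions of words
# drawn from current documents in the target language.
corpus = ("das buch ist gut . peter las das buch . "
          "die zigarren sind teuer .").split()
unigram = Counter(corpus)
total = sum(unigram.values())

def prior(word):
    """P(t): relative frequency of the candidate in the target corpus."""
    return unigram[word] / total

def best_translation(candidates, likelihood):
    """Choose t maximizing P(s|t) * P(t), per Bayes' theorem; the likelihoods
    P(s|t) would be estimated from aligned bilingual data."""
    return max(candidates, key=lambda t: likelihood[t] * prior(t))

# Hypothetical candidates and likelihoods for the English word "book".
print(best_translation(["buch", "heft"], {"buch": 0.9, "heft": 0.2}))
```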


As long as the corpus is filled with current documents, the words or phrases that we translate to will be understandable by any living person who is fluent in that language. However, the denotation (central meaning) of a word changes with time. For example, the word “silly” does not mean the same thing today as it did in the 16th century. Today the meaning of “silly” is “weak in intellect” or “foolish,” whereas in the 16th century it was used to mean “happy” or “innocent.” Because of this, we must have special corpora (the plural form of corpus) to correctly translate the meaning of a phrase in one language into another. This adds a level of complexity to machine translation that requires either a great deal of maintenance or an ability to learn in order to continue to give complete and correct translations. [2]
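One way to picture this (a hypothetical arrangement, not a standard described in the sources) is a registry of corpora keyed by period, so that the statistics always come from text contemporary with the document being translated:

```python
# Hypothetical registry of period-specific corpora. Drawing statistics from
# text contemporary with the source keeps shifted denotations (like "silly")
# from skewing the translation.
corpora = {
    (1500, 1700): ["...early modern English documents..."],
    (1900, 2100): ["...present-day English documents..."],
}

def corpus_for(year):
    """Return the corpus whose period covers the document's date."""
    for (start, end), docs in corpora.items():
        if start <= year < end:
            return docs
    raise LookupError(f"no corpus covers the year {year}")

shakespeare_era = corpus_for(1600)  # statistics where "silly" ~ "innocent"
modern = corpus_for(2012)           # statistics where "silly" ~ "foolish"
```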


Some words might not have any meaning associated with them in another language. For instance, if we try to translate “Peter” from English to German we get the same word back, because it is a name and has no specific meaning in a language other than to identify a specific person. Direct translation is extremely simple to implement, since all you need is a dictionary of words in one language and the words they translate to in another language. Because of this, direct translation is still used on numerous web pages, since it is extremely cheap to implement and easy to maintain.



Syntactic Transfer


One of the problems with direct translation is that it is unable to account for syntax, the structure of a language. In order to overcome this problem we perform a syntactic translation. That is, we translate some text using the direct translation discussed previously and then rearrange the words according to the syntax of the language that we translated the text into. To be able to do this we have to understand the rules of syntax for the language that we are translating to. These rules are numerous and can include where adverbs are placed in a sentence with respect to verbs, the relation of nouns and pronouns, and the compounding or expanding of words.

These rules are typically defined when creating the translation program. Once they are defined, it is ideal to gather statistics on how often each rule is used. Once these statistics are found, we can perform the same algorithm that we used in direct translation and choose the statistically best rule to apply when multiple rules could be applied to a particular translation.
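A minimal sketch of that rule-selection step (the rules and frequencies below are invented for illustration) might score each applicable reordering rule by its corpus frequency and apply the most common one:

```python
# Hypothetical syntactic transfer rules: each has a predicate saying when it
# applies and a rewrite that reorders the word sequence. The frequencies
# would be estimated from a corpus; these numbers are invented.
rules = [
    {"name": "verb-second", "freq": 0.55,
     "applies": lambda words: len(words) > 2,
     "rewrite": lambda words: [words[0], words[-1]] + words[1:-1]},
    {"name": "identity", "freq": 0.45,
     "applies": lambda words: True,
     "rewrite": lambda words: words},
]

def apply_best_rule(words):
    """Among the rules that apply, use the statistically most frequent one."""
    usable = [r for r in rules if r["applies"](words)]
    best = max(usable, key=lambda r: r["freq"])
    return best["rewrite"](words)

print(apply_best_rule(["peter", "das", "buch", "las"]))
# -> ['peter', 'las', 'das', 'buch']: the final verb is moved to second position
```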


One of the more popular formulas that performs this syntactic translation is called “IBM Model I.” In order to use this algorithm you must have a list of rules for a particular translation, the information used in the direct translation method, and the text that you are translating. We then create a word-to-word matrix, where every word in the phrase that we are translating corresponds to a single word in the language being translated to. So each row of the matrix corresponds to a word in the text that we are translating, and each column stands for a word in the translated text. With the word-to-word matrix, the rules for the syntax of the language, and a corpus for the translated language, we are able to syntactically translate a phrase from one natural language into another. [3]



Figure 1: Translation matrix used in IBM Model I [6]
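To make the word-to-word matrix concrete, here is a small sketch that builds the matrix for the Figure 1 sentence (the alignment pairs are hand-chosen to mirror the figure, not computed by the model):

```python
# Word-to-word matrix for the Figure 1 example. Rows are source (German)
# words, columns are target (English) words; a 1 marks an alignment. The
# alignment pairs are hand-chosen for illustration, not learned.
source = "Peter hat das Buch von Maria gelesen".split()
target = "Peter read Mary's book".split()

alignments = {("Peter", "Peter"), ("hat", "read"), ("gelesen", "read"),
              ("das", "book"), ("Buch", "book"), ("Maria", "Mary's")}

matrix = [[1 if (s, t) in alignments else 0 for t in target] for s in source]

print(" " * 8 + " ".join(f"{t:>7}" for t in target))
for s, row in zip(source, matrix):
    print(f"{s:>8}" + " ".join(f"{c:>7}" for c in row))
# Note that "von" stays unaligned (its row is all zeros), and both "hat" and
# "gelesen" align to "read": the compounding described in the text.
```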


When using this formula, not all words may translate into an equivalent word, and it is important to just leave the corresponding word blank. In the example given in Figure 1 we see the German phrase “Peter hat das Buch von Maria gelesen.” in a matrix. We also see the translated sentence in English, which is “Peter read Mary's book.” As we see, there is no longer a one-to-one translation. We also see from the filled-in squares that the arrangement of the words has been changed. Finally, we see that the word “read” has compounded the German words “gelesen” and “hat” in order to provide the correct syntax that is used in English.



Figure 2: Translated phrase from German to English with word expansion [6]


Despite the fact that IBM Model I correctly accounts for grammar, it does have a couple of flaws. We see one of the flaws in Figure 2, where we are translating the German sentence “Ist hier ein Reiseinformationszentrum?” This example used IBM Model I, and as we see, the translation is a complete and correct English sentence. The problem lies in the fact that we are not able to translate the English sentence that we just produced back to the German sentence that we derived it from. This is because we have a many-to-one relationship that IBM Model I cannot deal with. That is, we cannot translate the phrase “travel information centre” back to its German counterpart “Reiseinformationszentrum.” This is because IBM Model I can only deal with a one-to-one or a one-to-n word alignment, where n stands for the number of words in the target sentence. It does not allow for any other word arrangements. [6]


IBM did come up with subsequent models, two through five, that deal with the n-to-one and n-to-m word alignments, where m stands for the number of words in the base sentence. They also improved upon the problems of compounding and expanding words. However, even with these improved formulas we still come across some problems that cannot be solved by a syntactic translation of a text. [6]



Semantic Transfer


These problems are all caused by semantics, the study of meaning in a language. While there is a set of rules that allows us to syntactically format a text in a particular language, there are no rules for how to translate the semantics of a text into another language. This is because natural languages are extremely ambiguous, highly contextual, implicit, imprecise, and contradictory. Because of these irregularities, a syntactic translation is not sufficient for real-life translations. [3]


However, before we discuss the problems that we have to overcome before we can create a realistic translator, let's take a step back and examine where we are in our attempt at natural language translation. We have used the direct method to translate a word into its corresponding word or words in the target language. We also discussed the syntactic translation of a text and how we could utilize an IBM model in order to correctly account for the grammar of a language. So now let us continue our discussion of natural language translation and how irregularities in the languages cause syntactic translation to be an insufficient form of translation. [3]


One of the predominant problems in linguistics is ambiguity, which is when the meaning of something is unclear. There is lexical ambiguity, which is when a word or phrase has multiple meanings. There is also structural ambiguity, which is when a sentence or clause is unclear about what is meant. An example of this is the phrase “Mary had a little lamb.” From this clause we do not know if Mary gave birth to a little lamb, if she ate a tiny bit of lamb with a succulent mint sauce, or if she owned a little lamb at one point in her life. [3]


The context of a sentence can provide insight into the actual meaning of an ambiguous phrase, but it does not always clarify the confusion caused by the ambiguous phrase. However, there are some semantic problems that occur when looking at the context of a text. One of these issues involves the denotation of words. As I said earlier, denotation is the central meaning of a word, which is why the meaning of the word “silly” is different today than it was in the 16th century. Because of the change in denotation over time, it is extremely important to understand when the text you are trying to translate was written in order to keep the original meaning of the text. [3]


Another semantic problem is word connotation, the personal or emotional associations aroused by words. Let's take for example the word “vicious,” which was derived from the word “vice” and typically is used to mean “extremely wicked.” However, in modern British usage it means “fierce,” as in “the brown rat is a vicious animal.” Once again, the context may provide insight into which meaning of the word the author meant to use when writing the text. [3]


However, even given the context of a text, it can often be difficult to understand what the author truly meant, and sometimes they might not even know what they actually meant by what they wrote. This commonly occurs in allegories, which are the expression, by means of symbolic fictional figures and actions, of truths or generalizations about human existence. Let's examine the famous novel written by Herman Melville, “Moby Dick.” In this book the great white whale represents more than just a large aquatic mammal that has white skin. It is a symbol of eternity, evil, dread, mortality, and even death. Some even think that it represents something so great and powerful that humans may never truly understand what it means. [3]


These implied semantics on a word or phrase are difficult to translate because the original writer might not even truly understand what they wrote. However, allegories are not the only semantic issue that is implicit in nature. We also have to understand implication in text, which is when speech or text is intended to mean something that it does not communicate directly. Let's take for example the phrase “deer.” By definition it would simply mean a four-legged animal that lives in the woods and is typically brown and white. But if we say the same phrase while driving along the road, we imply that it means “We must stop the car now, there is a deer in the middle of the road.” It is easy for humans to understand the implied meaning because we always take into account the situation in which the phrase was created; however, it is not as easy for a machine to see the situation in which the phrase or text was written. Because of this, we have another semantic problem that we have to address when trying to make a successful translation from one natural language into another. [3]


Now we come to a problem that we Americans are constantly guilty of: saying or writing something that is imprecise. For example, we constantly use metaphors, which are phrases that refer to non-literal meanings of the words in the phrase. So if we say something along the lines of “she appeared out of the blue,” we do not mean that she actually appeared out of a big blob of blue, but instead mean that she came out of nowhere. While it might be easy for us to understand this phrase, if we directly translate it to another language, and most likely another culture, they would think that we are crazy or that the translator is broken. [3]


Another problem that occurs in linguistics is called a homonym, which is when different words are pronounced, and sometimes even spelled, the same way. For example, we have “to,” “too,” and “two”; all of these words are pronounced the same but are spelled differently and have different meanings. We also have the word “bat,” which we could use in the phrases “we saw a bat, the nocturnal animal,” “the baseball player hit the ball with the bat,” and finally “I saw her bat her eyelashes at me.” While all of these words are spelled and pronounced in the same way, they have different meanings, as you can see from the different ways they are used in the previous examples. This once again demonstrates how important the meaning of a word is, and how vital understanding the context of a word can be in clarifying the confusion surrounding a particular word or phrase. [3]


So we now have an understanding of how important semantics are, and also of some of the common problems associated with linguistics and natural languages. Let's talk about the concept of Interlingua. Interlingua is a system for representing the meanings and communicative intentions of a language. It has three parts: a set (S), a notation (N), and a lexicon (L). The S stands for a collection of representation symbols, where each symbol denotes a single and particular aspect of meaning or intention. The N is a notation in which the symbols from S can be composed into greater and more complex meanings. Finally, L is a collection of words from a language, such as English, in which each element in the collection is associated directly or indirectly with one or more of the symbols from S. [4]


It is important to note that in an actual Interlingua system we would have a lexicon for each language we are trying to translate to or from. The lexicon would also include the syntactic information about how the words should be arranged in order to give understandable translations. So when it is given a collection of symbols, the system is able to translate the meanings that the collection represents back into the language of the lexicon. Once the symbols have been converted to words, we are then able to use the tricks we discussed in the syntactic translation in order to provide a correct and understandable translation. [4]
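A toy sketch of these three parts (every symbol and lexicon entry below is invented; a real system would need far richer structure) might look like this:

```python
# Toy Interlingua: S is a set of meaning symbols, N composes symbols into
# structures (here, plain tuples), and there is one lexicon per language.
# All symbols and lexicon entries are invented for illustration.
S = {"GIVE", "READ", "BOOK", "PERSON"}

def compose(predicate, *args):
    """N: compose symbols from S into a larger meaning structure."""
    assert predicate in S and all(a in S for a in args)
    return (predicate, args)

lexicons = {
    "en": {"READ": "read", "BOOK": "book", "PERSON": "someone"},
    "de": {"READ": "lesen", "BOOK": "Buch", "PERSON": "jemand"},
}

def analyze(word, lang):
    """Map a word to its symbol via the (inverted) source lexicon."""
    inverse = {w: s for s, w in lexicons[lang].items()}
    return inverse[word]

def generate(meaning, lang):
    """Map a composed meaning back into words of the target lexicon;
    syntactic arrangement would follow, as discussed above."""
    pred, args = meaning
    return " ".join(lexicons[lang][s] for s in (args + (pred,)))

meaning = compose("READ", "PERSON", "BOOK")
print(analyze("book", "en"))     # -> "BOOK"
print(generate(meaning, "de"))   # -> "jemand Buch lesen" (before syntax pass)
```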



Figure 3: Concept of Interlingua [3]


One of the benefits of the Interlingua concept is that once you have your set and notation defined, all you need to do is create one lexicon for each language that you are translating to. As demonstrated in Figure 3, we take a source language and give it to a machine, which translates it into the Interlingua language that we defined; from there it can be translated into any language for which we have a lexicon defined. This method accounts for both the syntactic and semantic translation of a given text. However, as with everything, it sounds simpler than it actually is. It is rather difficult to define a language of meaning that can represent every meaning that humans can understand, since we do not even know if we can truly understand the meaning associated with everything. But this is getting into the more philosophical side of the problem and is not of such great concern to us, since we are just trying to get a machine that can translate text as correctly and understandably as a human translator would. The debate between what is understandable and what is not can and should be saved for another day. [4]



MultiNet


While there are numerous systems currently in existence that satisfy the requirements set forth by the concept of Interlingua, I am going to talk about only one, called MultiNet (Multilayered Extended Semantic Network). MultiNet is both a knowledge representation paradigm and a language for meaning representation of natural language expressions. In simple words, it is both the S and the N required for Interlingua, but it does not yet include the lexicons for every language. It defines meanings and concepts by using over 140 predefined primitive relations, concepts, and functions. However, before we get too far into it, let's talk about what a semantic network is, since MultiNet is a multilayered extended semantic network. [1]


A semantic network is defined as a model of conceptual structure consisting of a set of concepts and the cognitive relations between them. It is typically represented by a graph where the concepts are represented by nodes and the relations between the concepts are the arcs between the nodes. In Figure 4 we see a simple example of a semantic network. From this example we see that “slither” is a “grass_snake” and is also a “vegetarian.” From this network we are also able to derive that “slither” is a reptile, since a “grass_snake” is a “snake,” which is a “reptile.” Similarly, we know that “slither” has no legs, since it is a “snake” and snakes have no legs. There are numerous other relations demonstrated in this relatively simple example, and I hope that you can see the usefulness of semantic networks. Take a minute and make sure you understand how semantic networks work and how you can use the relations between different nodes to obtain and store a great amount of information about a particular concept or idea. [1]



Figure 4: Simple semantic network [5]
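A small sketch of such a network (the nodes and relations follow the Figure 4 example; the traversal logic is my own assumption about how the inference would run) shows how the derived facts fall out of following “is-a” arcs:

```python
# A tiny semantic network mirroring the Figure 4 example. "is_a" arcs link a
# concept to its superconcepts; "properties" attach facts to concepts. The
# inference below simply walks the is_a chain, one simple way to realize the
# derivations described in the text.
is_a = {
    "slither": ["grass_snake", "vegetarian"],
    "grass_snake": ["snake"],
    "snake": ["reptile"],
}
properties = {"snake": {"legs": 0}}

def superconcepts(node):
    """All concepts reachable by following is_a arcs (transitive closure)."""
    seen, stack = set(), list(is_a.get(node, []))
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(is_a.get(c, []))
    return seen

def lookup(node, prop):
    """Find an inherited property on the node or any of its superconcepts."""
    for c in [node, *superconcepts(node)]:
        if prop in properties.get(c, {}):
            return properties[c][prop]
    return None

print("reptile" in superconcepts("slither"))  # True: inferred, never stated
print(lookup("slither", "legs"))              # 0: inherited from "snake"
```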

Now, one of the key parts of Interlingua is how it stores concepts and meaning, that is, the S and N components of the system, which are the two components that MultiNet uses. In MultiNet a concept is defined as a word or group of words that designate a concept and a representation of something external. It is also a collection of relations to other concepts. Similar to the semantic network shown previously, each concept can have any number of relations to other concepts. Finally, a MultiNet concept can also contain a pattern of perceptual origin, typically visual; that is, it can contain patterns that actually relate to the real world. While not all of these features must be present in a concept, it is convenient to have the options available if they will help in the understanding and storage of a concept.


A MultiNet concept is stored in two parts. The first part is the structural means of representation for the concept and holds any relations, functions, concept capsules, and inferred rules or relationships about the concept. An inferred rule or relationship is like what we saw in the semantic network example, where we said that “slither” was more than just a “grass_snake” but was also a “snake,” since all grass snakes are also snakes. This is an inferred relation, since it was not explicitly defined or specified but was determined through the use of inference.


The second part of the MultiNet concept is made up of sorts, features, and attributes for the nodes, and types of knowledge for the arcs between the nodes. By using this part of the concept we are able to classify the concept and from there uniquely identify and translate it. For the types of knowledge, MultiNet has several types already declared, and these should be satisfactory for most ordinary concepts. If you have questions about the different types of knowledge, what sorts can do, or what features are, then please read the references for a better understanding.


MultiNet is a multilayered extended semantic network, so where do the multiple layers come into play? The answer is in the attributes for the nodes. Each node is a multidimensional array that has a plane for each type of classification it may have. For example, it will have a plane for quantification. Within this plane, each row represents another type of classification for the node, and each column holds a different degree of quantification. So the first column might represent zero, the second would then represent one, and the third would represent a couple of items. The diagram in Figure 5 should help you understand how this multidimensional array works. Just ignore all of the different nodes and arcs, as I will talk about them later; have a look if you have any questions about the topic. [1]



Figure 5: Example of multidimensional arrays in MultiNet [2]


As we see in Figure 5, we have the plane of quantification aligned with the x-axis, starting from the left side of the page and moving to the right in increasing size of quantification. You may also notice that on the y-axis we have “GENER,” which is short for generalization and holds the different degrees of generalization. For example, the first row has “GENER = sp,” which means that the concept or idea being represented is specific in nature, or that it only applies to a select group of things. However, on the row above it we see “GENER = ge,” which means that that row represents all of the quantifications for general statements about the concept. You can also see that along the z-axis we have the word “FACT,” which stands for the facticity of the concept: that is, whether it is real, hypothesized, or just not real. At the bottom of the diagram you also see the variability of the quantification: that is, whether it is a constant quantification or of a dynamic nature, and as such will vary with time. If you have any lingering questions about these aspects of MultiNet, feel free to reference the first source cited for this paper. [5]
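As a loose sketch (the layer names QUANT, GENER, and FACT echo the figure; the dictionary layout is my own assumption, not MultiNet's actual encoding), each node's attributes can be pictured as values indexed by layer:

```python
# Loose sketch of a multilayered node: each attribute layer (quantification,
# GENER, FACT, variability) indexes its own "plane" of values. The
# dict-of-dicts layout is an illustrative stand-in for MultiNet's encoding.
node = {
    "concept": "apple",
    "layers": {
        "QUANT": "several",  # quantification plane
        "GENER": "sp",       # sp = specific, ge = generic statement
        "FACT": "real",      # real / hypothetical / nonreal
        "VARIA": "con",      # constant vs. varying over time
    },
}

def in_cell(n, **layer_values):
    """Check whether a node sits in the given cell of the attribute planes."""
    return all(n["layers"].get(k) == v for k, v in layer_values.items())

print(in_cell(node, GENER="sp", FACT="real"))  # True for this node
```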


The main thing that we need to take from this is that MultiNet embeds all of its entities into these multidimensional arrays. It does this for a number of reasons, the first of which is that it is a lot easier to keep things simple when you can choose the one data field, or cell, in which to put a particular piece of information. Since the types of classification are considered primitive types, this helps to avoid confusion about where a concept should be placed and where it is to be retrieved from. This also helps in the event of an error collision, such as when you have a paradox, which is when you have two conflicting pieces of information. Rather than getting rid of the one that you don't want, you can keep both and continue to use your error-collision protocol to determine which piece of information you want. The usefulness of this becomes apparent later, when you add more information into the system. [1]


To further expand upon the point that I am trying to make, I am going to briefly discuss IBM's supercomputer Watson. Several months back, Watson made its public debut as a contestant on Jeopardy. It did extremely well for numerous reasons, the most important of which was that it did not answer questions only when it was a hundred percent sure they were correct. It instead answered questions when it was reasonably sure that they were correct. So it weighed all of the information that it had gained, the facticity, and the reliability of each piece of information, and after combining them all together it was able to see how likely the answer was to be correct. So it did not just discard information, but instead built upon it, as it might become useful later on. We do the same in MultiNet: something that does not make sense while we are translating the first sentence in a paragraph might be incredibly important to translating the remaining sentences. Once again, the context of the document plays a big role in the correctness of the translation. [5]


Now let's get back to actually trying to translate a natural language into the language of meaning that MultiNet offers, which represents the concept of Interlingua. Let's go back to the example of quantification that I talked about while explaining the importance of the multidimensional array in MultiNet. The first thing that we must understand for a successful translation is the relationship between quantification and linguistics. There are two main types of quantification in the study of linguistics: determiners and quantificators. [1]


Determiners are defined as modifiers which, together with nouns, result in expressions whose reference is determined with regard to the referent of the noun. The determiner is highlighted in the following phrases: “this house,” “a house,” and “every house.” Each determiner specifies a different quantity that is exact, or specific in nature. We also have quantificators, which are defined as modifiers which, together with nouns, result in expressions whose reference is described by the amount of a substance. The quantificators are highlighted in the following phrases: “almost all houses,” “some milk,” and “many houses.” [1]




Now that we have learned something about quantification, let's try a simple example with MultiNet. Say we were given the phrase “Max gave his brother several apples.” From this phrase alone, we only know that Max's brother got at least three apples. If we add the phrase “This was a generous gift,” we now know that giving several apples is a generous gift. Finally, let's add the phrase “Four of them were rotten.” Now we know that Max gave his brother at least five apples, since four of them were rotten. Take a look at Figure 6 and see how this concept might be represented using MultiNet. [5]



Figure 6: Representation of how a phrase is stored in MultiNet [2]


As we see on the right side of Figure 6, we have the last phrase represented: the idea of apples, the fact that there are multiple (four) apples, and the fact that all of them are rotten. From the left side we see that Max is related to his brother and gave him a gift, of apples, in the past. This gift included several apples, and it is considered a generous gift, as defined by the second phrase. Lastly, you should notice the two levels in the diagram, the intensional and the preextensional level. The intensional level defines how the concept is meant, and the preextensional level is what actually makes up the concept but is not necessary for defining it. In this example, that means we have at least five apples, shown by “CARD > 4,” and four are rotten, which is shown by “CARD = 4.”
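A tiny sketch of that accumulation of knowledge (the constraint names echo the figure's CARD labels; the merging logic is my own illustration) might keep a lower bound on the cardinality and tighten it as each sentence arrives:

```python
# Illustrative accumulation of cardinality knowledge, echoing Figure 6's
# CARD labels. Each incoming phrase contributes a constraint, and the lower
# bound on how many apples Max gave can only ever tighten.
facts = {"apples_given_min": 0, "rotten": 0}

def add_phrase(constraints):
    """Merge a phrase's constraints, keeping the tightest lower bound."""
    facts["apples_given_min"] = max(facts["apples_given_min"],
                                    constraints.get("min", 0))
    facts["rotten"] = constraints.get("rotten", facts["rotten"])

add_phrase({"min": 3})               # "several apples"   -> CARD >= 3
add_phrase({})                       # "a generous gift"  -> no cardinality
add_phrase({"rotten": 4, "min": 5})  # "four were rotten" -> CARD > 4

print(facts)  # {'apples_given_min': 5, 'rotten': 4}
```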




Conclusion


Ever since the dawn of computers, people have been trying to perform natural language translation using computers. They first started with a direct method, where you just translate one word at a time and ignore everything else. While this might be effective in some cases, it was not satisfactory for everyone, so a syntactic transfer approach was designed. In this approach they directly translated the words, but then rearranged, compounded, and expanded them to meet the desires of the software's users. But this was not enough to translate everything, since natural languages are riddled with problems, as we have seen. So not only do we need to translate the words, we also need to translate the meaning of the words, phrases, and sentences. This is where MultiNet or another Interlingua system would be useful. Natural language translation is still being researched and studied constantly so that improvements can be made. No one knows what problems the future holds in store for us, but one thing is for sure: a language barrier is not going to be one of them.





References


[1] Helbig, Hermann. 2006. Knowledge Representation and the Semantics of Natural Language. Retrieved from http://www.springerlink.com/content/r38433/#section=469070&page=3&locus=6

[2] Helbig, Hermann. 2002. The Use of Multilayered Extended Semantic Networks for Meaning Representation. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.195.1310&rep=rep1&type=pdf

[3] Muthukrishnan, Pradeep. 2008. Algorithms for Information Retrieval and Natural Language Processing Tasks. Retrieved from http://clair.si.umich.edu/~radev/papers/csetr547-08.pdf

[4] Dorr, Bonnie. 2010. Natural Language Processing and Machine Translation. Retrieved January 30, 2012, from ftp://ftp.umiacs.umd.edu/pub/bonnie/Interlingual-MT-Dorr-Hovy-Levin.pdf

[5] Helbig, Hermann. 2002. Multilayered Extended Semantic Networks as a Language for Meaning Representation in NLP Systems. Retrieved from http://pi7.fernuni-hagen.de/research/vilab/multinet-paper.pdf

[6] Evert, Stefan. 2009, January 29. Statistical Machine Translation handout. Retrieved from http://cogsci.uni-osnabrueck.de/~CL/classes/08w/StatNLP/download/09_statistical_mt.handout.pdf