Artificial Intelligence News

Copyright © John Brokenshire and Peter W. Whitewood - March 2012
Author: John Brokenshire
Editor / Illustrator: Peter Whitewood

The Copyright Act permits fair dealing for the purpose of private study, research, criticism or review of this booklet.
However, no part of this publication may be reproduced by any process without written permission. The rights of John
Brokenshire and Peter W. Whitewood to be identified as the moral rights authors have been asserted by them in accordance
with the Copyright Amendment (Moral Rights) Act 2000 (Commonwealth).

Published by Starburst Publishing Pty Ltd (61) 08 8340 8834
1 Rover Ave Croydon Park, South Australia 5008
Email: starburst@starburstpublishing.com.au
Website: www.starburstpublishing.com.au
www.theoxicon.com

The Oxicon
Artificial Intelligence News
By John Brokenshire
Index

Artificial Intelligence News
Part 1 - Preamble
Analysis of the Problem
Entity 1: Man/Natural Language
Entity 2: The Computer
To Summarize
Part 2 - Inserting the Missing Link
1. How to Create ‘Computer-Friendly’ Language
2. ‘Computer-Friendly’ Language Achieved: The Oxicon
3. Switching on the Light
Editor’s Comment
Footnotes
1. Natural Language/Computer Incompatibility: A Practical Example
2. Oxicon Technology and the Oxicon Lexical Reference System
3. Multiple Source Text Referencing
4. Starburst Visualize (Oxicon Licensee)
A Glowing Report from Curtin University by Professor Elizabeth Chang
Pictures of the Oxicon Seminar
Artificial Intelligence News
Computational linguists have long been pursuing their dream of enabling man-made devices to understand all that we say and to respond accordingly. Yet they have failed to get beyond very basic man-machine communication. So what is holding back the fulfilment of that dream? In
the following article an intangible impediment thwarting their progress is brought into focus. After
an objective analysis of the situation, the writer goes on to show how a new semantic navigation
technique utilized in a unique type of lexicon, called The Oxicon, may very well hold the key to
breaking this communication barrier.
THE OXICON - THE MISSING LINK IN MAN-MACHINE COMMUNICATION
.....Known only to a privileged few involved in the development of a revolutionary style of lexicon, it turns
out that the task of creating a language enhancement facility to make our speech computer friendly is
not only feasible, but has already been achieved in this new Oxicon system..... Please read on.
Part 1 - PREAMBLE
Attempts so far to design a system that allows efficient two-way communication between computers
and humans have failed: seemingly, an undefined barrier of some sort exists between the two entities.
Admittedly, there are robots that can understand simple, predetermined human commands, or even
carry out simple tasks for someone according to activity in that person’s brain (via sensing electrodes
attached to the head). But neither constitutes articulate dialogue, and the day that we can ordinarily
speak to, and receive responses from, a computer still seems a long way off. In this discussion we
unearth an important factor that dogs progress in this field.
Numerous theories for solutions have been proposed, some of which may be found on the Internet
and elsewhere, but none have been sighted that bear any relation to the practical solution discussed
and demonstrated here. As such theories mostly consist of highly technical jargon and complex
mathematical logic, for most of us they either make dull reading or are incomprehensible, and so
no examples are included here. Nor do we touch upon the subject of pronunciation, which has been
adequately dealt with elsewhere. In this article the aim is to be as clear and concise as possible in
evaluating a major impediment to man-computer dialogue - purely in relation to language - and in
explaining a means of overcoming this problem.
ANALYSIS OF THE PROBLEM

It has been argued that we are just asking too much of the computer to understand all that we say
and formulate answers accordingly. Well, maybe so, but perhaps it would be better to first find out
why we are asking too much! We know a significant gap exists between our dialogue and what the
computer can fully comprehend, but has this ever been evaluated in depth? Such an investigation
might unearth what is holding things back, and lead us to a way of bridging the gap.
There is an entity on either side of this infamous communication gap: on the one side is man - or
rather his language, known technically as ‘natural language’; on the other side is an inanimate device
that receives this language, the computer.
If either entity fails to be fully comprehensible to the other - and one of them obviously does fail - we
end up with faulty or restricted communication. But which entity is it?
Entity 1: Man / Natural Language
Our information storage/retrieval system - the neuronal network called the human memory - has been constructed through the recording of experiences as they occur in our lives, and the effective tagging of them according to their areas of usage and associative linkage. It is from this
network that natural language is formed, and vice versa - a curious ‘push-pull’ affair allowing memory
and language to develop concurrently.

But the resulting memory network does not have the order that we expect to see in logical man-made
devices. f one were able to view its whole inner ‘circuitry’, it would be perceived as complete chaos;
nevertheless it serves us very well. Natural language is also disorderly: not only does it reflect the
nature of its source but has itself been built up randomly and haphazardly. Verbal expressions have
been added down the ages when and where the need arose in a bit-by-bit accrual that is a language’s
evolution. The result, then, cannot bode well as a happy medium for a logical computer.
Retrieval of information from our memory occurs on a continual basis (on the fly) as current
events/situations demand it. By these stimuli, relevant stored experiences are reactivated and
pertinent data is released, allowing us to comprehend what is going on around us; we recognize and
understand all that we see, feel, smell and hear (including every facet of speech). Like everything
else, language components are stored with identity ‘labels’ to denote situational usages, a facility
that gives us a clear and unambiguous understanding of another person’s dialogue, and the means
to formulate a reply. To some degree, this facility even lets us know what someone else is thinking
when they speak.
Entity 2: The Computer
Unlike man, the computer does not have the above situational usage facility. When confronted
with speech, it has to rely on dictionary definitions, allowing many parts of our dialogue to
remain obscure and thus able to confuse. However, obscurity (as seen by the computer) is part and
parcel of our language. We are able to deal with it because our identification facility circumvents
confusion. We do not even notice these potential mix-ups in the flow of our discourse.
A computer, however, notices everything. It sees a language peppered with ‘potential mix-ups’,
bewildering barriers to comprehension. Although invisible to us, they are very visible to the computer,
and many of them, usually words or phrases, are so problematic that it takes just one in a whole message to misconstrue or scramble its real meaning (footnote 1).
A list of all the types of potentially confusing parts of natural language (especially in English) would
be long indeed. One of the worst types, however, is worth mentioning - words/phrases that have
different meanings in different situations. The worded sketch in footnote 1 gives one example of the misapprehension these computer ‘enigmas’ can cause.
To summarize:
The elements of natural language (words, concepts, etc.) are tagged in human memory according to
experienced usage, resulting in a sort of coding system that precludes confusion or misinterpretation;
we thus know what is meant during a conversation - even to the extent of reading a speaker’s thoughts -
but the computer (with a dictionary but no usage tagging) cannot do this. The properties (or deficiencies)
of natural language make it incompatible with a computer: we can understand the computer’s output
because it has been designed for us to comprehend, but the computer cannot understand our natural
language output as it has not been designed for the computer’s comprehension. A computer, fed with
natural language, may well receive erroneous messages and/or give erroneous replies, with the risk of
dire consequences (as described at the end of footnote 1).
The crux of the problem - incompatibility - could also be explained from a slightly different angle. The
computer’s output, which we have no trouble understanding, uses software that is man-built. Our output,
with which the computer has so much trouble, uses ‘software’ that has not been computer-built.
But, whichever way you look at it, it is clear that Entity 1, our own language, is the fly in the ointment here.
We have situational usage information that we continually and automatically tap into to cope with its
obscurities; the computer does not. The missing link is thus revealed as this usage information itself.
Furthermore, it is now obvious that we need to make this usage information available to the computer before we can bridge the communication gap between man and computer. In the next section a way to do
this will be proposed.
Part 2 - INSERTING THE MISSING LINK
1. How to Create ‘Computer-Friendly’ Language
As deduced in the ‘Analysis of the Problem’, the ‘missing link’ precluding proper comprehension
of natural language by the computer is human situational usage information. The unseen tagging
facility we have in our brains must somehow be emulated and the computer given a way of utilizing
it on the fly when addressed. If we can achieve that, then we will open the door on our special world of language and thought. Software that scans a person’s narrative to determine the general context of immediate dialogue should be provided for additional help (in the ‘battery’ story in footnote 1, for instance, the relevance of the car to the dialogue would have been useful); the combination of usage tagging and narrative scanning will give full knowledge of what a person is talking about.
The ‘battery’ sketch (footnote 1) may seem rather trivial at first glance, but it nevertheless strongly illustrates the sort of thing that happens without this facility. Because of its absence, the computer did not know what was in the lady’s mind - that what she actually meant was ‘I wonder how far I can run the car with this failing battery’. Note that her English was not at fault, yet her message was
misconstrued, the logical computer deducing that the battery was associated with (was the object of)
the verb ‘run with’. Anybody (human) listening - with the benefit of usage tagging - would have known
exactly what she was talking about but, as far as the computer was concerned, the lady had wanted to
take (run) off with her battery.
To prevent this confusion occurring, the particular ‘run’ referred to here (and there are lots of different
‘runs’), whose normal dictionary definition is ‘to operate some device or equipment’, has to be tagged
to show that it may, more specifically, refer to the ‘operation of a vehicle, that is, vehicular travel’;
whence the computer would have perceived that the situation actually involved using the car, not foot
travel (which it chose).
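
To make the mechanism concrete, here is a minimal sketch in Python of how situational usage tags, combined with a crude narrative scan, might steer a computer to the vehicular ‘run’. The tag names, cue lists and scoring below are invented for illustration; they are not the Oxicon’s actual word/phrase profiles or software.

# Hypothetical sketch only: tag names, cue words and scoring are invented
# for illustration and are not the actual Oxicon profiles.

# Two discrete 'run' entries, each qualified by situational usage tags.
RUN_SENSES = [
    {"sense": "run (vehicular travel)",
     "definition": "to operate a vehicle; vehicular travel",
     "usage_tags": {"vehicle", "travel"}},
    {"sense": "run (foot travel)",
     "definition": "to move quickly on foot, possibly carrying something",
     "usage_tags": {"body_motion", "carrying"}},
]

# Narrative-scanning aid: words that signal each situational context.
CONTEXT_CUES = {
    "vehicle": {"car", "driver", "battery", "charging", "engine"},
    "travel": {"far", "journey", "road"},
    "body_motion": {"legs", "jog", "sprint", "race"},
    "carrying": {"carry", "heavy", "lift"},
}

def disambiguate(utterance):
    """Choose the 'run' entry whose usage tags best match the narrative."""
    words = set(utterance.lower().replace("?", " ").replace(",", " ").split())
    def score(sense):
        return sum(len(words & CONTEXT_CUES[tag]) for tag in sense["usage_tags"])
    return max(RUN_SENSES, key=score)

remark = ("The lady driver, realizing that her battery was not charging, said: "
          "I wonder how far I can run with this failing battery?")
best = disambiguate(remark)
print(best["sense"], "-", best["definition"])
# Prints: run (vehicular travel) - to operate a vehicle; vehicular travel

A real system would need the full usage-tag profiles and a far subtler scan, but the point stands: usage tagging plus context scoring selects the vehicular ‘run’ that a human listener takes for granted.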
Therefore, to enhance our language to a ‘friendly’ state, the computer requires every single concept,
word and phrase to be qualified by situational usage tagging, helped by contextual indication through
narrative scanning. Unfortunately, the former necessitates a colossal amount of work; whether or not it is actually feasible is considered next.
2. ‘Computer-Friendly’ Language Achieved: The Oxicon (footnote 2)
Known only to a privileged few involved in the development of a revolutionary style of lexicon
called The Oxicon (footnote 2), it turns out that the task of creating the aforementioned language
enhancement facility for our speech is not only feasible, but has already been done in the new Oxicon
system (developed over a period of some 25 years).
This lexicon utilizes unique ‘Oxicon Technology’. Its database contains the entire English language
semantically sub-divided into hundreds of concepts (broad areas of meaning) and thousands of
defined sub-concepts (fine areas of meaning). All words, phrases, concepts, etc. are conveniently
usage-tagged according to the semantic sub-divisions. Only the terminology is different - the sets of
vocabulary tags are called ‘word/phrase profiles’, and the language enhancement facility itself ‘Natural
Language Qualifier’, or NLQ. In addition, ‘The Oxicon’ further assists identification of vocabulary by using multiple source text referencing (footnote 3) and numerous other identity/reference aids,
including the provision of 30-200 pertinent pictures for each word/phrase/concept (the wider the
semantic width, the more abundant the pictures).
Very basically, its construction was achieved via the following steps:
(1) conceptually compartmentalizing the language, sub-dividing each compartment into fine semantic ranges for the thousands of usage situations, and providing an identity tag code for each resulting area of meaning;
(2) composing pertinent definitions for these semantic areas;
(3) multi-tagging - according to the above sub-concept compartments - for specific usage of every
one of the hundreds of thousands of words, phrases, concepts, etc. in the language;
(4) treating same-spelling-various-meaning words as separate entities, with discrete sets of identity
tags and discrete identity tag definitions;
(5) cross-linking all the associated semantic areas defined.
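
For readers who think in data structures, a toy rendering of steps (1) to (5) might look like the following sketch. The tag codes, definitions and cross-links are invented for this example; the real Oxicon database is vastly larger and its internal format is not published here.

# Hypothetical miniature of the construction steps above; all codes and
# definitions are invented for illustration.

# Steps (1) and (2): fine semantic ranges, each with a tag code and definition.
SUB_CONCEPTS = {
    "MOT.01": "movement of the body on foot",
    "MOT.02": "operation of a vehicle; vehicular travel",
    "MEC.01": "operation of a device or piece of equipment",
}

# Step (5): cross-links between associated semantic areas.
CROSS_LINKS = {"MOT.01": [], "MOT.02": ["MEC.01"], "MEC.01": ["MOT.02"]}

# Steps (3) and (4): every word multi-tagged by specific usage, with
# same-spelling-various-meaning words kept as separate entries.
LEXICON = [
    {"headword": "run", "entry": "run#1", "tags": ["MOT.01"]},
    {"headword": "run", "entry": "run#2", "tags": ["MOT.02", "MEC.01"]},
]

def senses(headword):
    """Yield each discrete entry for a spelling, with its usage definitions."""
    for item in LEXICON:
        if item["headword"] == headword:
            definitions = [SUB_CONCEPTS[t] for t in item["tags"]]
            linked = sorted({c for t in item["tags"] for c in CROSS_LINKS[t]})
            yield item["entry"], definitions, linked

for entry, definitions, linked in senses("run"):
    print(entry, definitions, "cross-linked to:", linked)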
3. Switching on the Light
So, with the Oxicon reference system, we have the human identification facility ready-made. When the
NLQ-enhanced vocabulary database and narrative scanning software are installed into the computer,
these aids will be the key to understanding natural language - the light will be switched on. By this
means, we will suddenly become much closer to bridging the communication gap.
Maybe the answer to a major problem in human-machine communication has been staring us in the
face ever since the Oxicon’s construction began some twenty-five years ago. Naturally, this language
enhancement technology would need further refinement, and even perhaps melding with other
technologies, before the gap is properly closed. But, at the very least, it should help pave the way
towards that goal.
Let us hope that what has been put forward in this article will give light at the end of the tunnel
and finally allow the computational linguist’s dream to come true. If so, we may soon find ourselves
casually conversing with our computers.
EDITOR’S COMMENT
For interested companies or individuals carrying out research into Artificial Intelligence in the area
of human to machine dialogue, or anyone else wishing to find out more, contact details and further
information may be found in Footnotes 2-4.
FOOTNOTES
1. Natural Language/Computer Incompatibility: A Practical Example
The following sketch depicts a future scenario whereby a computer, incompatible with, and confused by, natural language, misinterprets a simple human statement:
“The lady driver, realizing that her battery was not charging, said to her computer:
‘I wonder how far I can run with this failing battery?’ The computer, after using its
own logic to find the appropriate ‘run’ entry from its dictionary and ascertaining the
average weight of a car battery, replied: ‘You cannot run far because it is too heavy.’ ”
Note: potential causes of confusion of the above sort, and many others, are abundant in our language;
the result in this case was harmless enough, but imagine what might occur if a computer misread a
brain surgeon’s bidding.
2. Oxicon Technology and the Oxicon Lexical Reference System
In the unique Oxicon reference system, natural language may be freely accessed and navigated
to search, find and semantically identify any of its elements - concepts, ideas, phrases and words.
Oxicon Technology itself is applicable for constructing search engines in general, and, through full
identification of every single facet of natural language, its ‘NLQ’ language enhancement facility has
the potential to make natural language more compatible with the computer.
A prototype online version of the reference system may be freely viewed at www.theoxicon.com

A fully refined Oxicon reference system (Version 2) is still in the development stage and will sooner or later (depending on a needed influx of finance) appear on this website to reveal the amazing reference capability possible through this technology.
An introductory video and a tutorial for The Oxicon reference system (prototype) may be found at www.youtube.com and the above site; there is a more detailed description of the system in the “Oxicon Development and Review” supplement at www.theoxicon.com
For business or general queries, talk to Peter Whitewood at Starburst Visualize (see details in footnote 4). If software or website engineering information is required, please contact Steve Roper, telephone: 08 8340 8834 (AU) or +61 8 8340 8834 (international), or email: info@starburstpublishing.com.au
Linguistic matters regarding The Oxicon reference system or Oxicon Technology may be discussed
with John Brokenshire, telephone: 0457978569, or email: john.brokenshire@three.com.au.
At a Curtin University seminar featuring the reference system and Oxicon Technology in late 2010, a
group of DEB professors, researchers in the areas of computational linguistics and business intelligence,
regarded the reference system and Oxicon Technology in general as having great potential in these
fields (see the seminar pictures at the end of this booklet). A glowing report from the Director of DEB (Digital Ecosystems and Business Intelligence Institute) can be viewed on the following pages and in the “Oxicon Development and Review” supplement at www.theoxicon.com
3. Multiple Source Text Referencing
This is a process utilized in the Oxicon whereby a large amount of internal definitive text is scanned and analysed, and applicable extracts - such as those required in a search, particularly when using phrase reference - are displayed. These results are then added to those found via the NLQ facility.
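
A bare-bones sketch of the general idea follows; the source texts, search phrase and matching rule are invented for illustration and are not the Oxicon’s actual process.

# Hypothetical illustration of multiple source text referencing: scan a body
# of definitive text and display the extracts in which a phrase occurs.
SOURCE_TEXTS = [
    "To run a car is to operate it; the battery supplies the starting current.",
    "A runner may run a marathon on foot over some 42 kilometres.",
    "An engine will not run for long once its battery stops charging.",
]

def extracts(phrase, corpus=SOURCE_TEXTS):
    """Return every source extract that contains the searched phrase."""
    needle = phrase.lower()
    return [text for text in corpus if needle in text.lower()]

# These extracts would then be added to the results found via the NLQ facility.
for hit in extracts("battery"):
    print("-", hit)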
4. Starburst Visualize (licensee of the Oxicon)
Marketing Manager: Peter Whitewood
Email: peterw@starburstpublishing.com.au
Website: www.starburstpublishing.com.au
Telephone: 08 8340 8834 (AU) or +61 8 8340 8834 (international); Mobile: 0403 371 386
A Glowing Report from Curtin University
by Professor Elizabeth Chang
A seminar featuring the reference system and Oxicon Technology was held in October of 2010 at Curtin University (see the seminar pictures at the end of this booklet). At this
seminar, a group of *DEB professors and PhD researchers, in the areas
of computational linguistics and business intelligence, regarded the reference
system and Oxicon Technology in general as having great potential in these fields.
A glowing report from the Director of DEB can be viewed on the next page:
The comments made on the next page regarding the formation of a joint enterprise, as put forward in the report by Professor Elizabeth Chang, never came about because of unfinished dialogue and inevitable changes at Curtin University. However, this does not detract from Professor Chang’s glowing report on the original Oxicon seminar held in October of 2010.
PLEASE NOTE: The Digital Ecosystems and Business Intelligence Institute (DEBII) will no longer be operating as of 1 January 2012. All enquiries should be directed
to Professor Harry Bloch, Dean, Research and Development at CBS.
Telephone: 61-8-92662035
Facsimile: 61-8-92663026
Email: h.bloch@curtin.edu.au
* Digital Ecosystems and Business Intelligence Institute.
[Professor Elizabeth Chang’s report appears as a facsimile in the original booklet.]

Pictures of the Oxicon Seminar
In the foreground is Professor Elizabeth Chang (in red), with John Brokenshire on the right. The Oxicon presentation to a group of DEB professors and PhD researchers.