Gabriel von Max, Prague painter (1840-1915)


Monkey Before the Skeleton (Ecce simia)

Towards Computational Models of Artificial Cognitive Systems That Can, in Principle, Pass the Turing Test

Jiri Wiedermann
Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague

Partially supported by GA CR grant No. P202/10/1333

SOFSEM 2012, January 21-27, 2012, Spindleruv Mlyn

"I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." (A. M. Turing, 1950)

From a discussion between Turing and one of his colleagues (M. H. A. Newman, professor of mathematics at Manchester University):

Newman: I should like to be there when your match between a man and a machine takes place, and perhaps to try my hand at making up some of the questions. But that will be a long time from now, if the machine is to stand any chance with no questions barred?

Turing: Oh yes, at least 100 years, I should say.

Three heretical ideas:

- We already have sufficient knowledge to understand the workings of interesting minds achieving high-level cognition.

- Achieving higher-level AI is not a matter of a fundamental scientific breakthrough, but rather a matter of exploiting our best theories of artificial minds, and a matter of scale, speed, and technological achievement.

- It is unlikely that thinking machines will be developed by purely academic research, since it is beyond its power to concentrate the necessary amount of manpower and technology.

Approaches to understanding the mind:

- Understanding by philosophizing
- Understanding by designing (specifying)
- Understanding by constructing

Outline

1. Current state: Watson the Computer vs. humanoid robotic systems
2. Winds of Change
   - Escaping the Turing test
   - Escaping biologism
   - Internal world models
   - Mirror neurons
   - Global Workspace Theory
   - (Dis)solving the Hard Problem of Consciousness
   - Episodic memories
   - Real-time massive data processing
   - Comprehensive and up-to-date models of cognitive systems
3. HUGO: a non-biological model of a conscious agent system
4. Conclusions: lessons from what we have seen

Watson: an AI system capable of answering questions stated in natural language.

Jeopardy! (in the Czech Republic, the TV game "Riskuj!"): given an answer, one has to guess the question.

E.g.: 5280 (how many feet in a mile), or 79 Wistful Vista (address of Fibber and Molly McGee).

Category: General Science
Clue: When hit by electrons, a phosphor gives off electromagnetic energy in this form.
Answer: light (or photons)

Category: Lincoln Blogs
Clue: Secretary Chase just submitted this to me for the third time; guess what, pal. This time I'm accepting it.
Answer: his resignation

Category: Head North
Clue: They're the two states you could be reentering if you're crossing Florida's northern border.
Answer: Georgia and Alabama

Category: Rhyme Time
Clue: It's where Pele stores his ball.
Subclue 1: Pele ball (soccer)
Subclue 2: where store (cabinet, drawer, locker, and so on)
Answer: soccer locker

Source: AI Magazine, Fall 2010

Winds of Change

New trends in theory:

- escaping biologism
- escaping the Turing test
- strengthening the position of embodiment: a common sensorimotor basis for phenomenal and functional consciousness
- evolutionary priority of phenomenal consciousness over functional consciousness
- internal world models, mirror neurons
- global workspace theory
- episodic memory

Technological progress:

- maintenance of supercritical volumes of data, and
- search and retrieval of data at supercritical speed


A shift in popular thinking about artificial minds: people generally accept that computers can think (albeit in a different sense than some philosophers of mind would like to see).

John Searle: "Watson doesn't know it won on 'Jeopardy!' IBM invented an ingenious program, not a computer that can think."

Noam Chomsky: "Watson understands nothing. It's a bigger steamroller. Actually, I work in AI, and a lot of what is done impresses me, but not these devices to sell computers."

What these gentlemen failed to see is the giant leap from the formal rules of chess playing to the informality of the Jeopardy! rules.

R. J. Lipton's big insight: a program can be immensely powerful even if it is imperfect.

A new trend: escaping biologism

Rodolfo Llinas (a prominent neuroscientist): "I must tell you one of the most alarming experiences I've had in pondering brain function... that the octopus is capable of truly extraordinary feats of intelligence... most remarkable is the report that octopi may learn from observing other octopi at work. The alarming fact here is that the organization of the nervous system of this animal is totally different from the organization we have learned is capable of supporting this type of activity in the vertebrate brain... there may well be a large number of possible architectures that could provide the basis of what we consider necessary for cognition and qualia..."

Many possible architectures for cognition

Why should we think only about the human brain when designing artificial minds? The Turing test is explicitly anthropomorphic.

Russell and Norvig: "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"

A new trend: escaping the Turing test

(Diagram: the space of all minds, containing human minds, animal minds, alien minds, and artificial minds.)

A new trend: Internal World Models

IWMs capture a "description" of that (finite) part of the world, and that part of the self, which has been "learned" by the agent's sensori-motor activities. An IWM is fully determined by the agent's embodiment and is automatically built during the agent's interaction with the real world.

IWMs are mechanisms situating an agent in its environment; they determine the syntax and the semantics of the agent's behavior and perception in its environment.

(Schema: finite control, sensory-motor units, world model, the body; an (infinite) stream of inputs generated by sensory-motor interaction.)

An IWM is a virtual inner world in which an agent can think.

A new trend: Mirror neurons

Mirror neurons: a mechanism for "mind reading" of other subjects.

"The discovery of mirror neurons in the frontal lobes of monkeys, and their potential relevance to human brain evolution, is the single most important 'unreported' (or at least, unpublicized) story of the decade. I predict that mirror neurons will do for psychology what DNA did for biology: they will provide a unifying framework and help explain a host of mental abilities that have hitherto remained mysterious and inaccessible to experiments." (V. S. Ramachandran)

Mirror neurons are active when a subject performs a specific action, as well as when the subject observes another or a similar subject performing a similar action (Rizzolatti, 199x).


A new trend: Global Workspace Theory

A simple, very high-level cognitive architecture developed by B. J. Baars towards the end of the last century to explain the emergence of a conscious process from large sets of unconscious processes in the human brain.

GWT can successfully model a number of characteristics of consciousness, such as its role in handling novel situations, its limited capacity, its sequential nature, and its ability to trigger a vast range of unconscious brain processes.

Interesting: Watson the Computer works according to the GWT.
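GWT's competition-and-broadcast cycle can be sketched in a few lines of Python. This is a toy illustration under my own assumptions (the specialist names and the keyword-match "bid" are invented), not Baars' model or Watson's implementation:

```python
# A minimal sketch of Global Workspace Theory: unconscious specialist
# processes compete for access to a global workspace; the winner's
# content is broadcast to every process.

class Specialist:
    def __init__(self, name):
        self.name = name
        self.inbox = []                  # receives global broadcasts

    def bid(self, stimulus):
        # Salience of this specialist for the stimulus; a crude
        # keyword match stands in for a learned relevance measure.
        return 1.0 if self.name in stimulus else 0.1

    def receive(self, content):
        self.inbox.append(content)

def workspace_step(specialists, stimulus):
    # Competition: the highest bidder wins the workspace...
    winner = max(specialists, key=lambda s: s.bid(stimulus))
    content = (winner.name, stimulus)
    # ...and its content is broadcast to ALL processes, which is what
    # makes it globally available ("conscious") for one step.
    for s in specialists:
        s.receive(content)
    return content

specialists = [Specialist(n) for n in ("vision", "hearing", "planning")]
conscious = workspace_step(specialists, "hearing: a loud noise")
print(conscious)                         # ('hearing', 'hearing: a loud noise')
```

The broadcast step is the point of the theory: the winning content becomes available to all the other unconscious processes at once, which is what lets it "trigger a vast range of unconscious brain processes."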

A new trend: an evolutionary approach to phenomenal consciousness (Inman Harvey)

A naive "incremental" approach to creating phenomenal consciousness:

1. Create a "zombie" with functional consciousness (the easy problem).
2. Add the extra ingredient to give it phenomenal consciousness (the hard problem).

"The evolutionary approach allows emulation without comprehension."

A new trend: a common sensorimotor basis for phenomenal and functional consciousness

Source: How to build a robot that feels. J. Kevin O'Regan, talk given at CogSys 2010 at ETH Zurich.

A sensorimotor interaction with the environment involving corporality, alerting capacity, richness, insubordinateness, and the self.

Instead of thinking of the brain as the generator of feel, feel is considered a way of interacting with the world.

A new trend: Episodic Memory

An agent without episodic memory is like a person with amnesia.

Episodic memory is what people "remember", i.e., the contextualized information about autobiographical events (times, places, associated emotions), and other contextual knowledge that can be explicitly stated.

Episodic memory systems allow "mental time travel" and can support a vast number of cognitive capabilities based on inspecting memories from the past that are "similar" to the present situation, such as:

- noticing novel situations,
- detecting repetitions,
- virtual sensing (reminded by some recall),
- future action modeling,
- planning ahead,
- environment modeling,
- predicting success/failure,
- managing long-term goals, etc.

Efficient management of and retrieval from episodic memories is a case for real-time massive data processing technologies.

(Drawing by Ruth Tulving)
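The similarity-based capabilities above all rest on retrieving the stored episode closest to the present situation. That retrieval can be sketched as nearest-neighbour search over feature sets (a toy illustration with invented episodes, not a mechanism from the talk):

```python
# Episodic retrieval sketch: store episodes as feature sets and recall
# the stored episode most similar (by Jaccard similarity) to the
# present situation; the similarity score doubles as a novelty signal.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

class EpisodicMemory:
    def __init__(self):
        self.episodes = []                       # (features, outcome) pairs

    def store(self, features, outcome):
        self.episodes.append((frozenset(features), outcome))

    def recall(self, situation):
        # The most similar past episode; a low score means the
        # situation is novel, a high score predicts the old outcome.
        best = max(self.episodes, key=lambda e: jaccard(e[0], situation))
        return best, jaccard(best[0], situation)

mem = EpisodicMemory()
mem.store({"kitchen", "hot", "stove"}, "burned finger")
mem.store({"garden", "rain"}, "got wet")

(episode, outcome), sim = mem.recall({"kitchen", "stove"})
print(outcome, round(sim, 2))                    # burned finger 0.67
```

The same lookup supports several items from the list: a near-zero similarity signals a novel situation, a near-one similarity signals a repetition, and the recalled outcome is a prediction of success or failure.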

A new trend: intelligence might be a matter of scale and speed: maintaining supercritical volumes of data, and searching and retrieving them at supercritical speed (cf. episodic memories).

Element                        Cores   Time to answer one Jeopardy! question
Single core                    1       2 hours
Single IBM Power 750 server    32      < 4 minutes
Single rack (10 servers)       320     < 30 seconds
IBM Watson (90 servers)        2,880   < 3 seconds

Memory: 20 TB; 200 million pages (~1,000,000 books); ~1,000,000 lines of code; 5 years of development (20 people).
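Assuming idealized linear speedup across cores (my simplification; real systems lose some efficiency to coordination), the quoted times are mutually consistent, as a line of arithmetic per row confirms:

```python
# Sanity check of the scaling figures quoted for Watson, assuming
# idealized linear speedup across cores.
single_core_seconds = 2 * 3600           # 2 hours on one core

for cores, reported_limit in [(32, 4 * 60), (320, 30), (2880, 3)]:
    ideal = single_core_seconds / cores  # perfectly parallel time
    # The reported times are upper bounds, so the ideal time
    # should not exceed them.
    assert ideal <= reported_limit
    print(f"{cores} cores: {ideal:.1f} s (reported < {reported_limit} s)")
```

E.g., 7200 s / 2880 cores = 2.5 s, just under the reported "< 3 seconds" for the full 90-server Watson.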



A lesson from Watson the Computer: intelligence might not only be a matter of suitable algorithms but also, and mainly, of the ability to accumulate (e.g., via learning and the storing of episodic memories), organize, and exploit large volumes of data representing knowledge, at a speed matching the timescale of the environmental requirements (real-time data processing).

A new trend: comprehensive and up-to-date models of cognitive systems

An urgent need of situatedness via embodiment.

(Illustration from J. A. Comenius, Orbis pictus, 1658.)

An embodied cognitive agent is a robot, i.e., an embodied computer: a computer equipped with sensors by which it "perceives" its environment and with effectors by which it interacts with its environment.

(Illustration: the Nuremberg funnel; Georg Philipp Harsdörffer, Poetischer Trichter, Nuremberg 1648-1653.)

HUGO: a Non-Biological Model of an Embodied Conscious Agent

From: J. Wiedermann: A High-Level Model of an Embodied Conscious Agent, IJSSCI, 2, 2010.

Components: a semantic world model, a syntactic world model, a global workspace, a mirror net, and an episodic memory.

A high-level schema of a robot: finite control (a computer), sensory-motor units, a world model, and the body, with an (infinite) stream of inputs generated by sensory-motor interaction with the real world.

Mechanisms situating the agent in its environment must be considered: internal world models.

The central idea: educating and teaching a robot

The purpose of educating and teaching an agent is to build its internal world model.

The internal world model gives a "description" of that (finite) part of the world (including the agent itself) which has been "learned" by the agent's S-M activities. The model is fully determined by the agent's embodiment and is automatically built during the agent's interaction with the real world.

The idea of two cooperating world models in cognitive systems

Dynamic world model ("action"): sequences of sensorimotor information; controls the agent's behavior.

Static world model ("cognition"): elements of coupled sensory-motor information; responsible for situating the agent.

(Schema: the sensory-motor units exchange perception and motor instructions with the real world, feeding both models.)

(Diagram: the architecture of an embodied cognitive agent. In the control unit, a symbolic level of abstract concepts is grounded in a sub-symbolic level of embodied concepts, i.e., units of S-M information (the world's "syntax") maintained by the mirror net; the body's S-M units exchange perception and motor instructions with the environment, delivering multimodal information.)

The tasks of the syntactic world model:

- Coupling the motor instructions with the perception information into so-called multimodal information;

- Learning frequently occurring multimodal information from the coupled input streams (one coming from the dynamic model and one from the S-M units);

- Associative retrieval: a partial, "damaged", or previously "unseen" item of incoming multimodal information gets completed so that it corresponds to the "most similar" previously learned information; the result captures the agent's instantaneous situation.


The tasks of the semantic world model:

- Learning (mining) and maintaining knowledge from the data stream of multimodal information delivered by the static (syntactic) world model;

- Realizing intentionality: with each unit of multimodal information, a sequence of actions (motor commands), a habit, gets associated which can be realized in the given context.

Mirror neurons are active when a subject performs a specific action, as well as when the subject observes another or a similar subject performing a similar action (Rizzolatti, 199x).

A generalization: ... a set of neurons which are active when a subject performs any frequent action, as well as when only partial information related to that action is available to the subject at hand.

Implementing the syntactic world model: the mirror net

Inputs: visual, aural, haptic, and proprioceptive information, combined into multimodal information.

The mirror net:

- learns frequently occurring conjunctions of related input information;
- gets activated even when only partially excited (by one or several of its inputs);
- works as an associative memory, completing the missing input information;
- forms and stores (pointers to) episodic memories.

This is the basis for understanding imitation learning, language acquisition, thinking, and consciousness.
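The completion behaviour can be illustrated with a toy associative memory. This is my own sketch under invented feature names, not HUGO's actual mirror net, which learns its patterns rather than having them listed:

```python
# Associative completion sketch: stored multimodal patterns are
# recalled from partial input by best-overlap matching, so a partial
# excitation retrieves the full learned conjunction.

PATTERNS = [
    # (visual, aural, haptic, proprioceptive) features of learned events
    frozenset({"ball-seen", "bounce-heard", "round-felt", "arm-throwing"}),
    frozenset({"cup-seen", "clink-heard", "handle-felt", "arm-lifting"}),
]

def complete(partial):
    # Activate the stored pattern with the largest overlap with the
    # partial excitation; this is the "completed" multimodal information.
    return max(PATTERNS, key=lambda p: len(p & partial))

# Partial excitation: only sound and proprioception are available...
recalled = complete({"bounce-heard", "arm-throwing"})
# ...yet the net fills in the missing visual and haptic components.
assert "ball-seen" in recalled and "round-felt" in recalled
```

The same lookup realizes the mirror-neuron property from the previous slide: observing only part of an action (one modality) activates the same stored pattern as performing it.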

What knowledge is mined and maintained in a dynamic world model:

- often occurring concepts
- resemblance of concepts
- contiguity in time or place
- cause and effect

An algebra of thoughts... (David Hume, 1711-1776)

Cognitive tasks:

1. Simple conditioning
2. Learning of sequences
3. Operant conditioning (by rewards and punishment)
4. Imitation learning
5. Abstraction forming
6. Habit formation, etc.

A "Hume's test" for intelligence.

(Diagram: multimodal information activates concepts; previously activated, currently activated, newly activated, and passive concepts are connected by excitatory and inhibitory links, which emotions affect.)

A cogitoid: an algorithm building a neural net for knowledge mining from the flow of multimodal information (Wiedermann 1999).

Implementing the dynamic world model: habits, i.e., often-followed chains of concepts.
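A habit, as an often-followed chain of concepts, can be sketched as transition counting: record how often one concept follows another, and treat a sufficiently frequent successor as habitual. This is an illustrative toy under my own assumptions (the threshold and concept names are invented), not the cogitoid itself:

```python
# Habit-formation sketch for the dynamic world model: count observed
# concept-to-concept transitions; a chain counts as a "habit" once it
# has been followed often enough.
from collections import Counter

class DynamicModel:
    def __init__(self, habit_threshold=3):
        self.transitions = Counter()
        self.habit_threshold = habit_threshold

    def observe(self, concept_sequence):
        # Record each adjacent pair of concepts in the experienced flow.
        for a, b in zip(concept_sequence, concept_sequence[1:]):
            self.transitions[(a, b)] += 1

    def next_habitual(self, concept):
        # The habitual continuation: the most frequent successor,
        # provided it was followed at least habit_threshold times.
        followers = {b: n for (a, b), n in self.transitions.items() if a == concept}
        if not followers:
            return None
        best = max(followers, key=followers.get)
        return best if followers[best] >= self.habit_threshold else None

model = DynamicModel()
for _ in range(3):
    model.observe(["see-cup", "grasp", "lift", "drink"])

print(model.next_habitual("grasp"))      # lift
```

Chaining `next_habitual` calls replays a whole habit, which is the substrate the next slide needs for prediction by "simulation" in the internal model.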

What both world models jointly do for an agent:

- A mechanism enabling imitation of the activities of other agents (without understanding);

- A germ of awareness: a mechanism for distinguishing between one's own action and that of an observed agent;

- A mechanism of empathy;

- A substrate for a mechanism for predicting the results of an agent's own or observed actions, via their "simulation" in the virtual model of the known part of the real world;

- Understanding: an agent "understands" its actions in terms of their embodiment in habits (and thus of S-M actions plus associated emotions);

- Phenomenal consciousness (according to O'Regan) as a habit of conscious awareness of performing one's own skills.

Humanoid Robot Mahru Mimics a Person's Movements in Real Time

A person wears a motion-tracking suit while performing various tasks. The movements are recorded, and the robot is then programmed to reproduce the tasks while adapting to changes in the space, such as displaced objects.

The birth of communication and speaking

- By indicating a certain action, an agent broadcasts visual information which is completed, by the empathy and prediction mechanisms of an observing agent, into the intended action;

- Formation of the self concept;

- The possibility for emotions to enter the game;

- The birth of body language;

- Adding articulation (vocalization) and tempering gesticulation;

- The verbal component of the language gets associated with the motor actions of the speech organs and prevails over gesticulation;

- Development of episodic memory management and retrieval mechanisms.

The birth of thinking

(Schema: the cogitoid and the mirror neurons exchange motor instructions and multimodal information; the mirror neurons complete motor instructions with the missing perception, as learned by experience.)

- Subsequent decay of whatever motor activity (of the vocal organs);
- Suppressing perception;
- Switching off the realization of motor instructions.

Thinking begins as a habit of speaking to oneself (Wiedermann 2004).

The agent operates similarly as before, although it processes "virtual" data: it works in an "off-line" mode; it is virtually situated.

The birth of functional consciousness

Agents are said to possess artificial functional consciousness iff their communication abilities reach such a level that the agents are able to fable on a given theme.

More precisely, the conscious agents can:

- communicate in a high-level language;
- verbally describe past and present experience, and the expected consequences of future actions, of themselves or of other agents;
- realize a certain activity given its verbal high-level description;
- explain the meaning of notions;
- learn new notions and new languages.

"Consciousness is a big suitcase." (M. Minsky)

A sketch of the evolutionary development of cognitive abilities, consciousness included

(Diagram: phenomenal consciousness precedes functional consciousness. From: J. Wiedermann: A High Level Model of an Embodied Conscious Agent, IJSSCI, 2, 2010.)



A thinking machine: a de-embodied robot

(Diagram: a robot's thinking mechanism, the cogitoid plus the mirror neurons, running in a computer: "a brain in a vat".)

Lessons from what we have seen

- Achieving higher-level artificial intelligence no longer seems to be a matter of a fundamental scientific breakthrough, but rather a matter of exploiting our best algorithmic theories of thinking machines, supported by our most advanced robotic and real-time data processing technologies.

- An artificial cognitive system is quite a complex system with only a few components, none of which could work alone, and none of which could be developed separately.

- It is unlikely that thinking machines will be developed by purely academic research, since it is beyond its power to concentrate the necessary amount of manpower and technology.

- This cannot be accomplished by large international research programs either, since a dedicated long-term, open-ended effort of many researchers, concentrated on a single, practically non-decomposable task, is needed.

- It seems to be a unique strategic opportunity for giant IT corporations.

- The road towards thinking machines glimpses ahead of us, and it is only a matter of money whether we set off on a journey along this road.




Caspar David Friedrich, Giant Mountains, ca. 1830