Compilation of Notes on CSE 575 Reading materials


By: Scott Settembre
November 27, 2007









My reading notes include interesting facts, summaries of what I've read, as well as my notes and insights on specific topics, usually relating to the topic of programming or implementing cognitive function. I have included notes from two of our textbooks that I have been reading, Thagard's Mind: Introduction to Cognitive Science and the Cummins anthology Minds, Brains, and Computers. I have also read several recommended articles and readings from the syllabus as well as from the newsgroup, and usually had one or two small notes regarding those readings. Also included are Cognitive Science colloquium notes and comments, as well as class speaker notes and comments. I had previously read and submitted reading notes for some of the recommended readings on our syllabus in October, even though they were assigned for November and December.





Notes from: Thagard's Mind: Introduction to Cognitive Science, Chapter 12



Bodies



This and the next chapter show Thagard's attempt to criticize his own CRUM theory in order to address extending or changing (or abandoning) CRUM to encompass those critiques. However, any theory of the mind would need to address these points, and so it is useful to list them for my future use.


Embodiment and Direct Perception

- Gibson says that we learn about the world directly and use the information in the world without having to compute or represent it.

- Best argument, from Johnson and Lakoff: much of our thinking deals with body relations, "such as up and down, left and right, in and out." So thinking would depend on our body.

- Also, we can imagine a chair or a dog, but not something more abstract, like furniture or animal.


Being-in-the-World

- Hammering argument: tasks can be performed by "virtue of physical skills" and no representation is necessary.

- Anti-representational camp: Winograd and Flores, Heidegger, Smith, Dourish.

- "Embedded computing" (like Brooks).

- Brooks's attempt creates simple rules in robotics to perform seemingly more complex tasks through the rules' interactions with the world, but I would ask: were the tasks that complex? Perhaps we need a way to rate problems and then address the type of intelligence by what problems it is able to solve. Like Big "O" notation, except it is Big "I" notation for how difficult the problems are that an intelligence can compute. For example, there is a layer of problems that need to be planned ahead, but can be planned ahead for via imagery, like the gorilla, banana, and box problem. It can also be solved representationally, so it seems that the algorithm and the animal have the same Big "I" value for that type of problem. Perhaps we also need a Big "P" notation to specify the difficulty of the problem, i.e. whether it needs abstract thinking, or can be solved sequentially, or whether a child or a raven can solve the problem.


Situated Action

- Ah, some cognitive scientists (p. 194) believe that many problems do not require representation in realistic tasks. Maybe Big "P" notation would more clearly delineate the problem space.


Intentionality

- Searle: mental states are "intended to represent the world: they possess intentionality, or the property of being about something." People manipulate symbols that have a semantic quality because of the interactions that the people have with the world. In a computer, this cannot be done, because a computer simply manipulates the symbols syntactically. He proposes the Chinese room argument to illustrate this.

- I would comment that a flaw in Searle's example is that although the "pieces" inside the black box of the Chinese room may perform syntactic operations, the overall effect of the room as a whole can be semantic. The old phrase that "the whole is greater than the sum of its parts" can be seen in many places, like an ant colony. In business and economics, "added value" can be created through each step of the process while processing raw materials or products. The interaction between the smaller steps is a computation that "may" abstract out the essential parts of the problem, then reassemble them to avoid the "fluff" or details of the problem that were not important in solving the problem itself. Searle is metaphorically not seeing the forest for the trees!


Dynamic Systems

- Now this is an important way of thinking about AI. The beauty of this is that both symbolic and connectionist methods can interact in a complex dynamic system. Although it is described here in terms of equations, state spaces, and attractors, at its core it is the understanding that small rules and small processes, interacting together in cooperation at both local and global levels, can produce systems whose operating power is greater than the operating power of their parts.

- "A New Kind of Science" by Stephen Wolfram shows how this can be analyzed and also how it cannot be analyzed. One of his conclusions is reminiscent of the "halting problem" in computer science, where it cannot always be predicted whether an arbitrary program will stop. In this case, an arbitrary series of rules cannot have its eventual outcome determined without running them. (See the sketch after this list.)

- In this case, Thagard writes about three approaches:

  o We can develop a series of relevant variables and equations and use them to predict an outcome. This may be hard to do, since we do not always know what is relevant to the solution.

  o Describe changes in a complex system using chaos theory. State space, attractors, phase transitions, and chaos can be used in determining emotions. (I would comment that not everything in cognition needs to be represented in this way. Complex systems can create non-complex systems, since complexity can create simple building blocks for the next level of the problem [i.e. architecture].)

  o Neural networks are a dynamic system and can be represented in terms of chaos theory as well. Though ANNs are seemingly complicated, and it does not seem that they form any sort of representation that can be considered finite, an ANN can develop areas that represent something, just not in a single neuronal unit. Depending on what "distance" you view an ANN from, you may not consider it truly a distributed representation.
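
To make the Wolfram point concrete, here is a minimal sketch of an elementary cellular automaton in Python (my own illustration, not from Thagard; Rule 110 is one of the rules Wolfram studies). In general there is no shortcut to knowing what a distant row will look like other than actually iterating the rule:

    # Minimal elementary cellular automaton (Rule 110), my own sketch of
    # "small rules, run forward to see what happens".

    def step(cells, rule=110):
        """Apply an elementary CA rule to one row of 0/1 cells (wrapping edges)."""
        n = len(cells)
        out = []
        for i in range(n):
            left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
            index = (left << 2) | (center << 1) | right  # neighborhood as 0..7
            out.append((rule >> index) & 1)              # look up the rule bit
        return out

    # Start from a single live cell; the only way to know row 100 is to
    # compute rows 1 through 99.
    row = [0] * 40
    row[20] = 1
    for _ in range(20):
        print("".join(".#"[c] for c in row))
        row = step(row)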



Notes from: Thagard's Mind: Introduction to Cognitive Science, Chapter 13


“Societies”


Cognition in terms of social transactions, as collaboration between agents, is discussed here in contrast to the focus of the book.


p.206. I did like the notion of "distributed cognition" as something that can be investigated. This can be scaled based on the types of agents involved. For example, in Scientific American October 2007, there is an article on the Semantic Web, which examined the use of human and non-human agents on a cooperative path to make the entire network a cognitive entity. I personally like this idea, but I do not find it radical or interesting. What I do find interesting is the notion that minimally cognitive agents can collaborate and produce a larger cognitive function, like ants or bees.


p.207. Examining "distributed cognition" in the context of distribution, coordination, and temporal synchrony is intriguing, but it is similar to the notion of dynamic systems discussed in chapter 12, and may even be a subset of it.


p.208. DAI, distributed artificial intelligence, can probably be most easily equated to a production system. If we attempt to break up the production system rules and working memory into distinctly operating sections, then we get the problem of DAI.


p.209. It seems that DAI is another way of looking at a system of cells, like out of Wolfram's "A New Kind of Science", except the cells are made up of much larger rule sets.


p.210. Culture, or anthropological influences, on explaining the way the mind works seems to acknowledge that these differences are effects of the environment on the mind and are not innate. An interesting view, but it does not come as any surprise that the mind, and cognition, can grow and be attuned to its environment or social influences. However, it does show that perhaps the "primitives" (language or functions or relationships) that we attribute to the mind are not innate, and therefore there is another mechanism that may make a primitive a primitive.


"All humans share the same basic perceptual and neurological apparatus, which provides a biological commonality to cultural variability." (p. 211, Thagard)





Notes from: Thagard's Mind: Introduction to Cognitive Science, Chapter 14


“Future of Cog Sci”


Cognitive Science will progress with the possible addition of new disciplines to the interdisciplinary effort, like molecular biology. Maybe other disciplines we have not thought of including may contribute important insights, like physics and theories of quantum states affecting memory and thought processes. This may be key in understanding the "how".


Simulation may be the quickest way of creating cognition, but may fail to explain how it works.


Evolved computer intelligences, in my opinion, may be the best way to approach the development of an AI. The use of genetic algorithms in evolving complex systems (or even actual programs) may be a good way to take one development tack on cognition.
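
As a toy version of this idea (my own sketch, not from Thagard), a genetic algorithm keeps a population of candidate solutions, scores each with a human-supplied fitness criterion, and breeds the best. The fitness function here (matching an arbitrary target bit string) is exactly the kind of human-provided criterion whose necessity Thagard points out later (p. 221):

    import random

    # Toy genetic algorithm, my own sketch. The fitness criterion (match a
    # fixed target bit string) is an arbitrary stand-in supplied by a human.

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

    def fitness(genome):
        """Count how many bits match the target."""
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        parents = population[:10]  # selection: keep the fittest third
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(20)]
        population = parents + children

    best = max(population, key=fitness)
    print(generation, best)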


I find it actually quite funny that a proponent, indeed the creator, of CRUM has a hard time seeing a potential future where machines are quick to rival the power of the human mind. It is not that difficult to imagine that a "missing link" is discovered through our cognitive science efforts to bridge the gap that currently exists. From an evolutionary standpoint, it seems inevitable! And if we map the progress we have made in so short a time on a graph, I think it is quite evident that a future of machines that rival our own general-purpose problem solving ability is on the horizon.


Thagard's main argument relies on computational power, but this is baseless in the world of algorithms. Calculating 2 to the 16th power by multiplying 2 sixteen times, or calculating it by shifting a binary representation 16 places to the left, are completely different operations. On the same computer, we could look at this problem as being 16x faster for the latter method. This analogy can be applied quite nicely to computation and the mind; we just need to find a more efficient method for much of what we do. It is possible that the brain is amazingly inefficient!
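
A minimal illustration of the point in Python (my own example, not Thagard's):

    # Two ways to compute 2 to the 16th: same answer, very different work.

    def pow2_by_multiplication(n):
        """Multiply by 2, n times: n separate multiplications."""
        result = 1
        for _ in range(n):
            result *= 2
        return result

    def pow2_by_shift(n):
        """Shift a 1 left by n places: a single machine-level operation."""
        return 1 << n

    assert pow2_by_multiplication(16) == pow2_by_shift(16) == 65536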


Isaac Asimov, in his Foundation series, explored many of the ideas listed here. Brooks (2002) did not originate the idea of devices implanted in brains for interpersonal communication. Isaac Asimov is never given enough credit for AI ideas either.


Concerning my future in Cognitive Science, I am addressing Thagard's advice list:

1. I find consciousness very interesting, and I think it plays a more important role in general cognition than we realize. Self-learning machines, I think, will be the most important advancement, as it is hard for me to imagine creating a knowledge base from scratch. Instead, the development of techniques and programs that can learn from the real world (or massive corpuses) may be the first step in the solution.

2. Genetic programming and algorithms coupled with dynamic systems are the most interesting to me. Though they are more experimental, and I think will provide us with less understanding of "how" things happen, they may get results sooner. I have a sufficient understanding of how to do this, but as Thagard pointed out (p. 221), "it is currently limited by the need for humans to provide a criterion of fitness that genetic algorithms serve to maximize."

3. I have tried over the years to touch base with other disciplines, primarily through Scientific American and Discover, but perhaps I need to dive into the ocean of journals to actually get the insights that I need, as well as being exposed to other experiments being performed in our own school. I try to increase my exposure through Cog Sci colloquia and conferences, and find it not only enlightening but also inspirational.

4. I attempt to step off the path of "what everyone else is doing". I assume that if it can be done in those ways, then someone else will do it. Instead, I attempt to learn from those methods and bring in additional influences (even though I do not entirely understand them sometimes).




Notes from "The Organization of Behavior" by D.O. Hebb, 1949.
Cummins and Cummins, Minds, Brains, and Computers: Chapter 19


Growth of the Assembly

- assuming structural changes make lasting memory possible


There may be "trace" activity that allows memor
y to occur, lacking any structural
changes and only a "function of a pattern of neural activity". Memories that are
instantaneously established probably do not have structural growth, though there may be
some structural change.

Reverberatory trace may "
carry the memory until the growth change is made."


Hebb describes what we now understand as Hebbian learning. He states that other forces may be affecting how memory works, though they could coexist with the rule that neurons that fire together strengthen their connection with each other.

He points out something useful to me, for my project, which is that the proximity of dendrites to axons may help establish synapses. In fact, the synapse is usually not at the end; instead it is a thickening of the cell wall near another cell. The size of the synaptic knob may be a factor as well.


The cell assembly is particularly important to Hebb and is at the basis of his explanation. Cell assemblies are activated because neurons in the same area tend to excite other neurons near them, which in turn causes the original neurons to be excited. This reverb is at the core of his theory and tends to indicate how areas of the brain can activate; however, I cannot see exactly how a single area of the brain can then represent more than one concept, idea, or classification. By having an entire area being excited, we would quickly use up the amount of space we have to store memories. I think there must be a different way of looking at a cell assembly where temporal firing plays a role.


Hebb actually addresses my criticism in the next section. I am still hard pressed to wrap my mind around the connection count explosion that happens. Each cell can have up to 100,000 connections (that is from another paper). Even the 1,300 synaptic knobs per neuron that Forbes stated in 1939 still create a cacophony of signals that I cannot wrap my mind around. It is almost as if I am trying to understand the substance of water through the interaction of molecule with molecule, whereas I should really be looking at it from other properties that are more expansive, like color, coldness, visual smoothness, etc. The only real way to understand these properties is to go deeper into the substance, yet we can understand water quite well from the standpoint of what we use and do with it. Perhaps there is an analogy here somewhere to the brain.


While reading Hebb's statistical analysis of what is going on, I find myself understanding a little more about how a "brain state" could occur. Those neurons that are out of sync are not getting the reverb at the right time and therefore stop firing, which would leave only those neurons firing that are getting the proper supporting fire (from the reverb). So although initially a lot of features are being recognized and fire the appropriate cell assemblies, quite quickly only those features that have fired before with others would support the firing of other cell assemblies, thereby allowing those to fire, and so on. This would create a level of activation that may be enough to fire another set of cell assemblies. In essence, the cell assembly is like an "assumption" which then, in the presence of other assumptions, gathers strength. This is like the experiment with the logic statement where there is either a K or a J, OR a K and no J; AND there is a J; so then is there a K there? We reason one way, but the logical certainty is another way. This may indicate that our cell assemblies are firing one way in the presence of the information, though the logic does not support such a conclusion.
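
The logic side of that example can be checked by brute force. A small sketch in Python, under my reading of the premises as (K or J) or (K and not J), plus J:

    from itertools import product

    # Enumerate all truth assignments, keep those satisfying the premises:
    #   P1: (K or J) or (K and not J)
    #   P2: J
    # then ask whether K holds in every surviving model.

    models = [(K, J)
              for K, J in product([False, True], repeat=2)
              if ((K or J) or (K and not J)) and J]

    print(models)                     # [(False, True), (True, True)]
    print(all(K for K, J in models))  # False: K does not logically follow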



Notes from "In Search of the Engram" by K.S. Lashley, 1950.
Cummins and Cummins, Minds, Brains, and Computers: Chapter 20


Henschen (note): speculated that each cell contained a memory or idea. Remnants of that still exist in our thinking.


Lashley used two methods, one behavioral and one destructive (physical), to determine how memories and learning were stored.

Removal of the motor areas of the brain did not prevent activities previously learned from being carried out. So learning and memory for a higher-level task are not stored in the area of the brain responsible for movement and coordination, at least for voluntary movement, though there was some paralysis and other reflexive damage that eventually recovered for a specific task.


There are some very interesting results of experiments in this paper, some of which are interesting to note if one is concerned about specific areas of the brain. What these experiments seem to reinforce in my own mind is the idea that areas of the brain become specialized, which perhaps lends credence to the modularity of the mind. For any specific task there are subtasks, and through the selective mutilations of a number of different animals in a number of different experiments, we can see the breakdown of the task. Whether this is the only way of breaking down the task efficiently, I do not know, but evolutionarily this is how it evolved.


In summary, Lashley describes how he feels that the memory engram is distributed across modules and not isolated in one area. One of the more interesting conclusions was that the associative areas of the brain are not storage for specific memories, but rather assist in the "modes of organization and with general facilitation or maintenance of the level of vigilance." In addition, he concludes that there must be multiple representations in different regions of the cortex. These conclusions were probably very insightful for the time and are still used in thinking about how the brain operates today.

He talks about how constant activity in the brain is probably necessary for recall, and most parts of the brain would need to be used. He speculates that the particular mechanism of learning would still require more than considering synapses for specific associations. So Henschen's idea was not proved by his results; in fact, the results disproved that mode of thinking.



Notes from "A Logical Calculus of the Ideas Immanent in Nervous Activity" by McCulloch and Pitts, 1943.
Cummins and Cummins, Minds, Brains, and Computers: Chapter 21


Describes how logical reasoning can be implemented in an ANN. I have read this before; it proves a sort of equivalence between symbolic logic and neurally implemented logic. I guess the main argument that they are satisfying here is that symbolic logic can be implemented in a connectionist network.
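
A minimal sketch of the core idea (my own, not the paper's original notation): a McCulloch-Pitts unit fires when the weighted sum of its binary inputs reaches a threshold, and with suitable weights and thresholds such units compute the standard logic gates:

    # McCulloch-Pitts style threshold units computing logic gates.
    # My own minimal sketch of the paper's idea, not its original notation.

    def mcp_unit(inputs, weights, threshold):
        """Fire (1) iff the weighted sum of binary inputs meets the threshold."""
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

    def AND(a, b): return mcp_unit([a, b], [1, 1], threshold=2)
    def OR(a, b):  return mcp_unit([a, b], [1, 1], threshold=1)
    def NOT(a):    return mcp_unit([a], [-1], threshold=0)

    # Larger propositional formulas follow by wiring units together.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, AND(a, b), OR(a, b), NOT(a))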






Notes on "Minds, Brain and Programs" by John R. Searle, 1980.

Cummins

and Cummins
Mind, Brains, and Computers

: Chapter
9.



I have read this paper before, but I find myself much more attracted to it now. The conclusions Searle makes are music to my ears, except for the dissonance I hear when he declares that only a machine (of any type) could experience cognition. Now if he is talking of consciousness, I tend to agree, but I desperately disagree with cognition being dependent on implementation.




Notes from "Is Consciousness a Brain Process?" by U.T. Place, 1956.
Cummins and Cummins, Minds, Brains, and Computers: Chapter 22



"phenomenological fallacy" or the mistaken premise that descriptions of h
ow things
appear in our mind are descriptions of the actual state of our internal environment. p. 361.


Place is making an argument that consciousness is a process in the brain and that this does not entail dualism. I was at first very eager to read this, but it soon became evident that his logical argument rested so much on the implications of the word "is" that I had too many flashbacks to Clinton's argument about what the "meaning of the word 'is' is." :)


Seriously though, he makes an excellent point (and perhaps it is the first time it was made) that what we describe when talking about the world is our perception of the world and not the world itself. He further makes a distinction that we first learn about the world and then describe our view of the world in those terms. So we are not describing what we see to be true, but what we feel is similar to what we have previously been exposed to. The effect of describing something as green is the describing of something that gives us the same internal feeling as something that has been defined as green to our brain before this point.


I personally see nothing wrong with this view, though I would hasten to add that I think the implementation of the internal representation of the world needs to be considered. Implementation may mean everything here. For example, the Chinese room argument may produce a black box that is capable of cognition, but not consciousness. In fact, I am so bold as to say it doesn't have consciousness. The Chinese room has neither consciousness in the sense that Place is purporting, nor consciousness in the sense that humans have. It may have a different sort of consciousness, but I would doubt its existence, due mainly to an argument from qualia. I would argue that human consciousness has a type of qualia that is common to humans, maybe not identical, but probably pretty similar, whereas the qualia of ant consciousness would be different even though an ant has similar computational structures inside its head to us humans.


My comments on consciousness have also been provided to Professor Rapaport at a previous time, but I will append my ideas below.




Ideas and Arguments on Consciousness, written up after the Consciousness lecture by Dr. Rapaport.

Notes on Consciousness Lecture
Scott Settembre
October 30, 2007


Concerning Dr. Shapiro's idea of a sensor hooked up to an operating system with 3 states.


The problem with this experiment is that all it may prove is that we can model consciousness. In fact, I would equate this exactly to just using our mind to "think" about a user and a computer going through the steps. Does that mean we are imagining a fully conscious being that lives in that situation in our mind? I would think that what we are doing when we imagine something like that is modeling it, just as if we were using a sensor and a computer.

What exactly is the difference between a model and the actual thing? Maybe insight will be gained if we imagine what exactly would be the difference between me and someone exactly duplicated, molecule for molecule.

The duplicate is a model of me, yet is not ME exactly. I would say it could be conscious, since that would be to say that a complete human was developed, and I do not think anyone would disagree with my duplicate being conscious. But we may readily agree that the duplicate is conscious expressly only because the architecture is the same. Would anyone so readily agree that a model made of me, down to the molecule, but simulated in a giant physics simulator on a computer, would be conscious, even if it could be shown that it was cognizant?

Now if one believes that something that exhibits cognition is conscious, then they are overlooking the "zombie" argument. Although if it can be shown that consciousness is a necessary part of cognition, then we would have to accept this computer-simulated model as conscious. If we cannot show this, then we are merely simulating cognition and not necessarily simulating consciousness.

This statement would then cause the following question to be asked: "Is simulated consciousness equivalent to real consciousness?" Now I would guess that most people, when faced with a simulated cognition, would believe real thinking was going on; however, they would not "necessarily" believe that consciousness was going on. Based on what I questioned previously, they would probably need to be convinced that the model was extremely similar in structure to the original before even questioning this intuition. Even if it may be impossible to prove that something is actually conscious from merely a thought experiment, we can still reason and infer that it may be.

Most of us would readily agree that another person is conscious. Why? Well, we are similar in structure, in action, and seemingly in reason. If we can prove between a model and the original being that action is similar and reason is similar, then maybe we need only be concerned about structure (implementation).

To this idea, I would also like to question the location where consciousness occurs. Now I am not asking where in the mind it occurs; I am merely asking where in the universe it occurs. For example, I feel myself thinking and existing behind my eyes, between my ears, and above my neck. Is this the case? The Romans at one point felt that thinking was located in the heart, probably because emotion seemed to come from that area of the body in a stressful situation, where the chest may tighten and the heart may pound. Were they tricked, and if so, are we being tricked that we think in our brain?

So I would ask again: where is consciousness located? The two areas I would like to focus on are 1. the brain area and 2. anywhere other than the brain. We could be conscious somewhere other than the brain; consciousness may be 3 feet above our head and our brain tricks us to believe and to feel it is where it is. Why? Well, maybe it is advantageous for survival: we don't hit our heads on branches and such. However, maybe a more plausible reason for locating consciousness inside our skull is because that is where it is generated! It may actually be tied to the physical structure of the brain itself.

Of course, a counterargument could be that consciousness is tied to where cognition takes place, and maybe cognition is what is bound to the brain architecture. I would respond to this by saying that that would imply that consciousness is dependent on cognition, and since cognition is bound to location, then by transitivity, so is consciousness. In essence, consciousness would then be a higher-order level of thinking, or at least built from the building blocks of thought (cognition) itself.



So, if consciousness is tied to architecture, then is it dependent on architecture? This is a good question to explore. I would theorize yes. This would, however, mean that if cognition is not dependent on architecture, and consciousness is dependent on cognition, then consciousness may not be dependent on architecture. So why do I theorize yes? I feel that perhaps consciousness is dependent on cognition (and cognition "may" not be reliant on consciousness, so to have cognition does not mean that consciousness is present) but, in addition, consciousness is also dependent on architecture.

Let me propose a specific view of the brain. Perhaps the brain is a complex modeling device, pumping out predictions of what could be, based on what it understands of the world, and selecting the outcomes that are most likely. To get a good prediction, an accurate model may be needed, but to make that model viable, you would need an appropriate architecture to support it.

For example, drawing a physics problem on a piece of paper is indeed a model, but not an interactive model. The drawing is not the world and indeed is not the thing it models. However, the more active the model is, the more useful it would be in determining what will happen when additional information (forces) is added to the representation. Perhaps "interaction" in a model is important to the problem of cognition as well as consciousness. Not all models can be interactive; thus, not all models can produce cognition, and maybe an even smaller subset of models could produce consciousness!

This may be a very important conclusion.



Notes from Nagel, Thomas (1974), "What Is It Like to Be a Bat?", Philosophical Review 83(4) (October): 435-450.

"subjective character of experience"

"an organism has conscious mental states if and only if there is something that it is like to be that organism, something it is like for the organism"


Nagel talks about different forms of consciousness. Even though we may not have a way to conceive of what it is like to be "X", there is an X and there is something that it is to be X. Though he argues appropriately, I think he is really seeing that there is a difference between being conscious and being self-aware. I would think that one could be conscious (i.e. mentally functioning and able to make decisions and comprehend situations), but not be aware that one exists. Sleepwalkers may experience this, as have people who dream but are not aware that they are sleeping. Goldfish in a bowl have behavior and are capable of learning and making decisions, but they are probably not aware that they exist. They do make decisions, it seems, to protect themselves, yet this is not proof that they know that they exist, since this can be an innate reflex.


I liked this quote: "And to deny the reality or logical significance of what we can never describe or understand is the crudest form of cognitive dissonance."


He also makes an argument that there may be things that humans cannot understand: "…can be compelled to recognize the existence of such facts without being able to state or comprehend them." This is an unprovable statement by design, since in order to prove it, we would have to find something we can never understand, but in that case we would have understood it. I would state that any and all information that can be represented in any form can be understood by us, given time. If it can be represented, that implies that there is a way to simplify it for consumption by a computational machine. If it cannot be represented, even in a 1:1 model, then perhaps it doesn't exist.




Notes from McGinn, Colin (1989), "Can We Solve the Mind-Body Problem?", Mind 98(391) (July): 349-366.


McGinn believes that we will never be able to specify the mechanism which creates the phenomenon of consciousness, but he is sure that it is "not inherently miraculous".


Cartesian dualism, or Leibniz's pre-established harmony, invokes supernatural reasons for consciousness, but these explanations "are as extreme as the problem."


He brings up concepts called "cognitive closure" and "perceptual closure" which may be important to look into.


McGinn hits the nail on the head when he says that "Consciousness, in short, must be a natural phenomenon, naturally arising from certain organizations of matter." Note that this is very important to prove for my vision of consciousness, since it acknowledges that a specific "organization of matter" is at the root of the production of consciousness. This implies that even if consciousness is computational (and therefore can be simulated on a computer), it may not be "real", since it is not situated in the specific organization of matter necessary to allow the phenomenon to occur the way we experience it. We can simulate a virtual car on a virtual road on a computer, but that is not a car and that is not a road.


He brings up (p. 353) that we may not be able to know everything. This is the same false argument that Nagel made. I continue to claim that if something is able to be represented, even in a 1:1 "mapping", then it can be understood by humans to some extent. He apparently "gives up" because we haven't found a solution to it yet. His proof is one of lack of patience!


His conclusion is summarized by the quote "our concepts of consciousness just are inherently constrained by our own form of consciousness" on p. 356.


I think my unwillingness to subscribe to his conclusions prevents me from seeing the relevance of his argument. He chastises scientific progress as a whole by succumbing to the view that "it is deplorably anthropocentric to insist that reality be constrained by what the human mind can conceive." I mean, man, give humanity a break and a chance; just because it seems to be a complicated and intractable problem does not give any evidence to the view that a solution is outside our grasp. He chides humanity for its hubris unjustly.


Reading this paper made me recall many of the arguments of the anti-AI camp.






Notes from Brooks, Rodney A. (1991), "Intelligence without Representation", Artificial Intelligence 47: 139-159.


I have read this paper before, but have not commented on it.


My main problem with Brooks's approach is the lack of ability for these systems to evolve to solve less reflexive problems. Mostly, by attacking the problem in terms of current input, his creatures lack the ability to plan at any significant level. Just because the creatures have success does not mean the problem they are solving is difficult or anything other than trivial.


He claims there is no representation, yet there are "layers", 3 layers to be exact. To pass information between layers there must be a representation, some simplification of data between one layer and the next; otherwise he should consider it all one layer. If there is no input simplification, then there is no need for one layer to be distinguished from another, since there is no difference in the data being processed. Even the idea that some data does not make it through a layer is a representation, a simplification of data from a set of data to a smaller set of data, which in itself is a smaller representation of the world input.
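
To make that concrete, here is a toy two-layer controller in the spirit of Brooks's subsumption architecture (my own sketch; the layer names, behaviors, and sensor format are invented for illustration). Note that even here the lower layer hands upward a reduced summary of the raw sonar data, which is exactly the kind of simplification I would call a representation:

    # Toy subsumption-style controller, my own sketch; the layer names and
    # sensor format are invented for illustration.

    def avoid_layer(sonar_readings):
        """Layer 0: reflexive obstacle avoidance from raw sonar distances."""
        if min(sonar_readings) < 0.5:  # an obstacle is too close: take over
            return "turn_away"
        return None                    # defer to higher layers

    def wander_layer():
        """Layer 1: default behavior when nothing suppresses it."""
        return "go_forward"

    def control(sonar_readings):
        # The nearest-obstacle test reduces the full sensor array to a single
        # bit of information passed between layers: a simplification, and so
        # (by the argument above) a representation.
        action = avoid_layer(sonar_readings)
        return action if action is not None else wander_layer()

    print(control([2.0, 1.5, 3.0]))  # go_forward
    print(control([2.0, 0.3, 3.0]))  # turn_away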


I love, however, the approach and the historical evolution of organisms that he brings up on page 2. His bold conclusion feels intuitively correct when he expresses "This suggests that problem solving behavior, language, expert knowledge, and application, and reason, are all pretty simple once the essence of being and reacting are available." I think, though, that he uses the evolutionary timeline liberally and may be incorrect in his assessment that "the ability to move…sensing the surroundings to a degree…to maintenance of life and reproduction…is much harder [task]". Evolution is a tricky thing, in that if there is no niche to fill or no mutation at the right time, such an advancement may not occur. In addition, evolution does not guarantee any sort of optimal solution, only a solution that works. There may have been no need for higher intelligence, or perhaps just a lack of mutation, luck of the draw, may be the cause of the large gap in development.

Complexity allows for simplicity. Once a threshold of complexity is reached, those building blocks help define a layer which can be simpler. Though we may not have a grasp of how a massive number of neurons creates consciousness or cognition, we can understand that it took a simplification of biochemistry and a dynamic cellular system to produce the building blocks of neurons that seem to simplify the problem significantly.


I agree with Brooks that we should develop the tools to handle the more basic problems in order to create the framework to simplify the more difficult problems of cognition.



Notes on Cog Sci colloquium:
Liz Stillwaggon, Philosophy Department, Univ. of South Carolina & Center for Inquiry, Amherst, NY
Are We Practicing What We Preach? Methodological Continuity in Cognitive Science


I enjoyed this talk very much, but found many flaws in both reasoning and conceptualization. I had just written about consciousness two days ago, and this talk helped reinforce my own views by exposing me to these flawed views. Now although I am saying they are flawed, I only mean that they did not convince me of her viewpoint, not that they were actually wrong.


I think much of what she had to say was insightful, but she fell short in explaining why human consciousness is so different. All she could point out was that she had an intuitive feel that there was a difference, but nothing she stated supported that. In fact, I believe she actually made a case against her viewpoint.


What may have helped her is thinking of how the world is modeled and whether the implementation of the modeling hardware makes a difference. I think there is a distinct difference, and that implementation does affect whether something can be conscious or not. (See my Notes on Consciousness Lecture, Oct 30, 2007, above.)




James R. Sawusch, Department of Psychology, University at Buffalo
Signal Variability and Perceptual Constancy in Speech: How Listeners Accommodate Variation in Speaking Rate


Unfortunately, from the standpoint of a connectionist, and especially from the standpoint of someone with experience in ANNs, I did not see why exactly this problem was difficult to understand. The fact that one morpheme of speech is misheard (miscategorized) based on what morphemes surround it is exactly what should happen when dealing with a probabilistic classifier. It only lends credence to the idea that parts of the brain are composed of probabilistic neural networks.
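
A tiny sketch of what I mean (my own illustration; the categories and all probabilities are invented numbers): a Bayesian classifier whose prior shifts with the surrounding context will flip its decision on the very same acoustic evidence:

    # Toy context-sensitive Bayesian classification, my own illustration.
    # The categories and all probabilities are invented numbers.

    def posterior(likelihoods, prior_from_context):
        """Posterior P(category | sound, context) by Bayes' rule (normalized)."""
        scores = {c: likelihoods[c] * prior_from_context[c] for c in likelihoods}
        total = sum(scores.values())
        return {c: s / total for c, s in scores.items()}

    # The same ambiguous acoustic evidence for /b/ versus /p/ ...
    likelihoods = {"b": 0.55, "p": 0.45}

    # ... but the surrounding speech rate shifts the prior.
    after_fast_speech = {"b": 0.3, "p": 0.7}
    after_slow_speech = {"b": 0.7, "p": 0.3}

    print(posterior(likelihoods, after_fast_speech))  # "p" wins
    print(posterior(likelihoods, after_slow_speech))  # "b" wins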


This sort of experiment, however, is exactly why the Cognitive Science approach is important. A linguist familiar with ANNs may not have spent too much time on this issue. In this case, I may be missing something, so perhaps I should spend more time on this issue, as it may help better define how ANNs work!