[pesc98] Peschl, M.F. and C. Stary (1998): The role of cognitive modeling for user interface design representations. An epistemological analysis of knowledge engineering in the context of human-computer interaction. Minds and Machines 8(2), 203–236.
The Role of Cognitive Modeling for User Interface Design Representations: An Epistemological Analysis of Knowledge Engineering in the Context of Human-Computer Interaction

MARKUS F. PESCHL
University of Vienna, Department for Philosophy of Science, Sensengasse 8/10, A-1090 Wien, Austria. E-mail: Franz-Markus.Peschl@univie.ac.at

CHRIS STARY
University of Linz, Department for Business Information Systems, Communications Engineering, Freistaedterstrasse 315, 4040 Linz, Austria. E-mail: stary@ce.uni-linz.ac.at
Abstract. In this paper we review some problems with traditional approaches for acquiring and representing knowledge in the context of developing user interfaces. Methodological implications for knowledge engineering and for human-computer interaction are studied. It turns out that, in order to achieve the goal of developing human-oriented (in contrast to technology-oriented) human-computer interfaces, developers have to develop sound knowledge of the structure and the representational dynamics of the cognitive system which is interacting with the computer.

We show that in a first step it is necessary to study and investigate the different levels and forms of representation that are involved in the interaction processes between computers and human cognitive systems. Only if designers have achieved some understanding of these representational mechanisms can user interfaces enabling individual experiences and skill development be designed. In this paper we review mechanisms and processes for knowledge representation on a conceptual, epistemological, and methodological level, and sketch some ways out of the identified dilemmas for cognitive modeling in the domain of human-computer interaction.

Key words: Cognitive Modeling, Cognitive Systems, Human-Computer Interaction, Knowledge Engineering, Knowledge Representation, Knowledge-based User Interfaces.
1. Introduction

Whenever humans interact with computers, directly or indirectly, information, semantics, data, or knowledge is exchanged, e.g. (Dix et al., 1993, Preece, 1994). More generally, the goal of these interactions is to mutually modulate and influence the respective (knowledge/representation) system so that a certain task can be achieved. Intentions become behaviors and activities on the human side; computation results are prepared as output on the machine side.

Expectations are set up by the human party; more or less predefined code patterns are activated by the computer. What is referred to as human-computer interaction is a multi-level, multi-channel, and multi-modal interaction process between two representation systems.
Consider, for instance, the representational dynamics and levels of system behavior in a tutoring component of a user interface. Figure 1 shows the structure and dynamics of the component according to user (in this case, student) states. The list of 'helpful' activities is restricted to the entries shown in the center of the figure. The same holds for the states that may occur along the interaction with the user. The left-hand side of the figure captures all possible state transitions for providing feedback by the tutoring component. According to the marked entries in the list of 'helpful' activities, the behavior of the tutoring component is adapted (see right-hand side of the figure). In the example, the complete 'encourage' part of the state-transition diagram is disabled according to the entry in the activity table.

Although this example represents the state of the art in the actual capabilities of traditional knowledge-based interaction, it is a very poor representation of the social and cognitive 'world' individual users are embedded in and are part of. It can easily be demonstrated that there exist situations in tutoring where the predetermined behavior of the machine will lead to misinterpretations and misunderstandings. For instance, each task (e.g., the product of two matrices) has a strategy (a procedure to follow) for problem solving and requires the capability to fill in the right values in the right positions, i.e. to execute this procedure. The modeled feedback mechanism only allows judging answers as completely wrong or right. There are no 'in-betweens', such as finding the right procedure but filling in the wrong values. Such 'in-between' solutions are taken up by experienced teachers to guide the student until s/he has achieved a thoroughly understood and right solution. This behavior cannot be modeled by the feedback mechanism shown, although it would correspond to the behavior expected from a tutor or an artificial substitute.
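To make the limitation concrete, the following minimal sketch (our own illustration; the paper specifies no implementation, and all names are hypothetical) models a Yes/No feedback mechanism of the kind shown in Figure 1: an answer produced with the correct procedure but one wrong value is indistinguishable from a blind guess.

```python
from enum import Enum

class State(Enum):
    STUDENT_OK = "student-ok"
    STUDENT_WRONG = "student-wrong"

def judge(answer, solution):
    """Yes/No feedback: an answer is either completely right or completely
    wrong -- the mechanism has no 'in-betweens'."""
    return State.STUDENT_OK if answer == solution else State.STUDENT_WRONG

# A student multiplies two matrices with the right procedure but one slip:
solution = [[19, 22], [43, 50]]
almost   = [[19, 22], [43, 51]]   # correct strategy, one wrong value
guess    = [[0, 0], [0, 0]]       # no strategy at all

print(judge(almost, solution))    # State.STUDENT_WRONG
print(judge(guess, solution))     # State.STUDENT_WRONG -- indistinguishable
```

An experienced teacher would treat the two answers very differently; the modeled mechanism collapses them into a single state.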
The reason for the deficiency described above is that there is neither a representation of, nor an understanding about, the assumptions concerning the users' cognitive capabilities (skills, experiences, etc.), nor are there elements capturing the users' social environment. As a consequence, Yes/No-decisions, as shown in the figure for the students' answers ('student-ok'/'student-wrong'), will not help students to improve in a particular problem-solving activity. A representation of mental states (a cognitive model) known for this task would increase the understanding of the situation described above. However, such a step requires epistemological analysis, as we will see below and as has already been demonstrated for machine learning (Stary et al., 1995).
The request for comprehensive cognitive models has also been brought up in other domains, such as manufacturing. According to (Böhle et al., 1988), human-computer interaction requires more than Yes/No-judgements of results, namely the consideration of the 'empathy and subjective involvement' (Böhle et al., 1988, p. 239) of workers in the course of their task accomplishment:

– Human knowledge and activities are not guided exclusively through rational behavior. However, conventional knowledge acquisition and representation [...]

[...] common sense understanding. The less of the shared tradition they represent, the less likely misunderstandings can be avoided.
As we have seen, various levels of representation are involved and interact with each other in the course of human-computer interaction. As a consequence, for the development of a user interface a variety of aspects has to be taken into consideration, e.g. (Johnson, 1992):

1. The cognitive dimensions, the skills, experiences, and knowledge of users (user groups).
2. The tasks users want to perform, all functions, results, etc.
3. The amount of data to be processed, the frequency of functions' use, etc.
4. Organizational constraints, i.e. type of work, kind of system development, etc.
Developing a human-computer interface always requires finding and constructing a representational and computational structure that fits into the constraints given by the cognitive dynamics of the potential users. As computer systems/programs as well as input/output devices can be developed – within certain technical and economical limits – more or less arbitrarily, the 'cognitive constraints' have to be considered the starting point and the primary constraint of such an undertaking.¹ In other words, in order to develop a human-oriented user interface it is necessary to have at least a rough understanding of the cognitive processes involved in these interactions (Stacy, 1995).
These demands have also been raised in the field of Software Engineering, which requires higher performance of software systems concerning reliability and dependability, e.g. (Downs, 1987), as well as in the field of Knowledge Engineering, e.g. (Duda et al., 1981, Van de Riet, 1987). In order to avoid an epistemologically and empirically unreflected integration of cognitive modeling into user interface development (for further examples beyond the one shown above see, e.g., (Fischer, 1993)), we suggest pursuing the following strategy:

(i) First, empirical knowledge about cognitive systems, their structure, and their dynamics has to be obtained. From these data, theories have to be constructed that describe and explain the observed (cognitive) phenomena occurring when tasks are accomplished;
(ii) in a second step, some kind of (computational) cognitive model has to be developed. (i) and (ii) are usually covered by (cognitive) psychology and especially by cognitive science ((Anderson, 1990, Eckardt, 1993, Osherson et al., 1990, Posner, 1989) and many others).
(iii) Based on these cognitive models, a human-computer interface can be constructed. Ideally, such an interface fits into the cognitive dynamics much as a key fits into a lock. Metaphorically speaking, the goal of human-computer interfaces is to 'unlock', modulate, and trigger the cognitive dynamics of the user(s) in such a way that the intended task can be accomplished with a minimal amount of (cognitive) effort.²
Before this strategy can be pursued, some fundamental issues have to be clarified. The problem of knowledge representation plays a crucial role in this context, since mental models require some mechanism of representation in order to be utilized for user interface development. Consequently, we are going to examine the different levels and forms of knowledge (representation) involved in the interaction between humans and computers. From now on we will term these representations cognitive models, and the activity of mapping the mental models cognitive modeling. It will become apparent in the sections to come that developers can not only gain clarity and learn a lot from these considerations, but are also enabled to explain shortcomings and failures as soon as cognitive modeling becomes part of the user-interface development process.
Work related to our approach is discussed in section 2. Section 3 introduces and analyzes the different systems and levels of representation that are involved in human-computer interaction. The role of the observer as well as the role of the systems being affected through human-computer interaction are investigated on a conceptual and epistemological level. Their mutual relationships are discussed in detail. In section 4 the relationships between cognitive modeling and human-computer interaction are elaborated. Shortcomings of symbolic models for representation systems, in particular for modeling cognitive processes, are identified. Methodological and conceptual proposals to improve the current situation are sketched in section 5. Section 6 concludes the paper by summarizing the achieved results from an epistemological and methodological perspective.
2. Related Work

Our work is closely related to the criticism of the rationalistic tradition stemming from (Winograd et al., 1986). The commonalities of the work are:

– Like (Winograd et al., 1986) we focus on design. Design requires the development of negotiable models before creating technological artifacts, such as user interfaces.
– We also reveal the implicit understanding of existing design techniques, namely in a particular field: cognitive modeling for user interface development.
– We also do not stop once a functional understanding (of cognitive modeling) has been developed. We revisit the genesis of cognitive modeling from an epistemological perspective and reveal the rationalistic tradition behind this development process.
– In order to lay a common ground for empirically sound cognitive models we also argue from the constructivist position, focusing on individual perception and organization of knowledge.
– Mental representations may exist without explicit representations, such as symbols. We also follow a non-structuralistic and non-propositional approach to capturing intelligent behavior, namely parallel distributed processing. The focus of human-computer interaction does not require explicit objects that have to be referred to when an activity occurs.
We do not focus explicitly on the role of language and its interface to thoughts and reality (see, e.g., (Devitt et al., 1987)). We instantiate the epistemological analysis and methodological consequences for cognitive modeling in the context of user interface design. In contrast to (Winograd et al., 1986, p. 131) ('Detailed theories of neurological mechanism will not be the basis for answering the general question about intelligence and understanding...'), we will argue that the findings from neuroscientific investigations help us to improve the understanding of natural and artificial intelligent behavior through cognitive models.
In the field of Human-Computer Interaction, cognitive modeling has rarely been analyzed systematically. Techniques for knowledge representation have rather been used to represent the contents of mental representations (for an example see (Johnson, 1992)). (Caroll et al., 1988) consider mental models to comprise knowledge about how computer systems work. They also identified several perspectives for cognitive modeling:

– the task perspective, concerning the goals and subgoals users may achieve utilizing system features. This perspective is the one that is usually directly represented through conventional techniques from Artificial Intelligence, such as semantic networks;
– the interface perspective, comprising the knowledge a user needs to accomplish a task utilizing the data and interaction styles of a system in a certain sequence;
– the architecture perspective, comprising knowledge about the storage of data, access functions, and internal processes (flow of data and control) of computer systems.
In order for a mental representation to become a cognitive model we have to know
(i) the set of possible inputs, and
(ii) the set of possible outputs, as well as
(iii) the highly complex, non-linear function that relates the inputs to outputs (i.e., the representational structure).
Such representations could then be used during learning, problem solving, and rationalization. This perspective, adopted from natural science, is exactly the one addressed by (Winograd et al., 1986) and is referred to as the rationalistic tradition.
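Read operationally, conditions (i)–(iii) reduce a cognitive model of this kind to a typed input-output mapping. The following minimal sketch (type and function names are ours, purely for illustration) makes that reading explicit:

```python
from typing import Callable

# (i) the set of possible inputs and (ii) the set of possible outputs
INPUTS = {"task-presented", "hint-requested"}
OUTPUTS = {"answer", "question"}

# (iii) the (in general highly complex, non-linear) function relating them:
CognitiveModel = Callable[[str], str]

def model(stimulus: str) -> str:
    # stands in for the representational structure relating inputs to outputs
    assert stimulus in INPUTS
    return "answer" if stimulus == "task-presented" else "question"
```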
Another attempt at an analytical reflection on the meaning of mental representations has been performed by (Moray, 1993). Interpreting mental models as mappings from external systems to human cognition, he identified the following categories of mental models:

– Mental models as imperfect copies of external systems. This type of model is created when humans are not capable of grasping the full complexity of external systems (Bainbridge, 1991).
– Mental models as logical relationships among abstract entities and operations. This type of model is utilized to represent the way in which humans reason about syllogistic problems (Johnson-Laird, 1983).
– Mental models as a description of reasoning. This approach is used to specify problem solving for scientific, causal, or various logical problems (Anderson, 1983, Gentner et al., 1983, Newell, 1989).

Similar to (Caroll et al., 1988), Moray (1993) also intended to provide a more concrete meaning for the term mental model. He considered it insufficient to identify mental models with what users know exactly about software. His work should prevent cognitive models from being treated as merely what users know about certain aspects of computer systems. In order to bridge the gap between the various perspectives of cognitive modeling, and in contrast to (Caroll et al., 1988)³, he proposed a unifying notation, namely lattice theory, in order to model hierarchies as well as causal effects of knowledge entries (nodes) in the lattice.
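As a minimal sketch of the lattice idea (not Moray's actual formalism; entries and features are invented), knowledge entries can be modeled as feature sets ordered by inclusion, with intersection and union playing the roles of meet and join:

```python
# Knowledge entries (nodes) as feature sets; the partial order is inclusion.
save_file = {"task", "file", "write"}
open_file = {"task", "file", "read"}

meet = save_file & open_file   # greatest common abstraction: {'task', 'file'}
join = save_file | open_file   # least common refinement of both entries

print(meet <= save_file)       # True: the meet lies below both nodes
print(join >= open_file)       # True: the join lies above both nodes
```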
Reviewing the related work, we can conclude that although some general shortcomings have been identified through the work of (Winograd et al., 1986) in general and (Caroll et al., 1988) in particular for user interface design representations, there is still no common ground from which new methodologies based on the required empirical evidence could emerge. As a consequence, in the next section we will study the systems involved and lay the ground for a framework that allows a non-rationalistic but comprehensive development of cognitive models for user interface design representations.
3. The Constructivist Neuroscientific Perspective of Human-Computer Interaction

What is the basic situation in which designers or analysts find themselves when developing a user-oriented human-computer interface and/or a model of cognition? Figure 2 gives a first glance of this situation.

Figure 2. A human interacts with a computer – the basic situation.

First of all, they have to clarify the types of systems involved in this interaction:
– the user can be characterized as a cognitive system that tries to solve some problem or to accomplish a task more efficiently by making use of a computer;
– the computer can be characterized as a machine transforming inputs into outputs in a non-linear manner;⁴
– the interaction devices and styles provided by the computer system enable the interaction between the cognitive system and the computer system. There exists a wide range of input/output devices: mouse, keyboard, printers, graphic displays, acoustic input/output devices, data gloves, etc. They may be used exclusively for interaction or may be part of more complex interaction styles or modalities, such as direct manipulation;
– developers should also keep in mind that the user has particular interaction devices as well: namely, his/her sensory and motor systems. They allow external stimuli (such as pixels on a screen) to enter the (neural) representation system, and internal representations to be externalized via behavioral actions. These behavioral actions (might) change the environmental structure (e.g., by moving the mouse, hitting a key on the keyboard, etc.);
– finally, in order to describe or predict behavior, there has to be an observer – in most cases this observer is also the designer of the human-computer interface and/or of the cognitive model. He/she has access to both the internal structures of the computer system and the behavioral structures of the user. As will be discussed, access to the user's internal representational structures is very limited, as the user can only externalize a small fraction of his/her knowledge via behavioral actions, such as language. It is the task of the designer to develop an adequate model of the cognitive processes and of the potential user's internal representations (and their dynamics) by making use of neuroscientific theories and findings from cognitive science.
Investigating the processes that go on in human-computer interaction, we have to be aware that we are not dealing with a one-way interaction, but with systems that try to mutually influence and trigger each other in a more or less beneficial way. As is the case in almost any interaction between a cognitive system and its environment or other cognitive systems, we are dealing with a feedback relationship – the goal of this relationship is to establish a more or less stable feedback loop based on a 'smooth' interaction and on effectively triggering the respective representation/processing systems.
In this type of interaction the user triggers the execution of a certain part of the computer's program, leading to a certain action in the computer system. In most cases the result of this action is somehow externalized and made accessible to the user (e.g., by displaying it on a screen or by acoustic output). In any case, the computer system's output perturbs the user's representational system, which in turn causes some responses by the user. These responses are externalized via his/her motor systems, leading to a perturbation of the computer system, and so on.
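The loop can be caricatured in a few lines: two systems that do nothing but perturb each other through a shared channel. This is an illustrative toy of ours, not a description of any concrete interface:

```python
def computer(perturbation: str) -> str:
    """A predefined code path is activated; its result is externalized."""
    return f"display({perturbation})"

def user(screen_output: str) -> str:
    """The output perturbs the user's representational system, which
    responds via the motor system (keys, mouse movements)."""
    return f"keypress-after({screen_output})"

action = "open-document"        # an intention, externalized as behavior
for _ in range(3):              # a few turns of the feedback loop
    screen = computer(action)   # the computer is perturbed and responds
    action = user(screen)       # the user is perturbed and responds
```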
By now it should have become clear that a lot of interactions are going on between these two systems (i.e., the human and the computer). These interactions involve not only mechanical processes but, more importantly, the transfer of knowledge/representations. Consequently, there have to be devices that act as interfaces between these two systems, as their internal representational structures are not necessarily compatible.
At first glance the computer system and the cognitive system do not seem to be compatible. Hence, the question arises: how can the interaction between the user's and the computer system's representational structures be designed effectively? The answer to this question covers a large part of what the field of human-computer interaction is all about. Let us start with a short look at the elements of interaction:
(a) the user's motor system (hand, voice, etc.),
(b) the user's sensory system (visual system, tactile receptors, acoustic system, etc.),
(c) the computer's input devices (keyboard, mouse, data glove, etc.), and
(d) the computer's output devices (graphical displays, all kinds of virtual reality output devices, etc.).
These systems and devices are responsible for creating some kind of compatibility between the internal representations of the systems participating in the interaction. Their task is to transform the internal representations into structural changes of the environment (e.g., activating a muscle that moves a mouse or activating a pixel on a screen) and vice versa.
The human and the computer are able to interact with each other by mutually changing the environmental structure/dynamics (e.g., key strokes, pointing with the mouse, patterns of pixels on the screen, etc.). As in communication through (natural, spoken, or written) language, interaction becomes possible only through the use of the environment as a carrier for mutual stimulations. Before tackling the problem of human-computer interaction, let us have a closer look (on a conceptual level) at the participating systems that are involved in the process of interaction between cognitive systems and computers.
3.1. The 'User' as Cognitive System
The central part of human-computer interaction is the cognitive system, which is not only interacting with the computer system, but also with the rest of its environment as well as with other cognitive systems. From observing a cognitive system that is acting (successfully) in its environment, one can conclude that this system has to possess some kind of knowledge about its environment. Otherwise, it would not be possible for it to behave adequately in the environment.⁵ Cognitive scientists as well as (cognitive) psychologists assume that cognitive systems represent knowledge about the environment and about how to successfully interact with this environment.
Section 4 discusses the epistemological foundations of the problem of knowledge representation and its implications for the design of cognitive models and human-computer interfaces.

More specifically, cognitive science postulates the existence of a representation system that holds some kind of information about the environment and about how to survive within the constraints of a given internal and external environment. Furthermore, these cognitive disciplines assume that the cognitive system makes use of its representation system in order to generate adequate behavior for survival and/or successful interaction with the environment (e.g., (Anderson, 1990, Boden, 1990, Posner, 1989, Osherson et al., 1990) and many others).
Adequate behavior and survival are used in a broad sense here;namely,to
externalize adequate behavior refers to behavior that ensures survival in a physical
(e.g.,food),social,linguistic,cultural,or even scientic context.The goal of a
cognitive systemcan be characterized as the attempt to establish a stable (feedback)
relationship both inside the organism and with the environment (compare also to
the concept of homeostasis,e.g.,(Maturana et al.,1980,Varela,1991) and many
others).In humans and most other natural cognitive systems the nervous systern is
assumed to be the substratumfor the representation system:The neural architecture
(as well as the body structure (Peschl,1994a,Peschl,1994b)) holds/embodies all
of the particular human's or cognitive system's knowledge.Thus,it is responsible
for his/her/its behavioral dynamics.
A cognitive system can be characterized as consisting of the following subsystems, which heavily interact with each other:
(a) the body structure and the state of activation patterns, which are responsible for the generation of the actual behavioral dynamics;
(b) the structure of the synaptic weights, which are responsible for holding the organism's knowledge and which, when changed, are responsible for the phenomenon of ontogenetic learning and/or adaptation;
(c) the genetic code and dynamics, which underlie all these activities and regulate the phylogenetic development of the organism (as well as of its [phylogenetic] knowledge).
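The three subsystems can be summarized in a single data structure; the following sketch (field and method names are ours, purely to fix the terminology) is not a claim about any concrete model:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CognitiveSystem:
    activations: List[float]  # (a) activation state driving actual behavior
    weights: List[float]      # (b) synaptic weights holding the knowledge
    genome: str               # (c) genetic code regulating phylogenetic development

    def behave(self) -> float:
        # behavior is generated from the current activations and weights
        return sum(a * w for a, w in zip(self.activations, self.weights))

    def learn(self, deltas: List[float]) -> None:
        # ontogenetic learning/adaptation = changing the synaptic weights
        self.weights = [w + d for w, d in zip(self.weights, deltas)]
```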
3.2. The Environment
Any cognitive system is embedded in the environment. Abstractly speaking, the environment can be characterized as a flow of energy consisting of meaningless patterns and regularities. In the perspective presented in this paper, the term 'environment' refers roughly to I. Kant's concept of the 'thing-in-itself'. It is not directly accessible in principle. Despite all efforts of science to find out more about the 'true' or 'objective' nature/structure of the environment, we can perceive only representations/constructs of the environment, namely those representational constructs that are generated by our nervous system in the course of interacting with the environment as well as with its own neural states. Section 4 discusses the epistemological consequences for the concept of representation in cognitive systems. This alternative view of representation is based on the concepts of constructivism (e.g., (Maturana et al., 1980, Glasersfeld, 1984, Glasersfeld, 1995) and many others). It will be shown that it goes much further than Kant's epistemology.
It is only in the process of interaction with a cognitive system that environmental states/patterns receive individual meaning. According to G. Roth, meaning or semantics is the specific influence or effect that environmental states/dynamics have on a specific cognitive/representational system (Roth, 1991, Roth, 1994). Thus, meaning is always system-dependent and system-relative, and it individually depends on the structure and the current state of the particular cognitive system. That is, it always has to be interpreted in relation to the system at hand. The representational structure/state itself is the result of all phylo- and ontogenetic developments of the particular organism (the total of the organism's experiences).
Keeping in mind what has been said about the impossibility of accessing the environment directly, it has to be clear that the same holds for what has been referred to as regularities in the environment. Environmental regularities do not present themselves explicitly as regularities; in other words, it is the organism's task to figure out those regularities that are relevant for its particular survival. This is true not only for simple organisms that use light gradients for locating food more efficiently, but also for scientists who try to find out specific regularities in the environment in order to use them for manipulating the environment more efficiently (e.g., nuclear power). The important thing to keep in mind is that all these regularities are system-dependent and are the result of construction processes that are executed by the particular representation system.⁶ Looking more closely at the structure of the environment, it turns out that one has to differentiate between two forms of regularities with which cognitive systems are confronted:
(i) 'Natural regularities': this category includes all regularities that occur naturally in the dynamics of the environment. The facts that a stone will always fall down or that lightning is followed by thunder are examples of such natural regularities.
(ii) 'Artificial regularities': this category of regularities can be referred to as artifacts in the broadest sense. They are characterized by the fact that they are the result of externalizations (behaviors) of an organism's knowledge. In other words, artifacts are artificial changes or alterations in the structure of the environment. The notion of artifact as it is used in this paper is rather wide and ranges from simple forms of tools, shelters, houses, etc. (of simple animals as well as of humans), via symbolic artifacts (e.g., language, symbols, etc.), to the most advanced technological achievements or scientific theories. Everything produced by a single organism or a group of cognitive systems is included in the domain of artifacts. Although artifacts follow the same dy- [...]
[...] lost in this process. In order to cut short the – sometimes painful – process of having to make 'direct experiences' in the environment, a kind of symbol system is introduced that describes these experiences in an abstract way (Hutchins et al., 1992). If another organism is capable of decoding these messages, it may 'learn' from these symbolic artifacts instead of having to experience directly the environmental consequences of its behavioral actions. The important property of these artifacts is their referential function – they are symbols in the most general sense, meaning that they are environmental regularities referring to something else. This subgroup of artificial regularities includes all kinds of language (written, spoken, sign language, body language, etc.), symbols, books, paintings, TV, CDs, scientific theories, architecture, etc. Almost any artifact can be interpreted as having some kind of referential function: it refers at least to what the creator of this artifact intended to express/externalize. With symbols (Eco, 1976, Eco, 1984) this referential function becomes transparent: the environmental pattern of a symbol s stands for another state/pattern e in the environment or in an organism. In other words, the symbol s represents e.
This kind of artifact can be understood as the substratum of what is referred to as 'cultural knowledge' in the broadest sense. It provides the basis for any cultural process and development. We have to keep in mind, however, that even these artifacts are completely meaningless per se! The same statement holds for them as for any other environmental regularity or pattern: they receive their particular meaning only in the process of being interpreted by a cognitive system. Their meaning is by no means clearly defined; rather, it always depends on the structure, state, and phylo- and ontogenetic experiences of the perceiving cognitive system.

In other words, their meaning/semantics is always system-dependent. For a human reader a book will have a different meaning than for an insect that is interested in eating paper. But even among humans a certain piece of text or spoken language, architecture, etc. will have different meanings – what meaning is attributed to an artifact always depends on the previous (learning) experiences and on the current (representational) state of the participating cognitive systems. This problem of private semantics, communication, and its implications for human-computer interaction will be taken up again in section 4.1 and the following sections.
3.4. The Computer System
A special subgroup of symbolic artifacts comprises computer systems. They are explicitly designed to transform information and knowledge. The designer's task is/was to build a mechanism (artifact) that supports humans in accomplishing a certain task at a higher speed/efficiency and/or with higher accuracy. Normally a human would use his/her own representational system (and body) in order to accomplish a certain task. The idea of computer-supported task accomplishment is to transfer parts of representational structures (knowledge to solve a certain [...]
[...] The question is: how does the computer system and its representational structure obtain the knowledge that allows them to solve a problem or to achieve a certain task? There are at least two answers to this question – they do not necessarily exclude each other:
(i) The knowledge is transferred from the human (expert) to the computer system. In other words, some kind of mapping between the human's representation system and the computer system's representation mechanisms (e.g., data structures, programs) is established. That is the usual procedure on which most current knowledge engineering techniques are based. Figure 4 sketches this mapping process. The human has to make his/her experiences in the real world (environment). In doing so, he/she constructs knowledge and theories about the environment. This knowledge is externalized by using a language. These linguistic expressions are formalized (e.g., by a knowledge engineer or a programmer) and transformed into an algorithm, a computer program, rules, and/or data structures. Hence, the computer system makes use of already predefined or 'pre-represented' representations. What makes this approach interesting⁹ is that the computer system (program) can handle huge amounts of data that a human normally cannot survey. It allows one to manipulate data at extremely high speed and thereby to make implicit structures explicit (the solution of differential equations, the application of rules to a set of input data, searching large bodies of data by certain search criteria, etc.). From an abstract perspective, an expert system using a rule-based knowledge representation mechanism is pretty uninteresting. The space of possibilities/solutions is already predetermined by the set of rules and algorithms as well as by the possible/acceptable input data. What makes these systems interesting is the fact that this space is extremely large, and that it is – for humans – almost impossible to foresee, and thus specify explicitly, all possible constellations of parameters describing problems, and the results in a given problem domain.

The computer system's ability to 'stupidly' follow the rules and apply them to data at high speed makes these structures, which are implicitly given in the set of rules, explicit and thereby generates results (particular states in the knowledge space) that are (or might be) interesting and/or helpful for humans. They are interesting because the user could not have reached this solution by applying his/her knowledge exclusively. Of course, he/she could have done exactly the same as the computer system (namely following a huge set of rules), but this approach would have been too time-consuming and thus not worth pursuing.
(ii) An alternative approach to acquiring knowledge is that the computer system itself learns from its experiences with the environment in a trial-and-error process. This is the strategy followed by most approaches in the domains of artificial neural networks and computational neuroscience (e.g., (Rumelhart et al., 1986, McClelland et al., 1986, Hertz et al., 1991, Churchland et al., 1992) and many others), and of genetic algorithms (e.g., (Holland, 1975, Goldberg, 1989, Mitchell et al., 1994) and many others). The basic idea can be summarized as follows: at the beginning of the learning procedure the computer system has (almost) no useful knowledge (to fulfill the desired task), i.e., its behavior follows random patterns. Learning algorithms or genetic operators adapt its representational structure (i.e., synaptic weights, genetic code, etc.) in a trial-and-error process until some useful or desired behavior (judged by humans) is achieved through the representational structure.

This strategy is similar to the processes that occur whenever a human or any natural system has to learn something. He/she/it adapts to certain environmental regularities that are useful for the organism's survival in order to make use of them in a beneficial way. In both cases the result is a representational structure (in the brain or in the computer system) that is said to be capable of dealing successfully with certain aspects of the environmental dynamics in the context of accomplishing a certain task, such as the organism's survival or solving a problem. The difference from point (i) is that no prefabricated chunks of knowledge are mapped/transferred to the representation system; the cognitive/computer system rather has to figure out a way to solve a certain problem by adapting its knowledge structures in a trial-and-error process. Both answers are contrasted in the sketch below.
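In (i) the engineer hands the system a prefabricated rule; in (ii) the system starts from a random parameter and varies it by trial and error until its behavior is judged useful. Task, rule, and judgement in the following sketch are invented purely for illustration:

```python
import random

# (i) knowledge transfer: a prefabricated, formalized rule
def rule_based_double(x: float) -> float:
    return 2 * x                  # the 'expert knowledge', stated explicitly

# (ii) trial and error: random variation, kept only if judged an improvement
weight = random.uniform(-1.0, 1.0)
for _ in range(10_000):
    candidate = weight + random.gauss(0.0, 0.1)        # random perturbation
    if abs(candidate * 3 - 6) < abs(weight * 3 - 6):   # judged on the task 2*3=6
        weight = candidate                             # keep the improvement

print(rule_based_double(3.0))     # 6.0 by construction
print(weight * 3)                 # close to 6.0 by adaptation
```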
In any case, the implicit assumption about representation runs as follows: the resulting knowledge structure in the computer has some kind of similarity or even an iso/homomorphic relationship to the environmental structure. Looking more closely at this postulate, it turns out that it implies some kind of homomorphic relationship between the structure of the environment, of the representation in the (human) cognitive system, and of the representation in the computer system. It is argued that this relationship of (structural) similarity is what enables humans to solve the problem of survival in their environment. Furthermore, if humans are able to solve problems with this kind of 'structure-preserving' representation, computer systems can do similar things by applying the same representational mechanism.

However, most models in traditional cognitive science (e.g., symbol manipulation, propositions, etc.) as well as in GOFAI¹⁰ have not been as successful as originally promised. The success of AI has been limited to rather specialized domains that can easily be described by formal specification techniques, such as First Order Logic (FOL). In the remaining part of this paper, the reasons why AI models have not been so successful will be discussed. Furthermore, the implications of these problems for human-computer interfaces will be investigated.
4. User Interfaces and Representation in Cognitive Models

Figure 5. Semantic shifts in the process of transferring knowledge from a human to a computer system and vice versa.

In the beginning, user interfaces were mostly designed command-oriented because of the machine's limited capability to handle a variety of media, channels, and modalities at a time. Today, the representation and processing of different knowledge sources allow the development of adaptable and flexible interaction modalities. At the same time, the goals the designer has to achieve have become more and more complex, if not contradictory: for instance, computer systems not only have to be easy for most users to handle, but also have to be capable of supporting trained users in exploiting the artifact for more complex purposes. Moreover, problems are often 'fuzzy' or 'wicked' in the sense that they cannot be described precisely in advance. In such cases, solutions can only be found by moving from partial problem-solving results, via learning from experience, to a more complete understanding.
Cognitive models are expected to provide the ability to predict how systems will support problem solving under proposed system designs. Furthermore, cognitive models are supposed to be well suited for the clarification of design proposals and first design ideas from users or developers. Therefore, it is important to take a closer look at the fundamental concepts on which cognitive models and, thus, cognitively adequate user interfaces are based.
4.1. Ambiguities in the Process of Interpretation and of Transferring Knowledge
It has become evident in the fields of knowledge engineering (e.g., (Van de Riet, 1987)), programming (e.g., (Downs, 1987)), and logic (Peschl, 1990) that in the process of extracting knowledge from an expert and transferring it into a computer system, a lot of information gets lost for various reasons (certain parts of the knowledge cannot be verbalized, cannot be formalized, etc.). However, what seems to be more important, and what seems to be neglected in many cases, is not only the fact that information is lost in this process, but also that the semantics is altered in many cases (see also Figure 5). In fact, it seems that the so-called loss of information is only an extreme case of a change in semantics. This observation affects not only symbolic knowledge representation, but also pictorial representation (e.g., visual ambiguities). It is a problem not only for expert systems, but also, and perhaps in particular, for human-computer interfaces, as most of these semantic shifts occur at the critical step when one kind of knowledge representation is transformed into another. What are the reasons for this phenomenon of semantic shifts?
(a) Natural language is one of our main instruments for externalizing our (internal) knowledge. As shown by (Polanyi, 1966) and by many others, e.g. (Berry, 1987), and as many of us have experienced, any kind of language is capable of externalizing only a small fraction of the semantics that one has in mind when trying to externalize a particular chunk of knowledge by making use of language. Hence, the 'tacit' or 'implicit' knowledge is not only lost in the moment of externalization, but some kind of semantic distortion occurs as well. Due to differences in onto- and phylogenetic experiences, the receiver of the externalized language will interpret these meaningless syntactic environmental patterns (see section 3.3) in a different way than the sender of the message.¹¹
(b) The occurrence of semantic shifts cannot be avoided in principle. They occur whenever someone externalizes (symbolic) behavior and someone else tries to interpret these – per se meaningless – artifacts. This fact implies that the semantics for different users and/or designers and/or experts might differ considerably. Although they are confronted with the same symbol, icon, graphical representation, etc., these artifacts might trigger different internal representations and semantics in the participating cognitive systems.
(c) This distortion is taken even one step further in the process of formalizing natural language into purely syntactic and formal structures. Despite all attempts to introduce semantic features into symbol systems, natural language is deprived of its final semantic features and dimension in the process of formalization. Symbolic representations (as well as pictorial representations) remain syntactic in principle. Losing the semantic dimension implies, however, more freedom in the process of interpreting these syntactic/formal structures, which, in turn, may lead to unintended semantic shifts.
(d) In most artificial representation systems a lack of symbol grounding can be found. Semantics is assumed to be somehow externally defined or given. Furthermore, it is assumed that the semantics is more or less stable over a period of time. Epistemological considerations as well as our own experiences reveal, however, that (i) semantics changes individually in minimal increments (according to the experiences that a person makes with the use of certain symbols), and (ii) there is no such thing as 'the one given' semantics; public as well as private semantics are in a steady flow. As we have seen, the semantics of symbols is always system-dependent, and communication is based on mutually adapting the individual use of symbols (compare also the concept of a consensual domain as a basis for a public semantics (Becker, 1991, Glasersfeld, 1983, Glasersfeld, 1984, Maturana, 1978, Maturana et al., 1980)). Consequently, the idea of (a) a somehow externally (or naturally) given semantics as well as (b) of holding the semantics stable is absurd anyway – knowledge representation techniques should rather provide means to deal with the phenomenon of an experience-based, individual, and dynamic semantics. (A toy illustration of such system-dependent semantics follows this list.)
As has been mentioned already, a major distortion of semantics occurs in the process of transforming one form of representation into another, namely, in the process in which an internal representation is externalized, received by another system, and transformed into its internal representational format. This process happens in any human-computer interaction. The problem here is that – contrary to human-human interaction – it is almost impossible for both parties to ask whether the respective system really understood what the other was trying to express. This fact is due to the (false) implicit assumption that our language and even our pictorial/iconic representations are based on a stable and somehow given semantics. Hence, misunderstandings cannot be clarified in the same way as in human-human interaction.¹²
4.2. Mapping the Environment to a Representational Substratum
Both in propositional and in pictorial representation, the underlying idea of representation can be characterized as follows: the environment is mapped more or less passively to the representational substratum. Although most approaches in this field distance themselves from the idea of a naive mapping (i.e., naive realism), an unambiguous, stable referential/representational relationship between the structure of the environment and the structure of the representational space is postulated. In other words, a symbol or a (mental) image refers to, represents, or stands for a certain phenomenon, state, or aspect of the (internal or external) environment.

Empirical research in neuroscience gives evidence that no such stable and unambiguous referential relationship between repraesentans and repraesentandum can be found¹³ (Kandel et al., 1991, Churchland et al., 1992, Shepherd, 1990).
It seems that neural systems do not follow this assumption of a referential representational relationship. As discussed in (Peschl, 1994), there are not only empirical, but also epistemological and system-theoretical reasons why the concept of referential representation does not apply to neural systems. It can be shown that in highly recurrent neural architectures (such as our brain) neither patterns of activation, nor synaptic/weight configurations, nor trajectories in the activation space refer to environmental events/states in a stable (referential) manner. This fact is due to the influence of the internal state¹⁴ on the whole representational dynamics (as well as on the input) of the neural system. As an implication, it becomes necessary to rethink the representational relationship between the environment and the representation system (see section 4.3). This process of questioning the traditional concept of representation is important not only for the development of adequate models of cognition, but also for designing human-computer interfaces, as their design is based on assumptions stemming from a referential understanding of representation, such as icons, symbols, images of desktops, etc.
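The point about recurrent architectures can be demonstrated with a one-unit toy network (weights chosen arbitrarily by us): the same input arriving at different internal states produces different activations, so the activation pattern cannot stably 'stand for' the input.

```python
import math

def step(h: float, x: float) -> float:
    # one recurrent unit: the new state depends on the input AND the current state
    return math.tanh(0.9 * h + 0.5 * x)

h = 0.0
for x in [1.0, 1.0, 1.0]:   # three identical inputs...
    h = step(h, x)
    print(round(h, 3))      # ...yield three different activations (0.462, 0.724, 0.818)
```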
4.3. Is It Sufficient to Depict the Environment?
Two different aspects of representation have to be taken into account when one studies the problem of representation:
(i) mapping or modeling the environment to/in the representational structure; i.e., the goal is an adequate and accurate model, picture, description, etc. of the environment;
(ii) generating (adequate) behavior: an equally important task of a representation system is to generate behavior that allows the system to accomplish a certain task (e.g., survival or solving a problem).

Both in the propositional and in the pictorial approach, the aspect of mapping the environment to the representational substratum is more important than the aspect of generating behavior. The implicit assumption of these approaches is as follows: if the environment is represented/depicted as accurately as possible, then it will be extremely easy to generate behavior that adequately fits into the environment (i.e., that fulfills a desired task). As our language and/or images seem to represent our environment successfully,¹⁵ it follows that accurate predictions can be made by making use of these representations. Thus, the environmental dynamics can be manipulated, predicted, and/or anticipated efficiently with this kind of representation. In other words, if the criterion of accurately mapping the environment to the representational substratum is fulfilled, we do not have to care about the aspect of generating adequate behavior.
From an epistemological and constructivist (Glasersfeld, 1984, Glasersfeld, 1995, Maturana et al., 1980, Varela, 1991, Roth, 1994, Watzlawick, 1984) perspective, the claim for an 'accurate mapping' is absurd. As we have shown in section 1, no one will ever have direct access to the structure of the environment. Hence, it is impossible to determine how 'accurate', 'true', or 'close' the representation of the environment (either in our brains or in a scientific theory) is compared to the 'real' environment. The only level of accuracy that can be determined is the difference between our own (cognitive) representation of the environment and the (computational) representation that has been constructed by ourselves. In many cases it has turned out, however, that the human representation of the environment is not the best solution to a given problem – consequently, it is questionable to elevate the human way of representing the environment above other forms of representation and to use it as a standard against which other forms of representation have to compete. It is by no means clear why our (cognitive or even scientific) representation of the world should be more accurate or more adequate than any other form of representation.
As has been discussed in the previous subsection, there is no empirical evidence for explicit propositional or picture-like representations in the brain. This fact also implies that neural systems do not generate adequate behavior by making use of referential representations. It rather turns out that any natural nervous system is the result of a long phylo- and ontogenetic process of adaptation and development. The goal of this process is not to create an accurate model or representation of the environment, but rather to develop those physical (body and representational/neural) structures that embody a (recurrent) transformation capable of generating functionally fitting (successful) behavior. In natural (cognitive) systems, it seems that the aspect of generating behavior is more important than the aspect of developing an accurate model or internal picture of the environment. What we can learn from these systems and their adaptational strategies is that it is not necessary for a system to possess an accurate mapping/representation of the environment in order to generate successful behavior. As 'accurate representation of the environment' means accurate compared to our representation of the environment, it does not necessarily follow that an 'inaccurate' representation is incapable of producing more efficient behavior.
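A classic illustration of functionally fitting behavior without depiction is a Braitenberg-style vehicle: it reliably reaches a light source without representing light, space, or itself. A one-dimensional toy version of our own (all constants invented):

```python
def brightness(p: float, light_pos: float) -> float:
    # sensed intensity: larger when the vehicle is closer to the light
    return -abs(light_pos - p)

position, light_pos = 0.0, 10.0
while abs(light_pos - position) > 0.25:
    left = brightness(position - 0.1, light_pos)    # two simple sensors,
    right = brightness(position + 0.1, light_pos)   # no map, no world model
    position += 0.5 if right > left else -0.5       # move toward the brighter side

print(position)   # ends up near 10.0 without ever 'depicting' the world
```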
5. Methodological Consequences for the Construction of Cognitive Models and Human-Computer Interfaces

Although most models in cognitive science as well as in human-computer interface development are mainly concerned with technical questions, the following paragraphs will demonstrate that epistemological and methodological considerations in the field of cognitive representation have crucial implications for the structure and success/failure of a cognitive model or HCI. The most important problem concerns the question of how we see and experience the environment/world. Whenever one speaks of 'the world', we have to be aware that – at least since I. Kant – direct access to it is impossible in principle. As has been discussed, our access to the environment always occurs indirectly; it is mediated by our sensory systems and by the nervous system.

Thus, when we talk about 'the world', we actually speak of our representation of the world. It is the result of a complex process of construction which is embodied in our neural structure. Looking more closely, one realizes that this view has to be taken even one step further: when we speak about the world we are not directly externalizing our neural representation of the world; rather, we make use of another representational medium, namely language, pictures, icons, etc. Hence, what we are dealing with is a second-order representation, namely, the representation of the representation of the world. Of course, language is also represented in neural structures – it is, however, a second-order representation, because it is a representation of what has been triggered by the (first-order) neural representation of the environment. It is used for describing these representations.
5.1. Interaction via Second-Order Representations
As our access to the environment is always mediated by the sensory systems and by the structure of the nervous system, this access is highly theory-laden (in the sense of (Feyerabend, 1975, Feyerabend, 1981, Feyerabend, 1981a, Churchland, 1991)). In other words, any natural sensory system, body system, or nervous system can be interpreted as some kind of 'theory' about the environment – all these systems have developed in a complex phylogenetic/evolutionary and ontogenetic process of adaptation and learning. Only those organisms have survived (and were capable of reproducing) whose neural/body structures embody a functionally fitting (viable) knowledge about the environment. Consider, for instance, our visual system: the rods and cones in our retina are sensitive to a very small fraction of the whole range of electromagnetic waves (Sterling, 1990, Tessier-Lavigne, 1991). It has turned out in the course of evolutionary development that this range of electromagnetic waves holds sufficient (contrast) information for maintaining the survival of the human body. Bees, on the other hand, are highly sensitive in the UV range (where humans are insensitive) – for them it has turned out that this range is important for spotting blossoms.¹⁶
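The retina/bee example can be rendered as two filters over one and the same physical input: each sensory system admits only its own band of wavelengths, so each organism constructs a different 'world' from identical stimulation. The band limits below are rough textbook values; the rest is our illustration.

```python
spectrum_nm = [120, 350, 380, 550, 700, 1000]   # one shared physical environment

human_world = [w for w in spectrum_nm if 380 <= w <= 750]   # visible light only
bee_world   = [w for w in spectrum_nm if 300 <= w <= 650]   # UV included, red excluded

print(human_world)   # [380, 550, 700]
print(bee_world)     # [350, 380, 550] -- same environment, different 'theory' of it
```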
From these simple examples one can see that this neurally and structurally embodied theory about the environment does not depict the environment in the sense that certain body parts or neural entities refer to environmental structures; rather, it represents a strategy for surviving in a specific environment with a specific body structure. Both in the phylo- and in the ontogenetic case, the environmental structure does not determine the representational structure, but at best triggers and constrains the development and the function of the neural and body (representation) system. The representation of the environment is constructed by the dynamics embodied in the nervous system. From these considerations it follows that the representation of the world is always system-dependent and system-relative in the sense that it represents a 'correct' theory of the world for a specific organism with its own specific onto- and phylogenetic history in the context of the necessity to accomplish a certain task.
This implies that, whenever we are speaking about the environment, we always speak about the representation of the environment in a specific brain/body (by making use of a specific form of (second-order) externalization mechanism (e.g., language, pictures, etc.)). Thus, we are always dealing with one possible interpretation/construction of the environmental structure, which is the result of a specific neural system. These interpretations might differ even within a single species over time. We cannot claim that a certain representation/interpretation/theory (even a scientific theory) is 'objective', 'true', or 'ultimate'. It is only 'true' insofar as it contributes to the survival and the reproduction of the particular organism (i.e., insofar as it is capable of generating functionally fitting behavior). What might represent a 'true' theory/representation for one organism might be 'wrong' for another. This view applies not only to simple organisms from different species; the history of science, for instance, is full of such examples (Kuhn, 1970).
A methodological issue that should be of great interest to the design of cognitive models and human-computer interfaces is the fact that most models are based on second-order representations, i.e., the internal representational structure of the model/interface is based on linguistic or pictorial externalizations of humans. It is postulated that these externalizations represent some aspect(s) of the world. From
the previous paragraphs it follows, however, that these externalized representations represent, if at all, only a fraction of the externalizing organism's representation of the world. Whichever artifact we are encountering, it is the externalization of an organism's internal (neural) representation (see also section 3.2). Thus we are confronted with the result of a long and complex chain of neural processes and transformations.
The problem that arises for the design of cognitive models and human-computer interfaces can be characterized as follows: most of these systems are based on propositional or pictorial representations. An example is given in section 1. In Figure 1 the propositions for machine behavior are listed for a tutoring component. Although it is postulated that these forms of representation are internal representations, they are second-order observer categories: an observer (in the case of system development, the analyst and/or designer) observes the externalized linguistic, pictorial, logical, problem-solving, etc. behavior of a (human) cognitive system and tries to find regularities and/or patterns in these behavioral actions. By making use of these patterns and of his/her own representational experiences (of the world, of problem solving, etc.) s/he projects these second-order observations/phenomena into the observed organism and postulates that they correspond to the organism's internal representation system (without ever having opened and examined the internal structure of this system). In other words, an internal mechanism for generating behavior is postulated without ever having a look at the actual internal mechanism. This is exactly the (methodological) situation in the domain of pictorial and propositional representations.
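To make this situation concrete, consider the following toy sketch (ours, with hypothetical state and activity names; it is not taken from Figure 1 or from any actual tutoring system). It shows how such `helpful' activities are typically encoded as designer-written propositions: the rules are second-order descriptions of observed behavior, yet they are installed in the artifact as if they were the user's internal representation.

    # Hypothetical sketch of a propositionally encoded tutoring component.
    # The (state, event) -> activity rules are observer categories: the
    # designer's description of user behavior, projected into the artifact.
    TRANSITIONS = {
        ("confused", "wrong_answer"):    "give_hint",
        ("confused", "no_input"):        "encourage",
        ("confident", "wrong_answer"):   "show_counterexample",
        ("confident", "correct_answer"): "advance_topic",
    }

    def tutor_response(user_state, user_event):
        """Look up the designer-postulated rule for the observed behavior."""
        return TRANSITIONS.get((user_state, user_event), "wait")

    print(tutor_response("confused", "wrong_answer"))  # -> give_hint

Nothing in such a table refers to the mechanisms actually generating the user's behavior; it only re-describes the behavioral surface.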
Another problem with propositional or pictorial representations is that they are projected into a cognitive model and/or a human-computer interface. In contrast to natural systems, which actively acquire/construct knowledge in a continuous process of interaction, adaptation, and learning, knowledge is mapped to these artificial systems (see section 3.4): the designer projects his/her pre-represented and pre-processed representations, which are themselves the result of his/her own neural construction processes, to the system, where they are used as internal representational structures. In these artificial systems they do not only serve as explanatory vehicles, but also as the mechanisms responsible for generating so-called cognitive phenomena. In other words, the results of (natural/neural/cognitive) phenomena (e.g., propositional or pictorial representations) are used for generating cognitive phenomena. In this sense we are dealing with a highly superficial and self-referential view of representation and cognitive dynamics. As a matter of fact, externalized cognitive behavioral patterns are used as internal mechanisms for generating exactly these (external) patterns. Instead of projecting these externalized representations to cognitive models and declaring them as internal representations, we should rather look at the processes and dynamics of the brain relevant for human-computer interaction (Cherniak, 1988). Only if we learn more about its internal structures, dynamics, and representational categories will we be able to create more human-oriented cognitive models and human-computer interfaces.
5.2. SYMBOL CRUNCHING
Contrary to the claim above, most models of cognition are based on the (implicit) assumption that humans are perfect logicians who are able to determine in a finite time span whether a formula can be derived in first-order logic (FOL) from a given set of premises (e.g., (Lenat, 1988)). Decades ago, however, Church proved a theorem stating that there is no algorithm within the predicate calculus that can achieve this. These limitations concerning finite articulation and finite run-time behavior do not correspond to human cognition and behavioral dynamics, and consequently not to conceptualized cognitive models either. Due to this lack of correspondence, the accuracy of user interfaces can hardly be verified.
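The undecidability point can be made concrete with a minimal sketch (ours, not tied to any particular prover): proof search in FOL is only semi-decidable, so a procedure of the following kind halts on derivable goals but may run forever on underivable ones. The toy theory below consists of one fact P(a) and one rule P(x) -> P(f(x)).

    # Minimal sketch of semi-decidability: forward chaining over a toy theory.
    # Derivable goals are found in finite time; an underivable goal such as
    # "Q(a)" keeps the loop generating ever-deeper terms without terminating.
    def derivable(goal, max_steps=None):
        facts = {"P(a)"}
        frontier = ["P(a)"]
        steps = 0
        while frontier:
            fact = frontier.pop(0)
            if fact == goal:
                return True
            # Apply the rule P(x) -> P(f(x)) to generate a new, deeper fact.
            new_fact = "P(f(" + fact[2:-1] + "))"
            if new_fact not in facts:
                facts.add(new_fact)
                frontier.append(new_fact)
            steps += 1
            if max_steps is not None and steps >= max_steps:
                break  # artificial cut-off; a real prover has no such bound
        return False

    print(derivable("P(f(f(a)))"))            # True, found after a few steps
    print(derivable("Q(a)", max_steps=1000))  # never derivable; loop cut off

A model that presupposes users answering such derivability questions reliably and in bounded time therefore presupposes something no algorithm, and no human, can deliver in general.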
Rather, it turns out that the underlying neural structures of natural cognitive systems follow a radically different dynamics. Cognitive systems have to be understood as dynamic systems (e.g., Gelder et al., 1995; Port et al., 1995) rather than as logical theorem provers. As we have seen above, logic is only a second- or third-order (representational) phenomenon that emerges at the level of so-called higher cognitive processes. In any case, it is based on fundamental neural processes that do not follow the structure and dynamics of FOL. Traditional AI techniques, such as symbol processing or FOL, make the categorical error of postulating that the emergent level of logic represents an adequate description, explanation, and model for the underlying internal neural/cognitive processes. Empirical research in cognitive neuroscience (Kandel et al., 1991) gives evidence that no such logical structures can be found on any neural level.
This fact concerns the syntactical as well as the semantical layer of representation. Even if we allow some semantics and pragmatics to be part of cognitive models, as proposed in (Newell, 1982) and (Newell, 1989), the resulting software for problem solving does not lower the effort of checking its validity and thus its reliability.
Secondly, following the tradition of other subdisciplines in computer science, the human-computer interaction community, like the AI community, considers humans to be information-processing systems, regardless of the problem at hand or the intended modeling purpose, i.e., behavior modeling or cognitive simulation.^17 In addition, it is assumed that the exchanged information can be adequately represented by systems exchanging symbols, i.e., including semantic and pragmatic aspects.
...Thus it is a hypothesis that these symbols are in fact the same symbols that we humans have and use every day of our lives. Stated another way, the hypothesis is that humans are instances of physical symbol systems, ... (Newell, 1980, p. 116)
Newell's `Physical Symbol System Hypothesis' (PSSH) has proven useful for behavior-oriented models in commercial AI applications that can only be applied in very limited problem domains, e.g., (Ernst et al., 1986). As long as the application of symbols remains restricted to the level of syntactic processes, the PSSH may be justified. Whenever it is applied to more complex domains, such as human-
computer interaction, it has to be reflected upon epistemologically (Peschl et al., 1990).
In both the pictorial and the propositional approaches to representation a similar concept of processing is applied: an algorithm manipulates/operates on the representational structure (i.e., on the symbols or mental images). There is a clear distinction between the processing part and the representations on which these processes operate (i.e., a processor-memory distinction). The processing part seems to be actively involved in the dynamics of the system, as it operates on the representations. The representations, on the other hand, seem to play a rather passive role for two reasons: (a) as mentioned above, they are the result of having been projected from the human representation system to the artificial representation system (i.e., they are passive in the sense of being preprocessed and passively mapped); (b) an algorithm executes operations on these representations, which are assumed to stay in a stable relationship to the environment (i.e., they remain rather passive as they are manipulated by an algorithm, similarly as we manipulate the (passive) matter of the environment).
This concept of distinguishing between processing and memory has its roots in the structure of the Turing machine that inspired the whole computer metaphor for cognitive processes. In neural systems, however, no such distinction can be found. Usually, the synaptic connections/weights are considered to hold the knowledge of the neural system. Patterns of neural activations are assumed to be responsible for the representation of the current representational state. It is not clear which part of the system takes over the role of the processor. Furthermore, the synaptic weights (the neural system's `knowledge') turn out to be not passive at all: they are responsible for controlling the flow/spreading of the patterns of activations. It can be concluded that it is the interaction between the patterns of activations and the configuration of the synaptic weights that is responsible both for the representation of the knowledge and for generating the system's behavioral dynamics.
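A minimal sketch (ours, not from the cited literature) makes this entanglement visible: in the recurrent update below, the weight matrix W is at once the stored `knowledge' and the mechanism that channels the spreading activation; there is no separate processor operating on passive data.

    # Minimal sketch: a recurrent network in which the synaptic weights W both
    # store the system's knowledge and control the spreading of activation.
    # There is no processor/memory split: the state update IS the processing.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.5, size=(8, 8))   # synaptic weights (= knowledge)
    x = np.zeros(8)                          # current pattern of activation

    def step(x, external_input):
        # Activation spreads along the weights; the same W that represents
        # what the system `knows' also generates its behavioral dynamics.
        return np.tanh(W @ x + external_input)

    for t in range(5):
        x = step(x, external_input=rng.normal(size=8))
    print(x)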
As we have seen in the course of the previous sections, the empirical/neuroscientific evidence for the propositional as well as the pictorial approach is rather poor. Of course there are areas in the brain that seem to be related to the processing of language, semantics, propositions, mental images, etc.; the only thing that is known about these areas, however, is that if they are damaged in one way or another, certain cognitive abilities are no longer present (Kandel et al., 1991; Churchland et al., 1992). Neuroscience provides almost no knowledge or theories concerning the processing mechanisms/architecture underlying these cognitive phenomena. From this poor evidence it seems questionable to postulate representational concepts such as the pictorial or propositional paradigm does.

That is why both approaches restrict themselves to a functionalistic account in most cases; i.e., they describe the functional properties that can be derived from the behavioral surface of the observed cognitive system. These behavioral descriptions are used as explanatory vehicles for internal representational processes; it is clear that a lot of speculation and common-sense concepts are involved in these explanations/theories about internal representational processes, as the real
internal/neural structures are never actually taken into account. This might have been a valid approach 20 years ago, when neuroscience still had a comparatively poor understanding of cognitive processes. However, with the advent of modern techniques, theories, and methods in empirical neuroscience, as well as of new concepts from computational neuroscience (Churchland et al., 1990; Churchland et al., 1992; Anderson, 1991; Hanson et al., 1990; Gazzaniga, 1995; Sejnowski et al., 1990), the picture has changed dramatically. Although there is still a long way to go to fully explain higher cognitive functions in neuroscientific terms, many basic concepts have been discovered that can be applied to any level of neural processing (spreading activations, distributed processing and representation, adaptive processes, Hebbian learning as the basis for any kind of learning (LTP, LTD, etc.) (Hebb, 1949; Brown, 1990; Nicoll et al., 1988), etc.). These findings already suggest a completely different concept of (neural) representation mechanisms than the propositional and/or pictorial approaches postulate. It seems that the time when researchers can use the excuse that the brain is `too complex to be understood' will soon come to an end.
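As one concrete instance of the adaptive processes just listed, the textbook Hebbian rule (a sketch with a hypothetical learning rate eta, not a claim about any specific biological circuit) strengthens a connection exactly when pre- and postsynaptic activity coincide:

    # Textbook Hebbian learning rule (sketch): the change of a synaptic weight
    # is proportional to the correlation of pre- and postsynaptic activity.
    import numpy as np

    def hebbian_update(W, pre, post, eta=0.01):
        # delta_w[i, j] = eta * post[i] * pre[j]
        return W + eta * np.outer(post, pre)

    pre = np.array([1.0, 0.0, 1.0])   # presynaptic activations
    post = np.array([0.5, 1.0])       # postsynaptic activations
    W = np.zeros((2, 3))
    W = hebbian_update(W, pre, post)
    print(W)  # connections between co-active units have been strengthened

Note that nothing in this update resembles a proposition or a rule of inference; knowledge accrues as graded changes of connectivity.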
5.3. PROGRAMMABLE USER MODELS AND DEMONSTRATIONAL USER INTERFACES
Two developments in the community of user interface researchers are worth mentioning in the context of this paper: the development of alternatives to stereotypical user modeling, as introduced by (Kobsa et al., 1989) and shown in its consequences in Figure 1, and the proposal of constructing individualized macros for end users through demonstration. Both approaches have evolved from rethinking the role of the users in the course of developing and utilizing interfaces. Programmable User Models (Young et al., 1989) convey non-functional considerations to the designer, namely through predicting user behavior for design ideas as they occur. This way PUMs add the user's perspective to those available when designing user interfaces.

Another enhancement is achieved through demonstrational user interfaces (Myers, 1989). They qualify users through machine-generated proposals of novel control inputs (generated through the construction of macros) and thus allow them to individualize their interface as closely as possible to `cognitively adequate' behavior. In the following we will sketch both approaches briefly, since they might contribute to a novel ground to be laid for epistemologically sound developments and `cognitively adequate' user interfaces.
In order to assist the process of design, (Young et al., 1989) have introduced a psychologically constrained architecture termed PUM (Programmable User Model). The intention has not been to provide just another mechanism for modeling mental representations, but to simulate user behavior for particular design solutions.^18 This way, not only can the usability of an interface be predicted before implementation, but the designer's mind and work focus can also be directed towards the context of
an application at a very early stage of development, as demanded by (Böhle et al., 1988; Whitaker et al., 1988; Johnson, 1992) and many others.

PUM provides the capabilities to elaborate the second-order knowledge that is expressed through the design solution of an interface. It requires the designer to explicitly represent the behavior of users, since the content of a PUM is the knowledge of a user. All assumptions about the expected user behavior become visible through the `User Program' in PUM. For instance, the sequence of functions technically to be followed in order to accomplish a task may be checked against a cognitive model (PUM). The PUM provides the designer with insights into the human `executables'. Hence, it influences, for example, the granularity of the functionality, e.g., how many cognitively demanding dialog steps have to be provided for serial letter writing.
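To illustrate the idea (a hypothetical toy, not the actual PUM language of Young et al.): a designer-specified `User Program' lists the operations on objects that a user is assumed to know; a candidate dialog sequence can then be checked against it, flagging steps the modeled user could not execute.

    # Hypothetical toy in the spirit of a `User Program': the designer makes
    # the assumed user knowledge explicit as operations on objects, and a
    # candidate dialog sequence is checked against that model.
    USER_KNOWS = {
        ("letter", "open_template"),
        ("letter", "insert_address_field"),
        ("letter", "print"),
    }

    def check_design(dialog_sequence):
        """Return the steps of a design the modeled user cannot execute."""
        return [step for step in dialog_sequence if step not in USER_KNOWS]

    serial_letter = [
        ("letter", "open_template"),
        ("letter", "bind_data_source"),   # not in the user model -> flagged
        ("letter", "insert_address_field"),
        ("letter", "print"),
    ]
    print(check_design(serial_letter))  # -> [('letter', 'bind_data_source')]

The flagged steps point the designer to where the dialog demands knowledge the modeled user does not have.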
In contrast to traditional approaches, PUMs do not model the behavior of a user acting like a perfectly rational being; they rather provide approximations. Matching against a perfect user would not provide information for further improvements. The cognitive model of a PUM is an early mapping of user behavior to a representation, namely as early as the first version (i.e., the specification) of a user interface exists. It is specified in a particular language instead of leading to facts (like the list of possible encouragements in Figure 1). In contrast to similar approaches, such as cognitive complexity theory or ETAG (Tauber, 1991), PUMs try to capture knowledge-intensive instead of procedural behavior.
PUMdenitivelyimproves the situation fromthe methodological point of view,
since users (and as such the conditio-sine-qua-non) for human-computer interac-
tion become involved very early in system development.However,it depends on
the elements of the PUM language how active a PUM ever may intervene the
design process.Up to today the language strongly relies on the explicit represen-
tation of objects and operations to be performed on those objects (similar to ACT

(Anderson,1983);i.e.the paradigm requiring explicit representation schemes for
cognitive behavior.It has still to be elaborated in how far connectionist approaches
can be used for PUMs to enable the simulation of self-organization and structural
coupling.
Demonstrational user interfaces have also been proposed to involve the user in the development of interaction in a different way than before, and thus to enhance the usability of artifacts. However, there are major differences between PUMs and demonstrational user interfaces; Figure 6 illustrates them. PUMs directly represent the knowledge required for a certain user behavior, whereas demonstrational user interfaces emphasize the procedural approach to interaction. They create procedural abstractions based on existing interface features, i.e., functions and dialog sequences.
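A minimal sketch of such a procedural abstraction (with hypothetical step names; Myers' systems additionally generalize the recording) is the following: interaction steps are recorded as the user demonstrates them on existing interface features and are then replayed as a single macro.

    # Minimal sketch of a demonstrational macro: record the functions a user
    # invokes on existing interface features, then replay them as one unit.
    # (Hypothetical step names; real systems also generalize the recording.)
    class MacroRecorder:
        def __init__(self):
            self.steps = []

        def record(self, function, *args):
            self.steps.append((function, args))

        def replay(self):
            for function, args in self.steps:
                function(*args)

    def select_paragraph(n): print(f"select paragraph {n}")
    def apply_style(name): print(f"apply style {name}")

    recorder = MacroRecorder()
    recorder.record(select_paragraph, 1)
    recorder.record(apply_style, "Heading")
    recorder.replay()   # the demonstrated sequence, now a reusable macro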
Another difference concerns the assessment of usability. PUMs have been designed to predict usability in terms of mental loads, and subsequently to lead to modifications in design (representations). Demonstrational user interfaces expect users to be sufficiently qualified to decide whether the use of the generated abstrac-
has to be gained. Based on that knowledge, more context-sensitive cognitive models may be constructed for user interface development.
3. Starting out with reviewing empirical knowledge about cognitive systems, it turns out that meaning or semantics is always individual and always depends on the structure and the state of a particular cognitive system.
(a) Interaction has to be considered as externalized knowledge at the level of behavior. It lacks general validity due to the observer's position and the involved cognitive system. Hence, unmediated access to the environment is possible neither completely nor in terms of general validity.
(b) This lack of completeness and generality is also caused by the several transformation steps occurring in the process of knowledge engineering. The involved parties are individual cognitive systems of users, analysts, and programmers exchanging per se meaningless patterns, which generate different meanings, and thus shifts in the semantics of representations and behavioral actions, in the respective system.
4. The second concern in our study addressed the step before considering the use of cognitive models in user interface development: what is a proper cognitive model to be used in the course of development? It turns out that a shift of semantics is almost inevitable due to ambiguities in the media of communication, such as language.
(a) Semantics is redefined in the design process and has to become an inherent part of cognitive models. It is neither externally given or provided, nor can it be excluded from a cognitive model.
(b) The environment is perceived individually through every representation system, and thus cannot be assumed to be mapped to a stable internal representation of an artifact, such as the computer system.
(c) Due to the lack of an epistemological criterion for determining what is an accurate representation or mapping of the environment, and due to the lack of a stable and generally accepted semantics (of artifacts), one can conclude the following: it only makes sense to focus on generating adequate behavior through a (not necessarily referential) representation system, rather than focusing on mapping the environment in an isomorphic way to an artificial representation system. The only criterion of a successful representation is a successful manipulation or prediction of the environmental dynamics (see the sketch after this list).
5. Finally, the construction of cognitive models is an issue of whether to use traditional techniques for knowledge representation, incorporating all their epistemological and methodological shortcomings, such as the limitations related to symbol processing, or to take into account the dynamics of cognitive representation systems.
(a) The construction of cognitive models should not be focused on separating static and dynamic parts for processing, such as facts and rules. It should rather reflect the active role of interaction patterns.
(b) A variety of results from neuroscience concerning cognitive modeling will become available in the near future. This should facilitate the identification of the scope and limits of cognitive models, steadily turning the black box of the human representation system into a white one.
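The behavioral criterion stated in point 4(c) can be sketched as follows (our illustration, with made-up data): a representation is judged not by structural resemblance to the environment, but by how well it predicts the environmental dynamics.

    # Minimal sketch of the criterion in 4(c): judge a representation by its
    # predictive success on the environmental dynamics, not by isomorphism.
    import numpy as np

    env = np.sin(0.3 * np.arange(100))   # made-up environmental dynamics

    def prediction_error(predict):
        """Mean squared error of one-step predictions over the trajectory."""
        errors = [(predict(env[:t]) - env[t]) ** 2 for t in range(1, 100)]
        return float(np.mean(errors))

    # Two candidate `representations' of the environment:
    persist = lambda history: history[-1]                     # assume stasis
    linear = lambda history: (2 * history[-1] - history[-2]   # extrapolate
                              if len(history) > 1 else history[-1])

    # The better representation is simply the better predictor.
    print(prediction_error(persist), prediction_error(linear))

Neither candidate `depicts' the sine wave; they are ranked solely by their functional fit to its dynamics.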
In this paper we wanted to induce a process of analyzing all systems involved in human-computer interaction (neural, sensory, artifacts, etc.) on an epistemological level, before defining the scope and limits of cognitive models. We proposed to interpret human-computer interaction from a constructivist perspective, conceiving of the human and the computer as finally compatible systems that mutually trigger processes, and considering interaction as the transformation of data, information, or knowledge from one representation (system) to another. Once such an embedding of cognitive models into the development process of user interfaces is achieved, new generations of design environments, which have claimed to take into account the actual context of computer systems (such as users) but have not actually achieved their claims (Fischer, 1993), can be constructed as called for in (Dreyfus et al., 1986) and (Winograd, 1995).
7. Notes
1. Of course, cognitive systems are able to adapt to new situations, such as a specific user interface. The goal of human-computer interfaces should be, however, to provide a common-sense access to the structures being used by the computer program, at least for handling interaction styles and devices.
2. This property of interactive systems is also part of principles for designing and evaluating user interfaces, namely suitability for tasks in Cognitive Ergonomics (Ravden et al., 1989) or task appropriateness in Software Ergonomics (Stary, 1996).
3. Caroll et al. proposed some methodological guidelines in the sense of the rationalistic tradition (Winograd et al., 1986).
4. Apparently, cognitive systems can be characterized as transformation systems as well, and this paper assumes that this is the case.
5. Of course, there is always the possibility to behave in a random way. For obvious reasons, this strategy seems to be a rather faulty one to ensure the organism's survival.
6. This implies that even so-called objective or true scientific knowledge/theories are only system-sensitive and always depend on the structure of those cognitive systems that are responsible for constructing them.
7. Think, for instance, of genetically altered organisms or plants.
8. This observation concerns any computer system that performs a certain task.
9. What is interesting about making use of already prefabricated knowledge? The designer and, in many cases, the user already know most of the knowledge that is represented in the computer system. So why should we use computer systems to re-represent this knowledge?
10. Good Old-Fashioned AI.
11. Although both parties are referring to the same environmental pattern/syntactic structure, they have different meanings in mind when they are using it or referring to it.
12. Even in human-human communication a 100 per cent overlap cannot be guaranteed (cf. (Glasersfeld, 1983)).
13. A referential representational relationship can be found only in peripheral parts of the nervous system. But even in these areas there is no evidence for real stability, as the original stimulus is distorted in the process of transduction.
14. This internal state is the result of the neural system's recurrent architecture.
15. Think about the success of our language, symbolic communication, etc.
16. Flowers reflect not only in the (human) visual range, but also in the UV-range. Thus they show a strong contrast in the UV-range, which helps the bees to position themselves and to find the flowers.
17. Note that the hypothesis `humans are systems' is another epistemologically unreflected assumption typically made in natural science.
18. Note that prediction of behavior has been the original goal of cognitive modeling in the course of developing user interfaces.
References
Anderson J.R. (1983), The Architecture of Cognition, Cambridge, MA, Harvard University Press.
Anderson J.R. (1990), Cognitive Psychology and its Implications, 3rd edition, New York, W.H. Freeman.
Anderson J.A., Pellionisz A. and Rosenfeld E. (eds.) (1991), Neurocomputing 2: Directions of Research, Cambridge, MA, MIT Press.
Bainbridge L. (1991), `Mental Models in Cognitive Skills', in A. Rutherford and Y. Rogers (eds.), Models in the Mind, New York, Academic Press.
Becker A.L. (1991), `A Short Essay on Languaging', in F. Steier (ed.), Research and Reflexivity, London; Newbury Park, CA, SAGE Publishers, pp. 226–234.
Benett J.L., Lorch D.J., Kieras D.E. and Polson P.G. (1987), `Developing a User Interface Technology for Use in Industry', Proceedings INTERACT'87, IFIP, Elsevier (North Holland), pp. 21–26.
Berry D.C. (1987), `The Problem of Implicit Knowledge', Expert Systems, 4, (3).
Boden M.A. (ed.) (1990), The Philosophy of Artificial Intelligence, New York, Oxford University Press.
Böhle F. and Milkau B. (1988), `Computerized Manufacturing and Empirical Knowledge', AI & Society, 2, (3), pp. 235–243.
Brown T.H., Ganong A.H., Kariss E.W. and Keenan C.L. (1990), `Hebbian Synapses: Biophysical Mechanisms and Algorithms', Annual Review of Neuroscience, 13, pp. 475–511.
Caroll J.M. and Olson J.R. (1988), `Mental Models in Human-Computer Interaction', in M. Helander (ed.), Handbook of Human-Computer Interaction, Elsevier, pp. 45–65.
Cherniak Ch. (1988), `Undebuggability and Cognitive Science', Communications of the ACM, 31, (4), pp. 402–412.
Churchland P.M. (1991), `A Deeper Unity: Some Feyerabendian Themes in Neuro-Computational Form', in G. Munevar (ed.), Beyond Reason: Essays on the Philosophy of Paul Feyerabend, Kluwer Academic Publishers, Dordrecht, Boston, pp. 1–23. (Reprinted in R.N. Giere (ed.), Cognitive Models of Science, Minnesota Studies in the Philosophy of Science XV.)
Churchland P.S., Koch C. and Sejnowski T.J. (1990), `What is Computational Neuroscience?', in E.L. Schwartz (ed.), Computational Neuroscience, Cambridge, MA, MIT Press.
Churchland P.S. and Sejnowski T.J. (1992), The Computational Brain, Cambridge, MA, MIT Press.
Devitt M. and Sterelny K. (1987), Language and Reality: An Introduction to the Philosophy of Language, Cambridge, MA, MIT Press.
Dix A., Finlay J., Abowd G. and Beale R. (1993), Human-Computer Interaction, New York, Prentice Hall.
Downs T. (1987), `Reliability Problems in Software Engineering: A Review', Computer Systems, Science and Engineering, 2, (3), pp. 131–147.
Dreyfus H.L. and Dreyfus St.E. (1986), `Competent Systems: The Only Future for Inference-Making Computers', Future Generations Computer Systems, 2, pp. 233–244.
Duda R.O. and Gaschnig J.G. (1981), `Knowledge-Based Expert Systems Coming of Age', Byte, 6, (9), pp. 238–278.
Eckardt B.v. (1993), What is Cognitive Science?, Cambridge, MA, MIT Press.
Eco U. (1976), A Theory of Semiotics, Bloomington, Indiana University Press.
Eco U. (1994), Semiotics and the Philosophy of Language, Bloomington, Indiana University Press.
Edwards P.N. (1988), `The Closed World: Systems Discourse, Military Strategy, and Post WWII American Historical Consciousness', AI & Society, 2, (3), pp. 245–256.
Ernst M.L. and Ojha H. (1986), `Business Applications of Artificial Intelligence KBs', Future Generations Computer Systems, 2, pp. 75–116.
Feyerabend P.K. (1975), Against Method, London; New York, Verso.
Feyerabend P.K. (1981), Realism, Rationalism, and Scientific Method. Philosophical Papers, Vol. I, Cambridge; New York, Cambridge University Press.
Feyerabend P.K. (1981a), Problems of Empiricism. Philosophical Papers, Vol. II, Cambridge; New York, Cambridge University Press.
Fischer G. (1993), `Beyond Human-Computer Interaction: Designing Useful and Usable Computational Environments', in Proceedings HCI'93, Cambridge University Press, pp. 17–31.
Fodor J.A. (1988), Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Cambridge, MA, MIT Press.
Gazzaniga M.S. (ed.) (1995), The Cognitive Neurosciences, Cambridge, MA, MIT Press.
Gelder T.v. and Port R. (1995), `It's About Time: An Overview of the Dynamical Approach to Cognition', in R. Port and T.v. Gelder (eds.), Mind as Motion, Cambridge, MA, MIT Press.
Gentner D. and Stevens A.L. (eds.) (1983), Mental Models, Hillsdale, NJ, Lawrence Erlbaum.
Glasersfeld E.v. (1983), `On the Concept of Interpretation', Poetics, 12, pp. 254–274.
Glasersfeld E.v. (1984), `An Introduction to Radical Constructivism', in P. Watzlawick (ed.), The Invented Reality, New York, Norton, pp. 17–40.
Glasersfeld E.v. (1995), Radical Constructivism: A Way of Knowing and Learning, London, Falmer Press.
Goldberg D.E. (1989), Genetic Algorithms in Search, Optimization, and Machine Learning, Reading, MA, Addison-Wesley.
Hanson S.J. and Olson C.R. (1990), Connectionist Modeling and Brain Function: The Developing Interface, Cambridge, MA, MIT Press.
Hebb D.O. (1949), The Organization of Behavior: A Neuropsychological Theory, New York, Wiley.
Hertz J., Krogh A. and Palmer R.G. (1991), Introduction to the Theory of Neural Computation, volume 1 of Santa Fe Institute Studies in the Sciences of Complexity, Lecture Notes, Redwood City, CA, Addison-Wesley.
Holland J.H. (1975), Adaptation in Natural and Artificial Systems: An Analysis with Applications to Biology, Control, and Artificial Intelligence, Ann Arbor, University of Michigan Press.
Hutchins E. and Hazelhurst B. (1992), `Learning in the Cultural Process', in C.G. Langton, C. Taylor, J.D. Farmer and S. Rasmussen (eds.), Artificial Life II, Redwood City, CA, Addison-Wesley, pp. 689–706.
Johnson P. (1992), Human-Computer Interaction: Psychology, Task Analysis and Software Engineering, London, McGraw Hill.
Johnson-Laird P.N. (1983), Mental Models, Cambridge, MA, Harvard University Press.
Johnson-Laird P. (1993), The Computer and the Mind: An Introduction to Cognitive Science, London, Fontana.
Kandel E.R., Schwartz J.H. and Jessel T.M. (eds.) (1991), Principles of Neural Science, 3rd edn., New York, Elsevier.
Kobsa A. and Wahlster W. (eds.) (1989), User Models in Dialog Systems, Heidelberg, Springer.
Kuhn T.S. (1970), The Structure of Scientific Revolutions, 2nd edn., Chicago, University of Chicago Press.
Lenat D.B. (1988), `When Will Machines Learn?', in Proceedings Int. Conf. on 5th Generation Computer Systems, ICOT, pp. 1213–1245.
Maturana H.R. (1978), `Biologie der Sprache: die Epistemologie der Realität', in H.R. Maturana (ed.), Erkennen: die Organisation und Verkörperung von Wirklichkeit, Braunschweig, Vieweg (1982), pp. 236–271.
Maturana H.R. and Varela F.J. (eds.) (1980), Autopoiesis and Cognition: The Realization of the Living, volume 42 of Boston Studies in the Philosophy of Science, Dordrecht, Boston, D. Reidel Pub. Co.
McClelland J.L. and Rumelhart D.E. (eds.) (1986), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Psychological and Biological Models, Vol. II, Cambridge, MA, MIT Press.
Mitchell M. and Forrest S. (1994), `Genetic Algorithms and Artificial Life', Artificial Life, 1, (3), pp. 267–291.
Moray N. (1993), `Formalisms for Cognitive Modeling', in M.J. Smith and G. Salvendy (eds.), Human-Computer Interaction: Applications and Case Studies, Amsterdam, Elsevier, pp. 581–586.
Myers B.A. (1989), `Demonstrational Interfaces: A Step Beyond Direct Manipulation', IEEE Computer, 25, (8), pp. 61–73.
Newell A. (1980), `Physical Symbol Systems', Cognitive Science, 4, pp. 135–183.
Newell A. (1982), `The Knowledge Level', Artificial Intelligence, 18, pp. 87–127.
Newell A. (1989), Unified Theories of Cognition, Harvard University Press.
Newell A., Rosenbloom P.S. and Laird J.E. (1989), `Symbolic Architectures for Cognition', in M.I. Posner (ed.), Foundations of Cognitive Science, Cambridge, MA, MIT Press, pp. 93–131.
Nicoll R.A., Kauer J.A. and Malenka R.C. (1988), `The Current Excitement in Long-Term Potentiation', Neuron, 1, (2), pp. 97–103.
Norman D.A. (1983), `Some Observations on Mental Models', in A.L. Stevens and D. Gentner (eds.), Mental Models, Hillsdale, NJ, Lawrence Erlbaum, pp. 7–14.
Olson J.R. and Olson G.M. (1990), `The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS', Human-Computer Interaction, 5, (2 and 3), pp. 221–266.
Osherson D.N. and Lasnik H. (eds.) (1990), An Invitation to Cognitive Science, Vols. 1–3, Cambridge, MA, MIT Press.
Perner J. and Garnham A. (1988), `Conditions for Mutuality', Journal of Semantics, 6, pp. 369–385.
Peschl M.F. (1990), Cognitive Modelling, Wiesbaden, Deutscher Universitätsverlag/Vieweg.
Peschl M.F. (1994), `Autonomy vs. Environmental Dependency in Neural Knowledge Representation', in R. Brooks and P. Maes (eds.), Artificial Life IV, Cambridge, MA, MIT Press, pp. 417–423.
Peschl M.F. (1994), `Embodiment of Knowledge in the Sensory System and its Contribution to Sensorimotor Integration: The Role of Sensors in Representational and Epistemological Issues', in P. Gaussier and J.D. Nicoud (eds.), From Perception to Action Conference, Los Alamitos, CA, IEEE Society Press, pp. 444–447.
Peschl M.F. (1994), Repräsentation und Konstruktion: Kognitions- und neuroinformatische Konzepte als Grundlage einer naturalisierten Epistemologie und Wissenschaftstheorie, Braunschweig/Wiesbaden, Vieweg.
Peschl M.F. and Stary Ch. (1990), `IKARUS: Interdisciplinary Knowledge Reconstruction Based on Rules and Science Theory', in Proc. of 8th Int. Conference on Systems and Cybernetics, Vol. 2, New York.
Polanyi M. (1966), The Tacit Dimension, Garden City, NY, Doubleday.
Port R. and Gelder T.v. (eds.) (1995), Mind as Motion: Explorations in the Dynamics of Cognition, Cambridge, MA, MIT Press.
Posner M.I. (ed.) (1989), Foundations of Cognitive Science, Cambridge, MA, MIT Press.
Preece J. (ed.) (1994), Human-Computer Interaction, Wokingham, Addison-Wesley.
Ravden S. and Johnson G. (1989), Evaluating Usability of Human-Computer Interaction: A Practical Method, Chichester, Ellis Horwood.
Roth G. (1991), `Die Konstitution von Bedeutung im Gehirn', in S.J. Schmidt (ed.), Gedächtnis, Frankfurt/M., Suhrkamp, pp. 360–370.
Roth G. (1994), Das Gehirn und seine Wirklichkeit: Kognitive Neurobiologie und ihre philosophischen Konsequenzen, Frankfurt/M., Suhrkamp.
Rumelhart D.E. and McClelland J.L. (eds.) (1986), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Foundations, Vol. I, Cambridge, MA, MIT Press.
Sejnowski T.J., Koch C. and Churchland P.S. (1990), `Computational Neuroscience', in S.J. Hanson and C.R. Olson (eds.), Connectionist Modeling and Brain Function: The Developing Interface, Cambridge, MA, MIT Press, pp. 5–35.
Shepherd G.M. (ed.) (1990), The Synaptic Organization of the Brain, 3rd ed., New York, Oxford University Press.
Stacy W. (1995), `Cognition and Software Development', Communications of the ACM, 38, (6), p. 31.
Stary Ch. and Peschl M. (1995), `Towards Constructivist Unification of Machine Learning and Parallel Distributed Processing', in K. Ford and P. Hayes (eds.), Android Epistemology, MIT Press.
Stary Ch. (1996), Interactive Systems: Software Ergonomics and Software Engineering (2nd edition, in German), Wiesbaden, Vieweg.
Sterling P. (1990), `Retina', in G.M. Shepherd (ed.), The Synaptic Organization of the Brain, 3rd ed., New York, Oxford University Press, pp. 170–213.
Sticklen J. (1990), `Problem Solving Architectures at the Knowledge Level', Journal of Experimental and Theoretical Artificial Intelligence, 1, (1), pp. 1–52.
Tauber M.J. (1991), `ETAG: Extended Task Action Grammar', Proceedings INTERACT'91, IFIP, Elsevier.
Tessier-Lavigne M. (1991), `Phototransduction and Information Processing in the Retina', in E.R. Kandel, J.H. Schwartz and T.M. Jessel (eds.), Principles of Neural Science, 3rd edition, New York, Elsevier, pp. 400–419.
Thorndyke P.W. and Stasz C. (1985), `Individual Differences in Procedures for Knowledge Acquisition from Maps', Cognitive Psychology, 12, pp. 137–175.
Turkle Sh. (1984), The Second Self: Computers and the Human Spirit, New York, Simon and Schuster.
Van de Riet R. (1987), `Problems with Expert Systems', Future Generations Computer Systems, 3, pp. 11–16.
Varela F.J., Thompson E. and Rosch E. (1991), The Embodied Mind: Cognitive Science and Human Experience, Cambridge, MA, MIT Press.
Watzlawick P. (ed.) (1984), The Invented Reality, New York, Norton.
Weitzel J.R. and Kerschberg L. (1989), `Developing Knowledge-Based Systems: Reorganizing the System Development Life Cycle', Communications of the ACM, 32, (4), pp. 482–488.
Whitaker R. and Östberg O. (1988), `Channeling Knowledge: Expert Systems as Communication Media', AI & Society, 2, (3), pp. 197–208.
Winograd T. and Flores F. (1986), Understanding Computers and Cognition: A New Foundation for Design, Norwood, Ablex.
Winograd T. (1995), `From Programming Environments to Environments for Designing', Communications of the ACM, 38, (6), pp. 65–74.
Wood S. (1986), `New Technologies, Organization of Work, and Qualifications: The British Labour Process Debate' (in German), in Prokla 2, Berlin, Rotbuch.
Young R.M., Green T.R.G. and Simon T. (1989), `Programmable User Models for Predictive Evaluation of Interface Designs', in Proceedings CHI'89, ACM, p. 1.