Chapter 6

Belief


Jan.’10


“The action of thought is exerted by the initiation of doubt and ceases when belief is attained; so that the production of belief is the sole function of thought.”

Charles S. Peirce


“The greatest obstacle to discovery is not ignorance -- it is the illusion of knowledge.”

Daniel J. Boorstin




6.1 Belief-change



Logicians have long recognized a distinction between categorical, conditional and hypothetical reasoning. Roughly speaking, categorical reasoning exhibits the form ⌐Since α, β¬. Conditional reasoning exhibits the form ⌐If α then β¬. Hypothetical reasoning exhibits the form ⌐Since α, it is reasonable to suppose (conjecture, hypothesize) that β¬. Categorical and hypothetical reasoning is a matter of drawing consequences. Conditional reasoning is a matter of spotting consequences, not drawing them. Categorical reasoning maps belief to belief. Conditional reasoning engenders implicational belief. Hypothetical reasoning maps belief to supposition (conjecture, hypothesis). Since the notion of belief is a constituent of reasoning in all these forms, it is only natural to suppose that it will have a role to play in the differentiation of good and bad reasoning. A logic of reasoning should have something to say about this.

In the account developed here, belief has a central importance.


In chapter two, I made favourable mention of the Can Do Principle. This is the principle that bids us, to the extent possible, to solve our theoretical problems in frameworks that are up and running and successful, using methods that are tried and true. I said that, provided it does not over-reach itself, Can Do gives to the theorist methodological guidance of the first importance. For the better part of twenty-five years, belief-change theories have been a prominent part of the research programmes of AI¹ and formal epistemology.² If reasoning comprehends belief-change, why wouldn't a theory of reasoning be an adaptation of a theory of belief-change? Wouldn't a theory of belief-change be the natural place to look? Isn't this what Can Do would suggest? Similarly, belief has been the focus of attention of modal logicians for nearly fifty years. Would a belief logic also be a natural place for a logic of reasoning to seek instruction? Wouldn't Can Do also direct us here?


This is not, in fact, the course that I am going to take. As will shortly become apparent, the concept of belief needed for the account of error is not well-catered for by these accounts. The purpose of the present section is to make a brief review of the main theories of belief-change and belief logics, pointing out reservations as we go.



¹ Important early contributions are the truth-maintenance system of Doyle (1979) and the database-prioritization approach of Fagin et al. (1983).

² The early philosophical contributions of note include Levi (1977), Levi (1980) and Harper (1977). A valuable recent work is Hendricks (2006).


A central challenge for AI is to provide an account of the competency of an artificial agent in amending its beliefs when its situation changes (updating) or when its knowledge of an unchanging situation increases (revision). In this approach, a belief is identified as a sentence that is believed true or false or is uncertain. It is postulated that an agent's beliefs are always consistent, and that the main business of belief-change is the preservation of consistency in the face of new information. Classical belief-change theories have a Millian cast to them, made so by the fact that the principal instruments for the management of belief-change are consistency and consequence, and these are defined in the same way as they are in classical deductive logic. The classical theories are also routinely identified as nonmonotonic. Lest this be a source of confusion, it should be noted that what is here meant by the term "nonmonotonic" is that previously accepted consequences of prior beliefs may be dropped. This is not how logicians typically understand the term. A consequence relation is said to be monotonic just in case a sentence α bears that relation to a sentence β only if, for any sentence λ, α and λ together also bear it to β. Accordingly, a consequence relation is nonmonotonic just in case it is not monotonic. In these classical theories of belief-change, then, update and revision are nonmonotonic, but consequence is monotonic.
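The definition can be made concrete with a toy consequence relation. The sketch below is illustrative only and not drawn from the text: the "bird/penguin" default rule is an invented instance of the nonmonotonic behaviour just described, set beside a trivially monotonic relation.

```python
# Toy contrast between a monotonic and a nonmonotonic consequence relation.
# All predicates and rules here are invented for illustration.

def classical_consequences(premises):
    """A trivially monotonic relation: every premise is a consequence,
    so enlarging the premise set can only enlarge the consequence set."""
    return set(premises)

def default_consequences(premises):
    """Nonmonotonic: conclude 'flies' from 'bird' unless 'penguin' is
    also among the premises (a defeater)."""
    concs = set(premises)
    if "bird" in concs and "penguin" not in concs:
        concs.add("flies")
    return concs

# Monotonic: adding a premise never removes a consequence.
assert classical_consequences({"bird"}) <= classical_consequences({"bird", "penguin"})

# Nonmonotonic: 'flies' follows from {bird} but is dropped when
# 'penguin' is added -- a previously accepted consequence is lost.
assert "flies" in default_consequences({"bird"})
assert "flies" not in default_consequences({"bird", "penguin"})
```

The point of the contrast is exactly the one in the text: classical consequence never retracts, whereas update and revision may.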


In addition to the classical AI/FE theories, there are psychological accounts of reasoning which have a bearing on belief-change. For the purposes of this section, I'll confine my remarks to the three dominant models of the AI/FE orientation -- these are the syntax-based approach, the model-based approach and the hybrid approach -- and the two main orientations developed by psychologists -- these are the proof theoretic approach and the mental models approach.


In the syntax-based AI approach, an agent's beliefs are made up of sentences, and his background beliefs likewise are sets of sentences. A belief-change is one in which an agent's current beliefs together with some background information deductively imply a different new sentence. In this model it is not required that an agent actively draw any of these consequences, although in most versions immediate or obvious consequences would likely be drawn. On the other hand, in model-based accounts beliefs are equated to models, which are semantic interpretations -- functions -- making some designated group of beliefs true. Syntax-based theories deal with explicit beliefs. Roughly speaking, a consequence of an explicit belief is drawable only when it itself is an explicit belief. In model-based theories, beliefs include implicit beliefs. This motivates the "rationality postulate" that an agent will believe every deductive consequence of anything he believes. No one disputes the radicalness of the closure postulate. Human beings at their best come nowhere close to implementing it, yet it is widely held in the AI and FE communities that not only does the closure idealization simplify the account of belief-change, but it would be next to impossible to produce a decent formal theory without it. Syntax-based theories also reject the principle that if among the candidates for admittance to an agent's beliefs there are some logically equivalent sentences, then it is a matter of indifference as to which of them is chosen, hence that mere syntactic difference is irrelevant to belief-change. Model-based theories accept this principle.
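The contrast over syntactic sensitivity admits a small illustration. The sketch below is my own and not part of the text: it represents a syntax-based belief base as a set of sentence strings, and the model-based alternative as the set of truth-value assignments a sentence picks out, so that logically equivalent but syntactically distinct sentences are distinguished by the first representation and identified by the second.

```python
import itertools

# Hypothetical mini-language: propositional sentences over atoms p, q,
# written as Python boolean expressions. Purely illustrative.
ATOMS = ["p", "q"]

def models(sentence):
    """Model-based view: identify a sentence with the set of truth-value
    assignments (semantic interpretations) that make it true."""
    result = set()
    for values in itertools.product([True, False], repeat=len(ATOMS)):
        env = dict(zip(ATOMS, values))
        if eval(sentence, {}, env):
            result.add(values)
    return frozenset(result)

s1 = "p and q"
s2 = "q and p"   # logically equivalent, syntactically distinct

# A syntax-based belief base distinguishes the two sentences...
assert {s1} != {s2}
# ...while the model-based identification treats them as one belief.
assert models(s1) == models(s2)
```

On the syntax-based picture the choice between `s1` and `s2` matters to belief-change; on the model-based picture it cannot.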


The hybrid approach -- also called the "theory-based" approach³ -- blends various features of the syntax-based and model-based orientations. The best known hybrid theory -- and in many ways still the core account -- is the AGM-model, so-named after the seminal paper of Carlos Alchourrón, Peter Gärdenfors and David Makinson (Alchourrón et al. (1985)). As with syntax-based theories, in the AGM-model a belief state is a theory.⁴ But, as with model-based theories, AGM-belief is closed under consequence. Hybrid theories also accept the principle of the irrelevance of merely syntactic differences to belief-change.

³ Elio and Pelletier (2008).
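As a hedged illustration only (AGM revision is officially characterized by rationality postulates, not by an algorithm), the finite propositional case can be sketched by representing a belief state by its set of models: when the new sentence is consistent with the state, revision is just expansion; otherwise the state falls back to the models of the new sentence, preserving consistency. All names here are invented for the example.

```python
import itertools

ATOMS = ["p", "q"]
WORLDS = list(itertools.product([True, False], repeat=len(ATOMS)))

def sat(sentence, world):
    """Truth of a Python-boolean-style sentence at a truth assignment."""
    return eval(sentence, {}, dict(zip(ATOMS, world)))

def revise(belief_worlds, sentence):
    """Toy AGM-style revision on a belief state represented by its
    models: expansion (intersection) when the new information is
    consistent with the state, otherwise all worlds satisfying it."""
    compatible = {w for w in belief_worlds if sat(sentence, w)}
    if compatible:
        return compatible
    return {w for w in WORLDS if sat(sentence, w)}

# Start believing p-and-q; learn 'not p'. Consistency is preserved:
state = {w for w in WORLDS if sat("p and q", w)}
state = revise(state, "not p")
assert all(sat("not p", w) for w in state)   # the new information holds
assert state                                  # and the state is consistent
```

Note the hybrid character on display: the state behaves like a closed theory (it is identified with all of its models), yet merely syntactic variants of a sentence revise it identically.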


In most treatments, a doxastic logic is a modal logic of belief-sentences. A belief-sentence is a sentence prefixed by the belief-operator B. That B is a sentence operator is beyond doubt. The dominant view among belief logicians is that B is also a modal operator. Typically, then, a logic of belief is an extension of some or other logic of the alethic modality for possibility. Then, in turn, the semantics of a belief logic will be a suitably doxasticized semantics for possibility. Virtually⁵ all the mainstream semantic treatments of possibility are, in one way or another, possible-world theories, although at certain levels of abstraction, the name "possible world" names nothing essential -- or even discernible -- in the logic's interpretative structure.⁶ The transition of a possibility logic -- S4, say -- to a belief logic involves two crucial additions. One is that belief is made to be a logical particle. The other is that there are people in the ensuing logic. The people of doxastic logic -- the entities that do the believing -- resemble in key respects the agents of AI belief-change. They close their beliefs under consequence; and they believe all the truths of logic.
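The closure properties attributed to the believers of doxastic logic can be illustrated with a miniature possible-worlds semantics. This is an invented sketch, not the text's own formalism: B(α) holds just in case α is true at every doxastically accessible world, from which closure under consequence and belief in every truth of logic follow immediately.

```python
import itertools

ATOMS = ["p", "q"]
WORLDS = list(itertools.product([True, False], repeat=len(ATOMS)))

def believes(accessible, sentence):
    """Kripke-style belief: B(sentence) holds just in case the sentence
    is true at every doxastically accessible world."""
    return all(eval(sentence, {}, dict(zip(ATOMS, w))) for w in accessible)

# Suppose the agent's accessible worlds are exactly those where p holds.
accessible = [w for w in WORLDS if dict(zip(ATOMS, w))["p"]]

assert believes(accessible, "p")
# Closure under consequence: p implies (p or q), so B(p or q) as well.
assert believes(accessible, "p or q")
# The agent believes every truth of logic, e.g. the tautology p-or-not-p.
assert believes(accessible, "p or not p")
```

This is precisely the idealization the chapter goes on to question: no human believer closes belief under consequence or believes every tautology.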


Classical belief-change theories of all types are highly idealized models of belief-change competence. The same is true of mainstream approaches to doxastic logic. Psychological theories are accounts of belief-change performance. Performance theories attend to the conditions under which human belief-change actually takes place. Two of the most prominent performance-oriented theories of reasoning have been developed by psychologists. One is the mental-model approach of Johnson-Laird and his colleagues,⁷ and the other is the proof-theoretic approach of Rips and others.⁸ In Rips' approach a state of belief is a partial proof, and the transition operators that move states to states are a subset of the classical natural deduction inference rules. The operators are proof-building rules. Belief-change is then likened to proof-development or proof-completion.

On the mental models approach a state of belief includes an interpretation of a given set of sentences, each with a truth value picked up by virtue of how the world is at that time. Operators map models to other models in fulfillment of the requirement to maintain consistency. Let α be any sentence not in the set, but whose truth is a condition on the consistency of the set's model. In each case, both proof-theoretic and mental model, the mechanisms of belief-change are search procedures for the identification of such αs.
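The idea of a search procedure for such sentences can be given a toy rendering. The sketch below is an assumption-laden illustration of my own, not either theory's actual mechanism: it brute-forces the models of a small belief set and returns the candidate sentences, not already in the set, whose truth holds in every one of those models.

```python
import itertools

ATOMS = ["p", "q"]
WORLDS = list(itertools.product([True, False], repeat=len(ATOMS)))

def holds(sentence, world):
    return eval(sentence, {}, dict(zip(ATOMS, world)))

def required_sentences(belief_set, candidates):
    """Brute-force search: return those candidate sentences, not already
    in the belief set, that are true in every model of the set -- i.e.
    sentences whose truth is a condition on the set's consistency.
    (If the belief set has no models, every candidate passes vacuously.)"""
    mods = [w for w in WORLDS if all(holds(s, w) for s in belief_set)]
    return [a for a in candidates
            if a not in belief_set and all(holds(a, w) for w in mods)]

# From {p-or-q, not-p} the search identifies q but not p.
assert required_sentences({"p or q", "not p"}, ["q", "p"]) == ["q"]
```

Nothing hangs on the brute-force strategy; it simply makes vivid that what both performance theories posit is a search over candidate sentences constrained by consistency.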


Finally, a brief word about similarities and differences between these AI (or competence) theories and the psychological (or performance) accounts. AI model-based and psychological mental-model approaches share the notion of model, but differ in other ways. AI model theories accept the irrelevance-of-syntax principle, but mental model theories do not.

⁴ In certain FE approaches, theories in this sense are "belief sets" or "knowledge sets" (and sometimes "corpora").

⁵ Jubien (2009) is a non-worlds approach to possibility.

⁶ See here Blackburn and van Benthem in Blackburn, van Benthem and Wolter (2006).

⁷ Johnson-Laird, Byrne and Schaeken (1992), Johnson-Laird and Byrne (1991).

⁸ Rips (1983), Rips (1994), Braine and O'Brien (1991).


The aim of this book is to develop an empirically sensitive logic within which to advance a thick theory of errors of reasoning. As I've had occasion to say in preceding chapters, I take it that our commitment to empirical sensitivity requires a certain resistance to accounts of human behaviour carrying a bias that favours deduction and consistency -- accounts that are factually hopeless, over-idealized and normatively dubious, and which place a premium on technical virtuosity over conceptual accuracy and behavioural fidelity. It is true that our subscription to empirical sensitivity requires respectful attention to lawlike disclosures of descriptively adequate empirical theories, and that such theories sometimes make empirically distorting idealizations precisely for the purpose of facilitating the framing of lawlike connections. We are not Luddites. It is not open to us to reject any theoretical postulate just because of its empirical distortion. But, to echo a remark of Quine's about another thing, the returns had better be good. That is, the theory's descriptive laws must score well on the score of intelligibility and predictive support. Still, there is no theory of belief revision, in AI or psychology, and no logic of belief, that pays serious mind to what belief actually is, and in particular to what it is like to have beliefs. In the account we will develop in this chapter, what believing is like plays a fundamental role. We must look elsewhere for a credible account of what this is. When we do, we will find some useful guidance in belief's phenomenological aspects.


Later in this chapter and in the chapter to follow it will also become apparent that the mainline theories of belief-change make no credible contribution to the problem of error-detection.


6.2 Inapparency and apparency




We are three kinds of beings at once. We are causal beings. We are dialectical beings. We are phenomenological beings. There is reason to suppose that our successes overall at knowing and reasoning would not have been possible save for the engagement of all sides of our trinitarian make-up. It is the same for our doxastic and inferential failures. A theory of error will turn out to be an account of the misalignment of our tripartite selves. Error-correction, in turn, would be a matter of realignment.

A theorist of error should say something about the conditions under which error-making is possible; he should attend to the conditions under which error-detection is possible; and he should strive for an explanation of why our facility at error-detection doesn't enable us to avoid making errors in the first place. The present chapter is a response to these challenges.

In tackling the inapparency/apparency transformation that attends the making and detecting of error, a guiding principle, as we saw, is that cognitive agents enter the practical logic of reasoning as they actually are in reality, warts and all. In this section, I want to consider one of the warts. It arises from an epistemological commonplace, and it bears an essential link to the first person/third person asymmetry.


Proposition 6.2a
THE PSYCHOLOGICAL PRIORITY OF BELIEF OVER KNOWLEDGE: For large ranges of cases, we quest for knowledge, but settle for belief. We settle for belief thinking that it is knowledge.









The thrust of Proposition 6.2a is just about perfectly encompassed by Peirce:

We may fancy that ... we seek, not merely an opinion, but a true opinion. But put this fancy to the test, and it proves groundless; for as soon as a firm belief is reached we are entirely satisfied, whether the belief be true or false (Peirce, CP 5.375).⁹




The intent of Proposition 6.2a is that we settle for belief in that sense of "believing" which signifies experiencing oneself as knowing. Of course, the attribution of the belief that α to a person who thinks he knows it is from the third person perspective only. From the first person perspective, he doesn't believe that α; rather he knows it. From the third person perspective, you can consistently attribute the belief to someone and go on to attribute falsehood to the belief. But from the first person perspective, that would be the course of Moorean blindspots:¹⁰ ⌐I believe that α, but α is false¬. Of course, other situations exist, and other uses of "belief", in which we sometimes settle for belief, knowing that we do. Someone might wish to know whether α. She might examine all the relevant evidence and determine that, while it bears favourably on α, it doesn't do so conclusively. So, having wanted to know whether α, she now settles for believing it. Since the state she is now in is one which she herself would describe as believing that α, she is not in a state in which she experiences herself as knowing α; she is not, as we shall say, in a k-state with respect to α.

K-states are essential for the work of this chapter. Their invocation is meant to lay emphasis on the phenomenological unreality of the distinction between actually being in a state of knowledge and being in a state one experiences as being in a state of knowledge. So we have it that for the requisite sense of "believe",


Proposition 6.2b
K-STATE COMMONALITY: An agent's believing that α at time t and his knowing that α at t are underlain by the same k-state.


When you don't know that α, being in a k-state with respect to it disguises that fact. This is, so to speak, the dual of dialectical erasure. When you do know that α, not being in a k-state with respect to it disguises that fact as well.

The phenomenological inapparency of false belief is a founding datum for classical epistemology and metaphysics. As we noted pages ago, philosophers have invested heavily in strategies by which the individual cognizer can mitigate the entrapment of the first person, of Perry's egocentric predicament. By reflecting on what one thinks one knows, by subjecting it to the burdens of critical justification, we sometimes see the error of our ways. This dialectical factor is tricky. There are classes of cases in which taking the third person position with respect to your own k-state -- considering whether the requisite justification exists -- may result in dialectical erasure. (This is how otherwise sensible people get turned into sceptics.) But there are other cases -- cases in which justification is available if asked for -- in which the call for it is not dialectically erasing. Even so, there are problems here. By giving, as the CC-model requires, due dialectical weight to the critical views of others who, in relation to us, are in the place of the third person, we sometimes acquire the wherewithal to diagnose a mistake, to see that what we experienced ourselves as knowing we didn't. The whole dialectic of critical thinking is bound up with this view of corrigibility. Errors are correctible by assuming (or simulating) the third-person position. Errors are avoidable by assuming the posture strategically -- in advance of their commission, so to speak. Even so, it shouldn't be overlooked that every cognitive perspective that is a third person perspective with respect to you is the first person perspective with respect to itself. This being so, cognitive agents operate within "k-state bubbles" -- or, for the requisite sense of belief, "belief bubbles"¹¹ -- a notion that we may find, albeit without the name, in an old article of Rozeboom.¹²

⁹ Cf. 5.397: "And what, then, is belief? It is the demi-cadence which closes a musical phrase in the symphony of our intellectual life. We have seen that it has just three properties: First, it is something that we are aware of; second, it appeases the irritation of doubt; and, third, it involves the establishment in our nature of a rule of action." (Emphasis added.)

¹⁰ Sorensen (1988).


Proposition 6.2c
K-STATE BUBBLES: A cognitive agent is in a k-state bubble with respect to proposition α if and only if the distinction between his knowing that α and his experiencing himself as knowing it is phenomenologically inapparent to him in the there and now.


COROLLARY: The same embubblement holds with respect to one's thinking that α is true and its being true.


Thus is born the impression of corrigibility. If, having reflected on the matter or having listened to your critics, you now see that your former belief (as you can now say) was mistaken, what you currently take yourself as knowing you might not. Current knowledge is subject to the same phenomenological inapparency of the distinction between knowing α and k-experiencing it. Equally, your critics and interlocutors, whose insights have led you to see your error, proceed on the only basis available to them; they proceed, that is to say, on what it is they take themselves as knowing, including what they think they know that you (now) know. But what they think they know they might not know. Discovery of the error you made then flows from the fact, as you now see it, that what you thought you then knew you only believed. In pressing this present view of the matter, you yield to what you think you now know. What you now think you know is incompatible with what you used to think you knew. Given what you now think you know, what you used to think you knew you now think you know to have been an error. Accordingly,


Proposition 6.2d
ERROR DETECTION: For agents A, error detection -- when it is possible at all -- is an after-thought. It is rooted in the possession by A of phenomenological states the recognition of whose own erroneousness is not concurrently possible for him. It is dialectically stimulated but phenomenologically resolved.

¹¹ In Woods (2005a), they were called "epistemic bubbles."

¹² Rozeboom (1967).


The first person/third person asymmetry bites hard here. For the person who brings it off, error-detection is a kind of coming to his senses. He comes to his senses in recognizing the incompatibility of what he now sees to be true with what he used to think was true.¹³ But the asymmetry is such that what is experienced in these ways may not be as those ways suggest. For one possibility is that, from the third person point of view, it is no more than this: that what one now believes seems to be incompatible with what one no longer believes. Since for large ranges of cases this seems to be so, it appears that we must reconcile ourselves to the following possibility -- suggestive of the ineliminability of subjectivity:¹⁴


Proposition 6.2e
THE NO-ESCAPE THESIS: Except for those limited ranges of possible cases in which knowledge is immediate, self-evident, or incorrigible, every knowledge claim of a cognitive agent -- even one that purports to correct a prior error or the present error of another person -- is lodged irreducibly in a k-state bubble. K-state bubbles cannot ultimately be "popped."


COROLLARY: The no-escape thesis also holds for embubblement with respect to the phenomenological inapparency of the distinction between thinking that something is true and its being true.


Armies of epistemologists have lavished their energies on attempts to solve the egocentric predicament. I myself have neither the time nor the inclination to join that fray here. It suffices to remark on how it plays on our abundance theses and the principle that purports to reconcile them. As we have it now, the reconciliation achieved by the Enough Already Principle provides for the concurrence of two abundances, one of knowledge and the other of error. But as we see, it is open to question as to whether Enough Already strictly requires the cognitive abundance thesis to be true. It suffices that what we take ourselves as knowing should motivate the appropriate actions, actions that constitute our doing well. Relatedly, it is not in all strictness required that the principle's reference to "getting things right" be equated with epistemic success. For we survive, prosper, and so on, by acting in ways that bring these things about, and we act in such ways when we are motivated by the requisite beliefs, by the requisite thinkings that we know.

It is not my position that the gettings right that constitute our doing well are not graced with epistemic virtue. I am not saying that these gettings-right aren't true. Any subscriber to the Cognitive Abundance Thesis would have a tough time holding so contrary a view. I am not, in other words, a Stichian radical about such things, still less a Sartwellian.¹⁵ Writes Stich: "... it is hard to see why anyone but an epistemic chauvinist would be much concerned about rationality or truth."¹⁶ My view is that the tightness of the correlation between gettings-right and arrivings-at-the-truth of things is an open question.

¹³ At times even this is a bit CC-ish. Consider the following exchange. "Tomorrow is Barbara's birthday." "No, it's the 3rd." "Oh, I see, thanks."

¹⁴ Edelman (2006), p. 84: "Subjectivity is irreducible".

¹⁵ Sartwell (1991, 1992).

Part of what keeps it open is fallibilism. Cognitive Abundance is not a dogma; it is a thesis. But it is a thesis that plays a critical strategic role. Enough Already is not a thesis but a fact. It tells us that we handle ourselves well enough to multiply and flourish (and sometimes go to the Tate). We have suggested that it may well be a condition of that success that, in myriad ways, we attempt, and see it as right to do so, to execute the routines of the CC-model. If this is right, it gives us clear guidance about Stich's dismissiveness toward truth and rationality (actually, "rationality"). It is that Stichian disdain cannot be given behavioural expression without damage to our prosperity. In this, Stich's is a run-of-the-mill scepticism. There may be point in reflecting upon it -- in giving it due consideration; but acting on it is out of the question.

One might say that the possibility that doing well is not uniformly attended by epistemic virtue threatens knowledge with a kind of demotion. But, again, what should not be lost sight of is that the beliefs that we have, including the beliefs that guide the actions in which our doing well consists, are beliefs acquired in the drive to achieve knowledge, and, once acquired, are beliefs that we ourselves take to be knowledge. It is arguable therefore that our doing well depends irreducibly on the fact that we are questers after knowledge, rather than attainers of it, that knowledge is an unmet object of a persistent and recurrent drive. What the philosophical problem of belief-embubblement gets us to see is that there is a crucial distinction between satisfying a desire and attaining an epistemic target that it embeds, a distinction which in its own right is phenomenologically inapparent in the first person here and now.

Our attachment to knowledge is instinctual. It rivals in primitive thrust the necessity to breathe. This gives to the No Escape Thesis a sharp importance. It helps us see that, although we have a drive to know, we are so constituted as to settle for belief. That is, we settle for belief in that sense of the word that honours this first person/third person duality. This is not because we are lazy or over-casual, but rather because we suffer from an involuntary epistemic incontinence. We are embubbled cognizers. In our takings to know, we occupy the perspective from which belief does duty for knowledge. Judged from the third person perspective, this is a decidedly down-market form of satisficing. But the last thing it is is satisficing on purpose. This gives us two points to take formal notice of.


Proposition 6.2f
EPISTEMIC QUESTING: For large ranges of cases we could not settle for belief unless we quested for knowledge.


Proposition 6.2g
INVOLUNTARY SATISFICING: Satisficing is a consequence of the way in which the individual is cognitively structured. Judged from the third-person perspective, he clamours for knowledge and makes do with belief. He sets his cap for truth and settles for apparent truth. This is satisficing all right, but it is satisficing malgré lui. From the first person perspective, the drive for knowledge is stilled by beliefs which are taken for knowledge and which disguise its absence when it is not there.

¹⁶ Stich (1990), pp. 134-135.


No one who found himself drawn to these claims could be a happy Stichian about truth and knowledge. Stich wonders why anyone whose beliefs serve him well practically -- beliefs that engender his survival and prosperity -- would care about truth. What these observations tell us is that not caring about truth, not questing after knowledge, will deny us the beliefs on which -- true or not -- our flourishing depends.

Peirce is good on belief. He speaks of belief as thraldom to the Insistence of an Idea. It may appear that Peirce is anticipating something like what psychologists call the "illusion of certainty." There are differences as well as similarities in these views. There are psychologists who think that being certain of things is an adaptive response. They think that it is an adaptive response that disposes us to error; and that the errors we thus commit are concealed by an illusion. Our view -- and I think Peirce's too -- is that believing things is an adaptive response and that, when believing something is an error, it is the structure of belief itself -- of k-states -- that conceals the error. I don't think that much is added to this analysis by suggesting -- in the manner of the Boorstin epigraph under the title of this chapter -- that belief is an illusion. Sometimes false beliefs do arise from illusions -- for example, the Necker Cube or Shepard's Turning the Tables -- but it seems wrong to think of belief as inherently illusory. Belief is the appearance of knowledge. There is, as such, nothing wrong with this, even when the appearance is not the reality. It is true that there are beliefs whose possession by an agent would indicate that something is amiss with him, that he is not functioning properly. But it is typically the case that when something that isn't knowledge appears to us to be knowledge, there is nothing about us that is not functioning properly. This is a core truth of fallibilism. Even when our wherewithal for knowledge is in apple-pie order, we will sometimes be let down with its mere appearance. It is true that illusions are a kind of false appearance; but it seems wrong to say that every false appearance is an illusion. "Illusion" suggests an aberration, something out of the ordinary, or something that isn't working quite as it should. The truth of fallibilism defeats any suggestion of the intrinsic illusoriness of false belief.

A more plausible-seeming candidate for inherent illusoriness is our disposition to experience ourselves as CC-knowers. As we saw, in achieving our CR-successes we often see ourselves as trying to execute CC-strategies and succeeding. That we see ourselves in this way is connected to the fact that what the word "know" means for us is pretty close to the CC-conception of it. Accordingly

Proposition 6.2h
MISCONCEIVING KNOWLEDGE: If the CR-model is right, what knowledge actually is is not what the CC-model conceives it to be. In matters mediate, we are often CC-experiencers but CR-knowers.


It is necessary to guard against a possible confusion. A chapter ago, I spoke approvingly of satisficing with regard to the strictness of our cognitive targets and the rigour of the standards required for their attainment. But our present claim appears to say the opposite. It says that we set our cap for knowledge yet often unwittingly settle for belief; that is to say, we strive to maximize epistemic payoff and unbeknownst settle for a lesser outcome. True, this is what that proposition says, but it does not contradict what the earlier chapter says. Putting the two together, we have the following: On those occasions when an individual agent aims for knowledge rather than belief, he does not by and large encumber himself with the necessity of meeting the standards of validity or inductive strength. Taken together, we have it that knowledge is typically acquired in the absence of considerations that either validate it deductively or lend it strong inductive support.

There is a moral to be drawn from this. As observed in chapter three, it is that we are philosophically ill-served if we deck out the concept of knowledge in such a way that knowledge of α is impossible except in the face of considerations which, in the logician's technical sense, validate it or lend it strong inductive support. There is nothing to be gained from conceptualizing knowledge in this way -- and lots to be lost. Doing so would cost us the Cognitive Abundance Thesis, and it would require us to concede that our successes with the Enough Already Principle have virtually nothing to do with our successes as gatherers of knowledge.

The word “satisfaction” does double-duty in a way that encompasses this contrast. So we should keep it in mind that:

Proposition 6.2i THE AMBIGUITY OF SATISFACTION: In its psychological sense, an agent’s cognitive target is satisfied only when he is in some requisite state of belief. In its epistemic sense, that same target is satisfied only when he is in some requisite state of knowledge.

Proposition 6.2j BLURRING THE DISTINCTION PHENOMENOLOGICALLY: For these large ranges of cases, psychological satisfaction is experienced in the first person as epistemic satisfaction.


A logic of error should have something to say about this. Perhaps the fundamental thing for it to take note of is that

Proposition 6.2k THE PHENOMENOLOGICAL CONSTRAINT: The inapparency of error flows in part from the phenomenological constitution of the human reasoner in the first person here and now.

Proposition 6.2l THE CONCEALEDNESS CONSTRAINT: The inapparency of error flows in part from the fact that in the first person here and now belief-states are experienced as knowledge-states. When a belief-state is not a knowledge-state, belief conceals this.



What are we now to make of the No-Escape Thesis? Is it anything more than a vivid expression of a commonplace about human experience? Experience is subjective intentionality, the attempt to get from in-here to out-there. We have a standing interest in knowing whether these penetrations have been achieved. Since experience is subjective intentionality, we experience ourselves in ways that suggest that breaking through to the real is a matter of course. What No-Escape reminds us of is that our experience of things as realizations of the quest for reality is itself subjective. So there is a problem and, with it, a founding impetus for epistemology. In its various forms, scepticism makes the point that in our quest to know whether our experiences are epistemically revealing, we are subject to this same question. If there is a problem about the epistemic upshot of our experiencing ourselves as in a state of knowledge about α, there is the same problem about the security of our believing that our experience of α was epistemically successful.


I am not of a mind to make light of scepticism. But this is not my focus here. The challenge is not scepticism. It is fallibilism. Fallibilism is compatible with scepticism, but that is not what it is in business to say. Its claim rather is that sometimes things go badly when cognitive mechanisms are going well. Fallibilism doesn’t foreclose the possibility of discovering the conditions under which these things come to pass. But it carries the admonition that any progress in this regard is itself fallibilistically conditioned. This takes an especially pointful form in our recurring question of why we should repose any more confidence in our error-detection capacities than in our error-avoiding capacities. This is a question with two inequivalent motivations. One is scepticism. The other is fallibilism. It is fallibilism that counts here.


6.3 Belief revision

We have not yet got to the bottom of detection and correction. The detection-and-correction of error must be distinguished from belief-revision, of which it is a special case. It is here that the founding problem of AI may achieve a certain resonance. This is the frame problem of McCarthy and Hayes, originators of the situation calculus.17

The frame problem first arose in the theory of automated planning. The theory postulates an agent and attributes a goal to it. A goal is any statement of a kind that could be made true by the appropriate kind of action. Planning is a search for actions or sequences of actions that might achieve a given goal. In the STRIPS planner of Fikes and Nilsson (1971), planning has three elements. The first component is a given state of the world, called for convenience an “initial” state. The second component is a set of operators taking world states to world states. The third element is the planner’s goal. The planner’s task is to find the operators that give the transitions from the initial state to the state that fulfills the planner’s goal. In the original theory, planners execute something quite similar to a logic of automated theorem-proving. It was one of those many cases in which the Can Do principle directed the theorist to an already successful theory, in hopes that the new theory could be an adaptation of it, if not outright absorbed by it.
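The three elements just described can be rendered in a toy sketch. What follows is not the STRIPS system of Fikes and Nilsson, but a radically simplified illustration of its scheme: states as sets of facts, operators with preconditions, add-lists and delete-lists, and search for an operator sequence from initial state to goal. All names (the `Operator` class, the tire-changing facts) are my own inventions.

```python
# Illustrative sketch of the STRIPS scheme described above, not the
# original system. States are sets of facts; operators map states to
# states; planning is a search for a transition sequence to the goal.
from collections import deque

class Operator:
    """An operator applies when its preconditions hold; it then deletes
    some facts and adds others."""
    def __init__(self, name, pre, add, delete):
        self.name = name
        self.pre = frozenset(pre)
        self.add = frozenset(add)
        self.delete = frozenset(delete)

    def applicable(self, state):
        return self.pre <= state

    def apply(self, state):
        return (state - self.delete) | self.add

def plan(initial, goal, operators):
    """Breadth-first search for a sequence of operators whose final
    state satisfies the goal."""
    start, goal = frozenset(initial), frozenset(goal)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for op in operators:
            if op.applicable(state):
                nxt = op.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [op.name]))
    return None  # no plan reaches the goal

# The tire-changing example of the next paragraph, much simplified.
ops = [
    Operator("fetch-spare", {"flat-on-car"}, {"spare-at-hand"}, set()),
    Operator("swap-tire", {"flat-on-car", "spare-at-hand"},
             {"spare-on-car"}, {"flat-on-car"}),
]
print(plan({"flat-on-car"}, {"spare-on-car"}, ops))
# → ['fetch-spare', 'swap-tire']
```

Note that the delete-list is exactly where the frame problem bites: every fact an operator does not mention is tacitly assumed inert, which is the assumption the frame axioms were meant to make explicit.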

The theory of planning encumbers the agent with some troubling goal-facilitation tasks. In the process of mapping an initial state to a goal-realized state, a planner must factor in changes in world states as the plan unfolds, and must take due note of the things that do not change. For example, it is intuitively clear that formulating a good plan for changing the flat tire on the car down the road presupposes that the car will still be there and that you will not be felled by a stroke. Your tire-changing plan must make predictions as to whether changes that would wreck the plan will actually occur. In the original theory, these predictions are formulated as “frame axioms” or assumptions of inertia. In our present example, it is a frame axiom that the car won’t have been moved, and another frame axiom that the tire-changer won’t have been knocked out of action by a stroke.

17 McCarthy and Hayes (1969).
Of course, even in situations as commonplace as tire-changing, the number of changes that could occur is extraordinarily large. One of the problems created by this is that it is not in any sense practically feasible to identify all the possible changes whose negations could be written up as frame axioms. Even if we could somehow get round this problem, the ensuing axioms would be very large in number and massively complex. This leads to a further difficulty. Suppose we inputted these axioms to an automated planner and instructed it to predict the outcomes of the plan deductively. In so doing, taking adequate note of what doesn’t change would so deplete the resources of the system as to preclude its working out the other aspects of the plan. In other words, the burden imposed on the system by working through the frame axioms exhausts its planning resources. Accordingly,

Proposition 6.3a THE FRAME PROBLEM: The frame problem in AI is the problem of how to make reasoning about nonchange efficient.


Although it arose as a problem in AI engineering, it is clear that the frame problem is also a challenge for epistemology. And since human beings are successful planners, it must follow that they have a solution to the frame problem. Is it possible to identify this solution and to study its essential characteristics? We said that the original AI-planner was an idealized executor of the routines of automated theorem proving. It was a deductive system of a certain kind. It was not long before AI theorists and others were drawn to the view that frame difficulties were occasioned by the deductive character of automated planners. Since humans are successful planners, it was concluded that they must employ non-deductive modes of reasoning. Starting in the early 1970s, the flight from this deductive orientation began in earnest. AI researchers began to take seriously the idea that, in the absence of indications to the contrary, it was reasonable to assume that things remain as they are unchanged.18 Reasoning of this kind is called “temporal projection”, and is an early form of defeasible and nonmonotonic reasoning.



Proposition 6.3b PRINCIPLE OF TEMPORAL PROJECTION (RAW FORM): Believing that α is true at t is defeasible reason to believe that it is true at t* (t < t*).


Proposition 6.3b is very rough, needless to say. Even in this crude form, it carries implicit constraints on the range of α. For it is certainly not even defeasibly correct that tomorrow will be the same day as today, or that anyone living now will be alive in the year 2150. It is widely accepted that, as of now, providing necessary and sufficient conditions for the temporal projectibility of propositions is an unsolved problem.19 This notwithstanding, it is easy to see the kind of conjectures that are intended for planning theory.

18 See, for example, Sandewall (1972), McDermott (1982) and McCarthy (1986).

19 Equally so for the case of projectible predicates. (Goodman (1983) and Stalker (1994)).
It is widely accepted by “anti-deductivists” in the theory of planning that at least part of the reason why humans are successful planners is that their planning is defeasible rather than deductive, and that automated planning systems will not succeed until their deductive engines are replaced with defeasibilistic engines. We will revisit these conjectures in the chapter to follow. For the present we will reflect briefly on how closely linked, or otherwise, the frame problem is to the problem of error detection and correction.
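The defeasible character of temporal projection can be put in a small sketch. This is merely one way of rendering the raw principle of Proposition 6.3b, on the assumption that defeaters are simply recorded as such; the function name and the tire-changing facts are my own.

```python
# A toy rendering of raw temporal projection: belief in α at t is
# defeasible reason to believe α at a later t*, unless some recorded
# defeater blocks the projection. Purely illustrative.
def project(beliefs_at_t, defeaters_at_t_star):
    """Carry each belief forward in time unless a defeater blocks it."""
    return {a for a in beliefs_at_t if a not in defeaters_at_t_star}

beliefs = {"car is down the road", "tire is flat"}

# New information defeats one projection: the tire has been changed.
later = project(beliefs, {"tire is flat"})
print(later)  # → {'car is down the road'}
```

The asymmetry discussed below already shows here: the rule only carries beliefs forward; nothing in it licenses inferring from a defeated projection that the belief was false at the earlier time.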


One of the driving forces of belief revision is change through time, change that makes what was once true now false. It requires that the once true and now false be replaced by what is now true, by a contrary of the original if available or by its contradictory otherwise. Error correction is different. It requires that what was then false be replaced by what is true, both then and now. It is a nice question as to how this contrast presents itself to the cognitive agent in the first person here and now. On the telling of the section above, it is a matter of now holding a belief believed to be incompatible with a belief previously held. But it is clear that this is insufficient to capture the distinction at hand. Believing now that α and believing that α is incompatible with a prior β leaves it undetermined as to whether the prior β was true rather than false. If true, then your present subscription to α and the concomitant dropping of β is a matter of providing a consistent update of your belief-set under conditions of worldly change. If β is false, then your present subscription to α and the concomitant dropping of β is a matter of your correcting the error that β was all along. So, again, one’s present subscription to an α that one sees to be incompatible with a prior β is not, just so, error-correction. Something more is required. But what?

We do not have a well worked-out account of how this distinction plays out in actual practice. But there is little doubt that our facility in discerning it in the cut-and-thrust of real life is closely tied to our adeptness in tensifying and indexing our semantic assessments. Clear cases can be cited, but the devil is in the details, needless to say. What Quine calls eternal sentences are perfect for the first half of the contrast. If, for example, you now see Disjunctive Syllogism as invalid, you see that it was invalid all along. On the other hand, what Quine calls occasion sentences are perfect for the contrast’s second half. If it is false that you have a headache today, it is far from ensured that you were headache-free yesterday.

Does the principle of temporal projection provide us with further guidance? It does not. It provides that α’s truth at t is reason to assume its truth at t*, unless we know better. “Knowing better” here is knowing or having convincing reason to believe that α at t* is not true. But the determination of α’s falsity at t* leaves entirely undisturbed α’s truth at t. This being so, the principle of temporal projection gives us systematically unsatisfactory guidance about error detection. It is that α’s falsity now is never adequate reason to suppose that α was also false then. True, we could always postulate a “backwards”-looking temporal projection rule, which would tell us, for example, that Ozzie the ocelot’s not being two-footed now is indeed reason to suppose that neither was Ozzie two-footed then. Like the original principle, the backwards-looking principle works plausibly for certain specific cases. The trouble with both principles is that they are not available to us in convincingly general formulations. We will come back to temporal projection when we discuss default logic, in the chapter to follow.



Still, fundamental to the distinction between belief-update and error-correction is the contrast between change and non-change. When you replace your old true belief that the cat is on the mat with your currently true belief that the cat is on the chair, two changes have occurred, one in nature and one in you. The change in nature is the location of the cat, there then and not now. The change in you is a change in belief, the displacement of what was true then by what is true now. Fixing errors is different. Beliefs are changed, but nature isn’t. If your belief was that the cat is on the chair and you see now that the cat wasn’t on the chair after all, there is no change of location corresponding to your change of belief. Memory has a constructive role to play here. Whether something has changed in nature depends for you on how memory operates. In the case of update, you remember two things. One is where the cat was and the other is what you then believed about it. This is supplemented by new information. If the information leaves the memory standing, it provides the occasion for replacing the old belief with a new one. But if the new information prompts the correction of an error, it is essential that the old memory be retired. New information can be memory-preserving or memory-cancelling.


6.4 Anti-formulas: an interlude

The going accounts of belief revision are linguistically oriented. One might say that they exhibit a bias for the sentential. They inherit this from the favouritism that mainstream logic extends to language, as well as from a preference for the CC-orientation to knowledge. A person’s total beliefs at a time are represented by the set of the sentences that express their propositional contents. Seen this way, the motive force of belief-revision is a matter of sentential inconsistency. Any new information that is consistent with what you now believe is added to the set of sentences representing your beliefs. It is said that new information that contradicts those beliefs requires adjustments that restore consistency. If your new information is that β, and if β is incompatible with α, then β is represented as a sentence entailing the negation of α. Consistency restoration is then a matter of dropping α or refusing β. Dropping and refusing are accomplished by application of the negation operator.
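The sentential picture just described can be sketched in a few lines. This is the crude scheme the text is about to criticize, not a serious revision operator: beliefs are sentences, and revising by β simply deletes whatever β is recorded as contradicting, then adds β. The function name and the contradiction table are my own devices.

```python
# A sketch of the crude sentential picture of belief revision: any new
# sentence consistent with the belief set is added; a contradicting
# sentence forces deletions that restore consistency. Illustrative only.
def revise(belief_set, new_info, contradicts):
    """contradicts maps a sentence to the set of sentences whose
    negations it entails; those sentences are dropped on revision."""
    survivors = belief_set - contradicts.get(new_info, set())
    return survivors | {new_info}

beliefs = {"the cat is on the mat", "the mat is red"}
conflicts = {"the cat is on the chair": {"the cat is on the mat"}}

print(sorted(revise(beliefs, "the cat is on the chair", conflicts)))
# → ['the cat is on the chair', 'the mat is red']
```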

It is, of course, a crude picture, bearing little relation to what actually happens in the belief-boxes of reasoners in real life. Consider the case in which a new β contradicts an old α, and the reasoner restores consistency by dropping α. In the orthodox accounts, β is added and α deleted. Deletion is a kind of expulsion; α is evicted from the reasoner’s set of beliefs. There is a joke (of sorts) that makes the rounds of belief-revision theory. It is “Where, after eviction, does an abandoned belief go?”


Eviction is not a psychologically realistic metaphor for what happens to a belief foregone in favour of a contradicting newcomer. Better that we call back into play the notion of erasure.20 Earlier I said that dialectical erasure is the extinction of your thinking you know under press of a challenge as to whether you do in fact know it. What is this “extinction”?

20 To the best of our knowledge, the term “erasure” was introduced in Sperber and Wilson (1986). Theirs too is a linguistic orientation. Erasure operates on contexts, and contexts are sets of sentences. The erasure of a sentence α from a context C produces a new context C+ from which α is absent, and C+’s further membership is adjusted to take that absence appropriately into account. In which case, other sentences are added or removed.
If we wanted to give erasure its psychological due, we would attend to its destructive aspect. We would see it in causal terms. When new information comes in, some of your former beliefs are extinguished; they are eradicated. The new information not only contradicts the sentence expressing the old belief. It destroys the old belief; it wipes it out. A problem for theories that seek mathematical models of such phenomena is whether the operation of belief-extinction can be captured by the theory’s formalism. Sentence-deletion and sentence-negation are the wrong way of thinking of this. The right way is something that I have been working on recently with Dov Gabbay and Odinaldo Rodrigues.21 The basic idea is that of the anti-formula. Anti-formulas do not function as sentential negations. The anti-formula *α does not negate the formula α. It annihilates it.
The anti-formula perspective offers promise beyond the reaches of belief revision occasioned by the contradicting character of new information. It also extends to memory, especially the phenomenon of memory-decay. It is an approach that favours a CR-orientation. Our papers on anti-formulas were written in a swashbuckling way. What they aimed for was technical virtuosity rather than convincingly worked-out empirical applications. But, as now appears, the quite general question of changes of mind is something that could benefit from a modelling technology of this sort. I am not prepared to say that, as we have it now, our formal apparatus is up to these challenges. But it is something to think about and to work on.
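The intended contrast between negating and annihilating can be shown in miniature. What follows is not the Gabbay-Woods-Rodrigues formalism, only a sketch of the distinction under my own naming: adding a negation leaves α in play alongside ¬α, so that consistency maintenance must adjudicate between them, whereas the anti-formula operation simply wipes α out, leaving nothing to adjudicate.

```python
# A contrast sketch, not the actual anti-formula calculus. Negation
# keeps the old formula on the books; annihilation destroys it.
def add_negation(beliefs, a):
    # α survives alongside its negation; the set is now inconsistent
    # and something else must decide which member to keep.
    return beliefs | {f"not({a})"}

def annihilate(beliefs, a):
    # The anti-formula reading: α is eradicated, leaving no trace.
    return beliefs - {a}

b = {"alpha", "beta"}
print(sorted(add_negation(b, "alpha")))  # → ['alpha', 'beta', 'not(alpha)']
print(sorted(annihilate(b, "alpha")))    # → ['beta']
```

The destructive, memory-like character of the second operation is the point: nothing in the resulting set records that α was ever believed.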


6.5 Equilibria

We have an evolving picture of how error crops up in the lives of individual agents. It is a picture in which belief plays the leading role. Belief is a busy concept. Belief is an adaptive response. It is a condition on knowledge and it is indispensable for error. Yet it obliterates the distinction phenomenologically.

There is a kind of equilibrium which the human cognitive system strives to keep itself in. Viewed from the inside out, individuals attain their cognitive targets by putting themselves in the requisite epistemic states. Since human beings are in dynamic tension with constant changes in informational flow-through, and alterations of their own local interests, the drive for knowledge requires the individual to adjust his epistemic states to such variations. The successful cognizer is someone who manages to keep his balance in the face of these unceasing fluctuations. Seen from the first person point of view, an agent maintains his epistemic balance, achieves his cognitive equilibrium through time, by being put in the right sequences of classes of epistemic states with requisite efficiency. From this perspective, nothing less will quell the drive to know. Whatever else may be said of the content and organization of human cognition,

Proposition 6.5 PERSONAL IDENTITY: Persistent and systematic cognitive disequilibrium is fatal to the integrity of the human personality; so equilibria are a deep condition on the stability of selfhood itself. They are constitutive of personal identity.


Perhaps the personal identity claim will strike some people as too florid by half, as cheapside metaphysical bombast. Why, in any event, should a logician of error have the slightest interest in personal identity? This is not the place for excursions into matters of no particular relevance to our project here. But the personal identity claim is not irrelevant. It hits an important bull’s eye. Although it is contested in various places by cognitive scientists, it remains the dominant view that personal identity is a function of self-awareness, of the kind of consciousness that is called “intentional” or object-directed.22 If this is right, one’s abiding sense of self is as of the possessor of intentional consciousness. To be conscious is to be aware of things, and to be aware that it is oneself who is aware of those things.

21 Gabbay et al. (2002, 2004).

Judged from the third-person perspective, what the personal identity claim comes to is that awareness is inherently doxastic, that the way to be aware of things is to have beliefs about them. Since such awareness occurs quite early in those that have it (certainly in humans before the acquisition of language), it is necessary to release the concept of belief from its philosophical thraldom to linguistic formulation, but this needn’t be done in an ad hoc way. It must, in any event, be done to account for the phenomena in adult humans of sublinguistic and other forms of down-below cognition, never mind what provisions we think it advisable to ascribe to the very young. So, then, if being yourself is a matter of the doxastification of awareness, then being a healthy self is a matter of being in doxastic equilibrium.

Rescher sees our equilibria as the absence of cognitive disorientation. He says that “the need for cognitive orientation is as pressing a human need as that for food.”23 This is right, but it is right with a difference. Leaving the need for food unmet will cost you your life. Leaving the need for belief unmet will cost you your self.


6.6 How are equilibria possible?

Most of our beliefs have no shelf-lives to speak of. Their careers are fleeting. It would not be too much to say that most of our beliefs aren’t very stable. They come and go with a rapidity that defies stability. They suffer the opposite of what psychologists call “graceful degradation.” How, then, are doxastic equilibria possible? Part of the answer is that some of our beliefs do have long and uninterrupted careers, and that in the requisite combinations they create the frameworks within which the short-careered come and go. Also important, as we saw in the previous section but one, is the transience of rapidly turning-over beliefs experienced as the transience of nature, taken as reflections of how the world actually is. Since it is desirable that one’s beliefs keep pace with what goes on in the world, the transience of short-careered belief is simply a reflection of the dynamisms of nature. Considered as dynamic totalities, equilibria are stable by definition. Considered one by one, long-careered beliefs have the stability of persistence, and the short-careered have the stability provided by intentionality. If, with respect to the rapid pace of blinkingly-brief belief, all you experienced were the tumbling chaos of hell-bent-for-leather doxastic generation and decay, it is hard to see how you could long survive. Equally, if in their intentional focus you found yourself construing all this chaos as the dynamic unfolding of nature, you would be unable to find your way in nature except for your capacity to intentionalize the tumble as changes in a nature anchored requisitely in an underlying permanence.




22 Brentano (1874/1995), Prior (1971), Chisholm (1981), Jacquette (1990-91).

23 Rescher (2007), p. 44.

Equilibria are doxastic coherencies, as fragile and fleeting as may be. But when they exist they have robustness enough to stabilize the self. It is nothing but plausible to say that awareness craves coherence, that an untroubled self-awareness is one that on the whole keeps its doxastic balance, and regains it affordably and quickly when it is lost.


Why should an error-theorist care about this? She should care because the beings who commit errors are beings of whom the personal identity claim is true. We keep on making the point that a logician would be asking for trouble if, in fashioning his account of errors of reasoning, he failed to take proper account of what it was like to be a reasoner. The burden of those pages was to show that the human reasoner is the embodiment of a certain view of cognitive agency. A further requirement is to show how this conception of cognitive agency bears on the phenomena of error. It bears as follows. Although belief is both the occasion and concealer of error, the deferment of belief (of k-states) is not possible for human reasoners as general policy.


All this throws further light on the irrelevance to our project of the thesis of reflective equilibrium, which is an attribute of communities, indeed of species. In our telling of it so far, doxastic equilibria are properties of individuals, but the notion extends agreeably to communities as well. Given that we and our conspecifics have been nudged along by common evolutionary forces, together with the other commonalities of human experience, doxastic equilibria can reasonably be proposed as conditions on collective identities. There is no reason to exclude from the reach of doxastic equilibria all those consciously held beliefs about the tenability of our reasoning processes. To that extent, it may well be a condition on our collective integrity that we share beliefs about what reasoning strategies to employ. But, again, if normative legitimacy is implicated in these arrangements, it is because the reasoning procedures we think might by and large be right are by and large right, not because our common embrace of them helps make for our communal equilibria.


6.7 Popping the bubble?

If error arises from thinking we know things we don’t or from believing to be true things that aren’t, and if the states we are in when we think we know or believe to be true are states which give the appearance of knowledge and truth while at the same time disguising their absence; if, in other words, being in error-disguising k-states is a large part of the story of how our errors come about, would we not be better advised to stop being in k-states? Would it not be better never to experience oneself as knowing that α, or as seized by α’s truth?


Bayesian epistemologists, and others too, have a view of belief which emphasizes its degreedness. Beliefs are likened to subjective probabilities, which, in turn, are mapped many-one to real numbers in the unit interval. Not only in the theoretical environment of the probability calculus but in the context of felt experience, there is something to be said for the idea that some things are believed more or less strongly than others, and that some people hold a common belief more or less strongly than others. There are loose linguistic indications of the strength or weakness of an agent’s belief, ranging from “I’m absolutely sure” on the strong side to “Well, I’m somewhat inclined to think so” on the weak side. No one thinks that these are especially reliable markers, tied as they are to individual traits of a person’s communicational exuberance or lack of it. Bayesians equate the strength of an agent’s belief with the size of the biggest bet he would take against it. These are highly idealized measures of belief-strength, imposing on Bayesian agents the further features we have already spoken about: Bayesian believers have highly structured preferences. They pay no heed to the cost of expected-utility computations. They are logically omniscient: every logical truth is believed to the maximal degree; that is, its subjective probability is 1. Not everyone will think that Bayesian idealization is the way to go with belief. But there is something entirely right about the idea that the strength of a belief has some reflection in the believer’s disposition to act on it; or, as Peirce observed, “the establishment in our nature of a rule of action.” (CP, 5.397).
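The betting measure gestured at above can be made concrete in a small sketch. On one standard rendering, which I adopt here as an assumption rather than as the Bayesian doctrine itself, a degree of belief p in α makes a bet acceptable just when its expected value is non-negative; the function name is my own.

```python
# A sketch of belief strength as betting disposition: at degree of
# belief p, risking `stake` units to win `winnings` units if α is true
# is acceptable just when the expected value is non-negative.
def acceptable_bet(p, stake, winnings):
    return p * winnings - (1 - p) * stake >= 0

# At degree 0.9 the agent will risk up to 9 units to win 1...
print(acceptable_bet(0.9, 9, 1))   # → True
# ...but will refuse to risk 10 for the same prize.
print(acceptable_bet(0.9, 10, 1))  # → False
```

The idealizations the text mentions show up immediately: the sketch presumes exact numerical degrees and costless expected-value computation, which is precisely what real reasoners do not have.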


Perhaps the degreed character of belief will give us a way out of error-occasioning difficulties. If having k-states is a large part of the problem, why not, as we say, stop being in k-states? Why not substitute for k-states beliefs of weaker measure? Was this not the advice of Groucho Marx’s doctor to the famous funny man? Groucho consulted his doctor about a sharp pain in his shoulder. “When I move my arm like this, it hurts like the very devil”, complained Groucho. “My advice”, said the doctor, “is don’t move your arm like that. That will be $500, please.” Would not the same advice be helpful for reasoning agents?



Proposition 6.7a GROUCHO’S DOCTOR’S PRESCRIPTION: In matters of what you seek to know, stop being so sure of yourself.

Proposition 6.7a recommends that we stop having beliefs whose strength occasions the phenomenological inapparency of the distinction between having the belief that α and knowing that α. Since beliefs come in degrees, this is something that we might consider.



Case one: Consider the belief condition on knowledge, a condition held in common by the CC- and CR-models. If the belief condition with respect to your knowledge of α requires you to think that you know that α, that is, to be in a k-state with respect to α, then any degree of belief weaker than this fails to fulfill the belief condition on knowledge. So to the extent that we comport with Groucho’s doctor’s prescription with respect to α, knowledge of α is out of the question. The course of case one is the course of a rather hefty scepticism about knowledge. It provides that lowering one’s sights with respect to α wipes out the possibility of our knowing it.


Case two: The belief condition for knowing that α does not require that you think that you know that α, that is, that you be in a k-state with respect to α. All that’s required is some degree of pro-attitude towards α above a certain point. Let us stipulate n as that point. Then any belief satisfying the belief condition on knowledge will be of strength n + m, for some non-zero m. But, given the reasoning of case one, no belief of strength n + m constitutes a k-state. Accordingly, case two provides that there is nothing of which the following are both true: first, that you know it; second, that you think that you know it. Thus, the course of case two is the course of what we might call Knowledge-Lite, in which knowing something is never something you experience yourself as doing. Given the outerlyingness of truth, there are lots of philosophers who in their theoretical moments are entirely reconciled to the prospects of Knowledge-Lite. Why would they not welcome its provenance in case two?




Nothing in the empirical record lends the slightest support to the suggestion that preventing ourselves from being in k-states is something that lies within our capacity to bring off on a scale that would qualify it as a general policy. Possibly we might train up for it in some Pyrrhonian monastery in the wilds of a distant land, re-programming ourselves to think small. Perhaps one day before long the same adjustments might be available to us surgically. There might be package specials (a tummy tuck and a k-state suppression procedure) for the price of the tuck alone.


Still, the idea that suppressing our experienced knowing is not incompatible with knowing the very things we don’t think we do (the course of Knowledge-Lite) raises an obvious question. It is the question of the degree to which, if any, the project of Knowledge-Lite spares us the trouble of being in states of mind that disguise the presence of error. The answer is “To no degree that matters.” It may be that being in a state of belief of degree n + m with respect to α does not create the felt experience of being in a state of knowledge about it. But it remains the case that if α is indeed an error, believing that α with a strength sufficient to fulfill the belief condition on knowledge will disguise that fact. We may even allow that where α is believed to degree n + m, ¬α might concurrently be believed to degree 1 − (n + m). This might happen when one is of two minds about α, when one is able to see the pros and the cons. But, again, if believing that α to degree n + m is sufficient belief for knowledge, then if error is present it will be inapparent to the believer.


As we have it now, n is the arbitrary point above which belief is strong enough to fulfill the belief condition on knowledge. But how high is this? Some further guidance is available to us in the idea of reason for action. Intuitively, a belief of degree n or higher is a belief of sufficient strength to serve as a reason for acting on it. The “on it” qualification is important. Acting on a proposition (a belief, a hypothesis, an assumption) is treating it as true or defeasibly true and re-organizing one’s behaviour accordingly. Thus if you believe to a degree that satisfies the belief condition on knowledge that Barbara’s birthday is on the 3rd, you will be moved now to buy a present and a card.
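By way of a toy illustration only, the threshold picture can be put in a few lines of code. Everything below is a stand-in: the numerical value chosen for n, the function names and the sample degrees are illustrative conveniences, not anything the argument fixes.

```python
# Toy rendering of the threshold picture. The value of N, the function
# names and the sample degrees are hypothetical illustrations only.

N = 0.9  # the arbitrary point n above which belief is strong enough


def satisfies_belief_condition(degree: float) -> bool:
    """A belief of degree n + m (for non-zero m) meets the condition."""
    return degree >= N


def act_on(proposition: str, degree: float) -> str:
    """Acting on a proposition: treating it as (defeasibly) true and
    re-organizing one's behaviour accordingly."""
    if satisfies_belief_condition(degree):
        return f"treat '{proposition}' as true and act"
    return "withhold action"


# Believing to degree n + m that Barbara's birthday is on the 3rd
# moves you to buy a present and a card.
print(act_on("Barbara's birthday is on the 3rd", 0.95))

# Being of two minds: alpha held at degree n + m, not-alpha at the
# complementary degree 1 - (n + m), which stays below the threshold.
print(act_on("Barbara's birthday is NOT on the 3rd", 1 - 0.95))
```

The sketch makes vivid the point of the surrounding paragraphs: on either side of the single cut-off, belief both fulfills the condition on knowledge and licenses action, or does neither.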


There is an obvious distinction between thinking that something gives one reason to act on it and its being the case that it gives a reason to act on it. If being in a belief-state sufficiently strong to fulfill the belief condition on knowing that α is also sufficient for thinking there is reason to act on α, then thinking so will disguise the absence of that reason in those cases in which it is indeed absent. So again weak belief does not solve the concealment of error problem.



Proposition 6.7b
THE FUTILITY OF THE ADVICE THAT GROUCHO GOT: The expungement of k-states is not presently possible or foreseeably likely. Even if it were, weak belief with respect to both α’s truth and reasons for acting on it does not erase the phenomenological inapparency of error.



Before bringing this section to a close, it remains to say a brief word about acceptance or its more expressly linguistic variant, assent. It is sometimes held that assenting to and dissenting from propositions is a way of putting them in and out of play without the burdens of having to believe or disbelieve them. Equally, reasoning would be a matter of assenting to or dissenting from sentences based on whether they are linked by the appropriate relations. For example, anyone who assented to α and ⌜If α then β⌝ would be right to assent to β provided his prior assents remain concurrently in force. A fundamental fact about assent is that, unlike belief, it is not subject to the usual run of belief-paradoxes. “I believe that α, although I don’t believe it” is inconsistent, but “I assent to α, even though I don’t believe it” is not.
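The idea that reasoning with assents is a matter of tracking which sentences are in play can be sketched in a few lines. The encoding below (conditionals as ("if", antecedent, consequent) tuples) is my own illustrative convenience, not a system drawn from the assertion-logic literature.

```python
# Illustrative sketch: a stock of assents closed under modus ponens.
# The tuple encoding of conditionals is a hypothetical convenience.

def close_under_modus_ponens(assents):
    """While prior assents remain in force, anyone who assents to
    alpha and to 'If alpha then beta' should assent to beta too."""
    assents = set(assents)
    changed = True
    while changed:
        changed = False
        for item in list(assents):
            # A conditional is ("if", antecedent, consequent).
            if (isinstance(item, tuple) and item[0] == "if"
                    and item[1] in assents and item[2] not in assents):
                assents.add(item[2])
                changed = True
    return assents


assented = {"alpha", ("if", "alpha", "beta"), ("if", "beta", "gamma")}
print(close_under_modus_ponens(assented))  # "beta" and "gamma" are added
```

Note that nothing in the sketch requires the agent to believe any of the sentences assented to; that is precisely the sense in which assent puts propositions in play without the burdens of belief.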


A good many systems exist in which assent, dissent and other speech-act types play load-bearing roles. Logics of assertion [24] and the many systems of dialogue logic [25] stand out in this regard. One thing to like about these approaches is that they afford no formal occasion to recognize the private states of agents. They are logics without k-states. They are logics whose concept of agency is stripped of phenomenological aspect. Some of these systems are highly developed mathematically. Some of them simulate rather well the behaviour of real-life agents. They say their piece about reasoning (both good and bad) without the encumbrance of their agents’ dark interiors. Clearly, then, logical theories of the reasoning of doxastically neutered agents are possible; and assent is belief-like without the necessity of being the real thing. So why don’t we drop belief? Why don’t we de-psychologize our logic?


The short answer is that we want a logic that is suitably naturalized, one that takes
official note of how beings like us are actually constituted. This gave rise to the warts
-
and
-
all principle. In generating an account of
how
beings l
ike us make errors of
reasoning, we admit ourselves to theory, warts and all. One of the warts is that

we are not
doxastically neutred
.


A related consideration is that when it comes to how the human individual actually operates, his own assent and dissent behaviours are very often driven by the extent to which they accurately reflect what he believes and doesn’t believe. Even in those cases in which a person assents to something he disbelieves (or fails to understand), the assent is un acte gratuit unless attended by the belief, or the disposition to believe, that in the circumstances that was the right, or appropriate, or prudent thing to concede. When it comes to beings like us, there is no getting rid of belief. A naturalistic theory of errors of reasoning would do well not to forget it.


6.8 Shrinking violet fallibilism






If our present reflections are allowed to stand, there is a version of fallibilism that can’t be made to work for individual agents. For lack of a name, we propose to call it “shrinking violet fallibilism”. Shrinking violet fallibilism comes in two parts. Its first part it shares with all forms of fallibilism. It asserts, as usual, that our cognitive procedures are not only not error-proof, but are attended by non-trivial frequencies of error. Its second part separates it from the version that we have been promoting here. The second part asserts that, given the practical unavoidability of error, together with error’s concealedness, it is better not to believe the disclosures of our cognitive procedures, but rather to accept them with a tentativeness appropriate to their vulnerability. What this second condition provides is that cognitive agents stop having k-states, and substitute for these something like weak belief or acceptance. In a gentler (and more realistic) variation, what shrinking violet fallibilism requires is that cognitive agents stop acting on k-states, reserving their actions for those that are sanctioned by the requisite “acceptance”-states.

[24] See, for example, Rescher (1968), chapter 14.

[25] Barth and Krabbe (1982) is a technically impressive early work. Recent contributions of note are Rahman (---).
states.


It is historically interesting that the version of fallibilism to which the likes of Peirce and Rescher have been drawn is the shrinking violet variety. They were drawn to it in the course of reflecting on the methodological requirements for epistemically responsible science. In that very respect, the interest they evince in fallibilism proceeds from an interest in the behaviour of institutional agents, of the various disciplines and collective enterprises that science is made of. At a certain level of abstraction it is easy to see that science can manage very substantial levels of tentativeness, well beyond what an individual is built for. An individual marine biologist can go to bed each night crammed with k-states about the ecology of the Pacific Rim. But he will be well-advised to restrain himself utterly from rushing these convictions into print. In this he is able to accept the admonition of shrinking violet fallibilism. He is able to do it as a practicing biologist. He is able to restrain his publication behaviour in the requisite ways. But he is not capable of such restraint across the board, not even in matters relating to the ecology of the waters that ebb and flow over the beaches of his city.


On some tellings, Peirce is a strict shrinking violet fallibilist about science. So is Popper and, in some of his manifestations, Feyerabend. A certain support for such conservatism, for such risk-aversiveness, is that institutional entities don’t have psychologies, except in the metaphorical sense of some imagined relation of supervenience. They aren’t subject to the provisions of the CR-model of knowledge, except again metaphorically in some imagined relation of supervenience. And they aren’t in b-states experienced as k-states, except with the same proviso.


Even if he could stop himself from being in k-states or, failing that, even if he could sever the causal tie between such states and the actions they give rise to, and reserve all his actions for those sanctioned by acceptance-states or states of partial belief, the individual would place himself at a staggering disadvantage. He would lose range and timeliness, what with very large classes of actions now either forgone or left too late or both. In the grim and unforgiving economies in which the individual’s cognitive dynamic plays out, this is diffidence on a scale that would kill him. Shrinking violet fallibilism is not on for individuals. Something less battened-down is required.


6.9 A thought experiment



What would it be like never to have beliefs? By this we mean, what would it be like never to be in k-states? “Well”, some people will say, “it wouldn’t be all that bad. We would simply go through life with a Peircean lightness of touch”. But, as we saw, this is a way of proceeding that not even Peirce thought was uniformly available, especially in matters of “vital importance”. Here again the attractiveness of the popping hypothesis beckons. But it beckons to no avail. Not having k-experiences at all would in the most radical of ways be not having a clue, a disorientation so total as to defy the victim’s conceptualization of it. There are lots of things we know we don’t know, and more still that we think that we might not know. But knowing that there are lots of things that you don’t know is itself a k-state, and so is suspecting that you might not know. “I know my own mind”, as the saying has it, and it is not all wrong. It is the cornerstone of philosophical idealism and of much of the foundationalist impulse in classical epistemology. If we didn’t know our own mind there would be no mind.

Suppose, contrary to what Peirce allows, that it were always possible for us to avoid being in k-states with respect to vital affairs. Suppose, after putting yourself in the expert hands of a venturesome neurosurgeon, that you are able to suppress your k-experiences of the ordinary world in favour of first person b-experiences. So instead of experiencing yourself as knowing the person across the room is your wife, you experience yourself as merely believing that she is, and you act on the belief in much the same way you would had you known it. Now suppose that you are minded to kiss your wife warmly. Will you now proceed to kiss the woman across the room?

Suppose now that your accommodating neurosurgeon went further than he intended. Suppose he extinguished utterly your capacity for k-states. Then, again, you couldn’t know your own mind. You couldn’t even know that you’re pretty sure that the person across the room is your wife; you couldn’t know that you’re practically sure of it. In losing the capacity to know your own mind, you also lose the wherewithal for responding to the drive to know. You are alienated from your nature as an epistemic yearner. You become a sublimator of what St. Augustine calls “the eros of the mind”. Even in these surgically diminished circumstances, wanting to know couldn’t plausibly be replaced with wanting to have a belief, for then knowing what you want to know would be believing what you want to have a belief about.


It is necessary to flag a possible confusion. It is a confusion we first encountered in chapter 3, when we considered the suggestion that fallibilism commits a form of the Preface Paradox. Whatever your k-state may be, you are always free to report it as a belief-state. It is useful to call to mind the dialectical character of our social natures. We said before that when the report of a k-state is challenged, the human individual has a disposition to take a CC-position towards his own assertion. For how is such a challenge to be understood if not as the demand for a case to be made for the assertion that evinced it? It is one thing to accept the appropriateness of a challenge and quite another thing to make the case that answers it. There are lots of cases in which, having conceded the former, we’d prefer to delay the latter. So we rephrase and down-grade: “Well, that’s what I think, anyhow”.


What matters here is that talk is cheap. Re-reporting a k-state as a belief-state is not a way of making it not a k-state after all. It is not a way of transforming it into a mere belief-state. Our question is not one of how an agent may or may not report his states but rather of how it might come about that beings like us stop having k-states altogether.


There is a related point. When you are in a k-state you exhibit a kind of dialectical vulnerability. What this means is that you are disposed to be receptive to challenge. One way involves a disposition to re-report your k-state as a mere b-state. The other involves a disposition to acknowledge your fallibility: “Of course, I might be wrong”. Even so, as we remarked earlier, just as re-reporting a k-state as a b-state needn’t extinguish the former in favour of the latter, neither does the acknowledgement of one’s fallibility convert a k-state into a b-state. For someone in a k-state, the fallibilist concession “But I might be wrong” is not psychologically a hedge, whatever it might be dialectically.


6.10 Mental health





On reflection, it may strike us as odd that epistemologists, especially those of naturalistic bent, are little interested in exploring the connection between cognitive functioning and mental health. It is common knowledge that mental illness can occasion cognitive deficits. For example, our personal identity claim receives substantial support from what we know about schizophrenia:


Normal individuals perceive mainly task-relevant information. A schizophrenic individual, however, is hypothesized to perceive too little relevant, or too much irrelevant, information. [26]


There is little doubt that schizophrenic patients experience ‘persistent and systematic cognitive disequilibrium’ which in turn has serious implications for personal identity. Diminished identity is a recognized feature of the disorder:


Schizophrenia often involves a profound experience of one’s identity as diminished, which complicates adaptation to the demands of daily life (Lysaker and Hermans (2007), p. 129).


Schizophrenia is diagnosed on the basis of positive and negative symptoms. Although diagnostic criteria are fully laid out in the Diagnostic and Statistical Manual of Mental Disorders, it would be in order to say something about the positive symptoms of schizophrenia. They include: (1) thought disruption (disorganized and illogical thought); (2) delusions (the holding of false and bizarre beliefs); and (3) hallucinations (the perception of things that don’t exist, particularly hearing voices). In a CC-way of putting it, each of these symptoms represents a failure on the part of the schizophrenic individual to regulate his epistemic states, particularly belief. But the CR-idiom also comes trippingly to the tongue, as witness Langdon and Coltheart (1999):


[I]f patients cannot reflect on beliefs as representations of reality, then the distinction between subjectivity and objectivity collapses, leading to maintenance of delusions (p. 44).


Schizophrenic language has all the hallmarks of cognitive disequilibrium. The patient below has just been asked how he came to be living in a particular US city. His response displays the intrusion of irrelevant thoughts, as if he is incapable of damping down the ‘constant changes in informational flow-through’; he sees the tie and shell and is immediately compelled to introduce them into his verbal output:


Then I left San Francisco and moved to…where did you get that tie? It looks like it’s left over from the 1950s. I like the warm weather in San Diego. Is that a conch shell on your desk? Have you ever gone scuba diving? (Thomas (1997), p. 41).


[26] Hirt and Pithers (1991), p. 140. I’ve been guided in what remains of the present section by helpful correspondence with Louise Cummings.

In clanging or glossomania (a feature of formal thought disorder in schizophrenia) there is no convincing semblance of the patient being ‘in the right sequences of classes of epistemic states with requisite efficiency’ (glossomania is also found in mania). His thought process pursues meaning and sound associations that are cognitively unproductive and inefficient. For example, a subject is asked the colour of an object. It is salmon pink. The patient responds:


A fish swims. You call it a salmon. You cook it. You put it in a can. You open the can. You look at it in this colour. Looks like clay. Sounds like gray. Take you for a roll in the hay. Hay-day. Mayday. Help. I need help (Cohen (1978), p. 29).


Essential to successful cognition is the capacity to process relevant information and the related ability to ignore informational clutter. What these studies indicate is that impairments of these capacities may be pathological.


On the view of belief that we have been developing here, much the same may be said about the connection between doxastic equilibria and mental health, possibly even the integrity of selfhood itself. Perhaps it is over-dramatic and more than slightly purple to say that we are gluttons for belief, that belief is the oxygen of the soul, but it is evident that our doxastic equilibria are engineered unstoppably, that they cannot in the general case be deferred, and that they do not tolerate and are not penetrated by doxastic black-holes, by stretches of self-awareness devoid of all belief. We suggested earlier that even if we grant that errors are wrongs that flow from the ways in which we are constituted as cognitive agents, being constituted in these ways produces advantages which in aggregate outweigh the wrong done by our mistakes. We are now well-placed to judge that speculation favourably and to summarize as follows:


Proposition 6.10
THE DOMINANCE OF BELIEF AGAIN: Doxastic equilibria are the occasion of error and the source of its concealment. Yet doxastic equilibria are necessary for survival and selfhood. Belief is abundant, irresistible, cheap and indispensable.


6.11 Belief-determinism



From its early days, philosophers have been made nervous by the presence of causal factors in human affairs. Action-determinism is the doctrine that inasmuch as they arise from causal forces sufficient for their existence, human actions cannot be free. The very idea of action-determinism creates great consternation among philosophers and non-philosophers alike. Anyone teaching an introductory class in philosophy knows that there is no better way to reinvigorate flagging interest than to raise the issue of free action. There is a like problem with the causalization of knowledge and belief. We might call it belief-determinism. In action-determinism, you are made to have the desires or inclinations linked to the actions they in turn make happen. In belief-determinism, you are made to have the beliefs which, when true, are what knowledge is made of. Action-determinism threatens the presumption of free action. Belief-determinism threatens the presumption of free intellection. Almost everyone has something to say about the threat of action-determinism. By comparison, almost no one has anything to say about the threat of belief-determinism. [27] On the face of it, this is perplexing. [28]



There are two different ways in which we might respond to these determinisms. We might take belief-determinism as canonical, pointing out that the doctrine that believing a true α is the output of cognitive devices functioning as they should does nothing to disturb the knower’s presumed rationality. In that case, why would we not also hold that the doctrine that the deciding and doing of something are the outputs of optative and conative devices functioning as they should does nothing to disturb the agent’s presumed freedom? Or we might reverse the reasoning. We might argue that just as causal underpinnings wreck the presumption of freedom, so too do they wreck the presumption of rationality.


I don’t want to go on and on about determinism. It suffices to note that in each case, there is a more effortless explanation of our not liking it (to the extent that we don’t): It is a causal condition on our doing well on the causal model of knowledge, and of decision and action, that for large ranges of cases we experience ourselves in ways that are oblivious to the embedded causal necessities. In matters of knowledge and action alike, what we believe and what we do aren’t down to us. In which case (and never mind how we feel about it) rationality and freedom are properties of processes that are working properly. The rational agent is the person who believes rightly, and the free agent is the person who decides and acts rightly. That is, they are causally stimulated in the right ways, by belief-forming and action-embracing processes that are doing the right jobs.


Those of contrary mind, those disposed to resist these determinisms, find themselves harnessed to explanations that vary between mystification and nihilism. On the mystification side is the view that somehow actions are free and believings are rational precisely when they come about acausally. On the nihilism side is the view that there can be no such thing as rationality and freedom.


It might occur to readers that ours is a compatibilist view, that rationality and freedom are the superveniences of causal forces unfolding in the right way. There is a good deal of rubbish written under the protective cover of compatibilism. But if the present view turned out to be locatable within that broad tent, this is not something that I should mind.






[27] Belief determinism is discussed in greater detail in Gabbay and Woods (2003), pp. ---. For similar views, see Arpaly (2006).

[28] Off and on since the mid-1980s I have informally polled my introductory and theory of knowledge students, following the discussion of the two determinisms. They are asked to respond to the following questions by ticking the appropriate box. The questions are: “Suppose that you came to accept that action-determinism [belief-determinism] is true. How would you feel about this?” The boxes are: “AWFUL”, “DISSATISFIED”, “RESIGNED”, “FINE”. Each year, the majority response to the action-determinism assumption splits between “AWFUL” and “DISSATISFIED”, and the majority response to the belief-determinism assumption splits between “FINE” and “RESIGNED”.