Terry Dartnall
Computing and Information Technology, Griffith University
Qld 4111, Australia
terryd@cit.gu.edu.au
Abstract. I provide what I believe is a definitive argument against
strong classical representational AI--that branch of AI which believes
that we can generate intelligence by giving computers representations
that express the content of cognitive states. The argument comes in two
parts. (1) There is a clear distinction between cognitive states (such as
believing that the Earth is round) and the content of cognitive states
(such as the belief that the Earth is round), yet strong representational
AI tries to generate cognitive states by giving computers representations
that express the content of cognitive states--representations, moreover,
which we understand but which the computer does not. (2) The content
of a cognitive state is the meaning of the sentence or other symbolism
that expresses it. But if meanings were inner entities we would be unable
to understand them. Consequently contents cannot be inner entities, so
that we cannot generate cognitive states by giving computers inner representations that express the content of cognition. Moreover, since such
systems are not even meant to understand the meanings of their representations, they cannot understand the content of their cognitive states.
But not to understand the content of a cognitive state is not to have
that cognitive state, so that, again, strong representational AI systems
cannot have cognitive states and so cannot be intelligent.
Keywords. Strong AI, cognition, content, Chinese Room Argument,
psychologism, meaning.
1 Introduction
well, I agree. But strong AI tells us that under these circumstances the system
would be in a cognitive state, for to be in a cognitive state is essentially to contain
a representation that expresses the content of that cognitive state. This is the
clear claim of the Knowledge Representation Hypothesis, which underlies and
drives strong representational AI, it is what the literature tells us, and it is what
practitioners of strong AI actually do.
I will argue that this failure to distinguish between cognitive states and the
content of cognitive states explains how, in Searle's words, "AI got into this
mess in the first place" (Searle, 1990). In the 19th century it led to fundamental
confusions about the foundations of logic and mathematics, and I will look at
this history.
Nevertheless, there remains the possibility that AI might be able to bring
it off. In stating the Knowledge Representation Hypothesis, Smith says that he
does not think that cognition is necessarily representational, but he is not sure
about the weaker claim that representational intelligence is at least possible. So
could we generate intelligence by giving a system symbolisms that express the
content of cognition, and then by giving it something else--I leave it entirely
open what this might be--which would enable the system to exploit these inner
entities to give it intelligent states? (One possibility is that the symbolism should
generate appropriate behaviour.)
The second part of my argument provides an answer to this question, and
I will argue that internalising content actually makes cognition impossible, and
that is the main message of this paper.
I will first revisit the Chinese Room and show that the real issue is not the
shortcomings of symbol manipulation, but the failure to distinguish between
cognition and content. Then I will show that strong representational AI fails to
draw this distinction, and tries to generate cognitive states by giving computers
representational repertoires that express the content of cognition. I will then
examine the state/content distinction in more detail. Finally I will show that we
cannot generate cognition by giving a system a representational repertoire that
expresses the content of cognition.
2 The Chinese Room: an Entrée
Everyone in AI knows about the Chinese Room, but here it is again. Searle is
sitting in a room in front of two windows. Pieces of paper covered in squiggles
come in through one of the windows. Searle examines the squiggles and looks
them up in a rulebook, which is written in English. The rulebook tells him how to
manipulate them: he can reproduce them, modify them, destroy them, create new
ones, and pass the results out through the other window. 3 Unbeknown to Searle,
these squiggles are in Chinese, and there are Chinese computer programmers
outside the room, feeding sentences into it, and, they believe, getting sentences
back in reply. The rule book is so sophisticated, and Searle so adept at using it, that the room appears to understand Chinese, and this is certainly what the programmers believe. But Searle says that the room understands nothing, for he does not understand Chinese, nor does anything else in the room, and nor do the room and its contents as a whole. From this, he says, it follows that computers do not understand, for they too manipulate squiggles according to formal rules.
3 Searle does not specify these operations; I have borrowed them from Schank & Abelson, 1977. He talks more generally about "perform[ing] computational operations on formally specified elements."
We can tighten this argument up a bit: digital computers syntactically manipulate formally specified elements according to formal rules; such manipulation
cannot give us content; cognitive states have content; therefore digital computers
cannot have cognitive states. 4
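The syllogism above turns on the idea of purely formal manipulation. As a minimal sketch (the rules and symbol strings are invented for illustration, not drawn from Searle), a "rulebook" can be nothing more than a lookup table over uninterpreted strings: the program matches and rewrites character patterns, and nothing in it represents what, if anything, the symbols mean.

```python
# A toy "rulebook": purely syntactic rewrite rules applied to input
# squiggles. The program only matches and transforms character strings;
# any meaning the symbols carry is visible to observers, not to it.
# The rules and symbols below are hypothetical, for illustration only.

RULEBOOK = {
    "squiggle-squoggle": "squoggle-squiggle",
    "ping": "pong",
}

def chinese_room(squiggles: str) -> str:
    """Look the input up and return the prescribed output, or echo it back."""
    return RULEBOOK.get(squiggles, squiggles)

print(chinese_room("ping"))  # -> pong
```

On the argument's own terms, whether the table has two entries or two billion makes no difference: the operations remain manipulations of formally specified elements.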
But there is content in the Chinese Room, and the room generates new content through syntactic manipulation. These facts are an explicit part of Searle's
story. He says that the room produces contentful symbolisms so efficiently that
its programmers (the Chinese speakers outside the room) believe that it understands its input. These contentful symbolisms are generated by what Searle calls
"computational operations on formally specified elements" (Searle, 1980). Now,
it is true that the room performs such operations, but it is also the case that
the elements are interpreted, so that the room is semantically well-behaved.
This semantic good behaviour is why we write computer programs, whether to
perform arithmetical calculations or to (after a fashion) answer questions about
stories.
Since the Chinese Room does generate content, we have to shift our attention away from the relationship between syntactic manipulation and content to the relationship between content and cognition. What the Chinese Room really shows is that we cannot generate cognition by giving computers symbolisms that express the content of cognition, even if these symbolisms play a role in the system's behaviour. Focusing on the relationship between syntactic manipulation
and content is an understandable error, because computers manipulate formally
specified elements, so that it is easy to think that their meaning does not matter.
But representational AI trades in symbolisms precisely because of their content.
It believes that a system has a cognitive state if it contains a symbolism that
expresses the content of that cognitive state, possibly with the caveat that the
form or morphology of the symbolism must play a role in the system's behaviour.
In the next section I outline my reasons for making this claim.
3 Strong Representational AI
p. 34. This is still his position (personal correspondence). For his reasons, see Smith,
1991.
lating natural languages. Its main methodology was heuristic search. It soon became apparent, however, that intelligence requires large amounts of knowledge,
both about particular domains and about the world in general, so that classical
AI had to provide an account of what it is for a system to have knowledge. For
the most part it construed this as 'What is it for a system to have declarative, or factual, knowledge?' rather than 'What is it for a system to have skills, or procedural knowledge?' And it read 'What is it for a system to have declarative knowledge?' as 'What is it for a system to contain declarative knowledge?' This was a convenient misreading, because it invites the reply 'A system contains declarative knowledge if it contains a symbolism that expresses the content of such knowledge'. AI replaced the philosophical question 'What is it for a system to know something?' with the engineering question 'How do we represent the content of knowledge?'
AI made two false moves. First, it assumed that the kind of knowledge that
it was concerned with was factual rather than procedural. Let us concede this
for the sake of simplicity. It also assumed that a system has factual knowledge
if it contains a representation that expresses the content of that knowledge.
Now, of course, there is a sense in which a system 'has knowledge' under these
circumstances. It has knowledge in the same way that a book does: it contains
symbolisms that express the content of knowledge. But this is not what we mean
when we say that someone 'has knowledge'. When we say this we at least mean
that the person is in a certain cognitive state. The traditional epistemological
question 'What is it for person A to know that p?' at least means 'What is it for A to be in the cognitive state of knowing that p?' When it had to provide an account of what it is for a system to have knowledge, AI was faced with a question very similar to the traditional epistemological one. It was faced with 'What is it for system S to know that p?' This is a question in what we might call 'machine epistemology', and it at least means 'What is it for S to be in the state of knowing that p?' But AI assumed that S knows that p if and only if it contains a symbolism that expresses the content of p.
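The reduction just described, that a system "has" knowledge in the way a book does, can be made concrete with a hedged sketch (the class name and the stored sentences are hypothetical, invented for illustration): knowing that p is reduced to containing a string that expresses p.

```python
# A system that "has" knowledge only in the sense that a book does:
# it contains sentences expressing the content of knowledge and returns
# them on request, without thereby being in any cognitive state.
# The class name and example sentences are hypothetical illustrations.

class SymbolStore:
    def __init__(self) -> None:
        self.sentences: set[str] = set()

    def add(self, sentence: str) -> None:
        """Store a symbolism that expresses the content of some knowledge."""
        self.sentences.add(sentence)

    def knows(self, sentence: str) -> bool:
        """'Knowing that p' reduced to containing a string for p."""
        return sentence in self.sentences

kb = SymbolStore()
kb.add("The Earth is round")
print(kb.knows("The Earth is round"))  # -> True
```

The point of the sketch is that containment is all there is: the store satisfies the engineering question about representing content while leaving the epistemological question about being in a state of knowing entirely untouched.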
4 Cognition and Content
I have argued that there is no cognition in the Chinese Room because representational AI fails to distinguish between cognitive states and the content of
cognitive states. I will now look at this distinction in more detail.
All mentalistic terms ('knowledge', 'belief', 'thought', 'hope', 'love', 'desire',
etc.) are ambiguous between their cognitive and non-cognitive senses. Some,
such as 'belief' and 'thought', are ambiguous between state and content, whilst
others, such as 'love' and 'desire', are ambiguous between state and object. 'My
love is unrequited and works in a bank' equivocates between my state, which is unrequited, and the object of my state, who works in a bank. In this paper we
are concerned with the state/content distinction. In one sense, my belief that
the Earth is round is a cognitive state, but in another it is a proposition that can
be written down in a public, communicable symbolism and that expresses not only the content of my belief, but, I assume, the content of yours as well. I take
'content' and 'proposition' to mean the same thing, and I adopt the traditional
Church/Frege account of a proposition according to which it is (a) the meaning
of a sentence, by virtue of which different sentences can have the same content
or mean the same thing, and (b) the object of propositional attitudes, so that
when I believe that the Earth is round I have an attitude to a proposition which
constitutes the content of my belief.
We can bring the state/content distinction into relief in two ways. The first
is by looking at predicates that contents can take but that states cannot, and at
predicates that states can take but that contents cannot. A belief in the sense
of a content or proposition can be true or false, tautologous or contradictory,
subscribed to by one or many. It can be written down in public, communicable
symbolisms. There is nothing cognitive about beliefs in this sense. On the other
hand, beliefs as cognitive states can be strong and passionate, sincere or insincere,
shortlived or longlasting, but not true or false, tautologous or contradictory. If
we do not distinguish between these senses of 'belief' we will end up saying that
a belief is sincere and tautologous, or that it is contradictory and four years old,
which is reminiscent of the old joke that Gilbert Ryle used as an example of a
category mistake: "she came home in a flood of tears and a sedan chair" (Ryle,
1949).
The other way to distinguish between state and content is to observe that
different states can have the same content. We can believe and fear the same
thing: for instance, that there is no more beer left in the fridge.
This apparently trivial failure to distinguish between cognitive states and
their contents can lead to fundamental confusions about the conceptual foundations of disciplines. In the nineteenth century it gave rise to psychologism, which
is the belief that we can find out about content by studying cognition--for instance, that we can find out about logic and mathematics by studying the mind,
so that these disciplines belong to empirical psychology. John Stuart Mill believed that the Law of Non-Contradiction is the empirically based generalisation
that anyone who is in the state of believing A is not also in the state of believing
not-A (Mill 1843, 1865).
A list of infelicities can be laid at the feet of this position. They were first
voiced by Frege and then articulated more thoroughly by Husserl. If the laws of
logic were empirical generalisations about how we think then:
- they would be contingent; if they were contingent they could be false; but to
say, for instance, that the Law of Non-Contradiction could be false is itself
a contradiction;
- they would be not only contingent, but contingently false, since some of us
are inconsistent some of the time;
- we would need to look in the world to discover and test them; but we do
not do empirical surveys to determine the truth of laws such as the Law of
Non-Contradiction;
- they would be about something in the empirical world: mental states; but the
laws of logic are not about anything in the empirical world and are therefore
not about mental states; the Law of Non-Contradiction, for instance, does
not quantify over mental states.
The official story is that Frege and Husserl exorcised psychologism and buried
it at the cross-roads of history. Be that as it may, the failure that underlies and
drives it (the failure to distinguish between state and content) lives on in a mirror
image of psychologism that I call reverse psychologism. This has a weak and a
strong version. The weak version says that we can study cognition by studying
content, so that we can find out about the mind by studying disciplines such
as logic and linguistics. This assumption is endemic in cognitive science and
linguistics. The strong version says that we can generate cognition by giving
computers representational repertoires that express the content of cognition.
This strong version of reverse psychologism gives us strong representational
AI, for strong representational AI believes that to know something is essentially
to have an inner symbolism that expresses the content of that knowledge. To
have mental states, as Smith says, is to have "a set of formal representations".
In a curious quirk of intellectual history, this makes the same mistake as psychologism, but it does so in reverse.
Let me try to clarify this by returning to the Knowledge Representation
Hypothesis. It might be argued that this does distinguish between state and
content, because it requires the symbolism to provide a propositional account
of the knowledge that the system possesses (this is the content) and it talks
about the causal role of the symbolism (this is played by the cognitive state).
Consequently it might be argued that the Knowledge Representation Hypothesis
does distinguish between state and content. Brian Smith, whilst agreeing with
my reading of the hypothesis, has suggested that I have downplayed the role
of the symbolisms (personal correspondence). These points are related, so I will
answer them together.
Yes, the hypothesis talks about both causal efficacy and meaningfulness, and
in common parlance we associate these things with cognitive states on the one
hand and content on the other. When we say "His belief caused him to do X"
we mean that it was his state of belief that caused him to do X, and when we
say that his belief was true, we mean that it was the content of his belief (what
he believed) that was true. But the Knowledge Representation Hypothesis does
not draw this distinction. It confers causal efficacy and meaningfulness upon one
and the same thing: the representation or symbolism that expresses the belief.
Rather than saying that someone's state of belief caused him to behave in a
certain way, and that the content of that belief is true or false, it says that the
representation or symbolism expresses the content and plays the causal role.
This brings us to the second part of the argument. I will argue, not only that
strong representational AI fails to distinguish between cognition and content,
but that it is impossible for a system to have cognition by virtue of inner representations that express the content of that cognition. That is, I will answer
Secondly, I do not want to buy into the debate about the ontological status
of propositions. Frege, for example, thought that propositions are objectively real and enjoy the same status that Platonists accord to numbers. I only need to say that, whatever their status, they must be publicly accessible. One account that clearly satisfies this condition is Wittgenstein's theory of meaning as use (Wittgenstein, 1953).
Now let us suppose that we construct a system, which I will call a 'Locke
Machine', and give it a Lockean semantics. It is difficult to imagine what this
would look like, both because we would have to give the system a semantics that
we did not understand, and because meanings are not private in the sense we
are trying to envisage here. In some ways it would be like giving the system a
language that we do not understand, but in a crucial way it would not be like
this, for if someone speaks a language that we do not understand then we can
come to understand it, because the meanings are publicly available. But we could never understand a Locke Machine. No attempts that it made to explain itself would be comprehensible to us. If the meaning of utterance U1 was idea I1, then attempting to explain U1 in terms of U2 would not help, for the meaning of U2 would be I2, which would be equally inaccessible.
We can look at this the other way round. Suppose, per impossibile, that I
understand what the machine says. Now I can explain it to you, for I assume
that if I understand something then I can explain it both to myself and to
others. If I can explain it to you then it has a public meaning. Now I have not
magically changed anything. If the meaning is public now it was public when I
first understood the machine. If on the other hand the meaning is not public,
then by modus tollens I cannot explain it to you, in which case, again by modus
tollens, I could not have understood it in the first place. We cannot have it both
ways: meanings cannot be both public and in the head.
Now if meanings cannot be in the head, then content cannot be in the head,
for meaning and content are one and the same thing. Consequently a system
which has content inside itself, or 'in its head', cannot have cognitive states.
This in itself shows that strong representational AI systems cannot have
cognitive states, but there is more to come. Let us imagine something stranger
still - an Inverted Locke Machine (ILM), which contains symbolisms and utters
sentences that we can understand but which the machine cannot. This would
be a little like reading passages in a language that we did not understand to
someone who did understand them. Under these circumstances our utterances
would not intentionally express our cognitive states, for we would not understand
what we were saying at the time.
But, again, it would not be entirely like this, for when we read the foreign
equivalent of 'The Earth is round' we might happen to believe that the Earth is round, even though we did not understand what we were saying at the time. But
an ILM could not even do this, for an ILM understands none of its utterances.
Because an ILM understands none of its utterances, and because the contents of its cognitive states are the meanings of possible utterances, it follows that an ILM understands none of its cognitive states. Suppose that it ostensibly believed
that the Earth is round. To do this it would need to understand the proposition
expressed by the sentence 'The Earth is round'. But it understands none of its
utterances, and so does not understand the proposition 'The Earth is round'.
Wittgenstein said that if a lion could speak we would not be able to understand
it. Michael Frayn parodied Wittgenstein by saying that if a lion could speak it would not be able to understand itself. Well, if an AI could speak, it would
not be able to understand itself!
But not to understand the content of a cognitive state is not to have that
cognitive state. If, for instance, a belief is too complicated for us to understand
it, then we cannot have that belief. Think of a complex equation that you do not
understand, and then ask yourself if you can think it or believe it! The White
Queen might have been able to have six impossible thoughts before breakfast,
but the rest of us are less accomplished.
The story of the ILM retells the story of the Chinese Room in terms of
state and content, without any mention of syntax at all. The Chinese Room
produces sequences that the programmers outside the room understand, but
which neither the room nor its occupant understands. An ILM similarly generates
sequences that are meaningful to us, as observers, but not to the system. As I
argued earlier, there is no cognition in the Chinese Room because the content
in the room is unavailable to its occupant, so that it cannot be a content of his
cognition.
Now here is the point of all this: classical representational AI systems are
Inverted Locke Machines, since they contain representations that are meaningful
to us but not to them. Since ILMs cannot have cognitive states, classical representational AI systems cannot have cognitive states, and so cannot be intelligent.
6 Conclusion
Searle says that the Chinese Room shows us that syntactic manipulation cannot
generate content or semantics. But the room does generate content, and it does
so by syntactic manipulation. Consequently the problem cannot lie with the
relationship between content and syntax, and must lie with the relationship
between content and cognition.
The confusion (more properly, the failure to distinguish) between content
and cognition has a long history, and led to fundamental confusions about the
foundations of logic and mathematics in the 19th century. The confusion is still
with us. There is a clear distinction between a cognitive state, such as believing
that the Earth is round, and the content of that state, such as the belief that
the Earth is round, yet strong representational AI essentially tries to generate
cognition by giving computers symbolisms that express the content of cognition.
This project cannot succeed. The key concept is that of a proposition, construed as the meaning of a sentence and the content of the thought expressed by
that sentence. Meanings cannot be inner and private, and so content cannot be
inner and private. Consequently, a system which has been given inner, private
content (expressed in an inner symbolism) cannot have cognitive states, and so
cannot be intelligent.
In fact classical representational AI presents us with an even stranger case, in
which a system has symbolisms that we understand but which the system does
not. Such a system would not understand its own utterances, and so (because
the contents of cognitive states are the meanings of utterances) it would not
understand its own cognitive states. But not to understand a cognitive state
is not to have that cognitive state, so that, again, classical representational AI
systems cannot have cognitive states, and so cannot be intelligent.
7 Postscript
This claim (see the passage referred to in footnote 1) stirred up a storm at the
conference, and a show of hands revealed that most of the audience believed
that intelligence does not require cognition: systems can be intelligent without
believing, knowing, planning, thinking, or having any other kind of cognitive
state. Most seemed to think that the proper goal of strong AI is intelligent
behaviour. This is in part the legacy of the Turing Test. But surely we call behaviour
'intelligent' to the extent that we believe it to be driven by intelligent states.
If we discovered that apparently intelligent behaviour was in fact achieved by
trickery or some kind of conditioned response (think of pattern matching in the
case of Eliza, or a look-up table in the case of the Great Lookup Being) we would
withdraw our belief that the system was intelligent.
8 Bibliography
Searle, J. (1980), 'Minds, Brains, and Programs', The Behavioral and Brain
Sciences, 3, pp. 417-427.
Searle, J. (1995), 'Mind, Syntax, and Semantics', in T. Honderich, ed., The Oxford Companion to Philosophy, Oxford: Oxford University Press, pp. 580-581.
Smith, B. C. (1985), 'Prologue to Reflection and Semantics in a Procedural
Language', in R. Brachman & H. Levesque, eds., Readings in Knowledge Representation, Los Altos: Morgan Kaufmann.
Smith, B. C. (1991), 'The Owl and the Electric Encyclopedia', Artificial Intelligence, 47, pp. 251-288.
Wittgenstein, L. (1953), Philosophical Investigations, Oxford: Basil Blackwell.