
Why (A Kind of) AI Can't Be Done

Terry Dartnall
Computing and Information Technology, Griffith University
Qld 4111, Australia
terryd@cit.gu.edu.au
Abstract. I provide what I believe is a definitive argument against
strong classical representational AI--that branch of AI which believes
that we can generate intelligence by giving computers representations
that express the content of cognitive states. The argument comes in two
parts. (1) There is a clear distinction between cognitive states (such as
believing that the Earth is round) and the content of cognitive states
(such as the belief that the Earth is round), yet strong representational
AI tries to generate cognitive states by giving computers representations
that express the content of cognitive states--representations, moreover,
which we understand but which the computer does not. (2) The content
of a cognitive state is the meaning of the sentence or other symbolism
that expresses it. But if meanings were inner entities we would be unable
to understand them. Consequently contents cannot be inner entities, so
that we cannot generate cognitive states by giving computers inner representations that express the content of cognition. Moreover, since such
systems are not even meant to understand the meanings of their representations, they cannot understand the content of their cognitive states.
But not to understand the content of a cognitive state is not to have
that cognitive state, so that, again, strong representational AI systems
cannot have cognitive states and so cannot be intelligent.
Keywords. Strong AI, cognition, content, Chinese Room Argument,
psychologism, meaning.

1 Introduction

In this paper I provide what I believe is a definitive argument against strong,
classical AI. But I want to be clear about my quarry. I am arguing against classical AI, also known as symbol-handling AI, or GOFAI, for "Good Old Fashioned
AI" (after Haugeland, 1985). And I am arguing against the strong form of this
sort of AI, that Searle calls "strong AI" (Searle, 1980). This is the branch of
classical AI that believes that appropriately programmed computers can be intelligent, without any scare quotes around 'intelligent'. I contrast it with weak
AI, which merely purports to give us more sophisticated software, without any
pretensions to intelligence. This differs from Searle's characterisation of weak AI
as a tool for formulating and testing hypotheses about the mind. I have no quarrel with weak AI, in either of these formulations. But I do think that the real AI
should stand up and be counted. If Artificial Intelligence is not trying to produce
artificial intelligence then Artificial Intelligence should not call itself 'Artificial
Intelligence', but something like 'Knowledge Based Systems'--as it sometimes
does. If on the other hand Artificial Intelligence is trying to produce artificial
intelligence then it has to face the realities of what is required, and this is that
intelligent systems should at least have cognitive states (such as understanding,
knowing and believing): for a system to be intelligent it must understand, know,
believe, or have some other cognitive state. This is a conservative claim, bordering on the banal, that simply says that intelligence requires cognition, and who
could quarrel with that?1 It makes no mention of consciousness, though it may
be that we cannot have cognition without consciousness.

1 See Postscript.

There is one more caveat. My argument is against what I call 'representational strong classical' AI, which says that we can generate artificial intelligence
by giving computers inner symbolisms that express the content of cognition,
possibly on condition that the form or morphology of the symbolism causes the
system to behave in appropriate ways. More about this later. It is sufficient at
this stage to say that this captures the spirit and practice of contemporary strong
AI. Such an approach is eponymised in Brian Cantwell Smith's Knowledge Representation Hypothesis (Smith, 1985), touted in the literature, and employed by
active practitioners of AI.
Here is an outline of my argument. In his infamous Chinese Room Argument
(CRA) Searle (ibid.) says that computers cannot be intelligent because they
merely manipulate symbols according to syntactic rules, and syntactic manipulation cannot generate semantics or content. Searle makes the uncontentious
assumptions (a) that intelligence requires cognitive states, and (b) that cognitive
states have content. The kind of content we are concerned with in this paper is
propositional content. When you believe that the Earth is round, the content of
your belief is the proposition #The Earth is round#.2 Searle says that the CRA
shows that syntactic manipulation cannot generate content, so that computers
(which perform only syntactic manipulation) cannot generate content, and hence
cannot have cognitive states.

2 I use the notation #The Earth is round# to denote the proposition expressed by the sentence 'The Earth is round'.

I think that this argument, and the copious literature that it has generated,
totally misses the mark. The CRA shows us something that we all know--that
syntactic manipulation can generate content. Computers generate new content
through syntactic manipulation all the time, whether to perform arithmetical
calculations, or, in the case cited in the Chinese Room Argument, to answer
questions about stories we have put into their databases. This is new content
for us, of course, not for the computer. The computer understands none of it.
Consequently, the Chinese Room does not show that syntactic manipulation
does not give us new content. But it does show something else: it shows us that
internal content is not sufficient for cognition. Having the internal symbolism
'The Earth is round' does not mean that the system is in any kind of cognitive
state.
Now this might seem to be stating the obvious, and if you think that it is,
well, I agree. But strong AI tells us that under these circumstances the system
would be in a cognitive state, for to be in a cognitive state is essentially to contain
a representation that expresses the content of that cognitive state. This is the
clear claim of the Knowledge Representation Hypothesis, which underlies and
drives strong representational AI, it is what the literature tells us, and it is what
practitioners of strong AI actually do.
I will argue that this failure to distinguish between cognitive states and the
content of cognitive states explains how, in Searle's words, "AI got into this
mess in the first place" (Searle, 1990). In the 19th century it led to fundamental
confusions about the foundations of logic and mathematics, and I will look at
this history.
Nevertheless, there remains the possibility that AI might be able to bring
it off. In stating the Knowledge Representation Hypothesis, Smith says that he
does not think that cognition is necessarily representational, but he is not sure
about the weaker claim that representational intelligence is at least possible. So
could we generate intelligence by giving a system symbolisms that express the
content of cognition, and then by giving it something else--I leave it entirely
open what this might be--which would enable the system to exploit these inner
entities to give it intelligent states? (One possibility is that the symbolism should
generate appropriate behaviour.)
The second part of my argument provides an answer to this question, and
I will argue that internalising content actually makes cognition impossible, and
that is the main message of this paper.
I will first revisit the Chinese Room and show that the real issue is not the
shortcomings of symbol manipulation, but the failure to distinguish between
cognition and content. Then I will show that strong representational AI fails to
draw this distinction, and tries to generate cognitive states by giving computers
representational repertoires that express the content of cognition. I will then
examine the state/content distinction in more detail. Finally I will show that we
cannot generate cognition by giving a system a representational repertoire that
expresses the content of cognition.
2 The Chinese Room: an Entrée

Everyone in AI knows about the Chinese Room, but here it is again. Searle is
sitting in a room in front of two windows. Pieces of paper covered in squiggles
come in through one of the windows. Searle examines the squiggles and looks
them up in a rulebook, which is written in English. The rulebook tells him how to
manipulate them: he can reproduce them, modify them, destroy them, create new
ones, and pass the results out through the other window.3 Unbeknown to Searle,
these squiggles are in Chinese, and there are Chinese computer programmers
outside the room, feeding sentences into it, and, they believe, getting sentences
back in reply. The rule book is so sophisticated, and Searle so adept at using
it, that the room appears to understand Chinese, and this is certainly what the
programmers believe. But Searle says that the room understands nothing, for he
does not understand Chinese, nor does anything else in the room, and nor do the
room and its contents as a whole. From this, he says, it follows that computers
do not understand, for they too manipulate squiggles according to formal rules.

3 Searle does not specify these operations, which I have borrowed from Schank & Abelson, 1977. He talks more generally about "perform[ing] computational operations on formally specified elements."

We can tighten this argument up a bit: digital computers syntactically manipulate formally specified elements according to formal rules; such manipulation
cannot give us content; cognitive states have content; therefore digital computers
cannot have cognitive states.4

4 This is a reconstruction of Searle's argument, due largely to Dennett (1987). In a pithy summary of his position, called "Mind, syntax, and semantics", Searle says, "The argument against the view that intentionality can be reduced to computation is simply that syntax is not equivalent to nor sufficient for semantics". (Searle, 1995.)

But there is content in the Chinese Room, and the room generates new content through syntactic manipulation. These facts are an explicit part of Searle's
story. He says that the room produces contentful symbolisms so efficiently that
its programmers (the Chinese speakers outside the room) believe that it understands its input. These contentful symbolisms are generated by what Searle calls
"computational operations on formally specified elements" (Searle, 1980). Now,
it is true that the room performs such operations, but it is also the case that
the elements are interpreted, so that the room is semantically well-behaved.
This semantic good behaviour is why we write computer programs, whether to
perform arithmetical calculations or to (after a fashion) answer questions about
stories.
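
To make this concrete, here is a minimal sketch of such a program (entirely my own; the story, questions and matching rules are invented for illustration and are not Schank and Abelson's system). It answers questions about a stored story by purely formal pattern-matching: nothing in it depends on what 'restaurant' or 'hamburger' means, yet its answers are new content for us.

# A toy question answerer, in the spirit of the story-understanding programs
# Searle discusses (the details here are invented for illustration).
# It matches uninterpreted tokens against stored sentences: the program never
# needs to know what 'restaurant' means, yet its answers are contentful to us.

STORY = [
    "john went to the restaurant",
    "john ordered a hamburger",
    "john paid the bill and left",
]

def tokens_match(claim, sentence):
    """Formal, in-order prefix matching, so that 'order' matches 'ordered'."""
    words = iter(sentence.split())
    return all(any(w.startswith(tok) for w in words) for tok in claim.split())

def answer(question):
    q = question.lower().rstrip("?")
    if q.startswith("did "):
        claim = q[len("did "):]
        return "yes" if any(tokens_match(claim, s) for s in STORY) else "not in the story"
    if q.startswith("where did ") and q.endswith(" go"):
        subject = q[len("where did "):-len(" go")]
        for s in STORY:
            if s.startswith(subject + " went to "):
                return s[len(subject + " went to "):]
    return "unknown"

print(answer("Where did John go?"))           # -> the restaurant
print(answer("Did John order a hamburger?"))  # -> yes

The output is semantically well-behaved, and contentful to the programmers and users, while the system itself merely shuffles uninterpreted tokens.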
Since the Chinese Room does generate content, we have to shift our attention away from the relationship between syntactic manipulation and content to
the relationship between content and cognition. What the Chinese Room really
shows is that we cannot generate cognition by giving computers symbolisms that
express the content of cognition, even if these symbolisms play a role in the system's behaviour. Focusing on the relationship between syntactic manipulation
and content is an understandable error, because computers manipulate formally
specified elements, so that it is easy to think that their meaning does not matter.
But representational AI trades in symbolisms precisely because of their content.
It believes that a system has a cognitive state if it contains a symbolism that
expresses the content of that cognitive state, possibly with the caveat that the
form or morphology of the symbolism must play a role in the system's behaviour.
In the next section I outline my reasons for making this claim.

3 Strong Representational AI

The Knowledge Representation Hypothesis. Brian Cantwell Smith says:


It is widely held in computational circles that any process capable of
reasoning intelligently about the world must consist in part of a field of
structures, of a roughly linguistic sort, which in some fashion represent
whatever knowledge and beliefs the process may be said to possess. For
example, according to this view, since I know that the sun sets each
evening, my 'mind' must contain (among other things) a language-like
or symbolic structure that represents this fact, inscribed in some kind of
internal code. (1985.)
Additionally, the syntax or morphology (Smith calls it the "spelling") of this
internal code is presumed to play a causal role in the production of intelligent
behaviour. This gives us the full statement of the Knowledge Representation
Hypothesis:
Any mechanically embodied intelligent process will be comprised of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process
exhibits, and b) independent of such external semantical attribution,
play a formal but causal and essential role in engendering the behaviour
that manifests that knowledge.
In other words, a system knows that p if and only if it contains a symbol structure
that means p to us and that causes the system to behave in appropriate ways.
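
To see what this amounts to in practice, here is a deliberately bare sketch of such a 'representational knower' (the knowledge base, query format and rule are my own invention, purely for illustration): it contains structures that we read as propositions, and those same structures, in virtue of their form alone, drive its behaviour.

# A minimal 'representational knower' in the sense of the Knowledge
# Representation Hypothesis (all names and rules invented for illustration).
# (a) We, as external observers, naturally read the stored triples as a
#     propositional account of what the system 'knows'.
# (b) Independently of that reading, the triples play a formal but causal role
#     in producing the behaviour that appears to manifest the knowledge.

KNOWLEDGE_BASE = {
    ("sun", "sets-each", "evening"),
    ("earth", "is", "round"),
}

def behave(query):
    # The causal role: behaviour is driven by formal lookup of a structure,
    # not by any grasp of what 'sun' or 'round' means.
    if query in KNOWLEDGE_BASE:
        return "assert: " + " ".join(query)
    return "no stored structure matches"

print(behave(("earth", "is", "round")))  # -> assert: earth is round
print(behave(("earth", "is", "flat")))   # -> no stored structure matches

On the Hypothesis, this is what it is for the system to know that the Earth is round; the argument of this paper is that it is nothing of the sort.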
Smith distinguishes between a strong version of the Knowledge Representation
Hypothesis, which claims that "knowing is necessarily representational", and a
weak version, which merely claims that "it is possible to build a representational
knower". Smith says, "I myself see no reasons to subscribe to the strong view,
and remain skeptical of the weak version as well".5

5 p. 34. This is still his position (personal correspondence). For his reasons, see Smith, 1991.

What the literature says. Representational AI is especially concerned with
knowledge, which it usually construes as data. The following claims are typical:
"In AI, a representation of knowledge is a combination of data structures and
interpretive procedures." (Barr & Feigenbaum, 1981) "We will discuss a variety
of knowledge structures. Each of them is a data structure in which knowledge
about particular problem domains can be stored." (Elaine Rich, 1983) "A picture
of tomorrow's computer vocabulary can be imagined, if all the words containing
'data' or 'information' are replaced by the word 'knowledge'." (Tore Amble,
1987)
What practitioners do. In keeping with this position, active practitioners
of AI (often called 'knowledge engineers') put symbol structures expressed in
knowledge representation formalisms, such as frames, semantic networks and
production systems, into belief bins and knowledge bins in order to engineer
knowledge into the systems. Syntactic manipulation is involved in learning, inferencing, planning, parsing, and other ways of generating new content, but the
core repertoire of knowledge and belief is stored in static, data-like structures.
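
For concreteness, here is the flavour of what a knowledge engineer writes (a hypothetical fragment; the domain, slot names and rule are mine): a frame and a production rule, both of which are static, data-like structures to the system that stores them.

# Hypothetical fragments of two common knowledge-representation formalisms.
# To the system these are just data; their meaning is supplied by us.

# A frame: a named structure with slots and fillers.
bird_frame = {
    "frame": "Bird",
    "isa": "Animal",
    "slots": {"covering": "feathers", "can_fly": True},
}

# A production rule: IF these patterns are in working memory THEN add this one.
injured_bird_rule = {
    "if":   [("?x", "isa", "Bird"), ("?x", "is", "injured")],
    "then": ("?x", "can_fly", False),
}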
A brief history. Early classical AI tried to develop systems that were capable
of general intelligent action, such as playing chess, proving theorems, and translating natural languages. Its main methodology was heuristic search. It soon became apparent, however, that intelligence requires large amounts of knowledge,
both about particular domains and about the world in general, so that classical
AI had to provide an account of what it is for a system to have knowledge. For
the most part it construed this as 'What is it for a system to have declarative,
or factual, knowledge?' rather than 'What is it for a system to have skills, or
procedural knowledge?' And it read 'What is it for a system to have declarative
knowledge?' as 'What is it for a system to contain declarative knowledge?' This
was a convenient misreading, because it invites the reply 'A system contains
declarative knowledge if it contains a symbolism that expresses the content of
such knowledge'. AI replaced the philosophical question 'What is it for a system
to know something?' with the engineering question 'How do we represent the
content of knowledge?'
AI made two false moves. First, it assumed that the kind of knowledge that
it was concerned with was factual rather than procedural. Let us concede this
for the sake of simplicity. It also assumed that a system has factual knowledge
if it contains a representation that expresses the content of that knowledge.
Now, of course, there is a sense in which a system 'has knowledge' under these
circumstances. It has knowledge in the same way that a book does: it contains
symbolisms that express the content of knowledge. But this is not what we mean
when we say that someone 'has knowledge'. When we say this we at least mean
that the person is in a certain cognitive state. The traditional epistemological
question 'What is it for person A to know that p?' at least means 'What is it
for A to be in the cognitive state of knowing that p?' When it had to provide
an account of what it is for a system to have knowledge, AI was faced with a
question very similar to the traditional epistemological one. It was faced with
'What is it for system S to know that p?' This is a question in what we might
call 'machine epistemology', and it at least means 'What is it for S to be in the
state of knowing that p?' But AI assumed that S knows that p if and only if it
contains a symbolism that expresses the content of p.
4 Cognition and Content

I have argued that there is no cognition in the Chinese Room because representational AI fails to distinguish between cognitive states and the content of
cognitive states. I will now look at this distinction in more detail.
All mentalistic terms ('knowledge', 'belief', 'thought', 'hope', 'love', 'desire',
etc.) are ambiguous between their cognitive and non-cognitive senses. Some,
such as 'belief' and 'thought', are ambiguous between state and content, whilst
others, such as 'love' and 'desire', are ambiguous between state and object. 'My
love is unrequited and works in a bank' equivocates between my state, which is
unrequited, and the object of my state, who works in a bank. In this paper we
are concerned with the state/content distinction. In one sense, my belief that
the Earth is round is a cognitive state, but in another it is a proposition that can
be written down in a public, communicable symbolism and that expresses not

only the content of my belief, but, I assume, the content of yours as well. I take
'content' and 'proposition' to mean the same thing, and I adopt the traditional
Church/Frege account of a proposition according to which it is (a) the meaning
of a sentence, by virtue of which different sentences can have the same content
or mean the same thing, and (b) the object of propositional attitudes, so that
when I believe that the Earth is round I have an attitude to a proposition which
constitutes the content of my belief.
We can bring the state/content distinction into relief in two ways. The first
is by looking at predicates that contents can take but that states cannot, and at
predicates that states can take but that contents cannot. A belief in the sense
of a content or proposition can be true or false, tautologous or contradictory,
subscribed to by one or many. It can be written down in public, communicable
symbolisms. There is nothing cognitive about beliefs in this sense. On the other
hand, beliefs as cognitive states can be strong and passionate, sincere or insincere,
shortlived or longlasting, but not true or false, tautologous or contradictory. If
we do not distinguish between these senses of 'belief' we will end up saying that
a belief is sincere and tautologous, or that it is contradictory and four years old,
which is reminiscent of the old joke that Gilbert Ryle used as an example of a
category mistake: "she came home in a flood of tears and a sedan chair" (Ryle,
1949).
The other way to distinguish between state and content is to observe that
different states can have the same content. We can believe and fear the same
thing: for instance, that there is no more beer left in the fridge.
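
The distinction can be modelled quite literally. In the toy sketch below (my own framing, not part of any AI formalism), truth is a predicate of contents, sincerity and duration are predicates of states, and two distinct states can share one and the same content.

# A toy model of the state/content distinction (for illustration only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:              # content: the sort of thing that can be true or false
    text: str
    def is_true_in(self, world):
        return world.get(self.text, False)

@dataclass
class CognitiveState:           # state: the sort of thing that can be sincere or long-lived
    attitude: str               # 'believes', 'fears', ...
    content: Proposition
    sincere: bool = True
    years_held: int = 0

no_beer = Proposition("there is no more beer in the fridge")
belief = CognitiveState("believes", no_beer, sincere=True, years_held=0)
fear = CognitiveState("fears", no_beer)

assert belief.content == fear.content   # different states, one and the same content
print(no_beer.is_true_in({"there is no more beer in the fridge": True}))  # truth attaches to the content

Asking whether the belief is tautologous, or whether the proposition is sincere, is a category mistake in exactly Ryle's sense.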
This apparently trivial failure to distinguish between cognitive states and
their contents can lead to fundamental confusions about the conceptual foundations of disciplines. In the nineteenth century it gave rise to psychologism, which
is the belief that we can find out about content by studying cognition--for instance, that we can find out about logic and mathematics by studying the mind,
so that these disciplines belong to empirical psychology. John Stuart Mill believed that the Law of Non-Contradiction is the empirically based generalisation
that anyone who is in the state of believing A is not also in the state of believing
not-A (Mill 1843, 1865).
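
The contrast can be written out explicitly (my formalisation, added for illustration). The law itself mentions no believers, whereas Mill's reading turns it into an empirical generalisation over them:

    \neg (A \land \neg A)    (the Law of Non-Contradiction: no mention of minds)

    \forall x\; \neg(\mathrm{Bel}(x, A) \land \mathrm{Bel}(x, \neg A))    (Mill's reading: a generalisation about believers x)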
A list of infelicities can be laid at the feet of this position. They were first
voiced by Frege and then articulated more thoroughly by Husserl. If the laws of
logic were empirical generalisations about how we think then:
- they would be contingent; if they were contingent they could be false; but to
say, for instance, that the Law of Non-Contradiction could be false is itself
a contradiction;
- they would be not only contingent, but contingently false, since some of us
are inconsistent some of the time;
- we would need to look in the world to discover and test them; but we do
not do empirical surveys to determine the truth of laws such as the Law of
Non-Contradiction;
- they would be about something in the empirical world: mental states; but the
laws of logic are not about anything in the empirical world and are therefore

not about mental states; the Law of Non-Contradiction, for instance, does
not quantify over mental states.
The official story is that Frege and Husserl exorcised psychologism and buried
it at the cross-roads of history. Be that as it may, the failure that underlies and
drives it (the failure to distinguish between state and content) lives on in a mirror
image of psychologism that I call reverse psychologism. This has a weak and a
strong version. The weak version says that we can study cognition by studying
content, so that we can find out about the mind by studying disciplines such
as logic and linguistics. This assumption is endemic in cognitive science and
linguistics. The strong version says that we can generate cognition by giving
computers representational repertoires that express the content of cognition.
This strong version of reverse psychologism gives us strong representational
AI, for strong representational AI believes that to know something is essentially
to have an inner symbolism that expresses the content of that knowledge. To
have mental states, as Smith says, is to have "a set of formal representations".
In a curious quirk of intellectual history, this makes the same mistake as psychologism, but it does so in reverse.
Let me try to clarify this by returning to the Knowledge Representation
Hypothesis. It might be argued that this does distinguish between state and
content, because it requires the symbolism to provide a propositional account
of the knowledge that the system possesses (this is the content) and it talks
about the causal role of the symbolism (this is played by the cognitive state).
Consequently it might be argued that the Knowledge Representation Hypothesis
does distinguish between state and content. Brian Smith, whilst agreeing with
my reading of the hypothesis, has suggested that I have downplayed the role
of the symbolisms (personal correspondence). These points are related, so I will
answer them together.
Yes, the hypothesis talks about both causal efficacy and meaningfulness, and
in common parlance we associate these things with cognitive states on the one
hand and content on the other. When we say "His belief caused him to do X"
we mean that it was his state of belief that caused him to do X, and when we
say that his belief was true, we mean that it was the content of his belief (what
he believed) that was true. But the Knowledge Representation Hypothesis does
not draw this distinction. It confers causal efficacy and meaningfulness upon one
and the same thing: the representation or symbolism that expresses the belief.
Rather than saying that someone's state of belief caused him to behave in a
certain way, and that the content of that belief is true or false, it says that the
representation or symbolism expresses the content and plays the causal role.

5 Inverted Locke Machines and AI

This brings us to the second part of the argument. I will argue, not only that
strong representational AI fails to distinguish between cognition and content,
but that it is impossible for a system to have cognition by virtue of inner representations that express the content of that cognition. That is, I will answer

Brian Smith's second question (is it possible to have intelligence by virtue of
inner representations?) in the negative.
I have said that I adopt the traditional Church/Frege account of a proposition, according to which it is both the meaning of a sentence and the content of
the cognitive state expressed by that sentence. The proposition #The Earth is
round# is (a) the meaning of the sentence 'The Earth is round' and (b) what
we believe to be true when we believe that the Earth is round.
This identity of content and meaning is crucial to my argument, for if I can
show that the meaning of a sentence cannot be in the head, then the content of
cognitive states cannot be in the head (notice that I say content here, and not
state), and if that is the case then we cannot get intelligence by locating content
'in the head' or 'in the system'.
An early account of meaning is John Locke's Ideational Theory of Meaning,
which says that the meaning of a word is an idea in the head. Locke says:
Words, in their primary or immediate signification, stand for nothing
but the ideas in the mind of him that uses them, how imperfectly soever
or carelessly those ideas are collected from the things which they are
supposed to represent. When a man speaks to another, it is that he may
be understood; and the end of speech is, that those sounds, as marks,
may make known his ideas to the hearer. That, then, which words are
the marks of are the ideas of the speaker: nor can any one apply them, as
marks, immediately to anything else but the ideas that he himself hath.
(Locke, 1690, Book III, Chapter II, Section 2. Locke's emphasis.)
But if the meaning of a word was an idea in the head it would be impossible
to understand what a speaker meant by a word, for we have no access to the
ideas in a speaker's head other than by understanding what he or she says. If the
meaning of 'splut' was an idea in my head I would have no way of explaining it to
you, for my attempts to explain it would be in terms of other words, which would
be equally opaque. Of course, we do have ideas in our heads, in the sense that we
have cognitive states that are attitudes to propositions, and there is an obvious
sense in which these are private: you can't look into my head and inspect my
ideas and beliefs. But this does not mean that meanings are private. Meanings
are expressed in public, communicable symbolisms that we share, utter, write
down, look up in dictionaries and play Scrabble with. They are part of the
public fabric of communication, and it is this publicity that makes it possible
for one person to understand another. The Ideational Theory of Meaning gets it
exactly back to front: we understand people's ideas ('what they have in mind')
by understanding the meaning of what they say, not vice-versa.
I want to mention two things in passing. First, we should not confuse meanings, which must be public, with psychological associations, which may vary from
person to person. If you were stifled with a pillow when you were a child, you
will probably associate pillows with asphyxia, but this has nothing to do with
the meaning of the word 'pillow'. Psychological associations may be private, but
meanings cannot be.

Secondly, I do not want to buy into the debate about the ontological status
of propositions. Frege, for example, thought that propositions are objectively
real and enjoy the same status that Platonists accord to numbers. I only need to
say that, whatever their status, they must be publicly accessible. One account
that clearly satisfies this condition is Wittgenstein's theory of meaning as use.
(Wittgenstein, 1953).
Now let us suppose that we construct a system, which I will call a 'Locke
Machine', and give it a Lockean semantics. It is difficult to imagine what this
would look like, both because we would have to give the system a semantics that
we did not understand, and because meanings are not private in the sense we
are trying to envisage here. In some ways it would be like giving the system a
language that we do not understand, but in a crucial way it would not be like
this, for if someone speaks a language that we do not understand then we can
come to understand it, because the meanings are publicly available. But we
could never understand a Locke Machine. No attempts that it made to explain
itself would be comprehensible to us. If the meaning of utterance U1 was idea I1,
then attempting to explain U1 in terms of U2 would not help, for the meaning
of U2 would be I2, which would be equally inaccessible.
We can look at this the other way round. Suppose, per impossibile, that I
understand what the machine says. Now I can explain it to you, for I assume
that if I understand something then I can explain it both to myself and to
others. If I can explain it to you then it has a public meaning. Now I have not
magically changed anything. If the meaning is public now it was public when I
first understood the machine. If on the other hand the meaning is not public,
then by modus tollens I cannot explain it to you, in which case, again by modus
tollens, I could not have understood it in the first place. We cannot have it both
ways: meanings cannot be both public and in the head.
Now if meanings cannot be in the head, then content cannot be in the head,
for meaning and content are one and the same thing. Consequently a system
which has content inside itself, or 'in its head', cannot have cognitive states.
This in itself shows that strong representational AI systems cannot have
cognitive states, but there is more to come. Let us imagine something stranger
still--an Inverted Locke Machine (ILM), which contains symbolisms and utters
sentences that we can understand but which the machine cannot. This would
be a little like reading passages in a language that we did not understand to
someone who did understand them. Under these circumstances our utterances
would not intentionally express our cognitive states, for we would not understand
what we were saying at the time.
But, again, it would not be entirely like this, for when we read the foreign
equivalent of 'The Earth is round' we might happen to believe that the Earth is
round, even though we did not understand what we were saying at the time. But
an ILM could not even do this, for an ILM understands none of its utterances.
Because an ILM understands none of its utterances, and because the contents
of its cognitive states are the meanings of possible utterances, it follows that an
ILM understands none of its cognitive states. Suppose that it ostensibly believed

that the Earth is round. To do this it would need to understand the proposition
expressed by the sentence 'The Earth is round'. But it understands none of its
utterances, and so does not understand the proposition #The Earth is round#.
Wittgenstein said that if a lion could speak we would not be able to understand
it. Michael Frayn parodied Wittgenstein by saying that if a lion could speak it
would not be able to understand itself. Well, if an AI could speak, it would
not be able to understand itself!
But not to understand the content of a cognitive state is not to have that
cognitive state. If, for instance, a belief is too complicated for us to understand
it, then we cannot have that belief. Think of a complex equation that you do not
understand, and then ask yourself if you can think it or believe it! The White
Queen might have been able to have six impossible thoughts before breakfast,
but the rest of us are less accomplished.
The story of the ILM retells the story of the Chinese Room in terms of
state and content, without any mention of syntax at all. The Chinese Room
produces sequences that the programmers outside the room understand, but
which neither the room nor its occupant understand. An ILM similarly generates
sequences that are meaningful to us, as observers, but not to the system. As I
argued earlier, there is no cognition in the Chinese Room because the content
in the room is unavailable to its occupant, so that it cannot be a content of his
cognition.
Now here is the point of all this: classical representational AI systems are
Inverted Locke Machines, since they contain representations that are meaningful
to us but not to them. Since ILMs cannot have cognitive states, classical representational AI systems cannot have cognitive states, and so cannot be intelligent.

6 Conclusion

Searle says that the Chinese Room shows us that syntactic manipulation cannot
generate content or semantics. But the room does generate content, and it does
so by syntactic manipulation. Consequently the problem cannot lie with the
relationship between content and syntax, and must lie with the relationship
between content and cognition.
The confusion (more properly, the failure to distinguish) between content
and cognition has a long history, and led to fundamental confusions about the
foundations of logic and mathematics in the 19th century. The confusion is still
with us. There is a clear distinction between a cognitive state, such as believing
that the Earth is round, and the content of that state, such as the belief that
the Earth is round, yet strong representational AI essentially tries to generate
cognition by giving computers symbolisms that express the content of cognition.
This project cannot succeed. The key concept is that of a proposition, construed as the meaning of a sentence and the content of the thought expressed by
that sentence. Meanings cannot be inner and private, and so content cannot be
inner and private. Consequently, a system which has been given inner, private

content (expressed in an inner symbolism) cannot have cognitive states, and so
cannot be intelligent.
In fact classical representational AI presents us with an even stranger case, in
which a system has symbolisms that we understand but which the system does
not. Such a system would not understand its own utterances, and so (because
the contents of cognitive states are the meanings of utterances) it would not
understand its own cognitive states. But not to understand a cognitive state
is not to have that cognitive state, so that, again, classical representational AI
systems cannot have cognitive states, and so cannot be intelligent.
7 Postscript

This claim (see the passage referred to in footnote 1) stirred up a storm at the
conference, and a show of hands revealed that most of the audience believed
that intelligence does not require cognition: systems can be intelligent without
believing, knowing, planning, thinking, or having any other kind of cognitive
state. Most seemed to think that the proper goal of strong AI is intelligent
behaviour. This is in part the legacy of the Turing Test. Surely we call behaviour
'intelligent' to the extent that we believe it to be driven by intelligent states.
If we discovered that apparently intelligent behaviour was in fact achieved by
trickery or some kind of conditioned response (think of pattern matching in the
case of Eliza, or a look-up table in the case of the Great Lookup Being) we would
withdraw our belief that the system was intelligent.
8 Bibliography

Amble, T. (1987), Logic Programming and Knowledge Engineering, Wokingham:
Addison-Wesley.
Barr, A. & Feigenbaum, E. A. (1981), The Handbook of Artificial Intelligence,
vol. I, Reading Mass: Addison-Wesley.
Dennett, D. (1987), 'Fast Thinking', in D. Dennett, The Intentional Stance,
Cambridge MA: MIT Press, pp. 324-337.
Haugeland, J. (1985), Artificial Intelligence: The Very Idea, Cambridge, Mass.:
MIT/Bradford Press.
Locke, J. (1690), An Essay Concerning Human Understanding. My edition is
edited by A.D. Woozley, London: Fontana, 1964.
Mill, J. (1843), A System of Logic. London: Longmans, Green, Reader & Dyer.
Mill, J. (1865), Examination of Sir William Hamilton's Philosophy, Boston:
William V. Spencer.
Rich, E. (1983), Artificial Intelligence, Auckland: McGraw-Hill.
Ryle, G. (1949), The Concept of Mind, Hutchinson. Reprinted Harmondsworth:
Penguin (1963).
Schank, R. C. & Abelson, R. P. (1977), Scripts, Plans, Goals and Understanding,
Hillsdale: Lawrence Erlbaum Associates.

Searle, J. (1980), 'Minds, Brains, and Programs', The Behavioral and Brain
Sciences, 3, pp. 417-427.
Searle, J. (1995), 'Mind, syntax, and semantics', in T. Honderich, ed., The Oxford Companion to Philosophy, Oxford: Oxford University Press, pp. 580-581.
Smith, B. C. (1985), 'Prologue to Reflection and Semantics in a Procedural
Language', in R. Brachman & H. Levesque, eds., Readings in Knowledge Representation, Los Altos: Morgan Kaufmann.
Smith, B. C. (1991), 'The Owl and the Electric Encyclopedia', Artificial Intelligence, 47, pp. 251-288.
Wittgenstein, L. (1953), Philosophical Investigations, Oxford: Basil Blackwell.
