
Strong AI and the Chinese Room Argument: Four Views

Joris de Ruiter
3AI, Vrije Universiteit Amsterdam
jdruiter@few.vu.nl
First paper for: FAAI 2006

Abstract
Strong AI is the view that the human mind is a computational device and that computers
are in principle capable of thought. In 1980, Searle published a paper arguing against
this position by means of a thought experiment: the Chinese Room. In the years that
followed, many replies to this paper appeared, of which we will discuss two. The central
questions in this paper are whether strong AI is true, and whether it is possible to create
a true 'Artificial Intelligence'. All four views (strong AI, Searle, Harnad and Churchland)
differ from one another, and all will be summarized and discussed.

In short, this paper discusses some important philosophical underpinnings of AI by
summarizing and discussing the views of three authors and strong AI itself.
Introduction
In 1980, Searle defined strong AI and argued against it by means of the Chinese Room
Argument (CRA). Searle's argument was (and is) a direct challenge to proponents of
Artificial Intelligence (AI), and it also has broad implications for functionalist and
computational theories of meaning and of mind. As a result, there have been many
critical replies to it.

In this paper we discuss two such replies (from Harnad and Churchland (note 1)), along
with Searle's Chinese Room Argument and, of course, strong AI itself. The central
questions will be:
(1) is strong AI true?
(2) is strong AI possible, and how?

Note that this paper requires no prior knowledge of the philosophy of AI: all necessary
terms will be explained. We begin with an explanation of strong AI.

What is strong AI?


Strong AI is the view that the human mind is a computational device and that computers
are in principle capable of thought [1]. Supporters of strong AI believe that an
appropriately programmed computer isn't simply a simulation or model of a mind; it
would actually count as a mind. That is, it understands, has cognitive states, and can
think.

The term was originally coined by John Searle, who writes:

"According to strong AI, the computer is not merely a tool in the study of the mind;
rather, the appropriately programmed computer really is a mind" [2]

By contrast, 'weak AI' is the view that computers are merely useful in psychology,
linguistics, and other areas, in part because they can simulate mental abilities. Weak AI
makes no claim that computers actually understand or are intelligent.

Strong AI states that all there is to having a mind (having mental/cognitive states) is
running a program (the right program, of course). By a program, we mean a sequence of
steps: an algorithm. Programs executed on a computer are always purely symbolic,
consisting only of symbols such as letters or digits (0/1). Because of that, computation is
pure symbol manipulation (e.g. manipulation of the zeros and ones).
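
As a toy illustration of this point (our own example, in Python, not one from the
literature), here is a routine that adds one to a binary numeral purely by rewriting
characters; at no point does it need to know what the marks stand for:

def increment(binary):
    # Rewrite the numeral from right to left, treating '0' and '1' as
    # uninterpreted marks: flip trailing '1's to '0's (the carry), then
    # turn the first '0' encountered into a '1'.
    digits = list(binary)
    i = len(digits) - 1
    while i >= 0 and digits[i] == "1":
        digits[i] = "0"
        i -= 1
    if i >= 0:
        digits[i] = "1"
    else:
        digits.insert(0, "1")  # all ones: prepend a new '1'
    return "".join(digits)

print(increment("0111"))  # -> "1000": symbols in, symbols out

The routine 'adds one' only in the eyes of an interpreter; internally there is nothing but
rule-governed shuffling of marks.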

This leads us to computationalism, which states more or less the same as strong AI
(note 2). For reasons of simplicity, we use the terms interchangeably.

Computationalism is the theory that cognition is computation, that mental states are just
computational states. According to Harnad [3], the following can be said of
computationalism:

(1) Mental states are just implementations of (the right) computer program(s).
(Otherwise put: mental states are just computational states.)
(2) Computational states are implementation-independent. (Software is
hardware-independent.)

Note 1: When referring to Churchland, we actually refer to 'the Churchlands', a married
couple, both professors of philosophy. Sometimes the singular form is more suitable in a
sentence, sometimes the plural. In both cases we refer to both of them, since they wrote
the paper [4] together.
Note 2: For reasons of simplicity, we treat strong AI, computationalism and functionalism
as the same, and use them interchangeably. The same holds for understanding,
consciousness, intentionality, intelligence, and mentality (having a mind): when we say
that a computer has intentionality, or that it can truly understand, we usually mean all of
these words.
If we combine (1) and (2) we get: mental states are just
implementation-independent implementations of computer programs. This is not
self-contradictory. The computer program has to be physically implemented as a
dynamical system in order to become the corresponding computational state, but
the physical details of the implementation are irrelevant to the computational
state that they implement -- except that there has to be some form of physical
implementation. Radically different physical systems can all be implementing one
and the same computational system.

So basically, computationalism/strong AI states that running a program is enough for
mentality, and that mentality is implementation-independent. This has some far-reaching
implications:
- We now know how the mind works (including consciousness, etc.): namely, by just
running a program.
- Whether you run that program on a human or a computer (note 3), the result is the
same (understanding, consciousness, etc.).
- A computer can display any systematic pattern of responses to the environment
whatsoever, and can have all mental states that humans have. It is just a matter of
finding the right computer program (given enough time and storage space). Note that
this is exactly what AI researchers are doing: making intelligent programs.
- The programs that we make to simulate intelligence (e.g. ELIZA, SAM, SHRDLU) are
truly intelligent, and explain human intelligence.

To make matters more concrete, let's consider such a program: SAM (Script Applier
Mechanism), made by Roger Schank in 1977 [6]. Note that "nothing that follows depends
upon the details of Schank's programs. The same arguments (coming after the story)
would apply to Winograd's SHRDLU [7], Weizenbaum's ELIZA [8], and indeed any Turing
machine simulation of human mental phenomena."[2]

Searle describes the program as follows [2] (shortened):

"Very briefly, one can describe Schank's program as follows: the aim of the
program is to simulate the human ability to understand stories. It is characteristic
of human beings' story-understanding capacity that they can answer questions
about the story even though the information that they give was never explicitly
stated in the story. Thus, for example, suppose you are given the following story:

A man went into a restaurant and ordered a hamburger. When the
hamburger arrived it was burned to a crisp, and the man stormed
out of the restaurant angrily, without paying for the hamburger
or leaving a tip.

Now, if you are asked 'Did the man eat the hamburger?', you will presumably
answer, 'No, he did not.' Schank's machines can similarly answer questions about
restaurants in this fashion."
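
To make the mechanics of such a program concrete, here is a minimal sketch in the
spirit of a script applier (a hypothetical toy of ours, not Schank's actual code): a
'script' lists the steps expected in a restaurant visit, and questions are answered by
checking which steps the story actually mentions.

# A hand-written 'restaurant script': the stereotyped sequence of
# events a restaurant story is expected to follow.
RESTAURANT_SCRIPT = ["enter", "order", "food_arrives", "eat", "pay", "leave"]

def apply_script(story_events):
    # Match the story against the script, marking each expected step
    # as having happened or not; this licenses answers about events
    # the story never explicitly mentions.
    return {step: step in story_events for step in RESTAURANT_SCRIPT}

# The hamburger story: the man enters, orders, the food arrives,
# and he storms out -- he never eats or pays.
state = apply_script(["enter", "order", "food_arrives", "leave"])

def answer(step):
    return "Yes." if state[step] else "No, he did not."

print(answer("eat"))  # -> "No, he did not."
print(answer("pay"))  # -> "No, he did not."

Whether this kind of matching amounts to understanding the story is precisely what is
at issue in the remainder of this paper.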

According to Searle [2], partisans of strong AI claim that in this 'question and answer
sequence', "the machine is not only simulating a human ability, but also that the machine
can literally be said to understand the story and provide the answers to questions, and
that what the machine and its program do explains the human ability to understand the
story and answer questions about it."[2]

Note 3: A program can be run not only on a human or a computer, but also on all sorts
of exotic implementations, like a roll of toilet paper, a pile of small stones, or a system
of water pipes and valves. To computationalism, all of these can have mentality, since
they are all able to run a program; it is just a matter of writing the right program, and of
enough time/speed and storage space. Because of the huge size (some 10 billion
neurons) and speed of the brain, these exotic implementations will not be suitable for
reproducing the brain in real time, but in principle they can.
These are exactly the claims that Searle sets out to refute with his Chinese Room
Argument. But before going into that, let's explore computationalism a bit further.

According to Harnad [3], there is a third proposition about computationalism:

(3) There is no stronger empirical test for the presence of mental states than
Turing-Indistinguishability; hence the Turing Test is the decisive test for a
computationalist theory of mental states.

According to him, "this does not imply that passing the Turing Test (TT) is a guarantor of
having a mind or that failing it is a guarantor of lacking one. It just means that we
cannot do any better than the TT, empirically speaking. Whatever cognition actually turns
out to be -- whether just computation, or something more, or something else -- cognitive
science can only ever be a form of 'reverse engineering' [10] and reverse-engineering
has only two kinds of empirical data to go by: structure and function (the latter including
all performance capacities). Because of tenet (2), computationalism has eschewed
structure; that leaves only function. And the TT simply calls for functional equivalence
(indeed, total functional indistinguishability) between the reverse-engineered candidate
and the real thing."[3]

Now that we know what computationalism/strong AI is, we can ask two questions:
(1) is strong AI true? Are all claims made by strong AI true?
(2) is strong AI possible, and how? Can we really build an understanding computer, a
true 'Artificial Intelligence'? The holy grail of AI, a computer which can truly reason,
solve problems, speak natural language fluently, and which is intelligent and sapient
(self-aware): is it possible? And if so, how?

Both questions will be addressed below, where we will discuss the views of
computationalism and of three authors (Searle, Harnad and Churchland).

About (1), we can already say that a lot of discussion is needed before we could reach a
final conclusion. We would have to discuss all objections to the CRA, and probably also
understanding, consciousness, the symbol grounding problem, etc. It suffices to say
that each of these topics has generated its own pile of papers and books. In this paper
we only discuss computationalism, and some of the arguments made by Searle, Harnad
and Churchland.

About (2), we can already say that it can be answered in two ways: theoretically and
practically. The theoretical approach will likely be (again) a long discussion, so let's take
a look at the practical approach. Why don't we just skip the question and start actually
making the programs? And when we have a good enough program (e.g. one which
passes a test), we say: yes, this is strong AI!

Such an approach has indeed been taken. It is called the Turing test, and it tests a
machine's capability to perform human-like conversation. If we cannot distinguish
between a human and a machine in such a conversation, the test concludes that the
machine also has mentality.

This sounds like a great solution for believers in computationalism, because it ignores
structure and only calls for functional equivalence (see tenet (3) above). Also, no
introspection is needed (which suits cognitive scientists), and instead of long theoretical
discussions, we can just start making programs (which suits pragmatists and AI
researchers).

Unfortunately, not everyone agrees. One of the objections raised is that the machine
only simulates mentality, while not actually understanding the conversation. This is one
of the things Searle intends to show with his Chinese Room Argument: even if a program
is indistinguishable from a human (i.e. it passes the Turing test), it still understands
nothing, and therefore the Turing test is not a good enough test for understanding.

So alas, there is no simple way out here; we are condemned to dive into the theoretical
discussion. As said, we will do this by summarizing and discussing the views of
computationalism and of three authors (Searle, Harnad and Churchland). Hopefully,
some interesting conclusions will come out.

Views on Strong AI

We have two questions to answer:
(1) is strong AI true?
(2) is strong AI possible, and how?

Below we quickly summarize the viewpoints of computationalism, Searle, Harnad and
the Churchlands, after which we go into each of them in more depth.

To begin with, all disagree with one another on the first question. Computationalism, of
course, agrees with itself. Searle does not, and lays out the Chinese Room Argument to
'prove' computationalism wrong. Harnad agrees with that proof, but points out that the
CRA is limited. The Churchlands disagree with Searle, and point to a fault in the CRA.

On the second question, all agree that strong AI is possible, but they disagree on how to
achieve it. According to computationalism, having a mind is just a matter of executing
the right program, so we just have to write that program (assuming it exists). Searle
argues against this, stating that even by executing the right program (a program which
passes the Turing test), a computer will still understand nothing. According to him, a
computer needs to have the same causal powers as a brain. Harnad and Churchland are
also against computationalism, but think hybrid or noncomputational systems (like
artificial neural networks) will be the solution.

Computationalism:
Computationalism states that mentality is implementation-independent: whether you
run a program on a human or a computer, the result is the same (understanding,
consciousness, etc.). This is quite a big claim, and intuition may suggest the opposite,
so let's see what ideas lie behind it.

In his paper 'Is the Brain a Digital Computer?' [11], Searle describes the 'primal story',
which he characterizes as "a story about the relation of human intelligence to
computation that goes back at least to Turing's classic paper [12]". He begins the primal
story as follows (slightly modified):

"We begin with two results in mathematical logic, the Church-Turing thesis and
Turing's theorem. For our purposes, the Church-Turing thesis states that for any
algorithm there is some Turing machine that can implement that algorithm (given
enough time and storage space). Turing's theorem says that there is a Universal
Turing Machine which can simulate any Turing Machine. Now if we put these two
together, we have the result that a Universal Turing Machine (UTM) can
implement any algorithm whatever" [11].
Because the computer is a UTM, it can implement any algorithm. Now if
computationalism is right, brains are Universal Turing Machines as well, and this would
yield all the far-reaching implications we described before.
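
To make the UTM idea concrete, here is a minimal sketch (a hypothetical toy of ours,
not taken from Searle or Turing): one generic interpreter, run_tm, executes whatever
machine it is handed as a rule table, just as a single physical computer runs whatever
program it is given.

def run_tm(rules, tape, state="start"):
    # A generic interpreter: the 'machine' being run is nothing but
    # the rule table passed in as data.
    tape, pos = list(tape), 0
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"  # '_' = blank
        # Each rule maps (state, symbol) -> (new state, write, move).
        state, write, move = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Rule table for a machine that flips every bit, halting at the blank.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm(invert, "0110"))  # -> "1001"

Handing run_tm a different rule table makes the same interpreter 'become' a different
machine; that is the sense in which one universal device can implement any algorithm
whatever.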

Now, are there good reasons for supposing the brain might be a Universal Turing
Machine?

"It is clear that at least some human mental abilities are algorithmic. For example,
I can consciously do long division by going through the steps of an algorithm for
solving long division problems. It is furthermore a consequence of the Church-
Turing thesis and Turing's theorem that anything a human can do algorithmically
can be done on a Universal Turing Machine. I can implement, for example, the
very same algorithm that I use for long division on a digital computer. In such a
case, as described by Turing (l950), both I, the human computer, and the
mechanical computer are implementing the same algorithm, I am doing it
consciously, the mechanical computer nonconsciously.

Now it seems reasonable to suppose there might also be a whole lot of mental
processes going on in my brain nonconsciously which are also computational. And
if so, we could find out how the brain works by simulating these very processes on
a digital computer. Just as we got a computer simulation of the processes for
doing long division, so we could get a computer simulation of the process for
understanding language, visual perception, categorization, etc." [11]

And so computationalism concludes that all our brain processes are computational, and
can therefore be simulated on a computer. Moreover, these simulations are not just
simulations, but the real thing: running the program is enough for mentality.

And so AI becomes a search for these programs, with results such as SAM, SHRDLU and
ELIZA. And it is at this moment that Searle steps in to say that those simulations are
really just simulations, that we are not computers, and that syntax (symbol
manipulation) is not enough for semantics.

Searle:
In 1980, Searle published a paper called 'Minds, brains, and programs' [2], in which he
defined strong AI (computationalism) and tried to prove it wrong. He did this by laying
out the Chinese Room Argument, which purports to show that syntax is not enough for
semantics, and therefore that running a program cannot be sufficient for having a mind.
According to him, for a computer to really understand natural language, it needs to have
the same causal powers as the brain.

Below we will discuss these matters in more depth, beginning with a description of the
Chinese Room Argument.

According to Wikipedia, the Chinese Room Argument goes as follows:

"Suppose that, many years from now, we have constructed a computer which
behaves as if it understands Chinese. In other words, the computer takes Chinese
symbols as input, consults a large look-up table (as all computers can be
described as doing), and then produces other Chinese symbols as output.
Suppose that this computer performs this task so convincingly that it easily
passes the Turing test. In other words, it convinces a human Chinese speaker that
it is a Chinese speaker. All the questions the human asks are responded to
appropriately, such that the Chinese speaker is convinced that he or she is talking
to another Chinese speaker. The conclusion proponents of strong AI would like to
draw is that the computer understands Chinese, just as the person does.
Now, Searle asks us to suppose that he is sitting inside the computer. In other
words, he is in a small room in which he receives Chinese symbols, looks them up
on look-up table, and returns the Chinese symbols that are indicated by the table.
Searle notes, of course, that he doesn't understand a word of Chinese.
Furthermore, his lack of understanding goes to show, he argues, that computers
don't understand Chinese either, because they are in the same situation as he is.
They are mindless manipulators of symbols, just as he is - and they don't
understand what they're 'saying', just as he doesn't.

The conclusion of this argument is that running a program cannot create


understanding. The wider argument includes the claim that one cannot get
semantics (meaning) from syntax (formal symbol manipulation)." [13]
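
The look-up-table picture can be made vivid with a tiny sketch (our own; a real
conversation could never fit in a finite table like this, and the Chinese phrases and their
glosses are merely illustrative, so take it purely as a picture of 'mindless symbol
manipulation'):

# The rule book: input symbols on the left, output symbols on the
# right. Neither an operator nor a CPU executing this lookup has any
# access to what the symbols mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols_in):
    # Pure lookup: match the shape of the input, copy out the reply.
    return RULE_BOOK.get(symbols_in, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking output, no understanding inside

From the outside the replies look competent; inside there is nothing but shape matching,
which is exactly Searle's point.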

We might summarize this argument as a reductio ad absurdum against Strong AI:

"Let L be a natural language, and let us say that a 'program for L' is a program for
conversing fluently in L. A computing system is any system, human or otherwise,
that can run a program.

(1) If Strong AI is true, then there is a program for Chinese such that if any
computing system runs that program, that system thereby comes to understand
Chinese.
(2) I could run a program for Chinese without thereby coming to understand
Chinese.
(3) Therefore Strong AI is false.

The second premise is supported by the Chinese Room thought experiment."[13]

Lots of objections have been raised against the Chinese Room Argument; a summary of
these can be found in the Stanford Encyclopedia of Philosophy [9]. For now it suffices to
say that the fight is still going on:

"The many issues raised by the Chinese Room argument will not be settled until
there is a consensus about the nature of meaning, its relation to syntax, and
about the nature of consciousness. There continues to be significant disagreement
about what processes create meaning, understanding, and consciousness, and
what can be proven a priori by thought experiments." [9]

So then, Searle: if pure symbol manipulation cannot achieve strong AI, what can?
His view is that "only a machine could think, and indeed only very special kinds of
machines, namely brains and machines that have the same causal powers as brains."
Searle concludes that "whatever else intentionality is, it is a biological phenomenon, and
it is as likely to be as causally dependent on the specific biochemistry of its origins as
lactation, photosynthesis, or any other biological phenomena." [2] To keep things
simple, one can take this to mean that for a computer to actually think, it has to have
the same biological structure as the brain. So basically, only biological brains can think,
nothing else.

We will come back to this later, because Harnad and the Churchlands think there is
actually room between the two extremes (pure computationalism on one side, only
biological brains on the other): hybrid or noncomputational systems.

Note that, apart from computationalism, the CRA also refutes the Turing test as a good
enough test for understanding: according to Searle, even if a program passes it, it is
still mindlessly manipulating symbols, and thereby understands nothing. The Turing
test is a purely behavioural test (it only looks at the natural-language behaviour of the
system), while Searle is also interested in what's inside the system (brains or symbol
manipulation). The Turing test only looks at function; Searle also looks at structure (to
be precise: the internal structure of the brain/computer).
If computationalism is true, we are allowed to look only at function (because the
physical implementation doesn't matter), but because Searle 'proved' computationalism
false, we must also look at structure.

Harnad:
In his paper 'What's Wrong and Right About Searle's Chinese Room Argument?' [3],
Harnad summarizes the Chinese Room Argument, agrees with it (thereby refuting
computationalism), and counters a few objections to the CRA. After that (with only one
page left), he goes on to say that the CRA is limited to refuting only pure
computationalism, and that "there are still plenty of degrees of freedom in both hybrid
and noncomputational approaches".

He states that "for although the CRA shows that cognition cannot be all just
computational, it certainly does not show that it cannot be computational at all".

Harnad concludes that Searle's contribution (the CRA) has not only been negative
(refuting computationalism), but that "his critique has helped open up the vistas that
are now called 'embodied cognition' and 'situated robotics'". Also, Harnad states that,
thanks to Searle, he is now exploring neural nets.

The Churchlands:
According to the Churchlands, the CRA is flawed (note 4), and so it has not been proven
that mentality cannot be achieved by pure symbol manipulation. However, due to the
performance failures of classical AI and to specific characteristics of brains, they think
that "classical AI is unlikely to yield conscious machines", but that "systems that mimic
the brain might" [4].

By classical AI, we refer to a movement at the beginning of the AI research program,
which has the goal of "identifying the undoubtedly complex function that governs the
human pattern of response to the environment, and then write the program by which
the Symbol Manipulation machine (SM-machine) will compute it" [4]. Note that by this
description, classical AI is strongly dependent on the claims made by
computationalism/strong AI.

By systems which mimic the brain, they refer to Artificial Neural Networks (ANNs).

Note that we have just answered the first major question (is strong AI true?), and are
now tumbling into the second (is strong AI possible, and how?).

As said, the reasons why SM-machines may not lead to conscious intelligence, while
ANNs might, are twofold:
- the performance failures of classical AI
- specific characteristics of brains (and thereby of ANNs as well)

By performance failures, they refer to the fact that SM-machines are not very good at
tasks such as object recognition. Compared to brains, computers were slower and
required vast knowledge bases, which of course created their own set of problems. The
Churchlands conclude that "the functional architecture of classical SM machines is
simply the wrong architecture for the very demanding jobs required". For these 'very
demanding jobs', ANNs might be more suitable.

Note 4: The arguments for this are a bit too long to state here. However, the
Churchlands conclude that "even though Searle's Chinese room may appear to be
'semantically dark', he is in no position to insist, on the strength of this appearance,
that rule-governed symbol manipulation can never constitute semantic phenomena" [4].
In general, we can make a distinction between brains and computers. Lots of things can
be said about their differences, the most important being:
- speed vs. parallelism: computers are roughly a million times faster than brains (both in
signal propagation and in clock frequency), but brains have roughly 10^11 neurons
(each of which can be seen as a simple CPU).
- symbol manipulation vs. vector manipulation: a computer does pure symbol
manipulation (zeros and ones), while neural networks can be seen as vector
transformers: input vectors are transformed by neurons and weighted links, and the
result is output (see the sketch below). According to the Churchlands, symbol
manipulation appears to be just one of many cognitive skills that a network may or may
not learn to display, and is certainly not its basic mode of operation.
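
As a minimal sketch of what 'vector transformation' means here (our own toy with
made-up numbers, not the Churchlands' example), consider a one-layer network that
maps an input vector to an output vector through weighted links and a squashing
nonlinearity:

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # weighted links: 4 inputs feeding 3 neurons
b = np.zeros(3)              # per-neuron biases

def transform(x):
    # Each neuron sums its weighted inputs and squashes the result;
    # nothing here matches and rewrites discrete symbols by rule.
    return np.tanh(W @ x + b)

x = np.array([0.2, -1.0, 0.5, 0.3])  # input vector (e.g. sensor readings)
print(transform(x))                  # output vector, not a symbol string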

Note that defining neural networks (and thereby ANNs) this way shields ANNs from
Searle's Chinese Room Argument, because the argument is directed against
rule-governed SM-machines, not against vector transformers.

But let us continue. Above, we made some distinctions between brains and computers.
While these are interesting in themselves, they also mean that brains and computers
are suited to radically different types of computational problems.

"Parallel processing is not ideal for all types of computation. On tasks that require
only a small input vector, but many millions of swiftly iterated recursive
computations, the brain performs very badly, whereas classical SM machines
excel. This class of computations is very large and important, so classical
machines will always be useful, indeed, vital.
There is, however, an equally large class of computations for which the brain’s
architecture is the superior technology. These are the computations that typically
confront living creatures: recognizing a predator’s outline in a noisy environment;
recalling instantly how to avoid its gaze, flee its approach or fend off its attack;
distinguishing food from nonfood; and so on"[4]

With this knowledge in mind, we can answer the question 'is strong AI possible, and
how?'. To the Churchlands, the answer is that pure SM-machines are unlikely to yield
conscious intelligence, but that systems that mimic the brain might. They conclude that
"only research can decide how closely an artificial system must mimic the biological
one, to be capable of intelligence."

Conclusion
We have seen the claims of strong AI/computationalism, and three views commenting
on them. All tried to answer two questions:
(1) is strong AI true?
(2) is strong AI possible, and how?

Computationalism of course holds that strong AI is true, and states that running the
right program is enough for creating a strong Artificial Intelligence. Searle argued
against this (with the CRA), and concluded that computationalism is false. To him,
strong AI can only be achieved by a computer having the same causal powers as the
brain. Harnad agrees with that, but points out that the CRA is limited to refuting only
pure computationalism, which leaves "still plenty of degrees of freedom in both hybrid
and noncomputational approaches". The Churchlands argue that the Chinese Room
Argument is flawed, and so it has not been proven that mentality cannot be achieved by
pure symbol manipulation. However, due to the performance failures of classical AI and
to specific characteristics of brains, they think that "classical AI is unlikely to yield
conscious machines", but that "systems that mimic the brain might".

While computationalism and Searle both make clear claims about what is needed and
sufficient for achieving a strong AI, Harnad and the Churchlands leave the question
open for empirical research to decide.

Finally, because there is disagreement over what is needed for mentality, there is also
disagreement over whether the Turing test is a good enough test for mentality.
Computationalism clearly says yes; Searle clearly says no; Harnad and the Churchlands
are somewhere in between (for reasons not covered in this paper).

This paper has touched on strong AI/computationalism, the Chinese Room Argument,
the Turing test, understanding, consciousness, the relation between syntax and
semantics, and more. Each of these areas comes with loads of problems, loads of
papers, and no consensus; this paper has only given an overview of some of them.

More research, especially in the domains of cognitive science and artificial intelligence,
will hopefully shed some light on these issues.

References
Note: the main papers summarized and discussed in this paper are [2], [3] and [4].

[1] Definitions of some key terms.
http://www.ucd.ie/philosop/documents/2.%20definitions%20of%20some%20key%20terms.htm

[2] Searle, J. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences
3 (3): 417-457.
http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html

[3] Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?
In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford
University Press.
http://cogprints.org/1622/

[4] Churchland, P.M. & Churchland, P.S. (1990) Could a Machine Think? Scientific
American 262.1: 32-37.
http://www.psychology.ilstu.edu/jbwagma/churchland.pdf

[5] Searle, J. R. (1990) Is the Brain a Digital Computer? Proceedings and Addresses
of the American Philosophical Association 64: 21-37.
http://web.comlab.ox.ac.uk/oucl/research/areas/ieg/e-library/sources/searle_comp.pdf

[6] Schank, R. C. & Abelson, R. P. (1977) Scripts, plans, goals, and understanding.
Hillsdale, N.J.: Lawrence Erlbaum Press.

[7] Winograd, T. (1973) A procedural model of language understanding. In: Computer
models of thought and language, ed. R. Schank & K. Colby. San Francisco: W. H.
Freeman.

[8] Weizenbaum, J. (1966) ELIZA - a computer program for the study of natural
language communication between man and machine. Communications of the ACM
9 (1): 36-45.

[9] Stanford Encyclopedia of Philosophy - The Chinese Room Argument.
http://plato.stanford.edu/entries/chinese-room/

[10] Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering:
The Darwinian Turing Test for Artificial Life. Artificial Life 1: 293-301. (Reprinted in:
C. G. Langton (ed.), Artificial Life: An Overview. Cambridge, MA: MIT Press, 1995.)
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad94.artlife2.html

[11] Searle - Is the Brain a Digital Computer? (Same paper as [5].)
http://www.ecs.soton.ac.uk/~harnad/Papers/Py104/searle.comp.html

[12] Turing, A. (1950) Computing Machinery and Intelligence. Mind 59: 433-460.

[13] Wikipedia - Chinese Room Argument.
http://en.wikipedia.org/wiki/Chinese_room
