
Broadly speaking, the study of artificial intelligence (AI) involves using machines to execute tasks that would normally require human intelligence and judgment. Some notable examples of AI include the Turing machine, the chess-playing IBM computer Deep Blue, and the music-composing program EMI. Artificially intelligent machines typically require a hardware system and a software program that performs computational operations on syntactical input (formal symbols) and churns out syntactical output, thereby realizing the intended task or function. Looking at AI this way, three questions are usually raised (Lycan 124):

1. Will a computer ever be able to perform X (where X is something intelligent humans can do)?
2. Given that a computer can perform X, can it perform X in the same way that humans can?
3. If a computer can do anything a human can do, does it then have properties that are thought to be limited to humans, e.g. thought, consciousness, intentionality, cognition, semantics, or understanding? (I will use these concepts interchangeably in this paper.)

The first two questions are largely matters of empirical investigation and are not what philosophers are especially interested in. The third question is very much of philosophical interest, and a great deal of philosophical debate relates to it. One way to characterize this debate is in terms of Weak AI vs. Strong AI. According to Weak AI, computers are merely invaluable tools that aid scientists in unlocking the nature of the human mind. We can use them to procure ideas and formulate novel hypotheses, but in and of themselves they do not depict how the mind really works. Contrastingly, Strong AI claims that the computer program is the mind: how the program operates is how the mind operates. My task in this paper is to focus on this issue between Weak and Strong AI by examining in detail John Searle's argument against Strong AI and two replies by Jerry Fodor. Subsequently, I will try to justify my partiality toward Searle.

In Searle's original essay, "Minds, Brains, and Programs," the artificial intelligence under consideration is a story machine that allegedly understands stories in the way persons do. Surprisingly enough, when fed a formalized story and related queries, the machine generates the same answers expected of a competent person who has understood the story. All of this is carried out by the program's formal manipulation of syntax. Since Strong AI identifies programs with the human mind, friends of Strong AI make two claims in light of the story machine: (a) the machine literally understands the story in the same sense a person would, i.e. it has semantics (or intentionality), and (b) the machine's program explains human understanding. Searle denies both of these claims. Below is one version of his argument:

1. Programs are syntax.
2. Minds have contents: semantics.
3. Syntax is not sufficient for semantics.
Therefore, programs are not minds.

Partisans of Strong AI will find premises 1 and 2 uncontroversial. They cannot deny that computer programs involve intricate symbol manipulation governed by formal rules, nor do they want to deny that minds have propositional content. The heart of the debate seems to lie in premise 3: does computation over syntax alone constitute semantics? One way to answer this question is to ask what it would be like if one's own mind actually worked on the principles that Strong AI says it works on. For this purpose, Searle devises the Chinese Room.

Briefly, the situation in the Chinese Room is meant to mimic the operations of the story machine (or any Turing machine, for that matter). An English-speaking person is sealed in a room, following an English rulebook to manipulate Chinese characters. He is given batches of Chinese writing, from which he produces further batches by manipulating the characters according to the rulebook. When he finishes manipulating them, he sends the new batch back out.

Unbeknownst to him, he was actually delivered a story accompanied by relevant questions. By following the rulebook and jotting down the manipulated symbols, the batches he returns are, in fact, the answers. To the Chinese observers standing outside, his answers are indistinguishable from those of a native Chinese speaker. Of course, the person in the Chinese Room never understands Chinese. Unlike a native Chinese speaker, our subject in the room is merely manipulating squiggle squiggle into squaggle squaggle.

The Chinese Room seems to demonstrate this: if a mind really is just a program, then the way a Chinese person understands the story is the same as the way the program instantiates the story. In other words, the Chinese speaker's understanding consists of a computational process. If this were true, then a non-Chinese speaker should be able to understand Chinese just by instantiating the same computational process. As the Chinese Room shows, I myself (a non-Chinese speaker) can instantiate a program for Chinese in the same manner as the story machine without understanding it. As a result, understanding must be something other than a mere computational process. So Strong AI fails Searle's test: it turns out that minds cannot work on the principles of mere syntactic input/output that our story machine works on, because if they did, the person in the Chinese Room would understand Chinese.

The two earlier claims made by partisans of Strong AI seem to be false after all: (a) it appears the story machine doesn't understand the story in the sense a Chinese person would, and as such (b) it can't explain human understanding, because it fails to show that manipulation of syntax is a sufficient condition for understanding; nor can a case be made for its being a necessary condition, since it doesn't follow from the thought experiment that human understanding must involve symbol manipulation. It appears for the moment that premise 3 holds as well: mere manipulation of syntax cannot generate semantics. The conclusion that programs are not minds follows accordingly. Therefore, Strong AI is false.
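
The purely formal character of such a rulebook can be made concrete with a minimal sketch in Python. This is a hedged illustration only: the tokens, rules, and function names are invented, and it is not the actual story machine Searle discusses.

```python
# A toy, purely syntactic "story machine": it matches input symbol strings
# against a rulebook and emits output symbol strings. Every name and rule
# here is invented for illustration; nothing in the program represents what
# any symbol means.

# The "rulebook": (story token, question token) -> answer token.
# The tokens are deliberately uninterpreted placeholders.
RULEBOOK = {
    ("STORY-1", "QUESTION-1"): "ANSWER-YES",
    ("STORY-1", "QUESTION-2"): "ANSWER-NO",
}

def story_machine(story: str, question: str) -> str:
    """Look up the (story, question) pair and return the answer token.

    This is pure symbol matching; the machine would behave identically if
    every token were replaced by an arbitrary squiggle."""
    return RULEBOOK.get((story, question), "ANSWER-UNKNOWN")

if __name__ == "__main__":
    # To an outside observer the answers can look competent, yet the program
    # never associates any token with anything in the world.
    print(story_machine("STORY-1", "QUESTION-1"))  # ANSWER-YES
    print(story_machine("STORY-1", "QUESTION-2"))  # ANSWER-NO
```

The program returns the expected answer tokens, yet nothing in it ties any token to anything in the world; this is the sense in which premise 1 says that programs are syntax.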

Having laid out Searle's argument against Strong AI, consider one possible objection: the robot reply. The robot reply concedes that cognition is not purely a matter of formal symbol manipulation. However, the suggestion is that if we put a computer inside a robot augmented with peripheral sensors for vision, touch, movement, hearing, and so on, we are then justified in saying that the robot has intentionality or propositional content: just as a real person learns by seeing, smelling, and feeling rather than by manipulating syntax, so too does the robot. The point of interest in the robot reply is whether the robot's perceptual apparatus is sufficient for ascriptions of intentionality.

Using the same thought experiment, Searle demonstrates that it isn't. Suppose now there is a person inside the robot's CPU carrying out its input/output functions. All the input data received from the perceptual apparatus are still processed as formal symbols, not as perceptual experiences. To clarify, while the robot's ear may pick up sound waves, the person in the CPU does not hear sounds; rather, the recorded sound waves are translated into formal symbols for the person in the CPU to process. The lingering consequence is that the homunculus remains unaware that he is receiving any unique information from the perceptual apparatus, nor does he realize that he is transmitting information that, for example, moves an arm. All the extra peripheral sensors add are additional sources of syntax, but syntax nonetheless. Yet the very thrust of Searle's argument is that computation of syntax is neither sufficient for nor constitutive of semantics, and it is for this reason above all others that programs cannot be minds.
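
Searle's diagnosis of the robot reply can be pictured with a similarly hypothetical sketch (all sensor encodings, thresholds, and names are invented): raw readings are translated into uninterpreted tokens before the symbol-manipulating core ever sees them, so the core sits in exactly the position of the person in the CPU.

```python
# A hypothetical sketch of Searle's point about the robot reply: peripheral
# sensors only add further *sources* of syntax. Raw readings are encoded
# into formal tokens before the program sees them, so the core manipulates
# symbols just as before. All names and encodings are invented.

def encode_audio(samples: list[float]) -> str:
    """Translate raw sound-wave samples into an uninterpreted token."""
    return "TOKEN-AUDIO-LOUD" if max(samples) > 0.5 else "TOKEN-AUDIO-QUIET"

def encode_camera(pixels: list[int]) -> str:
    """Translate raw pixel intensities into an uninterpreted token."""
    return "TOKEN-IMAGE-BRIGHT" if sum(pixels) / len(pixels) > 128 else "TOKEN-IMAGE-DARK"

# The same kind of rulebook as before: tokens in, token out.
MOTOR_RULES = {
    ("TOKEN-AUDIO-LOUD", "TOKEN-IMAGE-BRIGHT"): "CMD-RAISE-ARM",
}

def robot_core(audio_token: str, image_token: str) -> str:
    """The 'person in the CPU': matches input tokens and emits an output
    token, with no awareness that the inputs came from ears and eyes or
    that the output will move an arm."""
    return MOTOR_RULES.get((audio_token, image_token), "CMD-DO-NOTHING")

if __name__ == "__main__":
    command = robot_core(encode_audio([0.2, 0.9]), encode_camera([200, 180, 220]))
    print(command)  # CMD-RAISE-ARM
```

From the core's point of view, tokens that happen to originate in a microphone or a camera are no different from tokens handed in by hand; the peripherals multiply the sources of syntax without adding semantics.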

One defender of the robot reply in particular, and of machine mentality in general, is Jerry Fodor. He lays out two distinct arguments against Searle. The first is a direct challenge to Searle's reply to the robot argument; the second is an assessment of the credibility of the Chinese Room thought experiment itself. In both instances, Fodor upholds the plausibility of computers having minds. Accordingly, I will treat these two arguments separately.

In "Searle on What Only Brains Can Do," Fodor offers a defense of the robot reply. While he agrees with Searle that an instantiation of a program is not in and of itself sufficient for understanding, it is not altogether clear to him that syntax plus the right causal linkages to the world isn't sufficient either. So long as the symbols the computer manipulates are causally related to the world in the right sort of way, it seems plausible to attribute intentionality to the robot. What Fodor means by "the right sort of way" needs clarifying, and this can't be done without outlining, if only in brief, the basic ideas of his Representational Theory of Mind (RTM).

In short, the RTM claims that there are mental representations of objects in a person's mind. These representations can be described as symbols that somehow represent to the mind the objects in the world. It is in virtue of these mental symbols that thoughts can be intentional, or about something. For example, what makes it the case that when I think of trees, my thought is actually about a tree? In other words, where does my thought get its semantic properties? The RTM would say that when you think about a tree, you have a mental representation of the tree in your head, and this representational symbol is causally linked to actual trees in the world. By way of these mental representations, the thought "trees use photosynthesis" can be about trees. On this view, the RTM is a computational theory of mind which suggests that human minds also process symbols of a sort, very much as a computer does. All of this relates back to Fodor's argument for the robot reply in this way:

1. Human semantics are derived from symbols (mental representations of the world) that are causally linked to the world in a certain way.
2. Computers process symbols.
3. If a computer's symbols are related to the world in the same way a human's symbols are, then a computer can have semantics.
Therefore, syntax can give rise to semantics (contra Searle).

Assuming that the RTM is correct, the paramount challenge is figuring out the causal relation between symbol and world, first for humans and then for computers. In the former case, there is reason to think that the relation somehow involves our perceptual faculties and how we perceive the world, which, I take it, is why Fodor defends the robot reply: the robot's symbols are likewise obtained through its perceptual augmentations. In any case, the exact details of the RTM are an ongoing pursuit, but this is a challenge for the RTM in general, and the Chinese Room does not show that the problem is insurmountable. What the Chinese Room does show is that a person in a room cannot be the right causal relation between symbol and world. It doesn't follow that no relation can be the right one.

In "Yin and Yang in the Chinese Room," Fodor offers another argument against Searle. This argument is wholly unrelated to the RTM argument above; it criticizes the accuracy of the thought experiment itself rather than Searle's understanding of intentionality. In brief, Fodor says that the Chinese Room fails to depict what it was meant to depict, i.e. a Turing machine. Fodor brings in the notion of a proximal cause-and-effect relationship in a Turing machine, in which there can be no intermediary interruptions between cause and effect: the input, so to speak, immediately results in the output. In contrast, Searle's Chinese Room has three stages: input, followed by a person in a room, and then output. On Fodor's view of computational processes, Searle's thought experiment doesn't even recapitulate a real Turing machine, since the person in the room is an unwarranted causal intermediary.
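
What Fodor means by a proximal, uninterrupted transition can be illustrated with a minimal sketch of a single Turing-machine-style step (the states, symbols, and table entries are invented for illustration): the current state and the scanned symbol determine the next state, the written symbol, and the head movement directly, by table lookup, with nothing standing in between.

```python
# A minimal sketch of one Turing-machine-style transition, meant only to
# illustrate the "proximal" picture attributed to Fodor above: (state,
# scanned symbol) maps directly to (next state, written symbol, head move)
# by table lookup, with no further causal intermediary. Table invented.

# (state, scanned_symbol) -> (next_state, written_symbol, head_move)
TRANSITIONS = {
    ("q0", "1"): ("q0", "1", +1),   # skip over 1s
    ("q0", "_"): ("q1", "1", 0),    # write a 1 at the blank and stop
}

def step(state: str, tape: list[str], head: int):
    """Apply one transition: a pure lookup from (state, symbol) to action."""
    next_state, write, move = TRANSITIONS[(state, tape[head])]
    tape[head] = write
    return next_state, tape, head + move

if __name__ == "__main__":
    state, tape, head = "q0", list("111_"), 0
    while state != "q1":
        state, tape, head = step(state, tape, head)
    print("".join(tape))  # 1111
```

On this picture there is no slot for a causal intermediary between input and output, which is the feature Fodor claims the Chinese Room misrepresents.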

As such, while Searle wanted to undercut the claims of Strong AI by instantiating for an English speaker the same Chinese program that Strong AI presumes to be instantiated in a Chinese speaker (only to show that there is understanding in the latter case and not in the former), we can now say that the Chinese Room is not an accurate instantiation of the program in the Chinese speaker at all. If this is the case, then the Chinese Room doesn't undermine Strong AI, because it gets the entire picture of a Turing machine wrong.

Taking Fodor's double objection into consideration, I believe Searle's thesis against Strong AI still stands. Of Fodor's two objections, I take the RTM argument to be the stronger and will therefore use the majority of the remaining paper to explain why Searle's argument withstands it. As for "Yin and Yang," I find it difficult to see why proximity between cause and effect ought to matter in this case. Even granting that the causal input instantaneously precipitates the output effect, we can still ask what that causal relation consists in. The fact that the Chinese Room argument uses a conscious agent to play this role shouldn't compromise the integrity of the example. For instance, if we slowly replaced the billions of neurons in a person's brain with little Martian men doing exactly the same job, it is plausible to think the person would still exhibit the same mental properties. It is difficult to see, on this example, why Fodor would count the Turing machine as a model of the neuronal brain but not of the Martian brain.

Leaving "Yin and Yang" behind, I think there are more interesting things to say about the RTM argument. It is fairly clear that one of the core disputes between Searle and Fodor is how intentionality comes about. As I have shown, Searle does not think syntax is sufficient for semantics. Fodor's challenge is that syntax plus the right causal links to the world can constitute semantics. The robot of the robot reply seems to be one such machine, one that has the right causal links with the world through its perceptual faculties. As such, the plausibility of machines having minds is very real under the RTM, and the Chinese Room does not rule out that possibility. What I want to argue now is that the Chinese Room does undermine that plausibility, even within an RTM framework.

If the RTM is true, then there is a fact of the matter as to what the right causal linkage between symbol and world is. In the human case, we can be said to have mental representations that are causally linked to the world in a determinate way, and this explains why a propositional statement like "I know that tree is tall" is about something, namely a tree. Similarly, if the right causal linkage between syntax and world can be discovered, then Fodor assumes we can attribute intentionality to the computer in the same way.

I can't see how this can be correct. It won't matter that we have established the causal linkage between a given symbol and its correlate in the world. While we can be aware of this determinate link between symbol and object, how does it follow that the machine is aware of this link as well? This seems to be what Searle is saying when he writes: "Let the egg foo yung symbol be causally connected to egg foo yung in any [sic] way you like, that connection by itself will never enable the agent to interpret the symbol as meaning egg foo yung" (523). In order for the machine to understand the symbol as meaning egg foo yung, it needs to be aware of it; but if we need to grant it awareness in order to explain its intentionality, then we are in danger of begging the question, since awareness is conceptually intertwined with intentionality and mentality. The moral of the story seems to be that while the observer may know that the machine's syntax consists of interpreted symbols, this is only so because the observer is the one doing all the interpreting in the first place. The computed symbol has an interpretation, but it certainly won't be the machine's. All the while, the machine goes about its business processing syntax.

In conclusion, I want to make clear that what I have discussed above isn't an argument against the RTM, i.e. it is not an attack on Fodor's theory of intentionality. Rather, my objective is to show that while the RTM's criterion of intentionality may be a plausible account of intentionality in the human case, fulfilling that criterion (i.e. determining the causal relationship between symbol and world) does not warrant the attribution of mentality to machines.

Our knowing the causal relation between the symbol and the world doesn't imply that the machine itself knows that relation. Even within the RTM, there is no room for machines having minds.

Lastly, much of this paper is concerned with Searle's argument about what intentionality is not, namely computation of syntax. What positive account of intentionality Searle does offer in "Minds, Brains, and Programs" is mentioned only sparsely: "I am a certain sort of organism with a certain biological structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena" (Boden 81). Searle's account of intentionality certainly seems wanting. It's hard not to feel that Searle is resorting to some version of a radical emergence of intentionality. Whatever Searle's theory of intentionality may amount to, for the record, my defense of the Chinese Room argument is not a defense of Searle's own theory of intentionality. I only agree with him that intentionality (understanding, cognition, semantics) is not the mere computation of syntax.
