
Functionalism

Some context from: Stanford Encyclopedia of Philosophy


Behaviorism ... attempts to explain behavior without any
reference whatsoever to mental states and processes
http://plato.stanford.edu/entries/functionalism/#2.3
Functionalism in the philosophy of mind is the doctrine
that what makes something a mental state of a
particular type does not depend on its internal
constitution, but rather on the way it functions, or the
role it plays, in the system of which it is a part.
http://plato.stanford.edu/entries/functionalism/

Functionalism
Things are defined by their functions
Two ways to define function
1) Function = inputs and outputs (machine functionalism)

e.g. a mathematical function such as +, -, ×, /
2 × 3 = 6: when the inputs are 2 and 3, the output is 6
Multiple realizability: can be realized in different
materials or through different processes
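
As a hedged illustration of multiple realizability (an invented example, not from the original slides), the same input-output function can be realized by very different processes. The sketch below, in Python, realizes addition once with the built-in operator and once by repeated counting, yet both define the same function:

# Illustrative sketch: two different "realizations" of the same
# input-output function. Functionalism cares only about the mapping
# from inputs to outputs, not about how it is physically carried out.

def add_builtin(a: int, b: int) -> int:
    """Realization 1: use the machine adder via Python's + operator."""
    return a + b

def add_by_counting(a: int, b: int) -> int:
    """Realization 2: repeated increment, the way a waiter might tally items."""
    result = a
    for _ in range(abs(b)):
        result += 1 if b > 0 else -1
    return result

# Same function (same input-output behavior), different realizations:
assert add_builtin(2, 3) == add_by_counting(2, 3) == 5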

Functionalism defined as inputs and outputs continued


e.g. beliefs, desires
"I am thirsty" (i.e. I desire water) is defined in terms of
inputs and outputs. When there are inputs x and y, there
is output z:

Inputs:
(x) Water is available
(y) There is no reason not to drink the water
Output:
(z) I drink water
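
A minimal sketch (illustrative only; the function and parameter names are invented here) of the desire "I am thirsty" treated purely as an input-output mapping, in the spirit of machine functionalism:

# Illustrative sketch: a desire characterized only by its input-output profile.
# The inputs x and y and the output z mirror the table above.

def thirsty_agent(water_available: bool, reason_not_to_drink: bool) -> str:
    """Given inputs (x) and (y), produce output (z)."""
    if water_available and not reason_not_to_drink:
        return "drink water"   # output z
    return "do not drink"

print(thirsty_agent(water_available=True, reason_not_to_drink=False))  # drink water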

2) Function = use (teleological functionalism)


Function is defined by what something does.
e.g. a heart pumps blood.
e.g. a belief plays a role in reasoning: a premise in a
practical syllogism

Premise 1: I believe x is water
Premise 2: I desire water
Premise 3: There is no reason not to drink x
Conclusion: I drink x
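
As a rough sketch (an invented example, not part of the original slides) of a belief playing its teleological role as a premise in practical reasoning, the conclusion below is derived only when all three premises hold:

# Illustrative sketch: a belief defined by the role it plays in reasoning.
# The three premises of the practical syllogism jointly license the conclusion.

beliefs = {"x is water"}
desires = {"water"}
defeaters = set()   # reasons not to drink x; empty here

def practical_syllogism(x: str = "x") -> str:
    premise1 = f"{x} is water" in beliefs        # I believe x is water
    premise2 = "water" in desires                # I desire water
    premise3 = len(defeaters) == 0               # no reason not to drink x
    if premise1 and premise2 and premise3:
        return f"drink {x}"                      # conclusion: I drink x
    return "no action"

print(practical_syllogism())  # drink x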

Whether you interpret function as an
input-output relation (machine functionalism) or
as use (teleological functionalism), mental states
such as thirst are multiply realizable.

A waiter can perform addition.
A computer can perform addition.
An alien can have thirst, pain, etc.
A chimpanzee can have thirst, pain, etc.

Functional definition of mind


If x acts like a mind, it is a mind.
If, when compared to a mind, given similar inputs,
x gives similar outputs, x is a mind.
If a computer can converse (take part in linguistic
input and output exchanges/play the role of an
intelligent conversational partner) just like a
person, the computer is as intelligent as a person.
It has a mind.

The Chinese Room Argument

Background
Thought Experiments
Instead of empirical experiments, philosophers
and logicians can conduct thought experiments
Thought experiments may be carried out using
natural languages, graphic visualizations, and/or
formalized versions of their relevant aspects
They test concepts and theories for consistency,
completeness, etc., using critical intuition aided
by logic tools (e.g., reasoners) for evaluation

The Turing Test


In 1950, the computer scientist Alan Turing
wanted to provide a practical test to answer
"Can a machine think?"
His solution -- the Turing Test:
If a machine can conduct a conversation so well that
people cannot tell whether they are talking with a person
or with a computer, then the computer can think. It
passes the Turing Test.
In other words, he proposed a functional solution to the
question "Can a computer think?"

There are many modern attempts to produce computer
programs (chatterbots) that pass the Turing Test.
In 1991 Dr. Hugh Loebner started the annual Loebner Prize
competition, with prize money offered to the author of the
computer program that performs the best on a Turing Test.
You can track (and perhaps try) the annual winners:
http://en.wikipedia.org/wiki/Loebner_prize#Winners
But Turing Tests have been objected to on several grounds:
http://en.wikipedia.org/wiki/Turing_test#Weaknesses_of_the_test

Searle's Chinese Room Argument


John Searle
Famous philosopher at the University of
California, Berkeley. Best known for work
in philosophy of language, philosophy
of mind, and consciousness studies.
Wrote "Minds, Brains and Programs" in
1980, which described the Chinese
Room Argument:
"... whatever purely formal principles
you put into the computer, they will not
be sufficient for understanding, since a
human will be able to follow the formal
principles without understanding
anything."

Searle's Chinese Room Argument


The Chinese Room argument is one kind of objection to
functionalism, specifically to the Turing Test.
Searle makes a distinction between strong AI and weak AI,
objecting (only) to strong AI:
Strong AI: the appropriately programmed computer
really is a mind, in the sense that computers, given the
right programs, can be literally said to understand.
Weak AI: computers can simulate thinking and help
us to learn about how humans think.
NB: Searle knows that he understands English and, by
contrast, that he does not understand any Chinese.

Summary of Searle's Chinese Room Thought Experiment
Searle is in a room with input and output windows, and a
list of rules, in English, about manipulating Chinese
characters.
The characters are all meaningless squiggles and
squoggles to him.
Chinese texts and questions come in from the input window.
Following the rules, he manipulates the characters and
produces each reply, which he pushes through the
output window.
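
A toy sketch (invented here, not Searle's own formulation) of what the person in the room does: purely syntactic rule application, mapping incoming character strings to outgoing ones with no access to their meaning:

# Illustrative sketch: the room as a purely syntactic rule book.
# The rules relate input symbol strings to output symbol strings;
# nothing in the procedure consults what the symbols mean.

RULES = {
    "你好吗?": "我很好, 谢谢.",     # hypothetical rule entries; to the rule-follower
    "这是水吗?": "是的, 这是水.",   # these are just uninterpreted squiggles
}

def room(incoming: str) -> str:
    """Follow the rule book: look up the input shape, emit the listed output."""
    return RULES.get(incoming, "请再说一遍.")  # default reply, also uninterpreted

print(room("你好吗?"))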

The answers in Chinese that Searle produces are very good.
In fact, so good that no one can tell he is not a native
Chinese speaker!
Searle's Chinese Room passes the Turing Test. In other
words, it functions like an intelligent person.
Searle has only conducted symbol manipulation, with no
understanding, yet he passes the Turing Test in Chinese.
Therefore, passing the Turing Test does not ensure
understanding.
In other words, although Searle's Chinese Room functions
like a mind, he knows (and we, in an analogous foreign-language
room experiment, would know) it is not a mind,
and therefore functionalism is wrong.

Grailog: Classes, Instances, Relations

[Two Grailog diagrams, reproduced here only as a textual placeholder.
"Classes with relations": nodes for Language and its subclasses Chinese
and English, plus texts, questions, rules, and replies, connected by
relations such as understand (one link negated), apply ... with ... to,
and use, with subClassOf and instanceOf links.
"Instances with relations": Searle and Wang, each with a lang/haveLanguage
link, producing replies Searle-replyi and Wang-replyi, related by a
distinguishable link.]
Syntax vs. semantics

Searle argues that computers can never understand,
because computer programs (and he in a Chinese Room)
are purely syntactic, with no semantics.
Syntax: the rules for symbol manipulation, e.g. grammar
Semantics: understanding what the symbols (e.g. words)
mean
Syntax without semantics: "The bliggedly blogs browl
aborigously."
Semantics without syntax: "Milk want now me."

Searle concludes that symbol manipulation
alone can never produce understanding.
Computer programming is only symbol
manipulation.
Computer programming can never produce
understanding.
Strong AI is false and functionalism is wrong.

What could produce real understanding?


Searle: it is a biological phenomenon and only
something with the same causal powers as
brains can have [understanding].

Objections
The Systems Reply
Searle is part of a larger system. Searle doesn't understand
Chinese, but the whole system (Searle + room + rules)
does understand Chinese.
The knowledge of Chinese is in the rules contained in the
room.
The ability to implement that knowledge is in Searle.
The whole system understands Chinese.

Searle's Response to the Systems Reply

1) It's absurd to say that the room and the rules can
provide understanding.

2) What if I memorized all the rules and internalized the
whole system? Then there would just be me, and I still
wouldn't understand Chinese.

Counter-response to Searle's response

If Searle could internalize the rules, part of his brain would
understand Chinese. Searle's brain would house two
personalities: English-speaking Searle and a Chinese-speaking
system.

The Robot Reply

What if the whole system were put inside a robot?
Then the system would interact with the world.
That would create understanding.

Searle inside the robot

Searle's response to the Robot Reply

1) The robot reply admits that there is more to
understanding than mere symbol manipulation.
2) The robot reply still doesn't work. Imagine that I am in
the head of the robot. I have no contact with the
perceptions or actions of the robot. I still only manipulate
symbols. I still have no understanding.

Counter-response to Searle's response


Combine the robot reply with the systems reply. The robot
as a whole understands Chinese, even though Searle
does not.

The Complexity Reply

Really a type of systems reply.
Searle's thought experiment is deceptive: it suggests that a room,
a man with no understanding of Chinese, and a few slips of
paper could pass for a native Chinese speaker.
In fact, it would be incredibly difficult to simulate a Chinese
speaker's conversation. You need to program in
knowledge of the world, an individual personality with a
simulated life history to draw on, and the ability to be
creative and flexible in conversation. Basically, you need to
be able to simulate the complexity of an adult human brain,
which is composed of billions of neurons and trillions of
connections between neurons.

Complexity changes everything.

Our intuitions about what a complex
system can do are highly unreliable.
Tiny ants with tiny brains can
produce complex ant colonies.
Computers that at the most basic level are just binary
switches that flip between 1 and 0 can play chess and beat the
world's best human player.
If you didn't know it could be done, you would not believe it.
Maybe symbol manipulation of sufficient complexity can
create semantics, i.e. can produce understanding.

Possible Response to the Complexity Reply

1) See Searle's Response to the Systems Reply.

2) Where would the quantitative-qualitative transition be?

Counter-response to that response

What would happen if Searle's Chinese-speaking
subsystem became as complex as the
English-speaking rest of his linguistic mind?

Searle's criticism of strong AI's
mind-program analogy

Searle's criticism of strong AI's analogy
"mind is to brain as program is to computer"
seems justified, since mental states and events
are literally a product of the operation of the
brain, but the program is not in that way a
product of the computer.

Classes and relations

[Grailog diagram, shown here only as a textual placeholder: class nodes
brain (under animate/tangible), mind (under intangible), computer (under
inanimate/tangible), and program (under intangible), with a produce arc
from brain to mind and a run arc from computer to program.]
Classes: brain, mind, computer, program
Binary relations: produce, run

Instances

[Grailog diagram, textual placeholder: brain instances b1 and b2 each
produce a mind instance (m1 and m2, respectively), while computer
instances c1 and c2 each run the same program instance p; the same
animate/inanimate and tangible/intangible class labels apply as above.]
Classified instances: brains b1, b2; minds m1, m2;
computers c1, c2; program p

A theory claiming two assertions
over the classes and relations

In English:
Different brains (will) produce different minds.
Different computers (can) run the same program.
In Controlled English,
equivalent to first-order logic with (negated) equality:
For all brains B1, B2 and minds M1, M2 it holds that
if B1 ≠ B2 and B1 produces M1 and B2 produces M2,
then M1 ≠ M2.
There exist computers C1, C2 and a program P such that
C1 ≠ C2 and C1 runs P and C2 runs P.
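
A sketch of the same two assertions in first-order logic with equality (the predicate and relation names brain, mind, computer, program, produces, runs are chosen here to mirror the Controlled English above; they are illustrative, not a fixed notation from the slides):

\forall B_1, B_2, M_1, M_2:\;
  (\mathit{brain}(B_1) \land \mathit{brain}(B_2) \land
   \mathit{mind}(M_1) \land \mathit{mind}(M_2) \land
   B_1 \neq B_2 \land
   \mathit{produces}(B_1, M_1) \land \mathit{produces}(B_2, M_2))
  \rightarrow M_1 \neq M_2

\exists C_1, C_2, P:\;
  \mathit{computer}(C_1) \land \mathit{computer}(C_2) \land
  \mathit{program}(P) \land
  C_1 \neq C_2 \land
  \mathit{runs}(C_1, P) \land \mathit{runs}(C_2, P)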

A theory claiming two assertions
over the classes and relations

If produce and run were the same relation,
produce = run, and
brain and computer were the same class,
brain = computer, and
mind and program were the same class,
mind = program,
then this would lead to an inconsistency
between the two assertions: the second assertion would then say
that two different brains produce the same mind, contradicting the first.
Hence, according to the theory, the relations or one of
the pairs of classes must be different.
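
A short worked derivation of that inconsistency (using the formalization sketched above, under the hypothetical identifications brain = computer, mind = program, produce = run):

From the existential assertion there are C1, C2 and P with C1 ≠ C2, runs(C1, P), and runs(C2, P).
Under the identifications, C1 and C2 count as brains, P counts as a mind, and runs = produces,
so C1 ≠ C2, produces(C1, P), and produces(C2, P) all hold.
Instantiating the universal assertion with B1 = C1, B2 = C2, M1 = M2 = P yields P ≠ P,
which contradicts P = P. Hence the theory plus the identifications is inconsistent.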

Conclusion

1) The Turing Test:
Searle is probably right about the Turing Test.
Simulating a human-like conversation probably does
not guarantee real human-like understanding.
Certainly, it appears that simulating conversation to
some degree does not require a similar degree of
understanding. Programs like the 2008 chatterbots
presumably have no understanding at all.

2) Functionalism
Functionalists can respond that the functionalist
identification of the room/computer and a mind is carried
out at the wrong level.
The computer as a whole is a thinking machine, like a brain
is a thinking machine. But the computer's mental states
may not be equivalent to the brain's mental states.
If the computer is organized as nothing but one long list of
questions with canned answers, the computer does not
have mental states such as belief or desire.
But if the computer is organized like a human mind, e.g.
with learnable, interlinked, modularized concepts, facts,
and rules, the computer could have beliefs, desires, etc.,
as in the sketch below.
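
A rough sketch (hypothetical, invented for illustration) of that contrast: a bot that is nothing but a canned question-answer list, versus one organized, very crudely, around beliefs and desires that interact with rules and can be updated by new input:

# Illustrative sketch only: two ways a "conversing" system could be organized.

# (a) Nothing but canned answers: no states that behave like beliefs or desires.
CANNED = {"Are you thirsty?": "Yes, very."}

def canned_bot(question: str) -> str:
    return CANNED.get(question, "I don't know.")

# (b) Organized (minimally) like a mind: beliefs and desires feeding into rules.
class BeliefDesireBot:
    def __init__(self) -> None:
        self.beliefs = {"water is drinkable": True}
        self.desires = {"water"}

    def observe(self, fact: str) -> None:
        self.beliefs[fact] = True            # learnable, interlinked state

    def act(self) -> str:
        if "water" in self.desires and self.beliefs.get("water is available"):
            return "drink water"             # rule combining belief and desire
        return "look for water"

bot = BeliefDesireBot()
print(bot.act())                  # look for water
bot.observe("water is available")
print(bot.act())                  # drink water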

3) Strong AI:
Could an appropriately programmed computer have real
understanding? Too early to say. We might not be
convinced by Searle's argument that it is impossible.
The right kind of programming with the right sort of
complexity may yield true understanding.

Searle's criticism of strong AI's mind-program analogy
seems justified.

4) Syntax vs. Semantics


How can semantics (meaning) come out of symbol
manipulation? How can 1s and 0s result in real meaning?
It's mysterious. But then how can the firing of neurons
result in real meaning? Also mysterious.
One possible reply: meaning is use (Wittgenstein).
Semantics is syntax at use in the world.

5) Qualia
Qualia = raw feels = phenomenal experience = what it is
like to be something
Can a computer have qualia? Again, it is hard to tell
if/how silicon and metal can have feelings. But it is no
easier to explain how meat can have feelings.
If a computer could talk intelligently and convincingly
about its feelings, we would probably ascribe feelings to
it. But would we be right?

6) Searle claims that only biological brains have causal
relations with the outside world such as perception,
action, understanding, learning, and other intentional
phenomena. (Intentionality is by definition that feature
of certain mental states by which they are directed at
or about objects and states of affairs in the world.)
However, an AI
embodied in a robot that puts syntax to use in the world, as in 4),
may not need (subjective) qualia, as in 5),
to achieve perception, action, understanding, and
learning in the objective world.

Optional Readings for next week

Sterelny, Kim, The Representational Theory of Mind,
Section 1.3, pgs. 11-17
Sterelny, Kim, The Representational Theory of Mind,
Sections 3.1-3.4, pgs. 42-49
The Representational Theory of Mind - book review by
Paul Noordhof, Mind, July 1993.
http://findarticles.com/p/articles/mi_m2346/is_/ai_14330173

More optional readings

On the Chinese Room:
Searle, John R. (1990), "Is the Brain's Mind a Computer Program?" in Scientific
American, 262, pgs. 20-25
Churchland, Paul, and Patricia Smith Churchland (1990), "Could a machine think?" in
Scientific American, 262, pgs. 26-31

On modularity of mind:
Fodor, Jerry A. (1983), The Modularity of Mind, pgs. 1-21 at:
http://ruccs.rutgers.edu/forums/seminar3_spring05/Fodor_1983.pdf
Pinker, Steven (1999), How the Mind Works, William James Book Prize Lecture at:
http://www3.hku.hk/philodep/joelau/wiki/pmwiki.php?n=Main.PinkerHowTheMindWorks
