INVARIANTS OF HUMAN BEHAVIOR
Herbert A. Simon*
Department of Psychology, Carnegie-Mellon University, Pittsburgh, Pennsylvania
15213
CONTENTS
PHYSICAL SYMBOL SYSTEMS
Some skeptics continue to regard thinking as something yet to be explained; people who have long since rejected vitalism where bodily functions are concerned remain vitalists when it comes to the mind.
It is still incorrectly thought by some that contemporary information processing psychology leaves unexplained such "holistic" phenomena. In fact, a wide range of thinking tasks has been described in greater or lesser detail (e.g. solving puzzles like the Tower of Hanoi or Missionaries and Cannibals, playing chess like a master or a novice, making medical diagnoses, solving problems in elementary physics and mathematics, making certain kinds of scientific discoveries, learning language, using diagrams to solve problems, and understanding problem instructions). But since many other species of thought remain undescribed, a
vast work of taxonomy and empirical exploration lies ahead. We should avoid
thinking of this work as "mere" taxonomy, for it will unearth multitudes of
interesting and important phenomena and extend our repertory of explanatory
laws and invariants accordingly.
Second, in stark contrast to our complete understanding of the physical
underpinnings of the operation of computers, we have only the vaguest
knowledge today of how the symbol processing capabilities of the human
brain are realized physiologically. Information processing psychology explains the software of thinking, but says only a little about its "hardware" (or "wetware"?). Information processing psychology and neural science are still miles apart, with only slight indications of how a bridge will be built between them, as it certainly will.
This situation is not without precedent. Organismic and cell biology made
extensive progress long before biochemistry could explain their structures and
processes. Nineteenth-century chemistry achieved substantial understanding
of the reactions among molecules long before physics supplied any picture of
atomic structure that could account for the observed chemical regularities.
BEHAVIORAL INVARIANTS
ADAPTIVITY
Let me now put aside biological questions and return to human adaptivity and
its implications for the laws of psychology. A look at computer adaptivity
may cast some light on the human kind. A computer, it is said, can only do
what it is programmed to do (which may be quite different from what the
programmer intended it to do). Generally, it is not instructed to do specific
things at all (e.g. to solve a particular linear programming problem), but to
adapt its behavior to the requirements of a given task chosen from a whole
population of tasks (e.g. to solve any linear programming problem lying
within given size limits). Then its behavior in response to each task is adapted
to the requirements of the task, and it behaves differently, in appropriate
ways, with each task it is given. In short, it is an adaptive system.
The adaptiveness of computers leads to a question that is the converse of
the one raised above. Can a computer be programmed to do anything? Of
course not. Upper limits are set by the famous theorems of Gödel, which
prove that every symbol processing system must be, in a certain fundamental
sense, incomplete. It is a truth of mathematics and logic that any program
(including those stored in human heads) must be unable to solve certain
problems.
Even for a well-defined game like chess, exact solution is an infeasible computation, for it calls for the examination of more chess positions than there are molecules in the universe. If the game of chess, limited to its 64 squares and six kinds of pieces, is beyond exact computation, then we
may expect the same of almost any real-world problem, including almost any
problem of everyday life.
From this simple fact, we derive one of the most important laws of
qualitative structure applying to physical symbol systems, computers and the
human brain included: Because of the limits on their computing speeds and
power, intelligent systems must use approximate methods to handle most
tasks. Their rationality is bounded.
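The arithmetic behind this claim is easy to check. The sketch below is my illustration, not the article's; the branching factor and game length are rough conventional estimates for chess, not figures from the text.

```python
import math

# Rough conventional estimates (assumptions, not figures from the article):
BRANCHING_FACTOR = 35    # average number of legal moves per position
GAME_LENGTH_PLIES = 80   # typical game length in half-moves

# Exhaustive search would examine on the order of 35^80 positions.
positions = BRANCHING_FACTOR ** GAME_LENGTH_PLIES
print(f"game tree size ~ 10^{int(math.log10(positions))}")  # ~ 10^123

# Even a generous estimate of particles in the observable universe (~10^80)
# is dwarfed by the game tree, so exact computation is physically hopeless.
PARTICLES_IN_UNIVERSE = 10 ** 80
print(positions > PARTICLES_IN_UNIVERSE)  # True
```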
Reasoning Under the Optimality Principle
Historically, human adaptiveness (that is to say, rationality) has preoccupied
economists even more than psychologists. Modern mainstream economic
theory bravely assumes that people make their decisions in such a way as to
maximize their utility (Simon 1979a). Accepting this assumption enables
economics to predict a great deal of behavior (correctly or incorrectly) without
ever making empirical studies of human actors.
If we wish to know what form gelatin will take when it solidifies, we do not
study the gelatin; we study the shape of the mold in which we are going to
pour it. In the same way, the economist who wishes to predict behavior
studies the environment in which the behavior takes place, for the rational
economic actor will behave in whatever way is appropriate to maximize utility
in that environment. Hence (assuming the utility function to be given in
advance), this maximizing behavior is purely a function of the environment,
and quite independent of the actor.
The same strategy can be used to construct a psychology of thinking. If we
wish to know how an intelligent person will behave in the face of a particular
problem, we can investigate the requirements of the problem. Intelligence
consists precisely in responding to these requirements. This strategy has, in
fact, been pursued occasionally in psychology; the theories of perception of J.
J. Gibson (1966) and David Marr (1982) exemplify it, as do some of the recent
rational models of my colleague John R. Anderson (1989).
Why don't we, then, close up the laboratory, frequently a place of vexing
labors and unwelcome surprises, and build a psychology of intelligence by
rational analysis, as the economists have done? The answer, already suggested, lies in the law that I have called the Principle of Bounded Rationality (Simon 1989b). Since we can rarely solve our problems exactly, the optimizing strategy suggested by rational analysis is seldom available. We must find techniques for solving our problems approximately, and we arrive at different solutions depending on what approximations we hit upon. Hence, to describe, predict and explain the behavior of a system of bounded rationality, we must both construct a theory of the system's processes and describe the environments to which it is adapting.
solve many problems that could not be solved without it, but the domain of
differential equations that cannot be exactly integrated in closed form vastly
exceeds the domain of those that can be.
The wide-ranging attempts since the Second World War to apply the optimizing tools of operations research (linear programming, integer and dynamic programming, queuing theory, and so on) to the decision problems of management have underlined the computational complexity of real-world problems, even relatively well-structured problems that are easily quantified.
Using queuing theory, an optimum production schedule can be found for a
factory that manufactures one or two products, using one or two different
pieces of equipment. Adding even one more product or piece of equipment
puts the problem beyond computational bounds for the fastest supercomputer.
(An optimal class schedule for a university lies even further beyond the limits
of practical computation.)
Yet factories (and universities) are scheduled every day. We are forced to
conclude that methods other than optimization are used: methods that respect
the limits of human and computer rationality. Perhaps the feasible methods
are specific to each specific situation, in which case it is hard to see what
cognitive psychology should say about them.
On the other hand, it is possible that some common properties, deriving
from human bounded rationality, are shared by the approximating procedures
people use in many kinds of complex situations. If so, it is the task of
cognitive psychology to characterize these procedures, to show how they are
acquired, and to account for their compatibility with the known computational
limitations of the human brain.
Recognition Processes
We now know that experts make extensive use of recognition processes, based
on stored knowledge, to handle their everyday tasks. This recognition
capability, based (by rough estimate) on 50,000 or more stored cues and
associated knowledge, allows them to solve many problems "intuitively", that is, in a few seconds and without conscious analysis. Recognizing key cues allows experts to retrieve directly from memory information for dealing with the situations that the cues identify. Recognition processes have been
shown to play a major role, perhaps the major role, in such diverse tasks as
grandmaster chessplaying, medical diagnosis, and reading. Introductions to
the evidence will be found in de Groot (1978), Simon (1979a), and Chi et al
(1988).
Computer simulation models like EPAM (Feigenbaum & Simon 1984)
provide explanatory mechanisms for recognition-based expertise, including a
learning mechanism for acquiring the stored chunks on which it is based.
Alternative models are being developed in the form of parallel, connectionist
systems (EPAM is a basically serial system). The theoretical explanations and
computer models assume processing speeds that are well within the known
human physiological limits, and EPAM, at least, predicts a wide range of the
phenomena that have been reported in the verbal learning literature (including
the times reported by Ebbinghaus for the learning of nonsense syllables). We
can regard intuition as a phenomenon that has been rather thoroughly explained: It is achieved through acts of recognition.
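The recognition mechanism described here can be caricatured in a few lines of code: a discrimination net sorts a stimulus by testing one feature at a time until it reaches a stored chunk. This is only a toy sketch in the spirit of EPAM, not the actual program; the features and chunks below are invented for illustration.

```python
# Toy discrimination net (illustrative only; not Feigenbaum & Simon's EPAM).
# Internal nodes test one feature of the stimulus; leaves hold a recognized
# "chunk" with its associated knowledge.

class Node:
    def __init__(self, feature=None, branches=None, chunk=None):
        self.feature = feature          # feature to test, e.g. "first_letter"
        self.branches = branches or {}  # feature value -> child Node
        self.chunk = chunk              # stored chunk, if this is a leaf

    def recognize(self, stimulus):
        """Sort the stimulus down the net; return the chunk reached, if any."""
        if self.chunk is not None:
            return self.chunk
        child = self.branches.get(stimulus.get(self.feature))
        return child.recognize(stimulus) if child else None

# A tiny hand-built net: discriminate first on initial letter, then on length.
net = Node("first_letter", {
    "c": Node("length", {3: Node(chunk="CAT"), 4: Node(chunk="CART")}),
    "d": Node(chunk="DOG"),
})

print(net.recognize({"first_letter": "c", "length": 4}))  # CART
print(net.recognize({"first_letter": "x", "length": 3}))  # None: unfamiliar
```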
Heuristic Search
What about problems whose solutions are not provided by immediate recognition,
but which require analysis? Here also, a number of the principal processes have
been identified and simulated. Collectively, they are usually called heuristic (or
selective) search. When a great space of possibilities is to be explored (and
humans commonly balk at searching spaces when the possibilities number even in
the hundreds), search becomes very selective. It is then guided by various rules of
thumb, or heuristics, some of which are specific to particular tasks, but some of
which are more general (Newell & Simon 1972).
If the task domain is highly structured, the task-specific heuristics may be
very powerful, drawing upon the structural information to guide search
directly to the goal. For instance, most of us apply a systematic algorithm
when we must solve a linear equation in algebra. We don't try out different
possible solutions, but employ systematic steps that take us directly to the
correct value of the unknown.
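The systematic procedure contrasted here with trial and error can be shown in miniature. The function below is my sketch (the names are not from the article): it applies the two standard steps, subtract the constant and divide by the coefficient, to solve a*x + b = c directly.

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c by the standard two-step algorithm:
    subtract b from both sides, then divide both sides by a."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    return (c - b) / a

print(solve_linear(3, 4, 19))  # 3x + 4 = 19  ->  x = 5.0
```

No candidate values are ever "tried out": the structure of the equation takes the solver straight to the answer.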
If the task domain has little structure or the structure is unknown to us, we
apply so-called "weak methods," which experience has shown to be useful in
many domains, but which may still require us to search a good deal. One
weak method is satisficing: using experience to construct an expectation of
how good a solution we might reasonably achieve, and halting search as soon
as a solution is reached that meets the expectation.
Picking the first satisfactory alternative solves the problem of making a
choice whenever (a) an enormous, or even potentially infinite, number of
alternatives are to be compared and (b) the problem has so little known
they affect the values of more than one person. Then a satisficing choice can
still be made as soon as an alternative is found that (a) is satisfactory along all
dimensions of value, (b) has satisfactory outcomes for all resolutions of the
uncertainty, or (c) is satisfactory for all parties concerned, respectively.
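Satisficing as just described reduces to a one-pass rule: fix an aspiration level from experience and accept the first alternative that meets it. A minimal sketch follows; the alternatives and threshold are invented examples, not from the article.

```python
def satisfice(alternatives, is_satisfactory):
    """Return the first alternative meeting the aspiration level,
    examining as few candidates as possible (no optimization)."""
    for alt in alternatives:
        if is_satisfactory(alt):
            return alt
    return None  # aspiration level may then be lowered and search resumed

# Example: accept the first offer of at least 90 (an assumed threshold).
offers = [72, 85, 93, 99, 97]
print(satisfice(offers, lambda x: x >= 90))  # 93, not the maximum 99
```

Note that the procedure never compares alternatives against one another, which is what makes it feasible when the set of alternatives is enormous or generated one at a time.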
Another weak method is means-ends analysis: detecting differences between the current situation and the desired goal situation, and retrieving from memory operators that, experience has taught us, remove differences of these kinds.
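Means-ends analysis can likewise be sketched in a few lines: detect a difference between the current and goal situations, apply an operator remembered as removing that kind of difference, and repeat. The states, differences, and operators below are invented for illustration; this is not the GPS program itself.

```python
# Toy means-ends analysis (illustrative; not Newell & Simon's GPS).
def means_ends(state, goal, operators):
    """operators: dict mapping a difference name to a function that
    transforms the state so as to remove that difference."""
    trace = []
    while state != goal:
        # detect the first attribute on which state and goal differ
        diff = next(k for k in goal if state.get(k) != goal[k])
        state = operators[diff](state)   # apply the remembered operator
        trace.append(diff)
    return state, trace

goal = {"location": "airport", "ticket": True}
ops = {
    "location": lambda s: {**s, "location": "airport"},  # take a taxi
    "ticket":   lambda s: {**s, "ticket": True},         # buy a ticket
}
final, steps = means_ends({"location": "home", "ticket": False}, goal, ops)
print(final, steps)
```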
A small collection of heuristics, of which satisficing and means-ends analysis are important examples, has been observed as a central feature of behavior in a wide range of problem-solving tasks where recognition capabilities or systematic algorithms were not available for reaching solutions without search. The prevalence of heuristic search is a basic law of qualitative structure for human problem solving.
Beginning with the General Problem Solver (GPS) in about 1958, a sizeable number of computer programs have been built to simulate heuristic search in various task domains. With their help, a rather detailed account has been given of human heuristic search, particularly in relatively well-structured domains that call upon only limited amounts of domain-specific knowledge (Newell & Simon 1972). With these programs as foundation, other investigations have built processes that can create problem representations for simple situations, using natural language inputs to supply information about the problem and task domain.
roles in thinking of sentences (or the propositions they denote) and imagery of one or another kind. The reasoning metaphor views goals as described by sentences, derived from other sentences by processes similar to the processes of logic. The problem-solving metaphor views goals as achieved by sequences of moves through a problem space. (The very phrase "problem space" suggests the importance that is attached to a visual or spatial metaphor.)
When the reasoning metaphor is used, information is expressed mainly in
declarative sentences. A small number of rules of inference (like the rule of
syllogism in formal logic) are used to derive new sentences from old. The
research tasks most commonly employed to study human reasoning are tasks
of concept formation or tasks of judging the validity or invalidity of formal
syllogisms, the presumption being that human thinking consists in drawing
valid inferences from given premises or data.
When the problem-solving metaphor is used, information is expressed in
schemas, which may resemble interrelated sets of sentences or may resemble
diagrams or pictures of the problem situation. The problem situation is
modified by applying "move operators," which are processes that change a
situation into a new one. Nowadays, the move operators usually take the form
of productions: condition-action pairs, C → A. Whenever the information in short-term memory matches the conditions of a production, the actions of the production are executed. The execution of a sequence of productions accomplishes a search through the problem space, moving from one situation to another until a situation satisfying the goal requirements is reached.
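The production-system cycle described in this paragraph can be sketched as follows: on each cycle, the first production whose conditions match working memory fires its actions, until the goal pattern appears. The rules below are invented examples, not from the article, and the simple refractoriness check (skip a production whose actions are already in memory) is my addition to keep the loop from refiring.

```python
# Minimal production-system interpreter (a sketch, not a real architecture).
# Each production is a condition-action pair C -> A over sets of facts.

def run(productions, memory, goal, max_cycles=100):
    """Fire one matching production per cycle until the goal pattern
    is present in working memory; return the final memory, or None."""
    for _ in range(max_cycles):
        if goal <= memory:                 # goal requirements satisfied
            return memory
        for condition, action in productions:
            # fire only if C matches and A would add something new
            if condition <= memory and not action <= memory:
                memory = memory | action
                break
        else:
            return None                    # no production matches: impasse
    return None

# Toy task: working memory holds facts as strings.
rules = [
    ({"kettle-full", "stove-on"}, {"water-boiling"}),
    ({"water-boiling", "tea-bag"}, {"tea-ready"}),
    ({"kettle-empty"}, {"kettle-full"}),
]
result = run(rules, {"kettle-empty", "stove-on", "tea-bag"}, {"tea-ready"})
print(result)
```

The three firings trace a short path through the problem space: fill the kettle, boil the water, make the tea.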
EPAM: A discrimination net for sensory features that learns new discriminations and "chunks" familiar stimulus patterns.
GPS: A problem solver that employs heuristic search, and that can be used by the learning subsystem.
A pattern-induction system that searches for regular patterns in stimuli.
Systems for encoding natural language input and producing natural language output.
solving problems.
This may appear to be a lot of baggage, but all of the processing systems listed are implementable as production systems that can be stored in the associative long-term memory. Moreover, examples of all of these components have been simulated with computer programs, and their mutual compatibility tested to some degree. For example, the UNDERSTAND system (Hayes & Simon 1974) can encode natural language descriptions of puzzles into internal representations that are suitable problem spaces for GPS. The ISAAC system (Novak 1976) can encode natural language statements of physics problems into internal images, and use these images to produce algebraic equations, which it then solves.
From this we may conclude that, while many issues about architecture are
fluid at the present time, the knowledge we gain about architectures is
unlikely to invalidate, or require major revision of, the knowledge we have
already gained about component mechanisms like EPAM or GPS, or about
their roles in cognition.
Individual Differences
Traditionally, the study of individual differences has employed psychometric
methods of research. It has been motivated by interest in the nature/nurture
tasks, and we have long used George Miller's seven chunks to characterize
those limits.
But a chunk is not an innate measure of storage capacity. A chunk is any
stimulus that has become familiar, hence recognizable, through experience.
Hence, the capacity of short-term memory is itself determined by learning,
and can grow to vast size as individual acts of recognition access larger and
richer stores of information in long-term memory. Two EPAM systems
possessing the same basic structure can differ greatly in measured STM
capacity simply because one has a more elaborate differentiation net and
associated store of schemas than the other.
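The point about chunks can be made concrete. The sketch below is an invented illustration, not EPAM: it segments the same nine-letter string greedily into the longest familiar chunks. With no stored chunks the string occupies nine slots, well over Miller's seven; with three familiar acronyms stored, it occupies only three.

```python
# Illustration (invented example): measured STM "capacity" depends on what
# counts as a chunk, not on the raw number of symbols.

def chunk_count(symbols, known_chunks):
    """Greedily segment the sequence into the longest familiar chunks;
    unfamiliar symbols each cost one slot."""
    count, i = 0, 0
    while i < len(symbols):
        for size in range(len(symbols) - i, 0, -1):
            if size == 1 or symbols[i:i + size] in known_chunks:
                count += 1
                i += size
                break
    return count

letters = "CIAFBIIBM"
print(chunk_count(letters, set()))                  # 9 items: over 7 +/- 2
print(chunk_count(letters, {"CIA", "FBI", "IBM"}))  # 3 chunks: easily held
```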
Social Psychology
Just as individual differences find a natural place in information processing
psychology, so do social phenomena. To the extent that cognitive performance
rests on skill and knowledge, it is a social, rather than a purely individual,
phenomenon. Language skills and skills in social interaction can be
approached within the same theoretical framework as knowledge and skills
for dealing with the physical environment.
The recent work of Voss et al (1983) illustrates how cognitive and social
psychology can mutually reinforce each other. He has studied how people
who have different professional backgrounds and information approach the
same problem-solving situation. When asked to write an essay on agricultural
reform in the USSR, subjects who are experts in agronomy address them
selves to entirely different variables and strategies than subjects who are
experts on Russian political affairs. And both of these groups of subjects
respond quite differently from novices. When we study expert behavior, we
cannot help studying the structure of professional disciplines in our society.
Cognitive psychology still has an important task of studying the domain-independent components of cognitive performance. But since the performance depends heavily on socially structured and socially acquired knowledge, it must pay constant attention to the social environment of cognition.
Many of the invariants we see in behavior are social invariants. And since
they are social invariants, many are invariant only over a particular society or
a particular era, or even over a particular social or professional group within a
society. Social variables must be introduced to set the boundaries of our
generalizations.
CONCLUSION
Let me summarize briefly this account of the invariants of human behavior as
they are disclosed by contemporary cognitive psychology. The problem of
identifying invariants is complicated by the fact that people are adaptive
Literature Cited
Anderson, J. R. 1983. The Architecture of Cognition. Cambridge, MA: Harvard Univ. Press
Anderson, J. R. 1985. Cognitive Psychology
and Its Implications. New York: Freeman
Anderson, J. R. 1989. The place of cognitive architectures in a rational analysis. 22nd Annu. Symp. Cognit., Dept. Psychol., Carnegie-Mellon Univ.
Broadbent, D. E. 1958. Perception and Communication. New York: Pergamon
Carroll, J. B. 1988. Individual differences in cognitive functioning. In Stevens' Handbook of Experimental Psychology, ed. R. C. Atkinson, et al, Vol. 2:813-62. New York: Wiley. 2nd ed.
Chi, M. T. H., Glaser, R., Farr, M., eds.
1988. The Nature of Expertise. Hillsdale,
NJ: Erlbaum
De Groot, A. 1978. Thought and Choice in Chess. The Hague: Mouton. 2nd ed.
Ericsson, K. A., Simon, H. A. 1984. Protocol
Analysis. Cambridge, MA: MIT Press
Feigenbaum, E. A., Simon, H. A. 1984.
EPAM-like models of recognition and
learning. Cogn. Sci. 8:305-36
Gibson, J. J. 1966. The Senses Considered
as Perceptual Systems. Boston: Houghton
Mifflin
Newell, A., Simon, H. A. 1976. Computer science as empirical inquiry: symbols and search. Commun. ACM 19:113-26
Novak, G. S. Jr. 1976. Computer understanding of physics problems stated in natural language. Tech. Rep. No. NL-30. Austin, TX: Dept. Comput. Sci., Univ. Texas
Simon, H. A. 1979a. The Sciences of the
Artificial. Cambridge, MA: MIT Press. 2nd
ed.
Simon, H. A. 1979b. Models of Thought, Vol. 1. New Haven: Yale Univ. Press
Simon, H. A. 1986. The information processing explanation of Gestalt phenomena. Comput. Hum. Behav. 2:241-55
Simon, H. A. 1989a. Models of Thought, Vol. 2. New Haven: Yale Univ. Press
Simon, H. A. 1989b. Cognitive architectures and rational analysis: comments. 21st Annu. Symp. Cognit., Dept. Psychol., Carnegie-Mellon Univ.