Sunteți pe pagina 1din 14

Alexander Riegler

The End of Science:


Can We Overcome Cognitive Limitations?

EPISTE- be true for any model, in-


E VOLUTIONARY
mology has brought
forth the idea of science
Abstract
cluding narrative and
mathematical models.
Why is the universe knowable? DAVIES (1990)
as an evolutionary sys- wonders. In this paper, I argue that science is not a The success of models is
tem (cf. CAMPBELL 1974, matter of knowing any universe. Rather, it is aas their predicative power. I
OESER 1984, RIEDL 1983). history has shownsuperior method of guidelines of conclude that due to cog-
From systems theory of how to organize experiences yielding predictive power. nitive limits of human sci-
evolution (RIEDL 1977) Historically, two types of models have given rise to the entists, model-building is
and the theory of punc- effectiveness of science, narrative and mathematical also subject to limitations.
tuated equilibrium models. Based on cognitive psychological investiga- By using computational
(GOULD/ELDREDGE 1977) tions, I point out that due to the human nature of sci- devices, those limitations
we know that evolution entific reasoning both types of models are limited. might be transcended.
does not proceed homo- With the advent of computational devices scientific
geneously. Rather, peri- investigation may now be extended to externalized
deductions, which are not subject to a limited short-
Different
ods of stasis are inter-
rupted by dramatic term memory and slow performance. To shift this to perspectives on
changes. Over the last computational science we have to recognize that mod- scientific activity
few centuries we have els in all three approaches have basically the same Ralph GOMORY (1995)
experienced science as a function. Although this might not solve the realists argues that the choice of
dynamic enterprise with question of how models relate to the world (at a deep appropriate perspectives is
several revolutions. Will philosophical sense), it will guarantee the continued significant if we want to
we now face the stasis of existence of contemporary science beyond the cogni- make the unknown visible:
science? These argu- tive barrier. [I]n distinguishing the
ments are not purely the- known or the unknown
Key words
oretical: In a recent from the unknowable, the
book, John HORGAN ex- Philosophy of science, cognition, complexity, models, level of detail can be deci-
plicitly speaks of The reality, constructivism, problem solving, artifacts. sive (p88).
End of Science (1996). This is also true if we
In this paper, I outline the mechanisms of the evo- look at philosophy of science: to find the proper
lution of science by first finding an appropriate explanation which both explains success and failure
perspective on the philosophy of science. Then, af- of science. Unlike many other papers on the present
ter a short review (and rejection) of HORGANs thesis, topic (e.g., LAUDAN 1977, STENT 1978, VAN FRAASSEN
I identify three core problems to science. These 1980, NERSESSIAN 1987, FAUST 1984, GIERE 1993), I
problems, which are mainly motivated by cognitive will not focus on yet another philosophical treat-
psychology, have become serious since science ment. Rather, I will deal with the subject of science
started to deal with complexity. Computer models in a pragmatic way which aims at the success of pre-
have been proposed to cope with this latest frontier dictions. The following list locates this position
of science. However, such models have not received among all possible views on the philosophy of sci-
acceptance among the scientific community due to ence. Furthermore, the list summarizes what we po-
the presumingly arbitrary relationship between tentially can expect from a philosophy of mind. For
computational model and the reality out there the rest of the paper, I will, triggered by recent dis-
(the reminiscence syndrome). I argue that this must cussions about the end of science, outline why we

Evolution and Cognition 37 1998, Vol. 4, No. 1


Alexander Riegler

should concern ourselves with a possible limitation The last two items especially may yield the expecta-
to science at all and what a possible solution might tion that in future, when the content of scientific
look like.1 theories will have transcended the limitation of the
We must clearly outline what a philosophy of sci- human mind, computers (or other artifacts) may
ence should do for us: take over the business of exploring Nature.
1. Is it a pure philosophical exercise where argu- What can such computers learn from human
ments of various authors are compared, thus scientific activities, and what does Nature refer to?
building a discourse which does not necessarily Are there limits to science carried out by humans? If
ground (HARNAD 1990) in the subject (i.e., sci- we dont face any such limits, we barely need any
entific activity)? However, the ultimate goal of artificial extensions. Too much pleasure is in-
any scientific inquiry is not to be an end in itself. volved in the process of generating scientific knowl-
Rather, it has a constructive character in that it edge. But, as with transportation, walking also may
allows us to extend the set of actions which we use provide much pleasure, nevertheless society would
in order to predict and perceive our world in an not be able to survive without motorized means of
increasingly better way. transportation. This is a good demonstration of hu-
2. Is it descriptive in order to explain what has hap- man nature: Although we have been using motor-
pened to date? Any description may be based on based vehicles for many decades, we still, and in fact
sociological models (cf. KUHN 1962), on a psycho- more than ever, enjoy our biological movement, not
logical approach (cf. GIERE 1993), or even on a to mention that our health depends on it. To draw
computational philosophy of science (cf. an analogy, in the future scientific reasoning might
THAGARD 1988). be done by machines, nevertheless we will still enjoy
3. Is it a normative instrument which tells scientists the intellectual challenge by tackling problems
how to do science, such as the research method- which we can grasp with our (narrow) mind. In the
ology of the logical positivists (SCHILPP 1963) or following chapter, I will present these restrictions in
Karl POPPERs rejection of induction (1934)? more detail, starting from the positivists fear that
4. Is it generative in that it is capable of predicting the big parts of the scientific pie have already been
what the future of science will be? Can we expect eaten, leaving only crumbs for contemporary (and
that the principal limits of science can be specified future) scientists.
analogously to GDELs Incompleteness Theorem,
which poses limits on formal systems (e.g., CASTI
The end of science?
1996a)? Following an entirely positivist view on
science, can we even expect the end of science I was recently reminded of the possibility that sci-
since all great revolutions are already behind us ence might come to an end by the provocative book
as proposed by the recent The End of Science book of John HORGAN (1996) with the self-explaining title
by John HORGAN (1996)? The End of Science. The great scientists want, above
5. Or will it provide insights and mechanisms all, to discover truth about nature, John HORGAN
whichin the long runcan be automatized wrote in his 1996 book. And since researchers
and therefore passed over to artificial artifacts already mapped out physical reality, all that is left
which then will carry out scientific reasoning? is to fill in details2. To be more concrete, all refers
Such proposals have been around for many de- to good science, which is capable of producing sur-
cades already, cf. the General Problem Solver of prises, i.e., scientific revolutions as has been intro-
NEWELL and SIMON (1972) and BACON of LAN- duced by DARWIN, EINSTEIN and WATSON & CRICK.
GLEY et al. (1987) More pragmatically, one may However, all neither refers to the (boring?) scien-
think of the usage of computers in mathematics tific activities of filling in all the gaps within the
as the first sign of this development. For exam- map mentioned above, nor to applied science. And
ple, the famed four-color conjecture (APPEL/HAK- it does not refer to what HORGAN calls ironic sci-
EN 1977), which demonstrated that problems ence, those efforts of physicists and chaos-com-
may no longer be tackled by traditional, human- plexity-researchers (chaoplexologists in HORGANs
based methods. It made use of the power of hun- terminology, p192) which argue for the existence of
dreds of hours of computation on supercomput- high dimensional superstrings and life inside com-
ers in order to calculate individual cases rather puters.
than to prove the problem in a traditional math- HORGAN dissociate himself from any relativist
ematical way. view on science brought forth to a large audience by

Evolution and Cognition 38 1998, Vol. 4, No. 1


The End of Science: Can We Overcome Cognitive Limitations?

Thomas KUHN (1962) in the early 60s3. He therefore Certainly, no theory can ever reach the status of
cannot help but think that all present scientific universal applicability. This is also true for any the-
knowledge is the complete framework to describe ory that wants to explain the dynamics of scientific
and cope with reality. Taking a KUHNIAN perspective activity. Rather, it seems useful to explain science to
into account, he might ratherpossibly correctly an extent which will allow us to formalize its key
speak of an end of the current paradigm.4 Indeed, as mechanisms and to transfer it to artifacts.
Melanie MITCHELL (1995) in her response to HOR-
GANs previously published paper From Complexity
What could the problems be?
to Perplexity (1995, p1) pointed out, that [t]he
specter of the end of science periodically appears The problems which may cause a decay of progress
in the scientific and popular literature, often at the in human science are rooted in its members: the
end of one scientific era (e.g., NEWTONIAN mechan- human scientists and their cognitive apparatus. In a
ics), before the beginning of a new one (e.g., quan- nutshell, as human beings in general, and as scien-
tum mechanics). tists in particular we all suffer from essentially three
According to her and other chaoplexologists, problems that limit our cognitive capabilities (RIE-
the specialization in science has certainly produced GLER 1994):
great advances, but the problem of complex systems 1. We are used to thinking in paradigms in the sense
demands approaches that span disciplines. In other of KUHN (1962)6. Indoctrinated at school and uni-
words, the current set of paradigms needs to be sub- versity, paradigms speed things up. They enable
stituted by another set. Now, will there really soon us to forget about previous steps in our scientific
be a change of paradigm in the traditional KUHNIAN investigation and thus about the need to exhaus-
sense? tively search the entire problem space7 which is
Certainly we have to take evolutionary con- enormously large for scientific investigations. The
straints into account. This is the line of argumenta- bad side of this is that this shortcut also limits our
tion which, for example, is followed by Colin way of thinking and problem solving.
MCGINN (1994). Like rats and monkeys which can- 2. The limitation of our short-term memory does
not conceive of quantum mechanics, humans may not allow us to compare more than seven knowl-
be unable to understand certain aspects which are edge items at the same time (the well-known
more sophisticated than our current theories in sci- chunks of MILLER 1956). This even further restricts
ence. MCGINN primarily addresses the problem of our capability to entirely step through all corners
consciousness. He emphasizes that for humans to of nontrivial-sized problem spaces of which scien-
grasp how subjective experience arises from matter tific issues consist.
might be like slugs trying to do FREUDIAN psycho- 3. Faced with the limitations of our thinking and the
analysisthey just dont have the conceptual equip- fact that interesting phenomena are complex by
ment. nature, we have to ask: Which items must we
These issues make it clear that I am mainly inter- choose in order to prune the cognitive search tree 8
ested in what we can learn from philosophy of sci- effectively? In other words, how shall we solve the
ence and how we can apply this knowledge to problem of relevance or the frame problem as it is
artificial systems in order to transcend the limits of called in artificial intelligence. Daniel DENNETT
human mind. As mentioned above, due to the ever (1984) illustrates it with the following analogy
incomplete aspects of psychology and sociology, which will serve as a reference throughout this
any further philosophical treatise will not make fur- paper: A robot, R1, as well as its improved descen-
ther progress. An analogy makes it clear: Since we dents, have to learn that its spare battery, its pre-
are not able to build such sophisticated systems like cious energy supply, is locked in a room with a
birds, we focus on technical realizations based upon time bomb set to go off soon. To solve this prob-
what we have learned about aerodynamics. Our air- lem the robot has to develop plans in order to
planes might have reached a level of enormous foresee effects of its actions. It fails because it does
complexity (ARTHUR 1993), yet they are not as ele- not pay attention to the implications of its
gant in their movement as birds. However, planes planned actions. Taking possible side-effects into
outperform natural solutions in speed and payload. account, however, does not help. As the real world
Likewise, we will construct artifacts that carry out is very complex, an exhaustive list of all side-ef-
science probably less aesthetically but more effi- fects would take too long to take any action in
ciently.5 real-time. Hence, the robot must know how to

Evolution and Cognition 39 1998, Vol. 4, No. 1


Alexander Riegler

distinguish between relevant and irrelevant side- The purpose of paradigms, very much like the
effects. But even this process of discrimination notion of reality (DIETTRICH 1995), is to secure ac-
needs an enormous amount of computation, all quired scientific knowledge and to provide a base
the more as each of the possible effects must be for further developments. Historically, the scholas-
assigned with some (quantitative) credit in order tic age is a typical example of where the lack of a
to evaluate their usefulness. true hierarchical organization of concepts and par-
All three items are subject to closer investigation in adigms finally led to its disintegration. Quite obvi-
the following sections. ously, knowledge can only be acquired
incrementally step by step without being exposed
to the risk of starting from scratch over and over
Limiting canalization
again. Of course, as pointed out by Rupert RIEDL
through paradigms (1977) for the realm of genetics, such hierarchies of
Science is carried out by human beings whose work interdependent components on the one hand in-
is constrained by the current set of scientific meth- crease the speed of development by magnitudes.
ods, the well-known KUHNIAN paradigm. KUHN On the other hand, they are burdens with respect
(1962) describes the relationship between a scien- to their canalizing effect since established struc-
tist and his or her paradigm as follows: Scientists tures define the boundary conditions for their fu-
work from models acquired through education and ture evolution. Exactly the same applies to science:
through subsequent exposure to the literature In order to achieve progress we have to establish a
often without quite knowing or needing to know firm ground of paradigms through education. Each
what characteristics have given these models the time a new disciplines with a different set of para-
status of community paradigms. (p46) digms rises, it has to start from scratch and is thus
Such continuous repetitions of one and the same prone to a weak explanatory performance in terms
methodical schema inevitably confine the future sci- of details, as the new discipline of complexity re-
entists capability of problem-solving. More than 30 search demonstrates.
years before KUHN, Jos ORTEGA Y GASSET (1929/1994)
described the apparently automatic techniques for
The psychology of science
problem-solving already quite straight forwardly. He
points out that scientists work with available meth- Quite clearly, we can find limitations of deductive
ods like a machine. To achieve a wealth of results it reasoning, a key component within the scientific
is not even necessary to have a clear concept about method. Human brains are obviously not indefati-
their meaning and their foundations. This way, the gable automata capable of storing practically
average savant contributes to the progress of science unlimited amounts of temporary information as is
as he is locked into his lab. ORTEGA compares this demonstrated by the well-studied problem of the
situation with that of a bee in its hive and the situa- Towers of Hanoi (SIMON 1975): The number of sub-
tion of a donkey in its whim-gin.9 goals which have to be simultaneously remem-
Similar to KUHNs notion of paradigm, Paul FEY- bered correlates to the number of disks. This means
ERABEND (1975) outlined the concept of stereotypi- that the subgoals have to be stored in short-term
cal research schemata. He localized their roots in memory which, as already pointed out by the
the cognitive development starting in early child- famous work of MILLER (1956), is quite limited.
hood: From our very early days we learn to react People fail to solve the problem for towers with
to situations with the appropriate responses, lin- more than three disks if they are not allowed to use
guistic or otherwise. The teaching procedures both paper and pencil. Therefore, it is not surprising
shape the appearance, or phenomenon, and es- that for systems that consist of a large number of
tablish a firm connection with words, so that finally variables we use computer models.
the phenomena seem to speak for themselves In psychology, an enormous amount of litera-
(p72) ture deals with the problem solving capacity in hu-
FEYERABEND argues that starting in our early child- man beings. In the following I will present some
hood we are acquiesced in an education that very them which quite clearly show that our cognitive
clearly outlines both the way we have to view the capabilities for problem solving (or puzzle solving in
world and the way we have to act in the world. Al- a more KUHNIAN terminology) are not only limited
ternatives are suppressed or referred to the realm of but also prone to errors when it comes to investi-
fantasy. That is how our concept of reality emerges. gating complex systems.

Evolution and Cognition 40 1998, Vol. 4, No. 1


The End of Science: Can We Overcome Cognitive Limitations?

Stack overflow where it could serve as a support for the candle. In


general, our thinking is canalized (or fixed) with
In the contemporary design of computers, a com- respect to the way we have learned to deal with
ponent called the stack stores temporal informa- things. Since cognitive development deals with
tion necessary to evaluate mathematical functions. both concrete and abstract entities, we assume that
This is similar to the carry when adding large num- this restriction also applies to abstract concepts
bers by hand; we also must not drop it in order to which prevail in scientific, especially mathematical
obtain the correct result. Since computers are finite reasoning.
implementations of TURINGs infinite machine, the The water-jug problem, studied by LUCHINS
stack is finite, too. This can easily be demonstrated (1942), provides empirical data for this assumption
by trying to evaluate an infinitely recursive func- of mechanization of thoughts. He asked test sub-
tion, i.e., a function which takes its results as argu- jects to measure out a specific quantity of water us-
ments over and over again. Depending on the ing a set of three jugs with known volume. The first
speed and stack size of the computer, a stack over- two problems LUCHINS posed could be solved by ap-
flow error will occur within a few milliseconds, plying a certain sequence of pouring water from one
indicating that the stack can no longer memorize jug into another. Test subjects had no problems to
all sub-results. The stack in humans, also referred to discover this procedure. Quite the contrary. They
as short-term memory, does not need to be exposed got used to it and tried to apply it to further tasks.
to infinitely recursive problems in order to show Like the adage says, It aint broke so dont fix it.
the same behavior. What the test subjects overlooked was that much
The example of the mutilated checkerboard simpler procedures would have led to the same re-
(WICKELGREN 1974) is one such case. It asks whether sult, simply because their inductively working mind
it is possible to arrange 31 domino pieces on a check- was set to the previously successful strategy.
erboard on which two diagonally opposite corner The consequences of these psychological experi-
squares have been cut off (yielding a 62 squares ments (among others) are clear. During academic
board). According to the author, it is almost impos- education we are subject to courses and seminars in
sible for a naive test person to find a quick solution. which we acquire a certain way of thinking, a para-
Obviously, the number of squares is correct (2 times digm in the KUHNIAN sense. Recalling the problem
31 yields 62) but the human mind is incapable of of DENNETTs robot, the advantage of such canaliza-
managing the arrangement of black and red squares tions is clear: thinking can be abbreviated (and thus
on a two-dimensional area. However, the problem accelerated) by dropping computations about im-
becomes trivial if one simply counts the number plications which are already known. This way, en-
of black and red squares on the mutilated checker- tire branches of our internal search tree can be
board which differs by two, whereas on the 31 dom- pruned, thus leaving more time to concentrate on
ino pieces the number of imaged black and red the unknown part.
squares is equal. Gestalt psychology argues that we
are good at recognizing regularities in pattern, e.g., The general view of human problem solving
patterns that consist of black and red areas. But an
exact analysis of possible arrangements requires the KUHN (1962) argued that reasoning within normal
temporary storage of subresults which transcends science was puzzle-solving, i.e., it is concerned with
the capacity of our short-term memory. solving tricky problems. From a general point of
view, reasoning is a back-and-forth walk within the
It aint broke so dont fix it problem space, with several decision points. We
might find that a particular branch does not yield
In our everyday life, things are used in a particular the desired result, therefore we have to return to a
context, e.g., we use a hammer to drive nails into a previous decision point and try an alternative
wall, matches to light a fire. In fact, things do not branch. Unfortunately, by a priori cutting off parts
seem to exist outside their domains of of the search tree through functional fixedness we
functionality10. DUNCKER (1935/45) posed the task are simply blind to those alternative branches and
to support a candle on a door. The available items hence unable to find the solution to a particular
were matches and a box filled with tacks. Since the problem. Rather, as LUCHINS Einstellungseffekt
test subjects considered the box as a mere container experiment demonstrates, we prefer to stick to
they failed to empty it and to tack it to the door inductive solutions, very much like the turkey in

Evolution and Cognition 41 1998, Vol. 4, No. 1


Alexander Riegler

Bertrand RUSSELs analogy (after CHALMERS 1982): It Equipped with this innate set of hypotheses, can
started to believe in the charity of its ownersince we successfully face problems which are by far more
the latter fed him regularlybefore it ended up as complex then those of ancient man? Ross ASHBY in
Christmas meal. one of his last publications (1973) maintained
As we have seen, for certain problems our cogni- that the scientist who deals with a complex in-
tive limits are quite narrow. In the following, I will teractive system must be prepared to give up trying
first relate these limits to concepts of Evolutionary to understand it. In order to evaluate this state-
Epistemology (thus providing some ideas how these ment let us have a closer look at the concept of com-
limits have been come about). Then I will show that plexity.
the gap between these limits and the complexity of
systems we might consider to be fancy calculator
Complexity in science
games, i.e., the computational approach to sci-
ence, is much bigger than one might assume. In his remarks on constraints on science, Thomas
HOMER-DIXON (1995) points out that human cogni-
Ratiomorphic apparatus tive limits are due to the lack of infinite ability to
understand and manage the complex, multivariate
According to the LORENZIAN Evolutionary Episte- processes of ecological and social systems. The rela-
mology, human beings feature a system of innate tionships in some of these systems are simply too
forms of ideations which allows the anticipation of numerous and complex to be grasped, much less
space, time, comparability, causality, finality, and a controlled, by the human intellect.
form of subjective probability or propensity (RIEDL What is complexity, and how does it relate to the
et al. 1992). This ratiomorphic apparatus has to be human mind? KOHLEN/POLLAK (1983) characterize
distinguished from our rational abilities (LORENZ the cognitive enterprise as follows: Cognitive sci-
1973/77, RIEDL et al. 1992) since the former indi- ence has worked under the general assumption that
cates that although this ideation is closely anal- complex behaviors arise from complex computa-
ogous to rational behavior in both formal and tional processes. Computation lends us a rich vocab-
functional respects, it has nothing to do with con- ulary for describing and explaining cognitive
scious reason. behavior in many disciplines, including linguistics,
Each of these ideations can be described as innate psychology, and artificial intelligence. It also pro-
hypotheses (RIEDL 1981/84). These inborn teaching vides a novel method for evaluating models by com-
mechanisms are mental adaptations to basic phe- paring the underlying generative capacity of the
nomena that enable organisms to cope with them. model. (p253)
One of these mechanismsthe ability for detection They conclude their analysis of complexity with:
or discrimination of foreseeable and unforeseeable [T]he computational complexity class cannot be
eventsserves as a foundation for all others. This an intrinsic property of a physical system: it
hypothesis of the apparent truth (Hypothese vom an- emerges from the interaction of system state dy-
scheinend Wahren) guides the propensity of a crea- namics and measurement as established by an ob-
ture to make predictions with different degrees of server. (p264)
confidence, ranging from complete uncertainty to As pointed out by several authors (GRASSBERGER
firm certainty. Therefore, it produces prejudices in 1986, WALDROP 1992, HEYLIGHEN/AERTS 1998), com-
advance or anticipations of phenomena to come. plexity is hard to define. Rather than trying yet an-
The capability to anticipate is necessary for survival other definition, I will outline the inherent difficul-
and contributes to the success of every higher organ- ties in understanding systems which entail a non-
ism. trivial amount of interdependent components.
The probability with which an unconditional Where does this non-triviality start? VON FOERSTER
stimulus follows a conditioned one correlates with (1985, 1990) provides a useful definition of the po-
the reliability of the response of the organism link- tential complexity of algorithms when he distin-
ing the two. The consequence is that animals and guishes trivial from non-trivial machines.
human beings behave as if the confirmation of an A trivial machine is a machine whose operations
expectation makes the same anticipation more cer- are not influenced by previous operations. It can be
tain in the future. This is also the case in science described by an operator (or function) p which maps
where repeated confirmation of an expectation any input variable x to an output variable y accord-
leads to certainty. ing to a transition table: p (x) y. For such machines

Evolution and Cognition 42 1998, Vol. 4, No. 1


The End of Science: Can We Overcome Cognitive Limitations?

the problem of identification, i.e., deducing the struc- Complex Problem SolvingAn Example
ture of the machine from its behavior, can be solved,
since they are analytically determinable, indepen- Years before SimCity became a popular game,
dent from previous operations, and predictable. Diettrich DRNER used simulation to scientifically
On the contrary, non-trivial machines, i.e., TUR- investigate the problem of social and economic
ING-like devices, consist of a memory holding an engineering. DRNER et al. (1983) created Loh-
internal state z and two operators: hausen, a computational simulation of a small
1. The effect function pz realizes the state depen- city. Its economic situation is determined by the
dent mapping: pz (x) y city-owned clock company, by a bank, shops, prac-
2. The state function px performs the state transi- tices of physicians, and so on. 24 female and 24
tion within the non-trivial machine: px (z) z male test subjects have to take the office of the
The important issue here is that the identification citys mayor for a total of 120 (simulated) months.
problem is not longer solvable even with very Since the clock company is publicly owned, the
small non-trivial machines. Consider a machine mayor is able to massively influence the economy
with two states, four inputs, and four outputs. The of the city. Due to a large variety of parameters, like
number of possible models that potentially imple- the freedom to arbitrarily set the level of tax, the
ments such a relatively simple system is: 4 4 44 = test subjects had more freedom than in a real situa-
216. A similar machine with three instead of two tions (DRNER 1989, FUNKE 1986). To measure the
internal states requires 2 24 models. And if the num- effectiveness of the virtual mayor, a set of parame-
ber of internal states, in- and outputs is not known ters was defined, such as the satisfaction (i.e., the
to the experimenter, there are some 10 155 possible weighted sum of single aspects of living comfort)
models of that machine. And this number is and size of the population, the financial situation
transcomputable in the following sense: Hans of city, company productivity (in terms of sales and
BREMERMANN (1962) claimed that [n]o data pro- back orders), the income of the bank, the average
cessing system, whether artificial or living, can standard of living, the number of unemployed and
process more than 2 1047 bits per second per gram homeless people, the use of energy, etc.
of its mass.11 In summary, Lohhausen pointed out several
Even if we consider the entire Earth in its over 4 weak points of human problem solvers who face
billion years of existence as a computer, no more complex systems. Its interesting to note that these
than 1093 bits could have been processed, the so- flaws are similar to those of the robots in DEN-
called BREMERMANNs limit. NETTs illustration of the frame problem. The test
These dimensions make it clear that one should subjects were likely to fail because they did not care-
not underestimate the complexity of systems with fully analyze the current situation. Rather, they re-
even simple structures. In artificial life, BRAITEN- ferred to a kind of intuitive interpretation of the
BERGs (1984) famous vehicles perfectly illustrate state. They also tended to neglect side-effects and
this phenomenon that complex and hard-to-ana- future long-term impacts. The test subjects thus
lyze behavior can be generated by simple rules. It treated the complex net of interdependencies
also confirms the view that biological cognitive ap- among variables as simple linear accumulation of
paratus are not necessarily more complex than arti- facts. Even worse, the virtual mayors tended to focus
ficial ones. on a single core variable which then became the
Using the concept of BRAITENBERG bricks in a starting point for a long chain of causal connec-
more abstract way, we may claim that the perceived tions. Such strategies reduce cognitive efforts and
world consists of numerous such entities which allow the outline of a clearly defined goal which is
mutually interact without knowing the internal or- inevitably linked to the improvement of that core
ganization of each other. Lets think of a society variable. They provide the illusion that the system
where living and non-living entities form a web of is controllable and make it easy to forget feedback
interdependencies. Such a web must be maintained mechanisms.
and controlled in one way or the other. Among oth- Lohhausen was not only a prototype for a new
ers, POPPER (1961) advocated the idea of piecemeal type of experiment within cognitive psychology. It
social engineering, namely the idea to utilize sci- was also a pleading against the analytic method of
ence as a tool for political reform. The following traditional analytic science. The investigation of
example shows that such a program piecemeal en- highly interconnected components of a complex
gineering is hopelessly inadequate. systemand sciences are increasingly face such sys-

Evolution and Cognition 43 1998, Vol. 4, No. 1


Alexander Riegler

temsby selecting a few variables is insufficient, edge system. This is in fact the great strength of the
but this is all what human problem solvers can do. scientific method: It first requires one to investigate
Many scientists, especially positivists, may reject the observed phenomena and then to make the re-
the significance of such simulated worlds. Rather, sults available to others. In this sense I speak of at-
they emphasize that our scientific knowledge comes omization, of condensing the results of often
exclusively from Nature, which a fancy simulation several years of research into chunks upon which
program will never be able to represent. This per- further research can be carried out without the ne-
spective is true to the extent that indeed the rela- cessity to repeat the previous experiments.
tionship between a simulation and the natural Furthermore, JACKSONs use of language is mis-
phenomenon with which it is associated remains leading for several reasons
unclear. However, the crucial point is: What is the B It suggests that only physical models are observa-
nature of Nature? How can one claim that there tions, i.e., they have an exclusive option on discov-
is a fundamental gap between the qualities of a sim- ering reality.
ulation and the qualities of Nature. In other words, B Only through a formal mathematical approach
where does the knowledge in (natural) sciences we can establish scientific models.
come from? B Computation may be another source but it plays
the role of a scout who explores the unknown be-
fore civilization, i.e., mathematics and physics,
Where does scientific Information
dare moving in to this area.
and knowledge come from? JACKSON makes this fundamental distinction ex-
In his otherwise quite comprehensive treatise on plicit when he notes that these source are funda-
science, Atlee JACKSON (1995, 1996) pointed out mentally different. For the following reason this
that there are solely three different approaches to distinction is more of an obstacle than helpful.
scientific information: Computer models are just as good as mathematical
B Physical observations models. Any formal logical-mathematical model
B Mathematical models can be fully mapped onto a computational system.
B Computational explorations This equivalency is basically what TURING showed
By proposing this list, JACKSON seems to confuse in 1936. Both the mathematical and the computa-
apples with pears. Humberto MATURANA (1978) very tional approach are capable of serving as a model.
clearly outlines the steps of the traditional scien- The only difference is that they use different nota-
tific methods. He distinguishes four cyclic steps: tions and therefore different deductive mecha-
1. Observation of a phenomenon that, henceforth, nisms.
is taken as a problem to be explained. Despite this fundamental equivalence, computa-
2. Proposition of an explanatory hypothesis in the tional models are not fully accepted as information
form of a deterministic system that can generate sources. Critics of the computational philosophy of
a phenomenon isomorphic with the one ob- science movement disqualify such models as fancy
served (or internal model, as will be outlined in calculators (GLYMOUR 1993). HORGAN (1995, 1996)
the next section). even calls such approaches ironic science which
3. Proposition of a computed state or process in the has no practical use. Either mathematical and com-
system specified by the hypothesis as a predicted putational models both are valid instruments for
phenomenon to be observed. science or neither of them. It all depends on what
4. Observation of the predicted phenomenon. we expect the role of a model to be.
Hence, physical observations refer to the process of
gathering data in order to build up an internal
What is the very nature
model. They are not a model themselves and thus
are not a source of information. Observations with-
of a model in general?
out a model do not make sense. Rather, they are John HOLLAND et al. (1986) and Brian ARTHUR (1994)
necessary for a model to fit the facts. outline the importance of models as temporary
In addition, JACKSON missed another source of in- internal constructs. They are constructs in that we
formation: Scientific literature. As already pointed build them inside our minds on the basis of experi-
out in the previous section, only if we are able to ence. They are temporary since they are exposed to
atomize a chapter of scientific discovery into a continuous modifications. This pragmatic model
single fact, can we build up a hierarchical knowl- concept can be outlined (and extended) as follows:

Evolution and Cognition 44 1998, Vol. 4, No. 1


The End of Science: Can We Overcome Cognitive Limitations?

1. In order to cope with an (apparently) complex Due to this relativist (or constructivist) position
problem we create a model. Such a model may for models are what Erwin SCHRDINGER (1961/64) orig-
example consist of schemata (in the psychologi- inally assigned to metaphysics: scaffolds for our
cal sense), i.e., ifthen rules. This is the root of thinking, and, consequently, scaffolds of the scien-
scientific abstraction: we subsume a certain con- tific building.
textual configuration in the if part of such a sche-
ma and associate it with an expectation or action
Models as scaffolds of thinking
on the right side, the then part. It is important to
note that in general neither guidelines are given From a psychological point of view, there is no dif-
of how to choose the appropriate level of abstrac- ference between scientific and nonscientific think-
tion nor what expectations or actions to associate ing. Scientific Thinking depends on the same
with a particular if. general cognitive process which underlie nonscien-
2. We have seen that the human mind is subject to tific thinking (FREEDMAN 1997, p3) Therefore, one
several serious restrictions, such as the problem should expect that our mind in general works like
of correct deductions in large systems, e.g., when the scientific method commands.
ruling a city as the example of Lohhausen has Indeed, SJLANDER (1995) proposes an alternative
shown. We are simply unable to concurrently fo- perspective on thinking. In his view, mind actually
cus on more than one chain of inference. Fortu- generates hypotheses in order to make sense of per-
nately, one feature of our internal models is that ception. As long as the internal hypothesis is able to
it allows for simple deductions as compared to its let perceptions fit in, we will keep that hypothesis
model, the real world rather than thinking of alternatives12. Despite the
3. As a next step we act upon the result of these simple structure of such internal models, they suffi-
deductions. ciently abstract from the perceived real world in
4. If our actions are successful and our expectations the sense that they allow for successful anticipations.
associated with the then part are fulfilled we are Thus, phrases in oral speech like I want to draw your
likely to keep our mental model and think of it as attention to are obviously referring to the fact that
a representation of the world. Otherwise, we we need to build a good internal model if we want
may modify the set of rules, add new rules in or- to understand another person. In other words, we
der to cover new contexts, or delete obsolete rules need the opportunity to build (implicit) anticipa-
or those which have been proven false (in the tions about what is to come13. SJLANDER illustrates
sense of Popper). this with an example from biology: A dog hunting a
In other words: [W]e use simple models to fill the hare does not need a full picture of a recognizable
gaps in our understanding This type of [induc- hare all the time to conduct a successful hunt. It is
tive] behavior enables us to deal with complica- able to proceed anyway, guided by glimpses of parts
tion: we construct plausible, simpler models that of the hare, by movements in vegetation, by sounds,
we can cope with. (ARTHUR 1994, p407) by smell, etc. If the hare disappears behind a bush or
This characterization of models not only resem- in a ditch the dog can predict the future location of
bles the notion of scientific hypothesis, it also clearly the hare by anticipating where it is going to turn up
states that any act of thinking is based on such mod- next time, basing this prediction on the direction and
els. Some of them might be quite simple, others more the speed the hare had when seen last. (p2)
sophisticated with regard to the number of schemata The need of internal models upon which we can
involved. As a consequence, not only scientific draw conclusions (the innere Probierbhne with
knowledge is formulated this way, but also our the words of SJLANDER) becomes even more clear if
knowledge about the world. Ultimately, this leads we investigate the world of people who have a re-
to the picture that when comparing a mathematical duced spectrum of perception, e.g., blind people. Ol-
or computational model with Nature, we in fact com- iver SACKS (1995) describes the case of man, Virgil,
pare two models with each other: the mathematical/ who had been blind since early childhood. At the age
computational one with our Nature model we have of fifty his eye sight was restored. Contrary to the
been constructing all our life. The roots of the latter general expectation, this was no help for Virgil since
can be found in our childhood. Since this period is the way he has been living as a blind person was in-
no longer accessible by introspective reflection, we compatible with the way normal sighted people per-
tend to assign an objective ontology to our well-de- ceive and organize their world view. With effort and
veloped model of Nature (cf. VON GLASERSFELD 1987). practice, he was able to interpret some of the visual

Evolution and Cognition 45 1998, Vol. 4, No. 1


Alexander Riegler

data in terms of the world as he had known it through gued against the idea that the inductive principle of
his other senses, but he has immense difficulty in verification could ever lead to secure knowledge. He
learning these interpretations. For instance, visually was, however, not aware that his falsification impera-
he cannot tell his dog from his cat. For him, due to tive cannot yield a secure knowledge either. One can
the lack of visual impressions, the temporal aspect of never be sure whether he or she actually included all
his world had priority. He recognized things by feel- explanatory components that show that a theory is
ing their surface in a particular order. He didnt get definitely wrong (cf. the example in LAKATOS 1970).
lost in his own apartment because he knew that after DENNETTs example, well-known in the artificial intel-
entering there was furniture in a particular sequence ligence community, demonstrates that any effort to
which he perceived in a temporal order. To put it determine all relevant factors is a non-practical enter-
differently, he was living in world of anticipation. A prise. We need not even to refer to GDELs Incom-
particular cupboard was followed by a table, so once pleteness Theorem to find scientific reasoning
he reached the cupboard he anticipated reaching the restricted within the vast complexity of combinato-
table with the next step. rics. It is appropriate to state that from an epistemo-
Having this relativist but nevertheless powerful logical point of view such a situation is highly
concept of models in mind we may now turn to a unsatisfying. On the contrary, welike the robot in
final view on the relationship between models and DENNETTs examplecannot spend almost endless
reality. time on building science by taking all possible (bor-
derline) cases into consideration. Fortunately, from a
pragmatic perspective, the scientific methodmainly
Models and reality
based on the reproducibility of experimentsenables
HORGAN (1995) quotes Jack COWAN, according to to build sufficiently reliable models and artifacts.
whom chaoplexologists suffer from the reminis- Before I investigate the limits of internal models,
cence syndrome: They say, Look, isnt this remi- I first want to provide arguments as to why narrative
niscent of a biological or physical phenomenon! descriptions in natural language can be considered
They jump in right away as if its a decent model for as models, in order to underline the basic claim of
the phenomenon, and usually of course its just got fundamental equivalence of all sources of scientific
some accidental features that make it look like knowledge.
something. (p74)
This syndrome resembles the old philosophical
Models in natural language
conundrum of how to know that a model of a natural
system and the system itself bear any relation to each In a nutshell, natural language may serve as a basis
other. How can a deductive operating system, such for internal models in the above sense, since
as mathematics, allow for building bridges and fly- B language is constructed by humans;
ing to the moon?14 B one can carry out deductions from statements
First, it is useless to speak of the system itself without being grounded (in the sense of HARNAD
because we cannot make statements about that sys- 1990);
tem outside the framework of science without vio- B the correspondence to the real world is arbitrary
lating the scientific imperatives. But describing the (from a general (i.e., population) point of view; for
system with the methods of science is exactly what individuals, it has communal character).
we want to do. We thus cannot anticipate the result A theory merely formulated in everyday language
of our inquiry (cf. VON GLASERSFELD 1987). may also serve as a model for science. In contrast to a
Second, what we actually do by building a model is formal mathematical or computational model it has
to install a second source of information, namely the neither clearly defined entities nor clear rules. Refer-
model itself. Originally, we wanted to investigate the ring to VARELA (1990, p95), where the author com-
observed system but due to its complexity and/or hid- pares the crystal-clear world of chess with the world
den features we are neither able to sufficiently explain of a car-driver, a scientific model built in natural lan-
the historical behavior nor to anticipate the future be- guage is potentially more complex than a formal
havior. Thus we build a simplified analogy which we model: states and rules are ambiguous and thus can-
hope exhibits similar or identical behavior. In order to not be easily handled by the human mind. (Cf. the
gain maximum security we apply our set of scientific psychological findings on the performance of hu-
methods. Of course, this is only relative security, as mans for Tower of Hanoi). In addition, the distinction
POPPER already pointed out several decades ago: he ar- between natural language models and mathematical

Evolution and Cognition 46 1998, Vol. 4, No. 1


The End of Science: Can We Overcome Cognitive Limitations?

models mirrors the superiority of the scientific This instrumentalist point of view emphasizes the
method over an everyday explanatory approach since notion of a knowledge that fits observations, or, as
it makes use of crystal-clear and therefore more de- VON GLASERSFELD (1990) puts it, It is knowledge that
buggable (in the sense of falsifiable) structures. human reason derives from experience. It does not
A prominent problem in philosophy addresses represent a picture of the real world but provides
the issue of genuine no-go areas (STEWART 1997): One structure and organization to experience. Searching
can propose scientific questions which are not solv- the correspondence between an internal model and
able. Examples are time travel, the intention to go the world which is experienced as the outside
north of the North Pole while staying on the surface world is like the relationship between a key and a
of the Earth, speaking about the time before Big Bang lock. Many keys open a lock. VON GLASERSFELD (1984)
(which originated time), and perhaps the current speaks of the crucial distinction between match and
search for a General Unified Theory. At first glance, fit: The fact that we can open a lock with a key does
these are questions about something that obviously not tell us anything about the structure of the lock.
does not exist. But within the framework I outlined It merely shows that the key is viable. In the same
so far such questions are examples of the very nature sense we can interpret physical observations.
of language as a model. Again, no statement in nat- Where do these interpretations originate? In the
ural language actually describes something. Rather, above argumentative framework, the notion of real-
it is a model to which we seek correspondence in the ity and knowledge are subject to relativism. But how
set of phenomena we perceive. As has already been can an individual get to know these ideas of an abso-
acknowledged by many linguists (e.g., LENNEBERG et lute truth? In accordance with Ernst VON GLASERSFELD
al. 1967), language is a very powerful mechanism in (1982, p629), the process can be outlined as follows:
that it can create patterns of arbitrary length and First, the active individual organizes his or her sen-
recursivity. Therefore, any natural language model sorimotor experiences by way of building action
(as well as questions that arises from such models) schemata. Only those schemata are maintained
can be arbitrarily long and recursive. The only con- which yield an equilibrium or help to defend it
straints arise in the process of synchronization against perturbations. Second, these operational
within a community, e.g., a scientific community structures are abstracted from the sensorimotor con-
where a certain set of questions is simply ignored. tent which originally gave rise to their creation.
The arbitrary correspondence to a real world is Consequently, they are ascribed to things and thus
also the place where the symbol grounding prob- externalized. Continuously viable ascriptions yield
lem (HARNAD 1990) is located. It arises from the fact a belief in their independent existence and hence-
that formal computations (according to the Physical forth a belief in an objective truth. In other words,
Symbol System Hypothesis of NEWELL/SIMON 1972) the individual established an internal model upon
are the manipulation of symbols devoid of meaning. which he or she can carry out deductions in an at-
In his paper, HARNAD asks: How can the semantic mosphere of security since such deductions strictly
interpretation of a formal symbol system be made follow a logicalmathematical calculus.
intrinsic to the system, rather than just parasitic on
the meanings in our heads? The problem is anal-
Limitations of model-building
ogous to trying to learn Chinese from a Chinese/
Chinese dictionary alone. (p335)
are the limits of human sciences
From a realist point of view it would be desirable Whatever approach we choosethe natural lan-
for symbols to indeed have a semantic content. It guage model, the formal-mathematical or the com-
is true that the realist position distinguishes be- putational modelwe end up with a simplification
tween computational tokens, which may be mean- in our mind. We draw deductions and conclusions
ingless symbols, and the representation per se. 15 But upon this abstraction. Then we seek to fit (in the
as FRANKLIN (1995) notes, sense of VON GLASERSFELD) the
things do not come labeled. Authors address results with the outer world.
This constructivist statement In the case of natural language
is indeed the crucial point: Alexander Riegler, CLEA, Free University models, these deductions are
Symbols receive their mean- Brussels, Rue de la Strategie 33, B-1160 traditional views of discourse,
ing through projection of an Brussels, Belgium. which require rhetoric abili-
observer, through his or her Email: ariegler@vub.ac.be ties. In the case of mathemati-
interpretation. cal models, we find the

Evolution and Cognition 47 1998, Vol. 4, No. 1


Alexander Riegler

classical tools of strictly defined logical rules. Finally, ence will decay and finally arrive at a cognitive bar-
in computational models, we externalize deductions rier. In contrast to HORGANs romantic view of
in the sense that we compute them in artifacts rather science, according to which we have to seek for The
than in our own brains. Is this already a first sign of Truth, the matter of science is not the reality. Rather,
future developments where more and more parts of it consists of fairly sophisticated scaffolds which
scientific reasoning will be shifted to automata? both permit predictions and create meanings.
Gain for speed may only be one advantage of this In their analysis of the limits to scientific knowl-
takeover. The other advantage is the possibility to edge, philosophers tend to forget that science is car-
overcome the shortcomings of deduction (as shown ried out by human beings who are anything but
in the case of Lohhausen and the Towers of Hanoi). infallible machines. Hence, it pays to look at the cog-
Fortunately, to give an outlook of the computa- nitive limits rather than at the theoretical limits of
tional science as anticipated in this paper, making disciplines such as the applicability of GDELs Theo-
use of models can be formulated algorithmically (cf. rem to physics and to the philosophy of mind. Like
HOLLAND et al. 1996 and RIEGLER 1997 for examples). it is impossible to build infinitely high scaffolds, we
Since the pragmatic perspective of science also does cannot manage infinitely large cognitive scaffolds.
not provide mapping-rules between a model and the The conclusion of an end of human science thus nei-
experienced reality, such scientific machines may ther repeats previous we-already-know-everything
gain true intellectual independence. This means that arguments nor forgets the merits of what we have
in contrast to artificial intelligence programs whose achieved so far. And, fortunately, it gives hope that a
input is fed by humans and whose computational possible trans-science, carried out by computational
output is interpreted by humans, scientifically rea- devices, will at least preserve the powerful feature of
soning devices will develop their own interpretation predicting.
of perceived data.
Acknowledgments
Conclusion
This research has been supported by the Austrian
The recent End of Science affair triggered by John Science Foundation (FWF) from which I gratefully
HORGAN reminds us that we have to seriously think acknowledge the receipt of a Erwin-Schrdinger-
about the possibility that the progress in human sci- Grant (J01272-MAT).

Notes

1 Since philosophy of science can potentially be an endless discourse of arguments referring recursively to each other, I will apply OCCAM's Razor in order not to get lost in a jungle of arguments and to concentrate on the essential issues. However, when it becomes necessary, I will refer to more details, such as findings from psychology.
2 HORGAN earned many critics, among whom are ANGIER (1996), CASTI (1996a, 1996b), HAYES (1996), MITCHELL (1995), SILBER (1996), and STEWART (1997).
3 His main argument is the apparently paradoxical situation in which he fancies such perspectives, i.e., the self-applicability of a meta-science. "Is falsificationism falsifiable?", he asked Karl POPPER in one of the numerous interviews which make up his book.
4 But this, of course, does not sound as dramatic as the title he actually chose.
5 Relating Pierre TEILHARD DE CHARDIN's (1966) concept of the Noosphere to the present World Wide Web is certainly of historical and philosophical interest in that it demonstrates that the idea of a global net is not a product of the most recent decades. Nevertheless, a mere discussion of the possibility of such a net does not create the net. But now that it exists, we can put the earlier predictions of those thinkers to the test.
6 As already pointed out by several authors before me (most prominently by MASTERMAN 1970), KUHN did not provide a strict definition of a paradigm. I do not think that such a definition is possible, since it would require exhaustively including psychological and sociological aspects of individuals. I therefore would like to define a paradigm as the implicitly known set of standard procedures for how to perceive and investigate a problem. Since perception is selective, problems may stay invisible.
7 By problem space I refer to the n-dimensional abstract space set up by the n variables that characterize a problem. Most likely, not all of these variables are visible within a current paradigm. Therefore, the current paradigm is a subspace (with lower dimensionality) of the entire problem space. Problem solving is moving through the problem space by varying one or more variables concurrently (see the first sketch following these notes).
8 The notion of a search tree refers to the graph in the n-dimensional search space whose nodes are the decision points.
9 Wolfgang STEGMÜLLER (1971) finds even harder words for this dogmatism. He writes that we should feel sorry for the average scientist, since he or she is an uncritical, narrow-minded dogmatist who wants to educate students in the same way.
10 This psychological finding resembles the philosophy of Martin HEIDEGGER. See DREYFUS (1991) for an overview.
11 BREMERMANN calculated this number by evaluating the maximum possible energy content of a gram of mass (a back-of-the-envelope reconstruction of the estimate follows these notes).
12 Cf. also the example of the mermaid by VON GLASERSFELD (1983, p. 54): somebody changes the subjective interpretation of an expression only if some context forces him or her to do so.
13 In my functional model of a cognitive apparatus (1997) I take advantage of this constructivist-anticipatory principle: the behavior of cognitive creatures is controlled by schemata which, once invoked, ask for sensory or internal data only when they need them. In other words, the algorithm neglects environmental events except for the demands of the current action pattern. This leads to a significant decrease in performance costs, since the simulation need not provide the full environmental information to the agent at every time step. This is in contrast to the information-processing paradigm, which defines the cognitive system as a bottleneck: the essential features must be selected from the wealth of information provided by the outside in order to reduce the enormous amount of complexity. (See the last sketch following these notes.)
14 For the relationship between mathematics and physics in particular see, for example, WIGNER (1960).
15 The hope of the artificial intelligence community is therefore that a formal model containing meaningless computational tokens need not necessarily imply a meaningless representation of the system.
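As a supplement to notes 7 and 8, the following sketch (Python; the toy problem, the variable names, and the choice of depth-first search are my own assumptions, not part of the paper's argument) renders a problem space as the set of assignments to n variables and the search tree as the decision points generated by varying them one at a time. A paradigm that leaves some variables invisible simply confines the search to a lower-dimensional subspace.

# Notes 7 and 8, illustrated: a problem space spanned by n variables, explored
# as a tree whose nodes are decision points. Hidden variables (the paradigm's
# blind spot) stay fixed, so only a subspace is actually searched.
from typing import Callable, Dict, List, Optional

def search(variables: Dict[str, List[int]],
           visible: List[str],
           goal: Callable[[Dict[str, int]], bool]) -> Optional[Dict[str, int]]:
    """Depth-first search over the subspace spanned by the visible variables."""
    state = {name: values[0] for name, values in variables.items()}

    def expand(remaining: List[str]) -> Optional[Dict[str, int]]:
        if goal(state):
            return dict(state)
        if not remaining:
            return None
        name, rest = remaining[0], remaining[1:]
        for value in variables[name]:        # each assignment is a decision point
            state[name] = value
            solution = expand(rest)
            if solution is not None:
                return solution
        state[name] = variables[name][0]
        return None

    return expand(visible)

# Toy problem with three variables; the "current paradigm" sees only two of them.
space = {"a": [0, 1, 2], "b": [0, 1, 2], "c": [0, 1, 2]}
goal = lambda s: s["a"] + s["b"] + s["c"] == 5
print(search(space, ["a", "b"], goal))        # None: the visible subspace is too small
print(search(space, ["a", "b", "c"], goal))   # a solution, e.g. {'a': 1, 'b': 2, 'c': 2}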
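For note 11, here is a back-of-the-envelope reconstruction of the kind of estimate BREMERMANN (1962) made, assuming that the number referred to is his familiar bound of roughly 10^47 bits per second and gram: the energy content of one gram is E = mc^2, and quantum mechanics limits the rate of distinguishable state changes to about E/h per second.

# Note 11, illustrative recomputation (assuming the figure meant is Bremermann's
# bound of about 1.36e47 bits per second and gram of matter).
m = 1e-3             # one gram, in kilograms
c = 2.99792458e8     # speed of light, m/s
h = 6.62607015e-34   # Planck's constant, J*s

bits_per_second = m * c ** 2 / h   # E = m*c^2, divided by h
print(f"{bits_per_second:.2e}")    # roughly 1.36e+47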
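Note 13 describes what is, in effect, a demand-driven sensing policy. The fragment below (Python; the class and method names are my own illustration, not the interface of the model in RIEGLER 1997) contrasts it with the information-processing picture in which the complete environmental state is pushed through the cognitive bottleneck at every time step: here a schema pulls exactly the data its current action pattern requires.

# Note 13, illustrated: schemata ask for sensory data only when they need it.
class Environment:
    def __init__(self):
        self.queries = 0
    def sense(self, feature: str) -> float:
        """Answer one specific query; data are delivered on demand."""
        self.queries += 1
        return {"light": 0.8, "obstacle": 0.1, "temperature": 21.0}.get(feature, 0.0)

class Schema:
    """An action pattern that requests only the features it currently needs."""
    def __init__(self, needed):
        self.needed = needed
    def act(self, env: Environment) -> str:
        readings = {f: env.sense(f) for f in self.needed}   # demand-driven sensing
        return "approach" if readings.get("light", 0.0) > 0.5 else "wander"

env = Environment()
phototaxis = Schema(needed=["light"])
for _ in range(10):
    phototaxis.act(env)
print("sensor queries issued:", env.queries)   # 10 queries, not 10 x all features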
References

Angier, N. (1996) The Job Is Finished. The New York Times Book Review, June 30, 1996: 11–12.
Appel, K./Haken, W. (1977) The solution of the four-color-map problem. Scientific American, October: 108–121.
Arthur, W. B. (1993) Why Do Things Become More Complex? Scientific American, May 1993: 92.
Arthur, W. B. (1994) Inductive Reasoning and Bounded Rationality. American Economic Review 84: 406–411.
Ashby, W. R. (1973) Some peculiarities of complex systems. Cybernetic Medicine 9: 1–7.
Braitenberg, V. (1984) Vehicles: Experiments in synthetic psychology. MIT Press: Cambridge.
Bremermann, H. J. (1962) Optimization through evolution and recombination. In: Yovits, M. C. et al. (eds) Self-organizing systems. Spartan Books: Washington.
Campbell, D. T. (1974) Evolutionary epistemology. In: Schilpp, P. A. (ed) The Philosophy of Karl Popper. Open Court: La Salle.
Casti, J. L. (1996a) Confronting Science's Logical Limits. Scientific American, October 1996: 78–81.
Casti, J. L. (1996b) Lighter than air. Nature 382: 769–770.
Davies, P. C. W. (1990) Why is the universe knowable? In: Mickens, R. E. (ed) Mathematics and Science. World Scientific Press: Singapore.
Dennett, D. C. (1984) Cognitive Wheels: The Frame Problem of AI. In: Hookway, C. (ed) Minds, Machines, and Evolution: Philosophical Studies. Cambridge University Press: London.
Diettrich, O. (1995) A Constructivist Approach to the Problem of Induction. Evolution & Cognition 1 (2): 11–29.
Dörner, D. (1989) Die Logik des Mißlingens. Rowohlt: Reinbek bei Hamburg.
Dörner, D. et al. (eds) (1983) Lohhausen: Vom Umgang mit Unbestimmtheit und Komplexität. Hans Huber: Bern.
Dreyfus, H. L. (1991) Being-in-the-World: A Commentary on Division I of Heidegger's Being and Time. MIT Press: Cambridge.
Duncker, K. (1935) Zur Psychologie des produktiven Denkens. Springer: Berlin. Translated: (1945) On Problem Solving. Psychological Monographs 58 (270): 1–112.
Faust, D. (1984) The Limits of Scientific Reasoning. University of Minnesota Press: Minneapolis.
Feyerabend, P. K. (1975) Against method: Outline of an anarchistic theory of knowledge. NLB: London.
Foerster, H. v. (1985) Entdecken oder Erfinden. In: Mohler, A. (ed) Einführung in den Konstruktivismus. Oldenbourg: München.
Foerster, H. v. (1990) Kausalität, Unordnung, Selbstorganisation. In: Kratky, K. W./Wallner, F. (eds) Grundprinzipien der Selbstorganisation. Wiss. Buchgesellschaft: Darmstadt.
Fraassen, B. C. v. (1980) The Scientific Image. Oxford University Press: Oxford.
Franklin, S. (1995) Artificial Minds. MIT Press: Cambridge.
Freedman, E. G. (1997) Understanding scientific discourse: A strong programme for the cognitive psychology of science. Theory and Review in Psychology. http://www.gemstate.net/susan/Eric.htm
Funke, J. (1986) Komplexes Problemlösen. Springer-Verlag: Berlin, Heidelberg.
Giere, R. N. (1993) Cognitive Models of Science. University of Minnesota Press: Minneapolis.
Glasersfeld, E. v. (1982) An Interpretation of Piaget's Constructivism. Revue Internationale de Philosophie 36 (4): 612–635.
Glasersfeld, E. v. (1983) Learning as a constructive activity. In: Bergeron, J. C./Herscovics, N. (eds) Proceedings of the Fifth Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education. University of Montreal: Montreal, pp. 41–69.
Glasersfeld, E. v. (1984) An Introduction to Radical Constructivism. In: Watzlawick, P. (ed) The Invented Reality. W. W. Norton: New York.
Glasersfeld, E. v. (1987) Wissen, Sprache und Wirklichkeit: Arbeiten zum radikalen Konstruktivismus. Vieweg: Braunschweig.
Glasersfeld, E. v. (1990) An Exposition of Constructivism: Why Some Like It Radical. In: Davis, R. B./Maher, C. A./Noddings, N. (eds) Constructivist Views on the Teaching and Learning of Mathematics. JRME Monograph 4. National Council of Teachers of Mathematics: Reston.
Glymour, C. (1993) Invasion of the Mind Snatchers. In: Giere, R. N. (ed) Cognitive Models of Science. University of Minnesota Press: Minneapolis.
Gould, S. J./Eldredge, N. (1977) Punctuated equilibria: the tempo and mode of evolution reconsidered. Paleobiology 3: 115–151.
Gomory, R. E. (1995) The Known, the Unknown and the Unknowable. Scientific American, June 1995: 88.
Grassberger, P. (1986) Towards a quantitative theory of self-generated complexity. International Journal of Theoretical Physics 25 (9): 907–938.
Harnad, S. (1990) The symbol-grounding problem. Physica D 42 (1–3): 335–346.
Hartwell, A. (1995) Scientific Ideas and Education in the 21st Century. Institute for International Research: Washington D. C.
Hayes, B. (1996) The End of Science Writing? American Scientist 84 (5): 495–496.
Heylighen, F./Aerts, D. (eds) (1998) The Evolution of Complexity. Kluwer: Dordrecht.
Holland, J. H./Holyoak, K. J./Nisbett, R. E./Thagard, P. R. (1986) Induction: Processes of Inference, Learning and Discovery. MIT Press: Cambridge, London.
Homer-Dixon, T. (1995) The Ingenuity Gap: Can Poor Countries Adapt to Resource Scarcity? Population and Development Review 21 (3): 587–612.
Horgan, J. (1995) From Complexity to Perplexity. Scientific American 272: 74–79.
Horgan, J. (1996) The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age. Addison-Wesley: Reading.
Jackson, E. A. (1995) No Provable Limits to Scientific Knowledge. Complexity 1 (2): 14–17.
Jackson, E. A. (1996) The Second Metamorphosis of Science: A Second View. Working Paper 96-05-039, Santa Fe Institute: New Mexico.
Kolen, J. F./Pollack, J. B. (1983) The Observer's Paradox: Apparent Computational Complexity in Physical Systems. The Journal of Experimental and Theoretical Artificial Intelligence 7: 253–277.
Kuhn, T. S. (1962) The Structure of Scientific Revolutions. University of Chicago Press: Chicago.
Lakatos, I. (1970) Falsification and the Methodology of Scientific Research Programmes. In: Lakatos, I./Musgrave, A. (eds) Criticism and the Growth of Knowledge. Cambridge University Press: London.
Langley, P./Simon, H. A./Bradshaw, G. L./Zytkow, J. M. (1987) Scientific Discovery: Computational Explorations of the Creative Processes. MIT Press: Cambridge.
Laudan, L. (1977) Progress and its Problems: Towards a Theory of Scientific Growth. Routledge & Kegan Paul: London.
Lenneberg, E. H./Chomsky, N./Marx, O. (1967) Biological foundations of language. Wiley: New York.
Lorenz, K. (1973) Die Rückseite des Spiegels: Versuch einer Naturgeschichte menschlichen Erkennens. Piper: München, Zürich. Translated: (1977) Behind the Mirror. Harcourt Brace Jovanovich: New York.
Luchins, A. S. (1942) Mechanization in Problem Solving. Psychological Monographs 54 (248).
Masterman, M. (1970) The Nature of a Paradigm. In: Lakatos, I./Musgrave, A. (eds) Criticism and the Growth of Knowledge. Cambridge University Press: London.
Maturana, H. R. (1978) Biology of Language. In: Miller, G. A./Lenneberg, E. (eds) Psychology and Biology of Language and Thought. Academic Press: New York.
McGinn, C. (1994) The Problem of Philosophy. Philosophical Studies 76: 133–156.
Miller, G. A. (1956) The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review 63: 81–97.
Mitchell, M. (1995) Complexity and the Future of Science. Announced for publication in Scientific American. http://www.santafe.edu/~mm/sciam-essay.ps
Nersessian, N. J. (ed) (1987) The Process of Science. Martinus Nijhoff: Dordrecht, Boston, Lancaster.
Newell, A./Simon, H. (1972) Human Problem Solving. Prentice-Hall: Englewood Cliffs.
Oeser, E. (1984) The evolution of scientific method. In: Wuketits, F. M. (ed) Concepts and approaches in evolutionary epistemology. Reidel: Dordrecht.
Ortega y Gasset, J. (1994) The revolt of the masses. W. W. Norton: New York. Spanish original: (1929) La rebelión de las masas. Revista de Occidente: Madrid.
Popper, K. (1934) The Logic of Scientific Discovery. Springer: Berlin.
Popper, K. R. (1961) The Poverty of Historicism. Routledge & Kegan Paul: London.
Riedl, R. (1977) A systems-analytical approach to macro-evolutionary phenomena. Quarterly Review of Biology 52: 351–370.
Riedl, R. (1981) Biologie der Erkenntnis: Die stammesgeschichtlichen Grundlagen der Vernunft. Parey: Hamburg, Berlin. Translated: (1984) Biology of Knowledge: The Evolutionary Basis of Reason. John Wiley & Sons: Chichester.
Riedl, R. (1983) Die Spaltung des Weltbildes. Parey: Hamburg, Berlin.
Riedl, R./Ackermann, G./Huber, L. (1992) A ratiomorphic problem solving strategy. Evolution & Cognition 2: 23–61 (old series).
Riegler, A. (1994) Constructivist Artificial Life. PhD Thesis, Vienna University of Technology.
Riegler, A. (1997) Ein kybernetisch-konstruktivistisches Modell der Kognition. In: Müller, A./Müller, K. H./Stadler, F. (eds) Konstruktivismus und Kognitionswissenschaft: Kulturelle Wurzeln und Ergebnisse. Springer: Wien, New York.
Schilpp, P. A. (1963) The philosophy of Rudolf Carnap. Open Court: La Salle.
Schrödinger, E. (1961) Meine Weltansicht. Zsolnay: Hamburg. Translated: (1964) My view of the world. Cambridge University Press: Cambridge.
Silber, K. (1996) Goodbye, Einstein. Reason 28 (5): 61.
Simon, H. A. (1975) The functional equivalent of problem-solving skills. Cognitive Psychology 7: 268–288.
Sjölander, S. (1995) Some cognitive breakthroughs in the evolution of cognition and consciousness, and their impact on the biology of language. Evolution and Cognition 1 (1): 2–11.
Stegmüller, W. (1971) Hauptströmungen der Gegenwartsphilosophie, Band II. Kröner: Stuttgart.
Stent, G. (1978) Paradoxes of Progress. W. H. Freeman: San Francisco.
Stewart, I. (1997) Crashing the barriers. New Scientist, March 1997: 40–43.
Teilhard de Chardin, P. (1966) The Vision of the Past. Harper & Row: New York.
Thagard, P. (1988) Computational Philosophy of Science. MIT Press: Cambridge.
Varela, F. (1990) Kognitionswissenschaft – Kognitionstechnik: Eine Skizze aktueller Perspektiven. Suhrkamp: Frankfurt/M.
Waldrop, M. M. (1992) Complexity: The emerging science at the edge of chaos. Simon & Schuster: New York.
Wickelgren, W. A. (1974) Single-trace fragility theory of memory dynamics. Memory and Cognition 2: 775–780.
Wigner, E. P. (1960) The Unreasonable Effectiveness of Mathematics in the Natural Sciences. Communications on Pure and Applied Mathematics 13: 1–14.