Noosphere Epistemology
ABSTRACT
1 Introduction
Pierre Teilhard de Chardin (born 1 May 1881 near Clermont-Ferrand; died 10 April 1955 in
New York) was a French Jesuit, palaeontologist, anthropologist and philosopher. In his work
he tried to reconcile science and religious faith. In his two most important works ‘The
Phenomenon of Man’ (orig.: Le Phénomène Humain, 1955)1 and ‘The Appearance of Man’
(orig.: La Place de l'Homme dans la Nature, 1956)2 he describes the evolution of the
universe from its beginning to the formation of the planets and the evolution of the biosphere.
With the dawn of humankind a new sphere evolves, the noosphere, the sphere of thought.
Now the evolution of the noosphere is the most important thread of evolution. In its first
phase, it expands, conquers the globe, and diversifies into a multitude of different cultures
that evolve, disappear and cross-fertilize each other. In its second phase, which, according to
Teilhard, has just begun, the noosphere is in a state of accelerated convergence. Now the
spiritual forces strive for unification. At the end of this phase, in a few million years, at the
Omega Point, humankind could be united in a collective consciousness, based on a
harmonised world view3. Teilhard was convinced that humans would find ways to bring their
brains to perfection. Between 1948 and 1950, he wrote,
‘I am thinking of the amazing performance of electronic machines (the results and the
great hope of the aspiring ‘Cybernetics’). These devices replace and multiply the
computing and inference capabilities of the human mind by such ingenious methods
and to such an extent that in this direction we can expect an equally great increase in
our abilities as it has brought the evolution of our vision.’4
1 TEILHARD DE CHARDIN, Pierre (1959)
2 TEILHARD DE CHARDIN, Pierre (1965)
3 Teilhard did not seem to be sure about the success of the human race. In 1949 he concluded his work (see the Conclusion of TEILHARD DE CHARDIN, Pierre (1965)) with some 'prospects and prerequisites for the success of the venture man'.
4 TEILHARD DE CHARDIN, Pierre (1984), English translation from German, page 118.
Our approach
The central idea of this article is to describe the knowledge evolution of all humans and their
cognitive technical equipment by means of a dynamic adaptive network model. With this
approach to epistemology, we pursue the following objectives:
• The algorithmic foundation of the model leads to well-defined epistemic concepts.
• The model is a powerful description framework, which embraces both individual and
super-individual knowledge evolution. Therefore, we see it as a first step towards a
unified evolutionary and social epistemology.
• Besides its descriptive power, the network view of knowledge reveals new aspects
about the evolution of knowledge derived from complex network research and
observations about the impacts of the Internet. Thus, we can discuss Teilhard’s
hypothesis of a ‘convergent’ noosphere.
We proceed as follows: the main part of this essay (Sections 1 - 4) is dedicated to a semi-
formal step-by-step introduction of the dynamic network model and all necessary concepts.
To motivate the central definitions we present a detailed example. In Section 2, we define
‘interactive adaptive Turing machines’ (IATM) which is the computational model for a
dynamic adaptive network of interacting intelligent agents. It is similar to a model first
introduced by Jan van Leeuwen and Jiří Wiedermann5 and represents an interactive version
of the classical ‘Universal Turing machine’. From the computational model, we can derive
precise definitions of important epistemic concepts. First, we introduce our notions of ‘factual’
and ‘transformational’ knowledge. In Section 3, we apply these definitions to model the
network of knowledge of a single agent, which we call her/his/its 'world view'. In Section 4,
we look at networks of interacting agents. A group of interacting agents may constitute a
particular field of knowledge, which we call 'knowledge domain'. The knowledge network of
all agents constitutes the overall noosphere. On each level of granularity, from a single
agent's network of knowledge to super-individual knowledge domains and the global
noosphere, knowledge evolution follows similar rules.
In the second part (Sections 5 - 7) of this article, we indicate how to apply the model and
discuss some initial results. In Section 5, we discuss how to integrate existing approaches to
evolutionary and social epistemology. In Section 6, we discuss epistemic consequences
derived from the evolution in information technology especially the Internet and results of
what is known as scale-free network research. In Section 7, we reconsider Teilhard's Omega
Point theory.
The English mathematician Alan Turing provided an influential formalisation of the concept of
algorithm and computation by the so-called Turing machine. In our context an informal
description will suffice:
5 see VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001).
On an even more abstract level, every TM is a device that reads a finite input, does some
‘computations’, and produces an output. For the abstract (mathematical) concept of the TM it
makes no difference, whether the step-by-step routines (i.e. algorithms) are done by a ticket
machine, a computer or a human. Turing machines are only one of many ways to describe
the mathematical class of what are known as µ-recursive functions6.
Fig. 1: A Turing machine consists of a finite control with a read/write head that operates on an endless tape of cells Si+j whose symbols are taken from a finite alphabet Σ.
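To make the informal description more tangible, the following Python sketch simulates a Turing machine with a finite control, a read/write head and an unbounded tape. The transition-table encoding and the example machine (a unary incrementer) are our own illustrative assumptions, not part of Turing's formal definition.

```python
# Minimal Turing machine sketch: finite control, read/write head, unbounded tape.
# The transition table maps (state, symbol) -> (next state, symbol to write, move).
from collections import defaultdict

def run_tm(transitions, accept_states, tape, state="q0", max_steps=10_000):
    cells = defaultdict(lambda: "_", enumerate(tape))  # '_' is the blank symbol
    head = 0
    for _ in range(max_steps):              # guard: in general we cannot know if it halts
        if state in accept_states:
            return "".join(cells[i] for i in sorted(cells))
        key = (state, cells[head])
        if key not in transitions:           # no applicable rule: the machine stops
            return None
        state, cells[head], move = transitions[key]
        head += 1 if move == "R" else -1
    raise RuntimeError("gave up after max_steps; halting is undecidable in general")

# Example machine (assumed for illustration): append one '1' to a unary number.
incr = {("q0", "1"): ("q0", "1", "R"),      # move right over the existing 1s
        ("q0", "_"): ("halt", "1", "R")}    # write a final 1 and halt
print(run_tm(incr, {"halt"}, "111"))        # -> 1111
```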
From the theory of TMs we can derive a result that is important to the subsequent discussion
of knowledge and truth:
Theorem 1: The following problem concerning Turing machines is unsolvable. Given a Turing
machine M and an input string w, does M halt on w?8
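The classical diagonal argument behind Theorem 1 can be sketched in a few lines; the function names below are hypothetical and serve only to show why a total halting decider would contradict itself.

```python
# Sketch of the diagonal argument behind Theorem 1. The names are hypothetical;
# the point is only that a total decider halts(M, w) would contradict itself.

def halts(program, data):
    """Assumed oracle: returns True iff program(data) eventually halts."""
    raise NotImplementedError("no such total decider can exist")

def diagonal(program):
    """Do the opposite of whatever the oracle predicts for program run on itself."""
    if halts(program, program):
        while True:      # loop forever if the oracle says 'halts'
            pass
    return "halted"      # halt if the oracle says 'loops forever'

# Feeding diagonal to itself is contradictory:
#   if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
#   if it were False, diagonal(diagonal) would halt.
# Hence no algorithm decides the halting problem for all pairs (M, w).
```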
So far, we have only talked about algorithms and the notion of a single Turing machine that
starts with a fixed input. In order to model an evolving network of intelligent, interacting
agents four new ingredients need to be added to the model of computation:
• interaction of agents,
• infinity of operation,
• persistent memory and
• non-uniformity of programs.
A Standard TM does not model interaction between an agent (e.g. human or computer) and
its environment or between agents. In reality, agents do not operate in isolation but may be
connected to dynamic networks of virtually unlimited size, with many agents sending and
receiving messages in unpredictable ways. Infinity of operation means that in principle the
interacting agents may continue to interact without a definite end. Because of that, their input
may be infinite as well as their output. In contrast to a Standard Turing machine, humans and
modern computers have a persistent memory even if they are turned off (or are asleep) for a
while. If they start again, further computation may depend on the memory content. Non-
uniformity of programs means that agents in a network may change their algorithms during
6 LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), Section 5 introduces several alternatives.
7 see for instance PUTNAM, H. (1960)
8 for a formal proof see for instance LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), pp. 283-284.
operation. Nowadays most computers are regularly upgraded, and their software, which
represents their algorithms, may be fundamentally changed. If the agent represents a
human, the human may have changed the way he/she performs his/her algorithm-like work.
To model this kind of computation we introduce the abstract notion of interactive adaptive
Turing machines (IATM), similar to the notion of interactive Turing machines with advice,
which were first introduced by Jan van Leeuwen and Jiří Wiedermann9. We omit the
mathematical description and concentrate on the essential features.
Because IATMs are the most expressive notion of interactive computation known, we can
expect that any operational model of evolutionary processes can be described by an IATM11.
Below we explain the IATM model of operation and define the essential concepts in a semi-
formal way.
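As a semi-formal illustration (and not van Leeuwen and Wiedermann's formal definition), the following sketch models an IATM as an object with a persistent memory that stores both factual knowledge (PROP) and rewritable rules (TRANS), consumes messages one by one without a definite end, and can rewrite its own rules when it receives an 'adaptation message'. All message formats and names are assumptions made for this sketch.

```python
# Illustrative sketch of an interactive adaptive Turing machine (IATM):
# persistent memory holding facts (PROP) and rewritable rules (TRANS),
# message-driven operation without a definite end, and self-adaptation.

class IATM:
    def __init__(self, name, prop=None, trans=None):
        self.name = name
        self.prop = dict(prop or {})    # factual knowledge, persists between sessions
        self.trans = dict(trans or {})  # transformational knowledge: message kind -> rule

    def step(self, message):
        """Consume one input message and produce one output message (or None)."""
        kind, payload = message
        if kind == "ADAPT":             # an 'adaptation message' rewrites a rule
            rule_name, new_rule = payload
            self.trans[rule_name] = new_rule
            return None
        rule = self.trans.get(kind)
        if rule is None:                # disruption: no rule for this kind of input
            return ("ERROR", f"{self.name} cannot process {kind}")
        return rule(self.prop, payload) # rules may read and update persistent memory

# Usage sketch: a rule that memorises a fact and acknowledges it.
agent = IATM("M1", trans={
    "FACT": lambda prop, p: (prop.update({p[0]: p[1]}), ("ACK", p))[1]})
print(agent.step(("FACT", ("supplier", "BC"))))              # ('ACK', ('supplier', 'BC'))
agent.step(("ADAPT", ("FACT", lambda prop, p: ("ACK2", p)))) # third party rewrites the rule
print(agent.step(("FACT", ("price", 100))))                  # ('ACK2', ('price', 100))
```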
A network of IATMs is made of a set S of interacting IATMs where the nodes are the IATMs
and the message exchange relations within S define the edges12.
For each IATM, everything delivering messages to its input and receiving messages from its
output is called its environment. Accordingly, we define the environment of a network S of
IATMs as everything delivering input from outside S (i.e. other IATMs not in S or nature) and
everything receiving output outside S (i.e. other IATMs not in S13). We can thus look at the
'computation' of a network S of IATMs, i.e. the unbounded interaction process of S with its
environment. For further discussions we need
9 see VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001). A purely mathematical definition can be found in VERBAAN, Peter (2005).
10 Van Leeuwen and Wiedermann use a so-called advice function, which is non-computable and less intuitive than our read/write memory, which allows rewriting algorithmic rules. Our definition corresponds to the so-called von Neumann architecture of modern computers, where programs and data both reside in the same memory.
11 see GOLDIN, Dina and WEGNER, Peter (2003) and GOLDIN, Dina and WEGNER, Peter (2005) for articles about the expressiveness of interactive computing with persistent memory compared to the classical Turing machine model.
12 A precise and formal definition of a dynamic network (like the Internet) based on an interactive Turing machine concept can be found in VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001).
13 It does not seem 'reasonable' for an IATM to output messages to nature. Instead, we could assume that a human or a computer that applied an algorithm represented by an IATM could interact physically with nature.
Theorem 2: For every finite set S of IATMs there exists a single IATM M that sequentially
implements the same computation as S does14.
In other words, theorem 2 tells us that a whole network or parts of a network of interacting
Turing machines can be regarded as a unity because a single (more complex) machine can
simulate it. Subsequently we will model the knowledge of a single human or a single
computer by a network of IATMs. Thus, the knowledge network of all interacting humans and
their computers is a network of networks. However, even the overall network of all agents
(i.e. human or cognitive equipment) can still be seen as a unity, which we will call the
noosphere. The noosphere as a whole interacts with nature.
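The proof idea behind theorem 2 can be illustrated by a simple round-robin scheduler: a single machine that keeps the state of every member of S and processes all pending messages sequentially. The message conventions in the following sketch are our own assumptions; it is not the construction used by van Leeuwen and Wiedermann.

```python
# Sketch of the idea behind theorem 2: one machine sequentially simulating a
# finite set S of interacting machines by interleaving their message queues.
from collections import deque

def simulate_network(agents, external_inputs):
    """agents: name -> function(content) returning a list of (target, content)
    messages; 'ENV' denotes the environment of S. All conventions are assumed."""
    queue = deque(external_inputs)       # (receiver, content) pairs entering S
    to_environment = []
    while queue:                         # strictly sequential interleaving
        receiver, content = queue.popleft()
        for target, answer in agents[receiver](content):
            if target == "ENV":
                to_environment.append(answer)
            else:
                queue.append((target, answer))
    return to_environment

# Usage sketch: two toy agents; seen from outside, the pair behaves like one machine.
agents = {"A": lambda x: [("B", x + 1)],
          "B": lambda x: [("ENV", x * 2)]}
print(simulate_network(agents, [("A", 20)]))   # -> [42]
```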
If the interaction between an IATM M and its environment (i.e. other IATMs or nature) leads
to some kind of disruptions (see Section 4) then M or the environment might 'mutate' and
sometimes successfully adapt its algorithm, or the interaction might continually be disturbed.
To mutate, the IATM rewrites parts of its persistent memory containing the algorithmic rules.
In other cases both may get an 'upgrade' or 'adaptation message' from a third-party IATM
interacting with both in order to rewrite/adapt the algorithm. If, for instance, M1 and M2 were
computers exchanging erroneous business data, then M3 could be a human or a
computer on the Internet that provides the upgrade of M1 and/or M2. 'Adaptation
messages' are highly effective (far more effective than rare and undirected 'mutations') and
can be regarded as a kind of learning from others.
In order to motivate the notion of knowledge and knowledge evolution we first present a
typical example of a business process where a group of intelligent agents work together to
settle an invoice. The process will be modelled by IATMs in a semi-formal way and a
definition of factual and transformational knowledge of IATMs will be given. In section 3.2 we
elaborate the kind of knowledge involved and show how to apply the formal definition. We
demonstrate that it can be applied to humans as well as computers.
Example: Company CC orders 10 new PCs for its staff from its supplier of business
computers, BC. After a few days, CC's in-box agent receives the invoice from BC. To settle
the invoice, several steps have to be performed. We model the process by interacting IATMs.
It should be mentioned that the design of the workflow between the different IATMs is just
one of many possible solutions. For further discussions, we
concentrate on the content of the memory (PROP) and on the rules (TRANS) needed.
• IATM-1 is an OCR (Optical Character Recognition) device that scans pieces of paper and
delivers a sequence of characters. In order to fulfil the task it has to ‘know’ the following:
o PROP: in its internal memory there has to be a pattern for each of the possible
letters.
o TRANS: the rules of the IATM specify how to read the pixels on the paper, and
how to match them against the appropriate letter pattern. If successfully matched,
IATM-1 outputs the respective letter.
In other words, IATM-1 translates a stream of pixels into a stream of letters.
14 For a formal proof one has to formulate some more detailed assumptions about the network protocols and the method of operation of an IATM that we do not define in this article. The idea of the proof can be found in VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001), proposition 10 for the so-called Internet machine, a model for a time-varying set of interacting machines in the Internet.
• IATM-2, the invoice registrar, is a device, which takes a stream of letters as its input and
transforms them into meaningful pieces of information like ‘name of supplier is BC’, ‘The
address of BC is ...', 'BC's account number is ...' and so on. If it has found every needed
piece of information, it sends all data and a statement ‘The invoice is complete’ to IATM-
3. To do so it might make use of the following:
o PROP: In its internal memory, it might have collected all relevant master data
about the already known suppliers. This information could be helpful if the invoice
could not be completely read by IATM-1 due to bad quality of the printing.
Moreover, in its memory it might have information about the typical structure of
the invoices of all already known suppliers.
o TRANS: The rules of IATM-2 compose letters into words and implement heuristics
such as: supplier names can mostly be found at the top of the page, item
prices on the right, account numbers at the bottom, and so on. For a particular
supplier the memorised information about its typical invoice structure might help
to improve the results. If the actual invoice differs in structure, the memory will be
adapted.
• IATM-3, the accountant, gets a stream of data about invoices and, at the end of each
invoice, the information whether the invoice is complete. It computes whether the
summation is correct. It then consults IATM-4/5 (i.e. the purchasing department, the IT-
department) for performance acknowledgement and waits for positive feedback. If
everything is all right, the accountant releases a transfer order to the bank.
o PROP: In its memory, the IATM-3 has information about the prices agreed upon
with the different suppliers. Another IATM that is responsible for supplier contracts
adapts this information regularly.
o TRANS: The rules define the steps to check the invoice for computational errors
and whom to consult for performance acknowledgement. Then the transfer order
has to be generated.
We omit the further steps taken by IATM-4/5. In order to simplify the example, we did not describe
the cases in which the invoice is incorrect. In reality, therefore, the interaction of the IATMs
might be much more intensive.
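For illustration only, the workflow can be sketched as a pipeline of three functions standing in for IATM-1 to IATM-3; the invoice format, the regular expressions and the agreed prices are invented for this sketch and are not meant as a realistic implementation.

```python
# Illustrative sketch of the invoice workflow: OCR -> registrar -> accountant.
# The invoice format, patterns and prices are assumptions made for this example.
import re

def iatm1_ocr(pixels):
    """PROP: letter patterns; TRANS: map pixel groups to letters (stubbed here)."""
    return pixels  # in this sketch the 'scan' already yields a character stream

def iatm2_registrar(text, known_suppliers=("BC",)):
    """PROP: supplier master data; TRANS: heuristics locating the invoice fields."""
    supplier = re.search(r"Supplier:\s*(\w+)", text)
    total = re.search(r"Total:\s*(\d+)", text)
    items = [int(x) for x in re.findall(r"Item:\s*(\d+)", text)]
    if not (supplier and total and items and supplier.group(1) in known_suppliers):
        return ("ERROR", "invoice incomplete or supplier unknown")
    return ("COMPLETE", {"supplier": supplier.group(1),
                         "items": items, "total": int(total.group(1))})

def iatm3_accountant(message, agreed_prices={"BC": 500}):
    """PROP: agreed prices per supplier; TRANS: check prices and sum, then pay."""
    status, data = message
    if status != "COMPLETE":
        return ("REJECT", data)
    prices_ok = all(p == agreed_prices.get(data["supplier"]) for p in data["items"])
    if not prices_ok or sum(data["items"]) != data["total"]:
        return ("REJECT", data)
    return ("TRANSFER_ORDER", data["supplier"], data["total"])

invoice = "Supplier: BC Item: 500 Item: 500 Total: 1000"
print(iatm3_accountant(iatm2_registrar(iatm1_ocr(invoice))))
# -> ('TRANSFER_ORDER', 'BC', 1000)
```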
15 A detailed discussion on the subject is not within the scope of this article and must be left to future work. Edmund Gettier's short article 'Is Justified True Belief Knowledge?' discusses the classical understanding of knowledge and has ignited a very influential discussion on the subject; see GETTIER, Edmund (1963).
carrier and hence there is no ‘belief’ and ‘justification’ and there is no absolute 'truth', as we
will argue in Section 4. In general, the definitions abstract from any concrete human mental
states, motivations or cognitive mechanisms. There is no ‘believer’ as long as we don’t apply
the knowledge definition to humans. If we apply it to humans, then factual knowledge may be
propositions that could be justified belief. Section 3.2 discusses in more detail how
knowledge can be attributed to agents but the knowledge core is an abstract notion
independent of the knowledge holder. This independence is essential for further discussion
in Section 6 because we argue that Teilhard's noosphere is composed of humans and their
technical cognitive equipment, namely the Internet and everything that is used to produce
factual knowledge.
If we look at the example, it is a real-life description of how to implement the interacting
IATMs as a business application on some kind of computer. And indeed in some companies
the process of settling an invoice is completely automated.
But how about humans? In many companies, humans manage the described workflow and
humans seem to work differently. If we look for instance at the division of labour between the
OCR and the invoice registrar then this is definitely not a workflow between two different
humans. Moreover, Definitions 3 and 4 about factual and transformational knowledge do not
seem to be adequate for human knowledge. If we look, for instance, at the rules and memory
content of OCR, it is clear that in this case the definition of transformational and factual
knowledge does not mean anything to a human. Probably, no human could tell how he/she
recognises the letters and whether she/he has some kind of ‘patterns for letters’ (i.e. factual
knowledge of OCR) in his/her head.
On the other hand, if the task of IATM-1 is performed by a human, then she/he must also
recognise the letters and read the respective words in some way. Using their eyes, humans,
too, have to analyse the electromagnetic signals from the piece of paper representing the
invoice and transform them into visual units representing letters. All this happens
subconsciously. It is something that our visual experience conveys to us, somehow. For any
interaction with our environment, our senses act as mediators. These mediators need to map
sensorial signals to some other form of representation. In the example, the result of OCR
was a stream of letters. But it could also be some intermediate code that does not mean
anything to humans. The mapping might even take several steps of intermediate code before
the letters that are input to the invoice registrar are produced16. The interesting fact is that
any interaction with the environment starts with some kind of transformation of sensorial data
that can in principle be modelled by an IATM. After one or more non-conceptual
transformations, conceptual output may be produced, something that has meaning to a
human. This conceptual output might be transformed to ‘higher order’ propositional
knowledge or scientific knowledge by means of a ‘conceptual’ IATM. In the example, the
invoice registrar inputs the stream of letters and outputs meaningful propositions. Part of the
conceptual process that the invoice registrar performs is reading - composing letters into words
and assigning meaning to words, such as: the letters 'BC' mean 'the supplier's name'.
We also argued that a human would at least personify the OCR and the invoice registrar. We
could even assume that one (highly qualified) human could also do the work of the
16 It is not within the scope of this article to model exactly the mapping according to cognitive sciences or to model how human cognitive mechanisms have evolved. However, we maintain that in principle we could model this within the IATM framework (see Section 5). The work of G. Vollmer explains and interprets the philosophical implications of this kind of evolutionary epistemology. See for instance VOLLMER, Gerhard (2003).
accountant and the others. In addition, if we consider all the other cognitive capabilities each
human possesses, we must say that a whole network of hundreds or thousands or even
millions of interacting IATMs could be necessary to describe a single human's formalised
conceptual and non-conceptual, factual and transformational knowledge17.
Fig. 2: The world view of an agent - a network of IATMs comprising her/his/its conceptual and non-conceptual, transformational and factual knowledge.
The knowledge units in an agent's memory are related to each other in various ways. As we
demonstrated, all of an agent's knowledge can be regarded as a network of many interactive
adaptive Turing machines. Altogether, they account for her/his/its view of the world, the world
view. For further discussions we need
Definition 5: The world view of an agent at time t is the network of IATMs representing
her/his/its conceptual and non-conceptual transformational and factual knowledge at time t
(see Fig. 2).
17 However, according to theorem 2 all these IATMs can be simulated by a single IATM. Therefore, we can say that all formalised human knowledge at time t can be regarded as a unity.
The main purpose of interaction is the exchange of messages. Since messages can be
knowledge, interaction processes may lead to knowledge propagation. We talk about
knowledge propagation if one IATM outputs a message and another accepts and memorises
it as input. If the interaction process is disrupted and one party or both parties adapt their
transformational knowledge, we talk about knowledge evolution. In order to motivate the
model of knowledge evolution we will elaborate possible operational faults between
interacting IATMs and their strategies to ‘settle their differences’. We will concentrate on the
interaction between two IATMs. From theorem 2 one can derive that this will be enough to
describe the ‘settling’ process for a whole network. The argument is as follows: If we want to
study the interaction processes in a set S of interacting IATMs, we may begin with any IATM
M in S, then simulate S - {M} by a new IATM S' (possible because of theorem 2) and
then study the interaction of M and S'.
At first sight, this doesn't seem very reasonable for real life situations, but it is! Let us look at
an example: If M is human and exchanges intelligent messages (eMails, chats, whatever)
with her/his intelligent friends A and B via the Internet by means of her/his computer C, all
she/he needs is C! M only interacts with C's keyboard (input) and screen (output) and nothing else!
Nevertheless, we know that in reality, M exchanges intelligent messages with her/his friends
A and B, and C is only a kind of interface. However, A and B are intelligent and they produce
their well-considered messages by some well-designed cognitive processes. If this is the
case, then C can simulate A and B by means of software that implements the well-
designed cognitive processes of A and B. If the software is good enough and passes the
Turing test18 then M doesn't even notice the fraud. The point is that C with its new software,
let us call it C', can simulate the network of A, B and C.
But what if A and B are sitting next to M in her/his living-room? - The answer is: with respect
to the message exchange process it doesn't make a difference. The contact between M and
her/his friends A and B is again via an interface, her/his sensory apparatus, probably most of all her/his
ears and her/his eyes. The well-considered message exchange itself is as before. A
background computer could simulate their messages. The more difficult part is the nonverbal
communication. We only have a chance if this can also be formally modelled. So far, this
is science fiction, some sort of perfect virtual reality, as in the movie 'The Matrix'. We do
not have to discuss whether this is possible. The point is: if it can be formally modelled, then it
can be modelled by a single IATM.
In order to understand what kind of disruptions might happen between two interacting IATMs
we extend the example of Section 3.1, where a sequence of interacting IATMs settle an
invoice. In contrast to the original version of the example we assume that the process may
be disrupted by some errors. We distinguish the following:
18 'The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. It proceeds as follows: a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are placed in isolated locations. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.' (from http://en.wikipedia.org/wiki/Turing_test, retrieved 2010-02-09)
registrar does not analyse because it is not part of its transformational knowledge. Because
of this, for the accountant the registrar's output, 'The invoice is complete', is false, although it
is justified true belief (i.e. factual knowledge) of the registrar. The reason is that both have a
different concept about an ‘invoice’. Therefore, the registrar's (transformational and factual)
knowledge contradicts the factual and the transformational knowledge of the accountant.
c) Output alphabet of sender does not match the input alphabet of receiver
The OCR device may have problems reading the letters because BC's invoice is written in
Chinese and it does not have any pattern for Chinese symbols. Consequently, it does not
produce an output the invoice registrar can cope with.
In concrete networks, there are many more possible sources of disruptions resulting from
interaction. For instance, synchronisation problems or message routing problems with loss of
messages and so on are difficult problems in real world networks. We can abstract from
these, because they are not essential for the discussion of knowledge propagation and
evolution.
To dissolve the disruptions, the IATMs have to adapt. Such adaptations can in principle take
place after each message read from the input stream. They change the computational
behaviour of an IATM and possibly its memory content. We interpret such adaptations as
evolving knowledge. Based on the described types of disruptions, we will analyse what kind
of adaptations we need, or, in other words, how the knowledge evolution works:
If we first look at the problem from a theoretical point of view, we have to be precise about
what it means to prove that an 'IATM M works as required'. Since M could be adapted at any
time, we assume that M, beginning at time t, consumes only one message (i.e. a foreseeable
input string of finite length). By this assumption, we look at M as if it were a Standard Turing
machine (see Definition 1) for a while. Then we need a formal specification of the expected
behaviour of M and a proof that M performs accordingly. Unfortunately, theorem 1 tells us
that we can't even be sure that M will ever halt on the input message. All we could prove is
the so-called partial correctness of M at time t19. Since we are interested in M's performance
in the context of its environment, we have to make assumptions about the environment too.
For if we do not take M's environment into account, it could happen that the interaction does
not work anymore, because the environment has changed the interaction rules without prior
notice and thus could send an unacceptable message. But if we want to be sure about the
behaviour of the environment, we also need a proof of the partial correctness of the
environment at t20. Even if we did the cumbersome work of formally specifying the required
transformational behaviour at t and proving the partial correctness of both the IATM and its
environment at t, it would only be useful for a period without changes, between t and t + x. The
question is what we expect of M if something in the interaction process changes. We
probably expect M or the environment to adapt somehow. However, a priori we do not know
19 Partial correctness defines correctness neglecting the halting problem. For the theory of program verification see for instance LOECKX, Jacques and SIEBER, Kurt (1984).
20 If the environment of M is nature then this means that we need a proof of the 'transformational behaviour' of nature in order to produce sensorial data. However, there is no way to formally prove that nature's 'behaviour' meets a formal specification, because all we know about nature is (scientific) theory.
exactly how to specify the requirements, since we do not know anything about the possible
changes ahead. Only a posteriori would we be able to formulate an adequate formalistic
specification of the transformational behaviour. As an epistemic consequence, we get: In an
unforeseeable changing environment, the correctness of transformational and factual
knowledge of an IATM cannot be adequately formalised. A reasonable correctness criterion
can only be formulated for periods without unforeseen changes.
So much for theory. In practice, the approach to verifying the expected behaviour of the
implementation of an IATM (i.e. the software representing an IATM or a human performing
the task) is to test her/him/it systematically. If we wanted to test the invoice registrar of the
example, we would probably test it with a few dozen different invoices. If something went
wrong, we would eliminate the programming error or, if he/she is human, teach him/her to do
better. Moreover, we would assume that the communication standards to the environment
would stay unaltered. If we test systematically, then by and large we would rely on the
registrar. But a test is not a proof. The conclusion is ‘To err is not only human’. The only way
to improve an IATM's erroneous performance is by trial and adaptation on error!
In the above argument, we simply used the term correctness. The decisive question is WHO
specifies what the correctness (even in times without changes) of the factual and
transformational knowledge of an IATM at time t should be. We will give the answer after
the discussion of case b).
A more technical definition: A network of agents exchanging more messages within their
network than with others constitutes a knowledge domain22.
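A minimal sketch of how this technical definition could be checked algorithmically on a message log; the log format and the simple counting criterion are our own assumptions and are much cruder than the community-detection methods mentioned in the footnote.

```python
# Sketch: test whether a group of agents forms a 'knowledge domain' in the sense
# of the technical definition, i.e. exchanges more messages internally than with
# agents outside the group. The message-log format is an assumption.

def is_knowledge_domain(group, message_log):
    """group: set of agent names; message_log: list of (sender, receiver) pairs."""
    internal = sum(1 for s, r in message_log if s in group and r in group)
    external = sum(1 for s, r in message_log if (s in group) != (r in group))
    return internal > external

log = [("registrar", "accountant"), ("accountant", "registrar"),
       ("accountant", "purchasing"), ("registrar", "family_member")]
print(is_knowledge_domain({"registrar", "accountant", "purchasing"}, log))  # True
```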
According to this definition, there may be many different kinds of knowledge domains and
there may be hierarchies of knowledge domains. Some knowledge domains may consist of
scientific knowledge, others of cultural or everyday knowledge and others only of non-
conceptual knowledge. The reason for using the expression ‘without obvious contradictions’
is that we cannot be sure of the existence or non-existence of disruptions, as can be shown
by an argument similar to the one in a). Only if contradictions are detected (by tests or accidentally)
and only if the members of the knowledge domain 'feel the pressure' to eliminate these
contradictions will knowledge evolve. In the example, the registrar and the accountant
21 When we use the term contradiction, we are referring to inconsistent knowledge of two IATMs that may lead to disruptions in the interaction process.
22 This definition is probably precise enough to algorithmically identify different domains in a network. See BARABASI, Albert-László (2004), page 171, referring to Flake, Lawrence and Giles from the company NEC, who used the WWW link structure to identify 'communities' in this way.
needed to agree on a common invoice concept. In the context of their families, which are not
part of their ‘invoice knowledge domain’ the invoice concept was irrelevant and no changes
took place. Therefore, we say that the knowledge domain specifies the requirements for
transformational and factual knowledge at time t and decides on its correctness, or rather its
adequacy.
Knowledge changes may have serious and costly consequences. Every change of a
fundamental concept of an IATM can affect a wide range of IATMs of the interacting
environment. The worst-case scenario is that the change propagates errors through the
whole network of interacting IATMs. Some of the errors might emerge immediately, because
an interacting IATM does not accept its input anymore, since it does not know anything about
the intended change. Or it accepts the input somehow but it produces obviously false output.
Other errors might not be detected for a long time, and when they emerge, other changes
might have taken place in between and it takes a lot of time to find the cause. The question
is: can such situations be avoided? The cheerless answer is, once again, that a lot can be done to
minimise negative effects but, for theoretical and practical reasons, we can never be sure.
c) Adaptation of syntax:
With respect to knowledge evolution, this case is a special case of b). But the required
changes only affect the syntax of input and output of the respective IATMs involved.
Therefore, we don't have to worry about a possible chain reaction as in case b). If the OCR
of company CC can't cope with Chinese, then it might ignore the input and wait for an
invoice from BC written in English. And if BC does not cooperate, the purchasing department could decide
to change the supplier. This strategy works as long as there are enough alternatives or as
long as the interaction is only of an occasional nature. In the example, it probably depends on
the market power of both parties. If BC has an unchallenged supremacy in the market, CC
will have to adapt. As soon as the interaction gets more intense, one of the parties has to
adapt its transformational knowledge. In this case, either BC will learn English or CC will
learn Chinese or they will agree on a third language.
We summarise the results of this section by the following statements about the class of
interactive adaptive Turing machines:
S3) The more intense the interaction between intelligent agents is, the more likely it is that
contradictions will emerge and the higher the pressure to resolve the contradictions
will be. The resolution can contribute to harmonized world views of agents.
S4) There are two options for resolving contradictions between two IATMs. Either one
will win recognition or both agree to resolve the contradictions by a consistent
unification of their transformational and factual knowledge. If the IATMs belong to
different knowledge domains, this may lead to a unification of the domains.
S5) Changes of fundamental concepts can have far-reaching and costly consequences.
These statements cannot be proven within the IATM model, because the normative aspects
of the 'need' or the 'pressure' to resolve contradictions are not formalised within the model.
Nevertheless, in Section 6, we will present some real-world observations of phenomena of
knowledge evolution that support S1) to S5).
Based on the definitions of the previous Sections, we can finally define the 'noosphere'.
Definition 7 Noosphere: The noosphere is the evolving global network of the world views of
all interacting humans and their cognitive devices.
Fig. 3 visualises a network of 4 interacting agents, i.e. a small network of networks. Each
agent's network of knowledge (see also Fig. 2) consists of knowledge belonging to different
knowledge domains (KD1 - KD5) and of a network of non-conceptual knowledge. In the
evolution of each domain, there may be many agents involved. We model this by interaction
of the agents (symbolised by arrows). Agents belonging to the same knowledge domain may
still have different world views, and this may have a significant influence on the knowledge
domain. The influence is twofold. First, it arises from the non-conceptual layers of
knowledge. If the sensory of two agents provides for different experiences, it might influence
their attitude towards some knowledge domains. Second there are of course influences from
the other knowledge domains that the agent belongs to. The interaction between agents or
between an agent and nature leads to knowledge propagation and in case of disruptions to
knowledge evolution23 (see rule S2).
Fig. 3: A small network of networks - four interacting agents whose knowledge networks comprise different knowledge domains (KD1 - KD5) and non-conceptual knowledge.
With the definitions and theoretical results of the previous Sections at hand, we can now
reassess the adequacy of the model. In Section 5, we demonstrate that the theory provided
can be seen as a framework for different branches of evolutionary and social epistemology.
23 If we think of knowledge evolution by interaction with other agents we first think of the evolution of conceptual knowledge. If the agent is human and the non-conceptual knowledge is about human cognitive mechanisms then the evolution by interaction could be thought of as biological reproduction. A thorough investigation of this idea is beyond the scope of this paper.
aspects – is a knowledge process, and that the natural selection-paradigm for such
knowledge increments can be generalized to other epistemic activities, such as
thought, learning, and science. … of all the analytically coherent epistemologies
possible, we are interested in those, (or that one), compatible with the description of
man and of the world provided by contemporary science'24.
We think that our network model of knowledge evolution for both individuals and knowledge
societies provides a formal description framework for such an epistemology. The model
specifies important epistemic concepts like 'factual' and 'transformational' knowledge,
individual 'world views', super-individual 'knowledge domains', the overall 'noosphere' and the
'propagation' and 'evolution' of knowledge. What remains to be done, is to integrate those
naturalistic theories that explain the causes promoting the propagation and evolution of
knowledge. To give an example, we indicate how one might integrate Gerhard
Vollmer's naturalistic model of evolutionary epistemology of cognitive mechanisms25 and Karl
Popper's and/or Philip Kitcher's approach to the evolution of super-individual scientific
theories.
In section 3.2, we modelled the different layers of an agent's transformational and factual
knowledge (see Fig. 2). The layer model resembles, and can be interpreted as a formalisation
of, Gerhard Vollmer's hierarchical structure of human knowledge. He calls the layers
'sensation', 'perception', 'experience' and several layers of 'scientific knowledge' (see
VOLLMER, Gerhard (2003), Band 1, p.33 or p. 89). By his 'projective model' Vollmer
describes and explains how human cognition reconstructs (i.e. transforms) sensation to
perception, perception to experience and finally experience to scientific knowledge.
Moreover, Vollmer's philosophy describes and explains the 'fit' of epistemological
mechanisms to the 'mesocosmic' world. His naturalistic approach refers to biological
evolutionary theory, physics, and cognitive sciences. From these and other considerations,
he derives his 'hypothetical realism'.
The evolutionary mutation and selection processes can in principle be modelled by IATMs
that represent so-called evolutionary or genetic algorithms26. The transformational step from
sensation to perception can also be described as an IATM. The steps from perception to
experience and from experience to scientific knowledge are of a different nature. Vollmer
does not describe the interactive processes within scientific communities or influences from
others outside the community that lead to the evolution of scientific theories. Nor does he
describe the influence of a scientist's world view on scientific theory building. According to
Vollmer's definition, evolutionary epistemology does not describe and explain the evolution of
human knowledge, but only the evolution of cognitive mechanisms27.
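For readers unfamiliar with such algorithms, the following minimal genetic-algorithm sketch shows the mutation-and-selection loop mentioned above; the bit-string encoding, the fitness function and all parameters are arbitrary illustrative assumptions.

```python
# Minimal genetic-algorithm sketch: selection and mutation over bit strings.
# The encoding, fitness function and parameters are arbitrary assumptions.
import random

def evolve(fitness, length=16, pop_size=30, generations=50, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # selection: keep the fitter half
        survivors = pop[: pop_size // 2]
        children = [[1 - bit if random.random() < mutation_rate else bit
                     for bit in parent]           # reproduction with random mutation
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)                        # toy goal: maximise the number of 1s
print(best, sum(best))
```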
a 'conjecture'. A good theory must be falsifiable, and as such it is possible that new facts, i.e.
messages from the environment, refute the theory. Then the existing theory or part of it has
to be adapted. Therefore, genuine science (in contrast to metaphysics) should be seen as a
progressive evolutionary process, i.e. a converging knowledge domain. Philip Kitcher reflects
in more detail on the 'division of cognitive labour' within a scientific community, i.e. the message
exchange processes within the knowledge domain network29. Moreover, he describes and
explains the 'consensus practice' within scientific communities and he stresses the influences
of individual beliefs, i.e. the agents' 'world views' according to our terminology (see also our
example in Sections 3 and 4).30
Within this article, we can only adumbrate the idea of how to integrate existing evolutionary
and social epistemology approaches within the noosphere framework. Bradie and Harms'
article31 gives a good overview and classification of evolutionary epistemology approaches,
and Goldman's article32 gives an overview of social epistemologies, some of which are
integration candidates. The model is flexible and powerful enough to integrate different
individual and super-individual (e.g. social and cultural) naturalistic theories of knowledge
evolution. The challenge of a unified naturalistic epistemology is to put the right pieces
together and describe them within the framework provided. A unified theory should at least
describe and explain the mutual influences and the promoters of individual and super-
individual knowledge evolution. Moreover, it should integrate the technical aspects of
knowledge propagation and evolution. In the following Section, we will demonstrate that,
besides its descriptive power, the adaptive network model of knowledge can also explain
knowledge evolution phenomena derived from complex network research.
6 Noosphere Epistemology
So far, the network model of knowledge has served as a basis for formal definitions of central
epistemic concepts and as a description framework for existing epistemologies. We now
describe and explain some of the revolutionary changes in information technology. From our
point of view, future epistemology should embrace the ongoing revolution of information
technology, because it fundamentally changes the ways knowledge is propagated,
processed, represented and developed. Moreover, it changes the division of cognitive labour
between humans and machines and it changes the way we think33. With this article, we
unfold a (non-formal) perspective on the subject, which we call noosphere epistemology. We
would like to find answers to the following questions:
• What are the characteristics of the evolution of the noosphere since the emergence of the
Internet and the World Wide Web?
• Can we expect new sources of knowledge?
• Is there evidence of a convergent knowledge evolution as Teilhard postulated?
• Since the noosphere is modelled as a network, what can we learn from complex network
research?
• According to the theoretical model, there is no difference in principle between individual
knowledge and knowledge corpora. Are there empirical indications that the demarcation
between an individual's knowledge and her/his/its environment blurs?
29 see KITCHER, Philip (1990)
30 see KITCHER, Philip (1993) or GOLDMAN, Alvin (2006) for a short summary of Kitcher's ideas.
31 see BRADIE, Michael and HARMS, William (2008)
32 see GOLDMAN, Alvin (2006).
33 The 'way we think' is predominantly an aspect of the philosophy of mind and not in the scope of this article. An interesting interdisciplinary discussion on the topic can be found here: http://www.edge.org/q2010/q10_index.html
In section 6.2 we study knowledge evolution trends due to information technology (IT),
especially the revolution of the Internet, and its contribution to the propagation and evolution
of knowledge. We argue that the impact on the rest of the noosphere is enormous, although
in December 2009 only about 26% of the world's population had access to the Internet34.
This article mentions just a few aspects of the evolution; most of the analysis must be left to
future work.
Evolution of the Internet and World Wide Web as a breakthrough for the propagation of
knowledge
The invention of the Internet and World Wide Web brought a new infrastructure for the
propagation of information. But propagation of information does not necessarily mean
propagation of conceptual knowledge. Only if there is an agent that is able to ‘understand’
the information on the Web page can we say that conceptual knowledge propagates. If a
PC's browser program processes a Web page it does not 'know' anything about the
conceptual content of the page. The non-conceptual transformational and factual knowledge
of the browser is only about how to read a sequence of HTML tagged letters and pictures
and how to display them in a colourful way on the screen. The human interacting with the
browser program may be able (or not) to understand and accept the conceptual content and
generate new knowledge out of the Web page. After all, the WWW in its first phase, now
called Web 1.0, brings much better and faster access to conceptual knowledge for millions
of people. More and more people have the chance to get to know new knowledge domains
they did not know before. This may cause a significant change in those people's world views.
Web 1.0 does not provide many possibilities for human agents to give feedback to Web
content. There is only the choice to accept the content or not. Since the emergence of so-
called Web 2.0 technologies, new feedback and collaboration concepts have been
developing and therefore, the evolution of knowledge domains accelerates once again. Due
to Wikipedia and the so-called social networks, people around the world now have the
chance to share the same cultural and scientific knowledge domains and to build new
domains. New virtual organisations evolve and enable people to collaborate on a worldwide
scale. So far, we can assert that the evolution of the Internet and the World Wide Web
improves the communication and global growth of conceptual, cultural and scientific
knowledge domains and therefore accelerates the evolution of these domains.
34 This number is according to http://www.Internetworldstats.com/stats.htm, retrieved 2009-12-20.
established. The most important barriers are of a semantic or conceptual nature. As in the
example of Section 4, in industries around the world there exist many different concepts
about an ‘invoice’, an ‘order’, a ‘shipping note’ or other business objects. As long as these
differences are not settled, machines cannot ‘talk’ to each other on a conceptual basis.
Several organisations have tried to address this problem35. If they succeed, business
computers around the world will participate in the same business knowledge domains and
could be enabled to do business autonomously around the world.
Besides the business knowledge domains, of course, many other knowledge domains are
not yet accessible to machines. The WWW is full of such knowledge. In 2004, Tim Berners-
Lee, the inventor of the World Wide Web, proposed the so-called Semantic Web36. The basic
idea is to enable computers to analyse the conceptual knowledge of the Web. It will then be
possible for machines to derive new knowledge by combination or 'serendipity'37. Every
Internet-connected agent will then have immediate access to the information needed in
her/his/its current context, if she/he/it divulges information about her/his/its context. One
condition for the implementation of the Semantic Web is the development of ontologies and
knowledge representation concepts38. If the Semantic Web becomes reality, it will inevitably
push the global unification of knowledge domains, because contradictions resulting from
different basic concepts will be eliminated by design.
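As a toy illustration of the underlying idea, knowledge published as subject-predicate-object triples can be combined mechanically to derive statements that no single source stated explicitly. The triples and the single transitivity rule below are invented for illustration and are far simpler than real Semantic Web ontologies.

```python
# Toy sketch of Semantic-Web-style inference over subject-predicate-object
# triples; the data and the single transitivity rule are illustrative only.

triples = {("BC", "is_a", "computer_supplier"),
           ("computer_supplier", "is_a", "supplier"),
           ("CC", "buys_from", "BC")}

def derive(triples):
    """Add derived triples until a fixed point (here: transitivity of 'is_a')."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        for a, p1, b in list(derived):
            for c, p2, d in list(derived):
                if p1 == p2 == "is_a" and b == c and (a, "is_a", d) not in derived:
                    derived.add((a, "is_a", d))
                    changed = True
    return derived

print(("BC", "is_a", "supplier") in derive(triples))   # True, although never stated
```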
In principle, we can expect that machines will get their own senses. In scientific research
(physics, biology, astronomy, meteorology and others) they already sense facts about our
world (microcosm and macrocosm) that we cannot directly observe by means of our own
sensory apparatus.
'Today, the scale-free nature of networks of key scientific interest, from protein
interactions to social networks and from the network of interlinked documents that
make up the WWW to the interconnected hardware behind the Internet, has been
established beyond doubt. The evidence comes not only from better maps and data
sets but also from the agreement between empirical data and analytical models that
predict the network structure.'
The consequences for our discussion about knowledge propagation and knowledge
evolution are twofold. First, it is generally known that some of the WWW and Internet hubs
use the links to accumulate enormous amounts of data. Moreover, they distribute data and
they decide which data to distribute, i.e. they decide which knowledge to propagate. The so-
called page-rank mechanisms of the big search engines, for instance, establish a knowledge
selection mechanism. Even if the selection is meant to serve the receiver in order to support
his/her/its needs, it inevitably leads to favouritism of some web content and hence to the
perception of the respective factual knowledge by many agents. Whether we like it or not,
this phenomenon contributes to the unification of knowledge domains and hence to the
convergence of world views of many agents.
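For illustration, a page-rank-style score can be computed by a simple power iteration over a link graph; the tiny graph, the damping factor and the iteration count below are assumptions, and real search engines combine many more signals.

```python
# Minimal PageRank-style power iteration over a tiny link graph.
# The graph, damping factor and iteration count are illustrative assumptions.

def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for page, outgoing in links.items():
            for target in outgoing:                 # spread the page's rank over its links
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

ranks = pagerank({"hub": ["a", "b"], "a": ["hub"], "b": ["hub"], "c": ["hub"]})
print(sorted(ranks, key=ranks.get, reverse=True))   # the 'hub' accumulates most rank
```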
The second consequence of the recognition of the scale-free nature of the Internet, the
WWW and other networks is that completely new knowledge domains have been evolving
and they are different in what they can tell us about the world. The basic idea behind the new
methods of generating knowledge is to explore the petabytes of data accessible and to find
patterns of collective behaviour in nature or human societies. Vice versa, it is possible to
derive knowledge about individual humans just by comparing a profile of their individual data
with these patterns. With a plea for the establishment of a 'Computational Social Science',
BARABASI et al. (2009) write, for instance:
We live life in the network. We check our e-mails regularly, make mobile phone calls
from almost any location, swipe transit cards to use public transportation, and make
purchases with credit cards. Our movements in public places may be captured by
video cameras, and our medical records stored as digital files. We may post blog
entries accessible to anyone, or maintain friendships through online social networks.
Each of these transactions leaves digital traces that can be compiled into
comprehensive pictures of both individual and group behaviour, with the potential to
transform our understanding of our lives, organizations, and societies.
Kevin Kelly, the founding executive editor of Wired magazine, comments in (KELLY, Kevin et al.
(2009), 'The End of Theory') on the method of exploring the data as follows:
'My guess is that this emerging method will be one additional tool in the evolution of
the scientific method. It will not replace any current methods (sorry, no end of
science!) but will complement established theory-driven science. Let's call this data
intensive approach to problem solving Correlative Analytics.'
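As a toy illustration of what such 'Correlative Analytics' might look like, the following sketch scans a few behavioural traces for strongly correlated pairs of variables; the data and the threshold are invented purely for illustration.

```python
# Toy sketch of 'Correlative Analytics': scan behavioural traces for strongly
# correlated pairs of variables. Data and threshold are invented for illustration.
from itertools import combinations
from statistics import correlation   # available in Python 3.10+

traces = {
    "mobile_calls":   [12, 15, 9, 20, 22, 18, 25],
    "transit_swipes": [2, 2, 1, 3, 3, 2, 4],
    "card_purchases": [5, 7, 4, 9, 10, 8, 12],
}

def strongest_correlations(traces, threshold=0.8):
    pairs = []
    for a, b in combinations(traces, 2):
        r = correlation(traces[a], traces[b])
        if abs(r) >= threshold:
            pairs.append((a, b, round(r, 2)))
    return sorted(pairs, key=lambda t: -abs(t[2]))

print(strongest_correlations(traces))   # e.g. [('mobile_calls', 'card_purchases', ...), ...]
```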
In 2008, the US National Science Foundation launched a new programme called Cluster
Exploratory (see http://www.nsf.gov/pubs/2008/nsf08560/nsf08560.htm). The programme's
main goal is to 'explore innovative research ideas in data-intensive computing'. If Barabasi
and Kelly are right, the results of this program and similar projects will have a significant
impact on our knowledge about the world. We will observe phenomena that will lead us to an
adaptation of scientific and social theories and to the development of new scientific
disciplines. Beyond the epistemic consequences, ethical questions arise.
The conclusion of this Section is that information technology accelerates the evolution of
the noosphere. Some knowledge domains evolve and converge very rapidly and others may
vanish. New knowledge domains emerge by means of new techniques and sensors far
beyond human capabilities. All connected agents get more and better access to
transformational and factual knowledge. If we assume that these trends will continue, any
agent will have immediate access to all knowledge required at any moment of her/his/its
lifetime. In such a scenario, we will not be able to differentiate between the knowledge of a
single agent and the knowledge of the overall noosphere. Knowledge will simply come out of
the 'cloud'41 or out of the noosphere, and we will be part of it. It would not be relevant where the
knowledge comes from and human brains would not need to 'burden' themselves with factual
knowledge they do not actually use. Knowledge would migrate from the individual's memory
to the environment. The demarcation between an individual's knowledge and the
environment would blur. This would be a practical affirmation of the theoretical model,
according to which there is no difference in principle between the ontogeny of a single
individual's knowledge and the phylogeny of knowledge corpora. Another important epistemic
consequence of the Semantic Web and 'correlative analytics' is that we will not be able to
identify the source of knowledge any more. Therefore, we will not be able to ask any
individuals for their 'justification'.
In the previous Section, we discussed the current and near future evolution of the noosphere.
One result was that the knowledge domains develop on a global scale, some evolve and
converge very rapidly, some vanish, and new domains emerge. However, it is not at all clear,
whether this will lead towards Teilhard’s vision of the Omega Point, according to which
humankind could be united (‘in several million years’) in a harmonised world view. As some
research results about the World Wide Web indicate, scale-free networks (with directed
edges) can be 'fragmented'; this means that large parts of the web are disconnected from
each other (see BARABASI, Albert-László (2004)). The overall structure of the network of
knowledge, the noosphere, is so far unknown, but we may assume that there also exists
a multitude of disconnected knowledge domains, because the propagation of knowledge
relies heavily on the Internet and WWW. Moreover, the propagation and evolution of
knowledge are dynamic properties of the noosphere and research on the dynamics of
complex networks has just begun. Last but not least, the 'success' may depend on the
nature and quality of the different knowledge domains. Organised crime, terrorism, dictatorial
regimes and so on all have their own knowledge domains, and they are all eager to
protect them against their enemies. The worldwide propagation of knowledge may help to
undermine the power of oppressive structures. However, as long as the usage of the world's
natural resources discriminates against large parts of the world, new (knowledge and
physical) conflicts will always arise and Teilhard's vision cannot come true. Although we do
not know whether Teilhard is right or not, it is an interesting thought experiment to consider what
Teilhard's Omega Point would be like from the framework's point of view.
If the Omega Point became reality, every single agent (humans and technical devices) would
be connected to the noosphere. The noosphere would be global and it would be free of
obvious contradictions. Every agent would live in harmony with every other agent she/he/it is
interacting with. Each agent's perception of the world would be perfectly compatible with all
knowledge (especially scientific knowledge) about the world. Every single observation and
every single interaction of an agent with nature (even with her/his/its own physical body)
would immediately contribute to the perception and if necessary to the adaptation of the
whole noosphere. The main goal of the noosphere would be to survive the challenges of
nature and the universe.
41 The term 'cloud' is in use in Information Technology (IT) and is a metaphor for computer networks like the Internet. 'Cloud Computing' means that IT-services can come from anywhere and that users don't have to care (and do not have the chance to find out) about the origin of the different services.
References
DAVIES, John and STUDER, Rudi and WARREN, Paul (2006), Semantic Web
Technologies: Trends and Research in Ontology-based Systems Wiley.
ISBN 978-0470025963
GETTIER, Edmund, (1963), Is Justified True Belief Knowledge?, in Analysis, Vol. 23,
pp. 121-123. Online text http://www.ditext.com/gettier/gettier.html
GOLDIN, Dina and WEGNER, Peter (2003), Computation Beyond Turing Machines.
Comm. ACM, Apr. 2003.
GOLDIN, Dina and WEGNER, Peter (2005), The Interactive Nature of Computing:
Refuting the Strong Church-Turing Thesis
GOLDMAN, Alvin (2006), Social Epistemology,
http://plato.stanford.edu/entries/epistemology-social/
GOTTSCHALK-MAZOUZ, Nils (2007), Internet and the flow of knowledge: Which ethical and
political challenges will we face?, in Philosophy of the Information Society,
Proceedings of the 30th International Wittgenstein Symposium, Kirchberg am Wechsel,
Austria 2007, Volume 2.
KELLY, Kevin et al. (2009), The End of Theory: Will the Data Deluge Make the
Scientific Method Obsolete?, originally published as the cover story 'The End of
Science', Wired Magazine: Issue 16.07
http://www.edge.org/documents/archive/edge248.html#feature
KITCHER, Philip (1990), The Division of Cognitive Labor, The Journal of Philosophy,
87: 5–22.
KITCHER, Philip (1993), The Advancement of Science, New York: Oxford University Press.
LANGDON, W. B. and POLI, R. (2002), Foundations of Genetic Programming,
Springer-Verlag, 2002. ISBN 3540424512
LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), Elements of the theory
of Computation, Prentice Hall, ISBN 0-13-273417-6.
LOECKX, Jacques and SIEBER, Kurt (1984), The Foundations of Program Verification,
2nd ed., Teubner ISBN 3 519 12101 8, Wiley ISBN 0 471 91282 4
MERTON, Robert K. (1957), Social Theory and Social Structure, The Free Press,
Glencoe, Ill. 1957. P. 12
POPPER, Karl (1963), Conjectures and Refutations: The Growth of Scientific Knowledge,
ISBN 0415043182
POSLAD, Stefan (2009), Ubiquitous Computing: Smart Devices, Smart Environments and
Smart Interaction. Wiley. ISBN 978-0-470-03560-3.
http://www.elec.qmul.ac.uk/people/stefan/ubicom/index.html
PUTNAM, H. (1960), 'Minds and Machines', reprinted in Putnam 1975b, 362–385.
TEILHARD DE CHARDIN, Pierre (1959), The Phenomenon of Man, Harper Perennial 1976:
ISBN 0-06-090495-X. Reprint 2008: ISBN 978-0061632655.
TEILHARD DE CHARDIN, Pierre (1965), The Appearance of Man, Collins (UK),
Harper and Row (US).
TEILHARD DE CHARDIN, Pierre (1984), Die Entstehung des Menschen, Deutscher
Taschenbuch Verlag 1984, München. ISBN 3-423-01755-4
VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001), The Turing Machine Paradigm in
Contemporary Computing, in Mathematics Unlimited - 2001 and Beyond, eds.
B. Enquist and W. Schmidt, LNCS, Springer-Verlag, 2000.
VERBAAN, Peter (2005), The Computational Complexity of Evolving Systems,
http://igitur-archive.library.uu.nl/dissertations/2006-0202-200042/full.pdf
VOLLMER, Gerhard (2003), Was können wir wissen? 2 Bde., Leipzig: Hirzel.
© Stefan Pistorius