

Noosphere Epistemology

- A unified description framework for evolutionary and social epistemology -


- by Stefan Pistorius, Zirndorf -

ABSTRACT

According to Pierre Teilhard de Chardin, nature's latest mainstream of evolution is the
‘noosphere’, the global sphere of human thought. We interpret Teilhard’s holistic view on the
noosphere as an evolving network of knowledge. To model the network, we introduce the
concept of interactive adaptive Turing machines (IATM). Based on the IATM model, we
define epistemic concepts like ‘factual’ and ‘transformational’ knowledge, 'knowledge
domains', and the 'propagation' and 'evolution' of knowledge. The model takes into account
both people and their cognitive technical equipment. So the noosphere network embraces
the Internet and all humans. It is complex, dynamic, and adaptive. According to the model,
the ontogeny of an individual’s knowledge follows rules analogous to the phylogeny of
knowledge domains and the overall noosphere. Thus, the model is a first step towards a
unified evolutionary and social epistemology. Moreover, the network view of knowledge
allows us to derive new epistemic insights from observations of the evolving Internet and
from complex network research. In the light of the model, we finally discuss Teilhard's vision
of the noosphere converging towards the so-called Omega Point.

1 Introduction

Pierre Teilhard de Chardin (born 1 May 1881 near Clermont-Ferrand; died 10 April 1955 in
New York) was a French Jesuit, palaeontologist, anthropologist and philosopher. In his work
he tried to reconcile science and religious faith. In his two most important works ‘The
Phenomenon of Man’ (orig.: Le Phénomène Humain, 1955)1 and ‘The Appearance of Man’
(orig.: La Place de l'Homme dans la Nature, 1956)2 he describes the evolution of the
universe from its beginning to the formation of the planets and the evolution of the biosphere.
With the dawn of humankind a new sphere evolves, the noosphere, the sphere of thought.
Now the evolution of the noosphere is the most important thread of evolution. In its first
phase, it expands, conquers the globe, and diversifies into a multitude of different cultures
that evolve, disappear and cross-fertilize each other. In its second phase, which, according to
Teilhard, has just begun, the noosphere is in a state of accelerated convergence. Now the
spiritual forces strive for unification. At the end of this phase, in a few million years, at the
Omega Point, humankind could be united in a collective consciousness, based on a
harmonised world view3. Teilhard was convinced that humans would find ways to bring their
brains to perfection. Between 1948 and 1950, he wrote,

‘I am thinking of the amazing performance of electronic machines (the results and the
great hope of the aspiring ‘Cybernetics’). These devices replace and multiply the
computing and inference capabilities of the human mind by such ingenious methods
and to such an extent that in this direction we can expect an equally great increase in
our abilities as it has brought the evolution of our vision.’4

1 TEILHARD DE CHARDIN, Pierre (1959)
2 TEILHARD DE CHARDIN, Pierre (1965)
3 Teilhard did not seem to be sure about the success of the human race. In 1949 he concluded his work (see Conclusion of TEILHARD DE CHARDIN, Pierre (1965)) with some ‘prospects and prerequisites for the success of the venture man'.
4 TEILHARD DE CHARDIN, Pierre (1984), English translation from German, page 118.


Our approach
The central idea of this article is to describe the knowledge evolution of all humans and their
cognitive technical equipment by means of a dynamic adaptive network model. With this
approach to epistemology, we pursue the following objectives:
• The algorithmic foundation of the model leads to well-defined epistemic concepts.
• The model is a powerful description framework, which embraces both individual and
super-individual knowledge evolution. Therefore, we see it as a first step towards a
unified evolutionary and social epistemology.
• Besides its descriptive power, the network view of knowledge reveals new aspects
about the evolution of knowledge derived from complex network research and
observations about the impacts of the Internet. Thus, we can discuss Teilhard’s
hypothesis of a ‘convergent’ noosphere.

We proceed as follows: the main part of this essay (Sections 1 - 4) is dedicated to a semi-
formal step-by-step introduction of the dynamic network model and all necessary concepts.
To motivate the central definitions we present a detailed example. In Section 2, we define
‘interactive adaptive Turing machines’ (IATMs), which constitute the computational model for a
dynamic adaptive network of interacting intelligent agents. It is similar to a model first
introduced by Jan van Leeuwen and Jiří Wiedermann5 and represents an interactive version
of the classical ‘Universal Turing machine’. From the computational model, we can derive
precise definitions of important epistemic concepts. First, we introduce our notions of ‘factual’
and ‘transformational’ knowledge. In Section 3, we apply these definitions to model the
network of knowledge of a single agent, which we call her/his/its 'world view'. In Section 4,
we look at networks of interacting agents. A group of interacting agents may constitute a
particular field of knowledge, which we call 'knowledge domain'. The knowledge network of
all agents constitutes the overall noosphere. On each level of granularity, from a single
agent's network of knowledge to super-individual knowledge domains and the global
noosphere, knowledge evolution follows similar rules.

In the second part (Sections 5 - 7) of this article, we indicate how to apply the model and
discuss some initial results. In Section 5, we discuss how to integrate existing approaches to
evolutionary and social epistemology. In Section 6, we discuss epistemic consequences
derived from the evolution of information technology, especially the Internet, and from results of
what is known as scale-free network research. In Section 7, we reconsider Teilhard's Omega
Point theory.

2 Standard Turing machines and interactive adaptive Turing machines

The English mathematician Alan Turing provided an influential formalisation of the concepts of
algorithm and computation by means of the so-called Turing machine. In our context an informal
description will suffice:

Definition 1: Standard Turing Machines (TM)


A Standard Turing machine (see Fig. 1) is a device with a finite control and an unbounded
amount of read/write tape-memory. The finite control can assume a finite number of states.
For each non-halting state and the current input symbol (out of a finite alphabet Σ) on the
tape, there is a rule which tells the read/write head what to do next. It reads its input symbol
from the tape, may then write a symbol (out of Σ) onto the tape, move one cell to the left or to
the right, and assume a new state. We say the TM accepts an input string (which must be
finite) if it starts at the beginning of the input string and halts after a finite number of steps in
a halting state. The new, rewritten string on the tape is called the output string.
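For readers who prefer to see the rule table in action, the following small Python sketch (our illustration, not part of the formal definition) interprets such a machine; the dictionary rules maps a (state, symbol) pair to the new state, the symbol to write and the head movement.

    # Minimal Turing-machine interpreter: rules[(state, symbol)] = (new state, write, move).
    def run_tm(rules, input_string, state="start", halting_states=("halt",), max_steps=10_000):
        tape, head = dict(enumerate(input_string)), 0
        for _ in range(max_steps):              # guard, since the TM might never halt
            if state in halting_states:
                return "".join(tape[i] for i in sorted(tape))   # the output string
            symbol = tape.get(head, "_")        # "_" stands for a blank cell
            state, write, move = rules[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return None                             # gave up after max_steps

    # Example: a machine that flips 0s and 1s and halts at the first blank.
    flip = {("start", "0"): ("start", "1", "R"),
            ("start", "1"): ("start", "0", "R"),
            ("start", "_"): ("halt", "_", "R")}
    print(run_tm(flip, "0110"))                 # prints 1001_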

5 see VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001).


On an even more abstract level, every TM is a device that reads a finite input, does some
‘computations’, and produces an output. For the abstract (mathematical) concept of the TM it
makes no difference whether the step-by-step routines (i.e. algorithms) are done by a ticket
machine, a computer or a human. Turing machines are only one of many ways to describe
the mathematical class of what are known as µ-recursive functions6.

Fig. 1: A Standard Turing machine: an endless tape whose cells contain symbols Si out of the finite alphabet Σ, a read/write head, and a finite control with a finite set of states and a finite set of rules.

To avoid a misunderstanding, we have to emphasise that in our epistemological context we
do not intend to model the human brain as in the early theories of Machine State
Functionalism within the philosophy of mind7. The idea is (see Section 3) to use a model of
computation for the definition of knowledge independent of the know-how carrier. It can thus
be attributed to both humans and their 'intelligent devices'.

From the theory of TMs we can derive a result that is important to the subsequent discussion
of knowledge and truth:

Theorem 1: The following problem concerning Turing machines is unsolvable. Given a Turing
machine M and an input string w, does M halt on w?8
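Although the formal proof is only referenced in the footnote, the diagonal idea behind theorem 1 can be gestured at in a few lines of purely hypothetical Python; the function halts below is assumed, not implementable.

    # Hypothetical sketch of the diagonal argument; no such total `halts` can exist.
    def halts(program_source: str, input_string: str) -> bool:
        """Assumed oracle: True iff the program halts on the given input."""
        raise NotImplementedError

    def paradox(source: str) -> None:
        # If the oracle predicts that `source` halts when run on itself, loop forever;
        # otherwise halt immediately. Feeding `paradox` its own source text makes the
        # oracle wrong either way, which is the contradiction behind theorem 1.
        if halts(source, source):
            while True:
                pass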

So far, we have only talked about algorithms and the notion of a single Turing machine that
starts with a fixed input. In order to model an evolving network of intelligent, interacting
agents, four new ingredients need to be added to the model of computation:

• interaction of agents,
• infinity of operation,
• persistent memory and
• non-uniformity of programs.

A Standard TM does not model interaction between an agent (e.g. human or computer) and
its environment or between agents. In reality, agents do not operate in isolation but may be
connected to dynamic networks of virtually unlimited size, with many agents sending and
receiving messages in unpredictable ways. Infinity of operation means that in principle the
interacting agents may continue to interact without a definite end. Because of that, their input
may be infinite as well as their output. In contrast to a Standard Turing machine, humans and
modern computers have a persistent memory even if they are turned off (or are asleep) for a
while. If they start again, further computation may depend on the memory content. Non-
uniformity of programs means that agents in a network may change their algorithms during

6 LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), Section 5 introduces several alternatives.
7 see for instance PUTNAM, H. (1960)
8 for a formal proof see for instance LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), p. 283-284.


operation. Nowadays most computers are regularly upgraded, and their software, which
represents their algorithms, may be fundamentally changed. If the agent represents a
human, the human may have changed the way he/she performs his/her algorithm-like work.

To model this kind of computation we introduce the abstract notion of interactive adaptive
Turing machines (IATMs), similar to the notion of interactive Turing machines with advice,
which were first introduced by Jan van Leeuwen and Jiří Wiedermann9. We omit the
mathematical description and concentrate on the essential features.

Definition 2 (informal): Interactive adaptive Turing machine (IATM)


An interactive adaptive Turing machine (IATM) is a device
• that receives an unbounded sequence of messages (i.e. finite strings) from other
IATMs or 'sensorial data messages' from nature via its input port,
• does 'algorithmic computations' depending on its state, the memory content and the
input received and
• continuously sends an unbounded sequence of messages to other IATMs via its
output port.
• It has an unbounded persistent read/write memory to 'memorise' data (i.e. messages
or intermediate results) as well as its algorithmic rules10.
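As an illustration only, the following Python sketch shows one way the operational core of Definition 2 could be rendered; all names are ours and not part of the formal model. The essential point is that the rules live in the same persistent memory as the data, so an 'adaptation message' can rewrite them.

    # Illustrative sketch of an IATM; the rules are ordinary values in persistent memory.
    class IATM:
        def __init__(self, rules):
            self.memory = {"data": [], "rules": dict(rules)}   # data and algorithm together
            self.state = "start"

        def receive(self, message):
            """Process one message of the unbounded input stream; return output messages."""
            if message.get("kind") == "adaptation":
                # An adaptation message rewrites part of the algorithm itself.
                self.memory["rules"][message["rule_name"]] = message["new_rule"]
                return []
            rule = self.memory["rules"][self.state]
            self.state, outputs, remembered = rule(message, self.memory["data"])
            self.memory["data"].extend(remembered)              # memorise what the rule kept
            return outputs

    # Example: a trivial rule that memorises and forwards every message it receives.
    echo = lambda msg, data: ("start", [msg], [msg])
    m = IATM({"start": echo})
    print(m.receive({"kind": "data", "payload": "invoice line 1"}))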

Because IATMs are the most expressive notion of interactive computation known, we can
expect that any operational model of evolutionary processes can be described by an IATM11.
Below we explain the IATM model of operation and define the essential concepts in a semi-
formal way.

'Interaction', 'network of IATMs', 'environment', 'mutation', and 'adaptation messages' of IATMs
We say an IATM M1 interacts with another IATM M2 if M1 outputs messages to the input of
M2 (and possibly vice versa). The only 'purpose' of interaction is to send or exchange
messages. Some IATMs may memorise every received or sent message; others may
memorise only some of them or none. IATMs receiving 'messages' from nature could
represent human sense organs or technical sensors.

A network of IATMs is made of a set S of interacting IATMs where the nodes are the IATMs
and the message exchange relations within S define the edges12.

For each IATM, everything delivering messages to its input and receiving messages from its
output is called its environment. Accordingly, we define the environment of a network S of
IATMs as everything delivering input from outside S (i.e. other IATMs not in S or nature) and
everything receiving output outside S (i.e. other IATMs not in S13). We can thus look at the
'computation' of a network S of IATMs, i.e. the unbounded interaction process of S with its
environment. For further discussions we need

9 see VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001). A purely mathematical definition can be found in VERBAAN, Peter (2005).
10 Van Leeuwen and Wiedermann use a so-called advice function, which is non-computable and is less intuitive than our read/write memory, which allows rewriting algorithmic rules. Our definition corresponds to the so-called von Neumann architecture of modern computers, where programs and data both reside in the same memory.
11 see GOLDIN, Dina and WEGNER, Peter (2003) and GOLDIN, Dina and WEGNER, Peter (2005) for articles about the expressiveness of interactive computing with persistent memory compared to the classical Turing machine model.
12 A precise and formal definition of a dynamic network (like the Internet) based on an interactive Turing machine concept can be found in VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001).
13 It does not seem 'reasonable' for an IATM to output messages to nature. Instead, we could assume that a human or a computer that applied an algorithm represented by an IATM could interact physically with nature.


Theorem 2: For every finite set S of IATMs there exists a single IATM M that sequentially
implements the same computation as S does14.

In other words, theorem 2 tells us that a whole network or parts of a network of interacting
Turing machines can be regarded as a unity because a single (more complex) machine can
simulate it. Subsequently we will model the knowledge of a single human or a single
computer by a network of IATMs. Thus, the knowledge network of all interacting humans and
their computers is a network of networks. However, even the overall network of all agents
(i.e. human or cognitive equipment) can still be seen as a unity, which we will call the
noosphere. The noosphere as a whole interacts with nature.
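The sequential simulation behind theorem 2 can likewise be gestured at with a toy scheduler; this is our sketch, reusing the IATM class sketched in Section 2 and assuming every output message names its receiver in a 'to' field. A single loop interleaves the steps of all machines in S, so the whole set behaves like one machine.

    # Toy round-robin simulation of a finite set S of IATMs by a single sequential loop.
    class NetworkSimulator:
        def __init__(self, machines):
            self.machines = machines                          # name -> IATM instance
            self.mailboxes = {name: [] for name in machines}

        def deliver(self, name, message):
            self.mailboxes[name].append(message)

        def step(self):
            # One 'macro step' of the simulating machine: every member of S handles
            # at most one pending message, and its outputs are routed onwards.
            for name, machine in self.machines.items():
                if self.mailboxes[name]:
                    for out in machine.receive(self.mailboxes[name].pop(0)):
                        self.deliver(out["to"], out)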

If the interaction between an IATM M and its environment (i.e. other IATMs or nature) leads
to some kind of disruptions (see Section 4) then M or the environment might 'mutate' and
sometimes successfully adapt its algorithm, or the interaction might continually be disturbed.
To mutate, the IATM rewrites parts of its persistent memory containing the algorithmic rules.
In other cases both may get an ‘upgrade’ or ‘adaptation-message’ from a third-party IATM
interacting with both in order to rewrite/adapt the algorithm. If, for instance, M1 and M2 were
computers that exchange erroneous business data, then M3 could be a human or a
computer on the Internet that provides for the upgrade of M1 and/or M2. ‘Adaptation
messages’ are highly effective (far more effective than rare and undirected 'mutations') and
can be regarded as a kind of learning from others.

3 Knowledge and individual world views

In order to motivate the notion of knowledge and knowledge evolution we first present a
typical example of a business process where a group of intelligent agents work together to
settle an invoice. The process will be modelled by IATMs in a semi-formal way and a
definition of factual and transformational knowledge of IATMs will be given. In section 3.2 we
elaborate the kind of knowledge involved and show how to apply the formal definition. We
demonstrate that it can be applied to humans as well as computers.

3.1 Example and definition of knowledge

Example: Company CC orders 10 new PCs for its staff from BC, its supplier of business
computers. After a few days, CC’s in-box agent receives the invoice from BC. To settle
the invoice, several steps have to be performed. We model the process by interacting IATMs.
It should be mentioned that the design of the workflow between the different IATMs is just
one solution out of many that would be possible. For further discussions we
concentrate on the content of the memory (PROP) and on the rules (TRANS) needed.
• IATM-1 is an OCR (Optical Character Recognition) device that scans pieces of paper and
delivers a sequence of characters. In order to fulfil the task it has to ‘know’ the following:
o PROP: in its internal memory there has to be a pattern for each of the possible
letters.
o TRANS: the rules of the IATM specify how to read the pixels on the paper, and
how to match them against the appropriate letter pattern. If successfully matched,
IATM-1 outputs the respective letter.
In other words, IATM-1 translates a stream of pixels into a stream of letters.

14
For a formal proof one has to formulate some more detailed assumptions about the network
protocols and the method of operation of an IATM that we do not define in this article. The idea of the
proof can be found in van LEEUWEN, Jan and WIEDERMANN, Jirii (2001), proposition 10 for the so-
called Internet machine, a model for a time varying set of interacting machines in the Internet.

© Stefan Pistorius
6 13.02.2010 19:08:04

• IATM-2, the invoice registrar, is a device, which takes a stream of letters as its input and
transforms them into meaningful pieces of information like ‘name of supplier is BC’, ‘The
address of BC is ... ’, ‘BC’s account number is ...’ and so on. If it has found every needed
piece of information, it sends all data and a statement ‘The invoice is complete’ to IATM-
3. To do so it might make use of the following:
o PROP: In its internal memory, it might have collected all relevant master data
about the already known suppliers. This information could be helpful if the invoice
could not be completely read by IATM-1 due to bad quality of the printing.
Moreover, in its memory it might have information about the typical structure of
the invoices of all already known suppliers.
o TRANS: The rules of IATM-2 compose letters into words and implement heuristics
such as: supplier names can mostly be found at the top of the page, item
prices on the right, account numbers at the bottom and so on. For a particular
supplier the memorised information about its typical invoice structure might help
to improve the results. If the actual invoice differs in structure, the memory will be
adapted.
• IATM-3, the accountant, gets a stream of data about invoices and, at the end of each
invoice, the information whether the invoice is complete. It computes whether the
summation is correct. It then consults IATM-4/5 (i.e. the purchasing department, the IT-
department) for performance acknowledgement and waits for positive feedback. If
everything is all right, the accountant releases a transfer order to the bank.
o PROP: In its memory, the IATM-3 has information about the prices agreed upon
with the different suppliers. Another IATM that is responsible for supplier contracts
adapts this information regularly.
o TRANS: The rules define the steps to check the invoice for computational errors
and whom to consult for performance acknowledgement. Then the transfer order
has to be generated.
We omit the further steps by IATM-4/5. In order to simplify the example, we did not describe
the cases in which the invoice is incorrect. In reality, therefore, the interaction of the IATMs
might be much more intensive.
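The division of labour just described can be caricatured in a few lines of Python (a hypothetical sketch of ours, with parsing and error handling elided); each function stands for one IATM, its second argument for the PROP part of its memory, and its body for the TRANS rules.

    # Hypothetical sketch of the invoice workflow as three message-passing agents.
    def ocr(pixel_blocks, letter_patterns):
        # TRANS: match pixel blocks against the letter patterns (PROP) and emit letters.
        return "".join(letter_patterns.get(block, "?") for block in pixel_blocks)

    def registrar(letters, supplier_master_data):
        # TRANS: extract named fields from the letter stream (parsing elided here);
        # PROP: master data about known suppliers helps to fill unreadable parts.
        fields = {"supplier": "BC", "amount": 4200.0}
        fields.update(supplier_master_data.get(fields["supplier"], {}))
        return {"fields": fields, "statement": "The invoice is complete"}

    def accountant(invoice, agreed_prices):
        # TRANS: check the amount against the agreed prices (PROP) before paying.
        expected = agreed_prices[invoice["fields"]["supplier"]]
        return "transfer order" if invoice["fields"]["amount"] == expected else "reject"

    # One pass through the workflow:
    letters = ocr(["px1", "px2"], {"px1": "B", "px2": "C"})
    print(accountant(registrar(letters, {"BC": {"account": "123"}}), {"BC": 4200.0}))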

Now we focus on the kind of knowledge needed to run the process.

Definition 3: Factual knowledge of an IATM at time t


The factual knowledge of an IATM M at time t resides in its memory and may consist of the
following:
• data patterns (concepts or sensorial patterns) needed to process and memorise
input.
• propositions (either received from other IATMs or derived from M's own algorithm).

Definition 4: Transformational knowledge of an IATM at time t


The transformational knowledge of an IATM at time t is its algorithm residing in its memory at
t used to analyse input and to derive new messages.

In short, Definition 3 means knowledge-that (derived from knowledge-how) and Definition 4
means knowledge-how. In our context, knowledge-how is the kind of knowledge needed to
derive new factual knowledge from already existing knowledge or from input from nature.
Factual knowledge can be anything from basic concepts and simple propositions
about observations up to propositions in scientific theories. The difference between factual
knowledge and ‘information’ is that the former has to be processed and accepted by an
IATM, whereas information does not need to be processed by anything. However, definitions
3 and 4 also differ fundamentally from the classical definition of knowledge as ‘justified true
belief'15, because the definition is purely mathematical and independent of any knowledge

15 A detailed discussion on the subject is not within the scope of this article and must be left to future work. Edmund Gettier’s short article ‘Is Justified True Belief Knowledge?’ discusses the classical understanding of knowledge and has ignited a very influential discussion on the subject, see GETTIER, Edmund (1963).


carrier and hence there is no ‘belief’ and ‘justification’ and there is no absolute 'truth', as we
will argue in Section 4. In general, the definitions abstract from any concrete human mental
states, motivations or cognitive mechanisms. There is no ‘believer’ as long as we don’t apply
the knowledge definition to humans. If we apply it to humans, then factual knowledge may be
propositions that could be justified belief. Section 3.2 discusses in more detail how
knowledge can be attributed to agents but the knowledge core is an abstract notion
independent of the knowledge holder. This independence is essential for further discussion
in Section 6 because we argue that Teilhard's noosphere is composed of humans and their
technical cognitive equipment, namely the Internet and everything that is used to produce
factual knowledge.

3.2 Individual Worldviews of agents

If we look at the example, it is a real life description of how to implement the interacting
IATMs as a business application on some kind of computer. And indeed in some companies
the process of settling an invoice is completely automated.

But how about humans? In many companies, humans manage the described workflow and
humans seem to work differently. If we look, for instance, at the division of labour between the
OCR and the invoice registrar, then this is definitely not a workflow between two different
humans. Moreover, Definitions 3 and 4 about factual and transformational knowledge do not
seem to be adequate for human knowledge. If we look, for instance, at the rules and memory
content of OCR, it is clear that in this case the definition of transformational and factual
knowledge does not mean anything to a human. Probably, no human could tell how he/she
recognises the letters and whether she/he has some kind of ‘patterns for letters’ (i.e. factual
knowledge of OCR) in his/her head.

On the other hand, if the task of IATM-1 is performed by a human, then she/he must also
recognise the letters and read the respective words in some way. Using their eyes, humans,
too, have to analyse the electromagnetic signals from the piece of paper representing the
invoice and transform them into visual units representing letters. All this happens
subconsciously. It is something that our visual experience conveys to us, somehow. For any
interaction with our environment, our senses act as mediators. These mediators need to map
sensorial signals to some other form of representation. In the example, the result of OCR
was a stream of letters. But it could also be some intermediate code that does not mean
anything to humans. The mapping might even take several steps of intermediate code before
the letters that are input to the invoice registrar are produced16. The interesting fact is that
any interaction with the environment starts with some kind of transformation of sensorial data
that can principally be modelled by an IATM. After one or more non-conceptual
transformations, conceptual output may be produced, something that has meaning to a
human. This conceptual output might be transformed to ‘higher order’ propositional
knowledge or scientific knowledge by means of a ‘conceptual’ IATM. In the example, the
invoice registrar inputs the stream of letters and outputs meaningful propositions. Part of the
conceptual process that the invoice registrar performs is reading - composing letters into words
and assigning meaning to them, such as that the letters ‘BC’ mean ‘the supplier’s name’.

We also argued that a single human would at least combine the roles of the OCR and the invoice registrar. We
could even assume that one (highly qualified) human could also do the work of the

16 It is not within the scope of this article to model exactly the mapping according to cognitive sciences or to model how human cognitive mechanisms have evolved. However, we maintain that in principle we could model this within the IATM framework (see Section 5). The work of G. Vollmer explains and interprets the philosophical implications of this kind of evolutionary epistemology. See for instance VOLLMER, Gerhard (2003).


accountant and the others. In addition, if we consider all the other cognitive capabilities each
human possesses, we must say that a whole network of hundreds or thousands or even
millions of interacting IATMs could be necessary to describe a single human's formalised
conceptual and non-conceptual, factual and transformational knowledge17.

Fig. 2 visualises the network of an agent's (human or computer!) knowledge. It is arranged in
different layers. Each layer represents an IATM, with the processing rules on the left and the
content of its memory on the right. Instead of four layers, there could also be many more or fewer.
In the example, we had only one layer of non-conceptual processing, the OCR device. It all
depends on how we model the intermediate steps. The Layer 2 IATM of Fig. 2 interacts with the Layer
1 IATM and uses its memory content to produce simple propositions, which it outputs to
Layer 3 IATM and possibly to its own memory. Note that in Fig. 2 the term ‘environment’
means the environment of an agent as a whole. For the different IATMs involved, anything
that is outside their own layer is their environment.

Fig. 2: Worldview of an agent = conceptual and non-conceptual knowledge. Input from the environment enters at the bottom and output leaves at the top of a stack of IATM layers. Layer 1 (sensorial data processing) and Layer 2 (intermediate data processing) form several layers of non-conceptual transformational knowledge; the corresponding non-conceptual factual knowledge consists of sensorial data and basic patterns, and of intermediate data and complex patterns. Layer 3 (simple proposition processing) and Layer 4 (complex proposition processing) form several layers of conceptual transformational knowledge; the corresponding conceptual factual knowledge consists of simple propositions and simple concepts, and of complex propositions and complex concepts.

The knowledge units in an agent's memory are related to each other in various ways. As we
demonstrated, all of an agent's knowledge can be regarded as a network of many interactive
adaptive Turing machines. Altogether, they account for her/his/its view of the world, the world
view. For further discussions we need

Definition 5: The world view of an agent at time t is the network of IATMs representing
her/his/its conceptual and non-conceptual transformational and factual knowledge at time t
(see Fig. 2).

17 However, according to theorem 2 all these IATMs can be simulated by a single IATM. Therefore, we can say that all formalised human knowledge at time t can be regarded as a unity.


4 Knowledge evolution and super-individual knowledge domains

The main purpose of interaction is the exchange of messages. Since messages can be
knowledge, interaction processes may lead to knowledge propagation. We talk about
knowledge propagation if one IATM outputs a message and another accepts and memorises
it as input. If the interaction process is disrupted and one party or both parties adapt their
transformational knowledge, we talk about knowledge evolution. In order to motivate the
model of knowledge evolution we will elaborate possible operational faults between
interacting IATMs and their strategies to ‘settle their differences’. We will concentrate on the
interaction between two IATMs. From theorem 2 one can derive that this will be enough to
describe the ‘settling’ process for a whole network. The argument is as follows: If we want to
study the interaction processes in a set S of interacting IATMs, we may begin with any IATM
M in S, then simulate S - {M} by a new IATM S' (possible because of theorem 2) and
then study the interaction of M and S'.

At first sight, this doesn't seem very reasonable for real life situations, but it is! Let us look at
an example: If M is human and exchanges intelligent messages (eMails, chats, whatever)
with her/his intelligent friends A and B via the Internet by means of her/his computer C, all she/he
needs is C! M only interacts with C's keyboard (input) and screen (output) and nothing else!
Nevertheless, we know that in reality, M exchanges intelligent messages with her/his friends
A and B, and C is only a kind of interface. However, A and B are intelligent and they produce
their well-considered messages by some well-designed cognitive processes. If this is the
case, then C can simulate A and B by means of software that implements the well-
designed cognitive processes of A and B. If the software is good enough and passes the
Turing test18 then M doesn't even notice the fraud. The point is that C with its new software,
let us call it C', can simulate the network of A, B and C.

But what if A and B are sitting next to M in her/his living-room? - The answer is: with respect
to the message exchange process it doesn't make a difference. The contact between M and
her/his friends A and B is again via an interface, her/his senses, probably most of all her/his
ears and eyes. The well-considered part of the message exchange is as before. A
background computer could simulate their messages. The more difficult part is the nonverbal
communication. We only have a chance if this can also be formally modelled. Until now this
is science fiction, some sort of a perfect virtual reality, as in the movie ‘The Matrix’. We do
not have to discuss whether this is possible. The point is if it can be formally modelled then it
can be modelled by a single IATM.

In order to understand what kind of disruptions might happen between two interacting IATMs,
we extend the example of Section 3.1, where a sequence of interacting IATMs settles an
invoice. In contrast to the original version of the example we assume that the process may
be disrupted by some errors. We distinguish the following:

a) Erroneous knowledge of an IATM


The invoice registrar may have a bug in its rules or in its factual knowledge base such that it
occasionally states for some invoices that they are complete although they are obviously not.

b) Knowledge of sender is contradictory to knowledge of receiver


In this case, the registrar works as intended, produces the correct English statement but its
output is still not accepted by the accountant. The reason is that the accountant needs some
additional information about the invoice, for instance the VAT, a piece of information that the

18 'The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. It proceeds as follows: a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are placed in isolated locations. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.' (from http://en.wikipedia.org/wiki/Turing_test, retrieved 2010-02-09)


registrar does not analyse because it is not part of its transformational knowledge. Because
of this, for the accountant the registrar's output, ‘The invoice is complete’, is false, although it
is justified true belief (i.e. factual knowledge) of the registrar. The reason is that both have
different concepts of an ‘invoice’. Therefore, the registrar's (transformational and factual)
knowledge contradicts the factual and the transformational knowledge of the accountant.

c) Output alphabet of sender does not match the input alphabet of receiver
The OCR device may have problems reading the letters because BC's invoice is written in
Chinese and it does not have any pattern for Chinese symbols. Consequently, it does not
produce an output the invoice registrar can cope with.

In concrete networks, there are many more possible sources of disruptions resulting from
interaction. For instance, synchronisation problems or message routing problems with loss of
messages and so on are difficult problems in real world networks. We can abstract from
these, because they are not essential for the discussion of knowledge propagation and
evolution.

To resolve the disruptions, the IATMs have to adapt. Such adaptations can in principle take
place after each message read from the input stream. They change the computational
behaviour of an IATM and possibly its memory content. We interpret such adaptations as
evolving knowledge. Based on the described types of disruptions, we will analyse what kind
of adaptations we need, or, in other words, how the knowledge evolution works:

a) Adaptation of a single IATM


If the invoice registrar operates with buggy rules, then the rules have to be adapted. The
nature of the adaptation depends, of course, on the problem. The interesting questions are
how we can avoid such bugs and how we can be sure that the adaptation is a correct
solution to the problem. The answer is: for theoretical and practical reasons, we can never be
sure that in a dynamically changing environment an IATM works as required. Because this
argument is essential for further discussions, we will elaborate on it.

If we first look at the problem from a theoretical point of view, we have to be precise about
what it means to prove that an 'IATM M works as required'. Since M could be adapted any
time, we assume that M, beginning at time t, consumes only one message (i.e. a foreseeable
input string of finite length). By this assumption, we look at M as if it were a Standard Turing
machine (see Definition 1) for a while. Then we need a formal specification of the expected
behaviour of M and a proof that M performs accordingly. Unfortunately, theorem 1 tells us
that we can't even be sure that M will ever halt on the input message. All we could prove is
the so-called partial correctness of M at time t19. Since we are interested in M's performance
in the context of its environment, we have to make assumptions about the environment too.
For if we do not account for M's environment, then it could happen that the interaction does
not work anymore, because the environment has changed the interaction rules without prior
notice and thus it could send an unacceptable message. But if we want to be sure about the
behaviour of the environment, we also need a proof of the partial correctness of the
environment at t20. Even if we did the cumbersome work of formally specifying the required
transformational behaviour at t and proving the partial correctness of both the IATM and its
environment at t, it would only be useful for a period without changes, between t and t + x. The
question is what we expect of M if something in the interaction process changes. We
probably expect M or the environment to adapt somehow. However, a priori we do not know

19 Partial correctness defines correctness neglecting the halting problem. For the theory of program verification see for instance LOECKX, Jacques and SIEBER, Kurt (1984)
20 If the environment of M is nature then this means that we need a proof of the 'transformational behaviour' of nature in order to produce sensorial data. However, there is no way to formally prove that nature's 'behaviour' meets a formal specification, because all we know about nature is (scientific) theory.


exactly how to specify the requirements, since we do not know anything about the possible
changes ahead. Only a posteriori would we be able to formulate an adequate formalistic
specification of the transformational behaviour. As an epistemic consequence, we get: in an
unforeseeably changing environment, the correctness of transformational and factual
knowledge of an IATM cannot be adequately formalised. A reasonable correctness criterion
can only be formulated for periods without unforeseen changes.

So much for the theory. In practice, the approach to verifying the expected behaviour of the
implementation of an IATM (i.e. the software representing an IATM or a human performing
the task) is to test her/him/it systematically. If we wanted to test the invoice registrar of the
example, we would probably test it with a few dozen different invoices. If something went
wrong, we would eliminate the programming error or, if she/he is human, teach her/him to do
better. Moreover, we would assume that the communication standards to the environment
would stay unaltered. If we test systematically, then by and large we would rely on the
registrar. But a test is not a proof. The conclusion is ‘To err is not only human’. The only way
to improve an IATM's erroneous performance is by trial and adaptation on error!

In the above argument, we just used the term correctness. The decisive question is WHO
specifies what correctness (even in times without changes) of the factual and
transformational knowledge of an IATM at time t should be? We will give the answer, after
the discussion of case b).

b) Adaptation of a knowledge domain


This case is even more sophisticated than a). From the perspective of the registrar, an
invoice is somehow different from the accountant's conception of an invoice. If the
accountant and the registrar were human, were at home with their families, and talked to
their family members about, for instance, an invoice from their electricity company, then
probably each of the family members would have a different conception of the properties of
an invoice, but certainly nobody would care or even notice.
environment, the registrar and the accountant need to agree on a more precise invoice
concept. Therefore, at least one of the two IATMs needs to adapt. Moreover, each of the
IATMs in the invoice settling workflow may have to adapt. We say they all belong to the
same ‘knowledge domain’.

Definition 6: Knowledge Domain (see Fig. 3 and Fig. 2)


A knowledge domain is the content of a particular field of knowledge. It consists of factual
knowledge (concepts and propositions) without obvious contradictions21 and transformational
knowledge necessary to deduce the factual knowledge.

A more technical definition: A network of agents exchanging more messages within their
network than with others constitutes a knowledge domain22.
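The 'more messages within than without' criterion lends itself to standard community-detection methods. The following sketch (ours, with invented message counts) treats message frequencies as edge weights and lets a modularity-based routine from the networkx library propose candidate knowledge domains.

    # Illustrative sketch: propose knowledge domains from message-exchange counts.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()
    G.add_weighted_edges_from([          # hypothetical message counts between agents
        ("ocr", "registrar", 200), ("registrar", "accountant", 120),
        ("accountant", "purchasing", 80), ("accountant", "family member", 2),
    ])
    domains = greedy_modularity_communities(G, weight="weight")
    print([sorted(d) for d in domains])  # densely interacting groups = candidate domains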

According to this definition, there may be many different kinds of knowledge domains and
there may be hierarchies of knowledge domains. Some knowledge domains may consist of
scientific knowledge, others of cultural or everyday knowledge and others only of non-
conceptual knowledge. The reason for using the expression ‘without obvious contradictions’
is that we cannot be sure of the existence or non-existence of disruptions, as can be shown
by an argument similar to that in a). Only if contradictions are detected (by tests or accidentally)
and only if the members of the knowledge domain 'feel the pressure' to eliminate these
contradictions will knowledge evolve. In the example, the registrar and the accountant

21 When we use the term contradiction, we are referring to inconsistent knowledge of two IATMs that may lead to disruptions in the interaction process.
22 This definition is probably precise enough to identify algorithmically different domains in a network. - See BARABÁSI, Albert-László (2004), page 171, referring to Flake, Lawrence, Giles from the company NEC, who used the WWW link structure to identify 'communities' in this way.


needed to agree on a common invoice concept. In the context of their families, which are not
part of their ‘invoice knowledge domain’, the invoice concept was irrelevant and no changes
took place. Therefore, we say the knowledge domain specifies the requirements for
transformational and factual knowledge at time t and decides on its correctness, or rather its
adequacy.

Knowledge changes may have serious and costly consequences. Every change of a
fundamental concept of an IATM can affect a wide range of IATMs of the interacting
environment. The worst-case scenario is that the change propagates errors through the
whole network of interacting IATMs. Some of the errors might emerge immediately, because
an interacting IATM does not accept its input anymore, since it does not know anything about
the intended change. Or it accepts the input somehow but it produces obviously false output.
Other errors might not be detected for a long time, and when they emerge, other changes
might have taken place in between and it takes a lot of time to find the cause. The question
is: can such situations be avoided? The cheerless answer is, once again, that a lot can be done to
minimise negative effects but, for theoretical and practical reasons, we can never be sure.

c) Adaptation of syntax:
With respect to knowledge evolution, this case is a special case of b). But the required
changes only affect the syntax of input and output of the respective IATMs involved.
Therefore, we don't have to worry about a possible chain reaction as in case b). If the OCR
of company CC can't cope with Chinese, then it might ignore the input and wait for an invoice
from BC written in English. And if BC does not cooperate, the purchasing department could decide
to change the supplier. - This strategy works as long as there are enough alternatives or as
long as the interaction is only of occasional nature. In the example, it probably depends on
the market power of both parties. If BC has an unchallenged supremacy in the market, CC
will have to adapt. As soon as the interaction gets more intense, one of the parties has to
adapt its transformational knowledge. In this case, either BC will learn English or CC will
learn Chinese or they will agree on a third language.

We summarise the results of this section by the following statements about the class of
interactive adaptive Turing machines:

S1) In an unforeseeably changing environment, there is no adequate a priori correctness
criterion for transformational and factual knowledge. The agents belonging to a
knowledge domain specify the requirements for transformational and
factual knowledge at time t and decide on its adequacy.

S2) Changes of transformational knowledge are triggered by contradictions within a
knowledge domain and contradictions arising from interaction with the
environment. By means of ‘adaptation’ or an 'adaptation-message' from others the
agents of a knowledge domain adapt their knowledge to new requirements. In short:
Knowledge evolution is a process of trial and adaptation on error.

S3) The more intense the interaction between intelligent agents is, the more likely it is that
contradictions will emerge and the higher the pressure to resolve the contradictions
will be. The resolution can contribute to harmonised world views of agents.

S4) There are two options for resolving contradictions between two IATMs. Either one
will win recognition or both agree to resolve the contradictions by a consistent
unification of their transformational and factual knowledge. If the IATMs belong to
different knowledge domains, this may lead to a unification of the domains.

S5) Changes of fundamental concepts can have far-reaching and costly consequences.


These statements cannot be proven within the IATM model, because the normative aspects
of the ‘need’ or the ‘pressure’ to resolve contradictions are not formalised within the model.
Nevertheless, in Section 6, we will present some real world observations of phenomena of
knowledge evolution that support S1) to S5).

Based on the definitions of the previous Sections, we can finally define the 'noosphere'.

Definition 7: Noosphere. The noosphere is the evolving global network of the world views of
all interacting humans and their cognitive devices.

Fig. 3 visualises a network of 4 interacting agents, i.e. a small network of networks. Each
agent's network of knowledge (see also Fig. 2) consists of knowledge belonging to different
knowledge domains (KD1 - KD5) and of a network of non-conceptual knowledge. In the
evolution of each domain, there may be many agents involved. We model this by interaction
of the agents (symbolised by arrows). Agents belonging to the same knowledge domain may
still have different world views, and this may have a significant influence on the knowledge
domain. The influence is twofold. First, it arises from the non-conceptual layers of
knowledge. If the sensory apparatus of two agents provides for different experiences, it might influence
their attitude towards some knowledge domains. Second, there are of course influences from
the other knowledge domains that the agent belongs to. The interaction between agents or
between an agent and nature leads to knowledge propagation and in case of disruptions to
knowledge evolution23 (see rule S2).

Fig. 3: Knowledge domains and world views of four interacting agents. Each agent's knowledge consists of several knowledge domains (KD1 - KD5) and of non-conceptual knowledge layers:

Knowledge domain KD1: Agent 1 + Agent 2
Knowledge domain KD2: Agent 1 + Agent 2 + Agent 3
Knowledge domain KD3: Agent 1 + Agent 3 + Agent 4
Knowledge domain KD4: Agent 2 + Agent 3 + Agent 4
Knowledge domain KD5: Agent 4

Worldview of Agent 1 := KD1 + KD2 + KD3 + non-conceptual knowledge
Worldview of Agent 2 := KD1 + KD2 + KD4 + non-conceptual knowledge
Worldview of Agent 3 := KD2 + KD3 + KD4 + non-conceptual knowledge
Worldview of Agent 4 := KD3 + KD4 + KD5 + non-conceptual knowledge

With the definitions and theoretical results of the previous Sections at hand, we can now
reassess the adequacy of the model. In Section 5, we demonstrate that the theory provided
can be seen as a framework for different branches of evolutionary and social epistemology.

5 The noosphere framework for evolutionary and social epistemology

In 1974 Donald T. Campbell postulated

'An evolutionary epistemology would be at minimum an epistemology taking
cognizance of and compatible with man’s status as a product of biological and social
evolution. In the present essay it is also argued that evolution – even in its biological

23 If we think of knowledge evolution by interaction with other agents we first think of the evolution of conceptual knowledge. If the agent is human and the non-conceptual knowledge is about human cognitive mechanisms then the evolution by interaction could be thought of as biological reproduction. - A thorough investigation of this idea is beyond the scope of this paper.


aspects – is a knowledge process, and that the natural selection-paradigm for such
knowledge increments can be generalized to other epistemic activities, such as
thought, learning, and science. … of all the analytically coherent epistemologies
possible, we are interested in those, (or that one), compatible with the description of
man and of the world provided by contemporary science'24.

We think that our network model of knowledge evolution for both individuals and knowledge
societies provides a formal description framework for such an epistemology. The model
specifies important epistemic concepts like 'factual' and 'transformational' knowledge,
individual 'world views', super-individual 'knowledge domains', the overall 'noosphere' and the
'propagation' and 'evolution' of knowledge. What remains to be done, is to integrate those
naturalistic theories that explain the causes promoting the propagation and evolution of
knowledge. To give an example, we indicate one possibility of integrating Gerhard
Vollmer's naturalistic model of evolutionary epistemology of cognitive mechanisms25 and Karl
Popper's and/or Philip Kitcher's approaches to the evolution of super-individual scientific
theories.

In Section 3.2, we modelled the different layers of an agent's transformational and factual
knowledge (see Fig. 2). The layer model resembles, and can be interpreted as a formalisation
of, Gerhard Vollmer's hierarchical structure of human knowledge. He calls the layers
'sensation', 'perception', 'experience' and several layers of 'scientific knowledge' (see
VOLLMER, Gerhard (2003), Vol. 1, p. 33 or p. 89). By his 'projective model' Vollmer
describes and explains how human cognition reconstructs (i.e. transforms) sensation into
perception, perception into experience and finally experience into scientific knowledge.
Moreover, Vollmer's philosophy describes and explains the 'fit' of epistemological
mechanisms to the 'mesocosmic' world. His naturalistic approach refers to biological
evolutionary theory, physics, and cognitive sciences. From these and other considerations,
he derives his 'hypothetical realism'.

The evolutionary mutation and selection processes can in principle be modelled by IATMs
that represent so-called evolutionary or genetic algorithms26. The transformational step from
sensation to perception can also be described as an IATM. The steps from perception to
experience and from experience to scientific knowledge are of a different nature. Vollmer
does not describe the interactive processes within scientific communities or influences from
others outside the community that lead to the evolution of scientific theories. Nor does he
describe the influence of a scientist's world view on scientific theory building. According to
Vollmer's definition, evolutionary epistemology does not describe and explain the evolution of
human knowledge, but only the evolution of cognitive mechanisms27.
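As a purely illustrative aside (our example, not Vollmer's), the mutation-selection loop that such an IATM would have to implement can be sketched as a toy genetic algorithm over bit strings with an arbitrary fitness target.

    import random

    # Toy genetic algorithm: evolve bit strings towards an arbitrary target by
    # variation (mutation) and selection; this is the loop an 'evolutionary' IATM would run.
    TARGET = [1] * 16
    fitness = lambda genome: sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)        # selection: the fitter half survives
        survivors = population[:15]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]
    print(fitness(population[0]), "of", len(TARGET))      # best individual after 100 rounds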

We think that Donald T. Campbell's integrating approach to evolutionary epistemology (as
cited above) leads to an even deeper understanding of human knowledge. Using the
dynamic network model, it is obvious how to incorporate the interaction processes within a
scientific community as well as the influences from outside the domain or from a scientist's
own world view (see Fig. 3). We can describe how an individual learns or is influenced by
others and how personal experience can lead to disruptions and subsequently to knowledge
evolution. The disruptions could contribute to the evolution of scientific theories. Karl
Popper's conjectures and refutations approach to evolutionary epistemology of theories28
addresses some of these aspects. According to Popper (as well as to our framework), there
is no absolute truth. Every scientific theory (i.e. a 'knowledge domain') can only be valued as
24 see CAMPBELL, Donald T. (1987), p. 46 (p. 1 of Campbell’s article)
25 see VOLLMER, Gerhard (2003)
26 see for instance LANGDON, W. B. and POLI, R. (2002) for the description of this class of algorithms. We assume that the biological theory of evolution cannot yet fully explain every detail of these complex processes but in principle we could integrate them within our operational model.
27 see VOLLMER, Gerhard (2003), Vol. 1, page 74
28 see POPPER, Karl (1963)


a ‘conjecture’. A good theory must be falsifiable, and as such it is possible that new facts, i.e.
messages from the environment, refute the theory. Then the existing theory or part of it has
to be adapted. Therefore, genuine science (in contrast to metaphysics) should be seen as a
progressive evolutionary process, i.e. a converging knowledge domain. Philip Kitcher reflects
in more detail on the 'division of cognitive labour' within a scientific community, i.e. the message
exchange processes within the knowledge domain network29. Moreover, he describes and
explains the 'consensus practice' within scientific communities and he stresses the influences
of individual beliefs, i.e. the 'agents' world views' according to our terminology (see also our
example in Sections 3 and 4).30

Within this article, we can only adumbrate the idea of how to integrate existing evolutionary
and social epistemology approaches within the noosphere framework. Bradie and Harms’
article31 gives a good overview and classification of evolutionary epistemology approaches,
and Goldman’s article32 gives an overview of social epistemologies, some of which are
integration candidates. The model is flexible and powerful enough to integrate different
individual and super-individual (e.g. social and cultural) naturalistic theories of knowledge
evolution. The challenge of a unified naturalistic epistemology is to put the right pieces
together and describe them within the framework provided. A unified theory should at least
describe and explain the mutual influences and the promoters of individual and super-
individual knowledge evolution. Moreover, it should integrate the technical aspects of
knowledge propagation and evolution. In the following Section, we will demonstrate that,
besides its descriptive power, the adaptive network model of knowledge can also explain
knowledge evolution phenomena derived from complex network research.

6 Noosphere Epistemology

So far, the network model of knowledge has served as a basis for formal definitions of central
epistemic concepts and as a description framework for existing epistemologies. We now
describe and explain some of the revolutionary changes in information technology. From our
point of view, future epistemology should embrace the ongoing revolution of information
technology, because it fundamentally changes the ways knowledge is propagated,
processed, represented and developed. Moreover, it changes the division of cognitive labour
between humans and machines and it changes the way we think33. With this article, we
unfold a (non-formal) perspective on the subject, which we call noosphere epistemology. We
would like to find answers to the following questions:

• What are the characteristics of the evolution of the noosphere since the emergence of the
Internet and the World Wide Web?
• Can we expect new sources of knowledge?
• Is there evidence of a convergent knowledge evolution as Teilhard postulated?
• Since the noosphere is modelled as a network, what can we learn from complex network
research?
• According to the theoretical model, there is no principal difference between individual
knowledge and knowledge corpora. Are there empirical indications that the demarcation
between an individual's knowledge and her/his/its environment blurs?

29 see KITCHER, Philip (1990)
30 see KITCHER, Philip (1993) or GOLDMAN, Alvin (2006) for a short summary of Kitcher's ideas.
31 see BRADIE, Michael and HARMS, William (2008)
32 see GOLDMAN, Alvin (2006).
33 The 'way we think' is predominantly an aspect of the philosophy of mind and not in the scope of this article. An interesting interdisciplinary discussion on the topic can be found here: http://www.edge.org/q2010/q10_index.html


6.1 The degree of knowledge propagation as a measure of noosphere evolution

From S1 – S5 we can conclude that as long as there is intense and contradiction-free
interaction, knowledge is accepted as adequate (i.e. the truth criterion) and can propagate. From
this we can derive that everything that helps to support the propagation and successful
adaptation of knowledge among the millions and billions of agents contributes to the evolution
of the noosphere towards ‘relative truth’.

In Section 6.2 we study knowledge evolution trends due to information technology (IT),
especially the revolution of the Internet, and its contribution to the propagation and evolution
of knowledge. We argue that the impact on the rest of the noosphere is enormous, although
in December 2009 only about 26% of the world's population had access to the Internet34.
This article mentions just a few aspects of the evolution; most of the analysis must be left to
future work.

6.2 Evolution of information technology as a catalyst for noosphere evolution

Evolution of the Internet and World Wide Web as a breakthrough for the propagation of
knowledge
The invention of the Internet and World Wide Web brought a new infrastructure for the
propagation of information. But propagation of information does not necessarily mean
propagation of conceptual knowledge. Only if there is an agent that is able to ‘understand’
the information on the Web page can we say that conceptual knowledge propagates. If a
PC's browser program processes a Web page it does not 'know' anything about the
conceptual content of the page. The non-conceptual transformational and factual knowledge of the browser is only about how to read a sequence of HTML-tagged letters and pictures and how to display them in a colourful way on the screen. The human interacting with the browser program may or may not be able to understand and accept the conceptual content and generate new knowledge out of the Web page. Nevertheless, the WWW in its first phase, now called Web 1.0, brings much better and faster access to conceptual knowledge for millions of people. More and more people have the chance to get to know knowledge domains they did not know before. This may cause a significant change in those people's world views.
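
As a toy illustration of this division of labour (a sketch of our own, with invented page content), a browser-like parser can decompose an HTML page into tags and character data while remaining entirely ignorant of the concepts the text expresses:

from html.parser import HTMLParser

# A browser-like component: it 'knows' the transformational rules of HTML
# (which tags open and close, where the character data sits), but the
# conceptual content of the text remains opaque to it.
class StructureOnlyParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("open", tag))

    def handle_endtag(self, tag):
        self.events.append(("close", tag))

    def handle_data(self, data):
        if data.strip():
            self.events.append(("text", data.strip()))

page = "<html><body><h1>Noosphere</h1><p>Knowledge propagates.</p></body></html>"
parser = StructureOnlyParser()
parser.feed(page)
for event in parser.events:
    print(event)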

Web 1.0 does not provide many possibilities for human agents to give feedback to Web
content. There is only the choice to accept the content or not. Since the emergence of so-
called Web 2.0 technologies, new feedback and collaboration concepts have been developing, and therefore the evolution of knowledge domains accelerates once again. Due
to Wikipedia and the so-called social networks, people around the world now have the
chance to share the same cultural and scientific knowledge domains and to build new
domains. New virtual organisations evolve and enable people to collaborate on a worldwide
scale. So far, we can assert that the evolution of the Internet and the World Wide Web improves the communication and global growth of conceptual, cultural and scientific knowledge domains and therefore accelerates the evolution of these domains.

Evolution of new conceptual knowledge layers


As mentioned before, the rise of the World Wide Web means only that the interacting
machines share the same communication alphabets. Most of the information in the WWW does not mean anything to computers. Only if there is an application that analyses the transmitted content can the machine itself produce new conceptual knowledge. There are, of course, business applications that transfer factual knowledge (for instance an electronic invoice) to another computer that is able to 'understand' this piece of information, but this development has only just begun. The problem is no longer of a technical nature, because the communication standards (TCP/IP, XML, SOAP and others) are well

34 This number is according to http://www.Internetworldstats.com/stats.htm, retrieved 2009-12-20.


established. The most important barriers are of a semantic or conceptual nature. As in the
example of Section 4, in industries around the world there exist many different concepts
about an ‘invoice’, an ‘order’, a ‘shipping note’ or other business objects. As long as these
differences are not settled, machines cannot ‘talk’ to each other on a conceptual basis.
Several organisations have tried to address this problem35. If they succeed, business
computers around the world will participate in the same business knowledge domains and could be enabled to do business autonomously.
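
A deliberately simplified sketch of this conceptual barrier (element names and figures are invented for illustration): both documents below are well-formed XML, so syntax and transport are unproblematic, yet a purely syntactic receiver cannot map the one vocabulary onto the other.

import xml.etree.ElementTree as ET

# Both documents are well-formed XML, so transport (TCP/IP) and syntax (XML)
# pose no problem; the element names, however, reflect different concepts.
invoice_a = ET.fromstring(
    "<Invoice><Number>4711</Number>"
    "<GrossAmount currency='EUR'>119.00</GrossAmount></Invoice>"
)
invoice_b = ET.fromstring(
    "<Rechnung><BelegNr>4711</BelegNr>"
    "<Netto>100.00</Netto><MwSt>19.00</MwSt></Rechnung>"
)

# A purely syntactic receiver sees only tag names; it cannot decide on its
# own that 'GrossAmount' corresponds to 'Netto' plus 'MwSt'.
print([child.tag for child in invoice_a])
print([child.tag for child in invoice_b])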

Besides the business knowledge domains, of course, many other knowledge domains are
not yet accessible to machines. The WWW is full of such knowledge. In 2004, Tim Berners-
Lee, the inventor of the World Wide Web, proposed the so-called Semantic Web36. The basic
idea is to enable computers to analyse the conceptual knowledge of the Web. It will then be
possible for machines to derive new knowledge by combination or 'serendipity'37. Every
Internet-connected agent will then have immediate access to the information needed in
her/his/its current context, if she/he/it divulges information about that context. One
condition for the implementation of the Semantic Web is the development of ontologies and
knowledge representation concepts38. If the Semantic Web becomes reality, it will inevitably
push the global unification of knowledge domains, because contradictions resulting from
different basic concepts will be eliminated by design.
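
The following sketch (with hypothetical triples, not drawn from any real ontology) shows the kind of mechanical combination the Semantic Web is meant to enable: once the basic concepts are shared, a machine can derive a statement that nobody asserted explicitly.

# A tiny RDF-like triple store with a single inference rule: the transitivity
# of 'subClassOf'. The derived triple is new factual knowledge for the machine.
triples = {
    ("Invoice", "subClassOf", "BusinessDocument"),
    ("BusinessDocument", "subClassOf", "Document"),
    ("invoice_4711", "instanceOf", "Invoice"),
}

def infer(triples):
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(derived):
            for (c, p2, d) in list(derived):
                if (p1 == p2 == "subClassOf" and b == c
                        and (a, "subClassOf", d) not in derived):
                    derived.add((a, "subClassOf", d))
                    changed = True
    return derived - triples

print(infer(triples))   # {('Invoice', 'subClassOf', 'Document')}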

Another future source of conceptual knowledge is called 'ubiquitous computing' or 'pervasive computing'39. The basic idea is that information processing is thoroughly integrated into everyday objects. Such objects (e.g. clothes, cars, buildings, furniture, and so on) are invisibly tagged, they have their own online presence, they can communicate with each other, and they can exchange information about their physical and virtual environment. By this means, it is possible for machines to collect information and derive knowledge that humans do not even know of. Gottschalk-Mazouz40 highlights some ethical aspects of this kind
of knowledge propagation. He also names typical features of knowledge that are compatible
with the definitions in this article.

In principle, we can expect machines to get senses of their own. In scientific research (physics, biology, astronomy, meteorology and others) they already sense facts about our world (microcosm and macrocosm) that we cannot directly observe by means of our own sensory organs.
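
A minimal sketch, under invented object names and readings, of how a tagged everyday object might expose machine-readable facts that no human has explicitly recorded:

import json
import random
import time

# Hypothetical 'smart' object: it samples its environment and publishes a
# machine-readable description of facts that no human explicitly recorded.
class TaggedObject:
    def __init__(self, object_id, location):
        self.object_id = object_id
        self.location = location

    def sense(self):
        return {
            "id": self.object_id,
            "location": self.location,
            "timestamp": time.time(),
            "temperature_c": round(random.uniform(18.0, 24.0), 1),
        }

chair = TaggedObject("chair-0042", "office-3")
print(json.dumps(chair.sense()))   # what the object 'tells' other machines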

Scale-free networks and 'Correlative Analytics'


So far, we have not assumed anything about the topology of the global network of
knowledge. However, new results in the theory of networks should have an important impact
on the field of social and evolutionary epistemology. In particular, the branch of so-called scale-free network research, introduced by Albert-László Barabási (see for instance BARABASI, Albert-László (2004)), sheds new light on many scientific disciplines, such as biology, physics, computer science and the social sciences, and consequently on epistemology. The most stunning result was that complex networks tend to be scale-free. This means that the structure of the network evolves towards so-called hubs, i.e. nodes in the network that are linked to an enormous number of other nodes. The more links a node possesses, the more likely it is that other nodes will attach to it. This phenomenon is called
preferential attachment. In the World Wide Web for instance, some of the hubs are the sites
of 'Google', 'Yahoo', 'Microsoft' and others. In his article 'Scale-Free Networks: A Decade and Beyond' (see BARABASI, Albert-László (2009)), Barabasi writes:
35 see for instance http://www.ebxml.org/geninfo.htm/ or http://www.oasis-open.org/who/
36 BERNERS-LEE, Tim (2004)
37 meaning 'the discovery through chance by a theoretically prepared mind of valid findings which were not sought for' (see MERTON, Robert K. (1957))
38 see for instance DAVIES, John and STUDER, Rudi and WARREN, Paul (2006)
39 POSLAD, Stefan (2009)
40 GOTTSCHALK-MAZOUZ, Nils (2007)


'Today, the scale-free nature of networks of key scientific interest, from protein
interactions to social networks and from the network of interlinked documents that
make up the WWW to the interconnected hardware behind the Internet, has been
established beyond doubt. The evidence comes not only from better maps and data
sets but also from the agreement between empirical data and analytical models that
predict the network structure.'
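
Preferential attachment can be illustrated with a few lines of simulation (a standard Barabási-Albert-style sketch of our own, not code from the cited papers): each new node links to an existing node with probability proportional to that node's current degree, and a small number of hubs emerges.

import random
from collections import Counter

def grow_network(n_nodes, seed=0):
    # Start from two connected nodes; 'ends' lists every edge endpoint, so
    # drawing uniformly from it realises degree-proportional attachment.
    rng = random.Random(seed)
    edges = [(0, 1)]
    ends = [0, 1]
    for new in range(2, n_nodes):
        target = rng.choice(ends)      # preferential attachment
        edges.append((new, target))
        ends.extend((new, target))
    return edges

degree = Counter()
for a, b in grow_network(10000):
    degree[a] += 1
    degree[b] += 1
print("five best-connected hubs:", degree.most_common(5))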

The consequences for our discussion about knowledge propagation and knowledge
evolution are twofold. First, it is generally known that some of the WWW and Internet hubs
use the links to accumulate enormous amounts of data. Moreover, they distribute data and
they decide which data to distribute, i.e. they decide which knowledge to propagate. The so-
called page-rank mechanisms of the big search engines, for instance, establish a knowledge selection mechanism. Even if the selection is meant to serve the receiver and support his/her/its needs, it inevitably leads to favouritism towards some Web content and hence to the perception of the corresponding factual knowledge by many agents. Whether we like it or not,
this phenomenon contributes to the unification of knowledge domains and hence to the
convergence of world views of many agents.
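
To make the selection mechanism concrete, here is a minimal power-iteration sketch in the spirit of the page-rank idea (the toy graph and damping factor are illustrative assumptions, not the algorithm of any actual search engine): content that is already well linked accumulates score and is therefore shown, and seen, more often.

def pagerank(links, damping=0.85, iterations=50):
    # links: dict mapping each page to the list of pages it links to.
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for source, targets in links.items():
            for target in targets:
                new_rank[target] += damping * rank[source] / len(targets)
        rank = new_rank
    return rank

# Toy web: the page 'hub' is linked from everywhere and ends up dominating.
links = {
    "hub": ["a"],
    "a":   ["hub"],
    "b":   ["hub", "a"],
    "c":   ["hub"],
    "d":   ["hub", "c"],
}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))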

The second consequence of the recognition of the scale-free nature of the Internet, the
WWW and other networks is that completely new knowledge domains have been evolving
and they are different in what they can tell us about the world. The basic idea behind the new
methods of generating knowledge is to explore the petabytes of data accessible and to find
patterns of collective behaviour in nature or human societies. Conversely, it is possible to derive knowledge about individual humans just by comparing a profile of their individual data with these patterns. In their plea for the establishment of a 'Computational Social Science', BARABASI et al. (2009) write, for instance:

'We live life in the network. We check our e-mails regularly, make mobile phone calls
from almost any location, swipe transit cards to use public transportation, and make
purchases with credit cards. Our movements in public places may be captured by
video cameras, and our medical records stored as digital files. We may post blog
entries accessible to anyone, or maintain friendships through online social networks.
Each of these transactions leaves digital traces that can be compiled into
comprehensive pictures of both individual and group behaviour, with the potential to
transform our understanding of our lives, organizations, and societies.'

Kevin Kelly, the founding executive editor of Wired magazine, comments in (KELLY, Kevin et al. (2009), 'The End of Theory') on this method of exploring the data as follows:

'My guess is that this emerging method will be one additional tool in the evolution of
the scientific method. It will not replace any current methods (sorry, no end of
science!) but will complement established theory-driven science. Let's call this data
intensive approach to problem solving Correlative Analytics.'
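
A deliberately small sketch of such 'correlative analytics' (profiles and traces are fabricated for illustration): behaviour traces of many agents are aggregated into a collective pattern, and an individual is then characterised purely by the correlation of her/his/its own trace with that pattern, without any underlying theory.

from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hourly activity traces (e.g. messages sent) of several agents are averaged
# into a collective pattern; an individual is then characterised purely by
# how well her/his/its trace correlates with that pattern.
population = [
    [0, 1, 5, 9, 7, 3, 1, 0],
    [1, 2, 6, 8, 6, 2, 1, 1],
    [0, 1, 4, 9, 8, 4, 2, 0],
]
pattern = [sum(hour) / len(population) for hour in zip(*population)]
individual = [0, 2, 5, 8, 7, 3, 1, 0]
print("similarity to collective pattern:", round(pearson(individual, pattern), 3))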

In 2008, the US National Science Foundation launched a new programme called Cluster Exploratory (see http://www.nsf.gov/pubs/2008/nsf08560/nsf08560.htm). The programme's main goal is to 'explore innovative research ideas in data-intensive computing'. If Barabasi and Kelly are right, the results of this programme and similar projects will have a significant impact on our knowledge about the world. We will observe phenomena that will lead us to an adaptation of scientific and social theories and to the development of new scientific disciplines. Beyond the epistemic consequences, ethical questions arise.

The conclusion of this Section is that information technology accelerates the evolution of
the noosphere. Some knowledge domains evolve and converge very rapidly and others may
vanish. New knowledge domains emerge by means of new techniques and sensors far


beyond human capabilities. All connected agents get more and better access to
transformational and factual knowledge. If we assume that these trends will continue, any
agent will have immediate access to all knowledge required at any moment of her/his/its
lifetime. In such a scenario, we will not be able to differentiate between the knowledge of a
single agent and the knowledge of the overall noosphere. Knowledge will simply come out of
the 'cloud'41 or out of the noosphere, of which we are part. It would not be relevant where the
knowledge comes from and human brains would not need to 'burden' themselves with factual
knowledge they do not actually use. Knowledge would migrate from the individual's memory
to the environment. The demarcation between an individual's knowledge and the
environment would blur. This would be a practical affirmation of the theoretical model,
according to which there is no fundamental difference between the ontogeny of a single
individual's knowledge and the phylogeny of knowledge corpora. Another important epistemic
consequence of the Semantic Web and 'correlative analytics' is that we will not be able to
identify the source of knowledge any more. Therefore, we will not be able to ask any
individuals for their 'justification'.

7 The Omega Point

In the previous Section, we discussed the current and near-future evolution of the noosphere.
One result was that the knowledge domains develop on a global scale, some evolve and
converge very rapidly, some vanish, and new domains emerge. However, it is not at all clear
whether this will lead towards Teilhard’s vision of the Omega Point, according to which
humankind could be united (‘in several million years’) in a harmonised world view. As some
research results about the World Wide Web indicate, scale-free networks (with directed
edges) can be 'fragmented'; this means that large parts of the web are disconnected from
each other (see BARABASI, Albert-László (2004)). The overall structure of the network of knowledge, the noosphere, is still unknown, but we may assume that there also exists a multitude of disconnected knowledge domains, because the propagation of knowledge relies heavily on the Internet and the WWW. Moreover, the propagation and evolution of knowledge are dynamic properties of the noosphere, and research on the dynamics of complex networks has just begun. Last but not least, the 'success' may depend on the nature and quality of the different knowledge domains. Organised crime, terrorism, dictatorial regimes, and so on all have their own knowledge domains, and they are all eager to
protect them against their enemies. The worldwide propagation of knowledge may help to
undermine the power of oppressive structures. However, as long as the use of the world's natural resources discriminates against large parts of the world's population, new (knowledge and physical) conflicts will always arise and Teilhard's vision cannot come true. Although we do not know whether Teilhard is right or not, it is an interesting thought experiment to ask what Teilhard's Omega Point would be like from the framework's point of view.
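
The notion of fragmentation can be made concrete with a small reachability sketch on a directed graph (nodes and edges are invented): even when every node has outgoing links, large regions may remain mutually unreachable, and we cannot rule out an analogous situation for the noosphere as a whole.

from collections import deque

def reachable(graph, start):
    # All nodes reachable from 'start' by following directed edges.
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Two clusters of knowledge domains connected only by a one-way bridge C -> X.
graph = {
    "A": ["B"], "B": ["C"], "C": ["A", "X"],
    "X": ["Y"], "Y": ["Z"], "Z": ["X"],
}
print("reachable from A:", sorted(reachable(graph, "A")))   # reaches both clusters
print("reachable from X:", sorted(reachable(graph, "X")))   # stays inside cluster 2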

If the Omega Point became reality, every single agent (humans and technical devices) would
be connected to the noosphere. The noosphere would be global and it would be free of
obvious contradictions. Every agent would live in harmony with every other agent she/he/it is
interacting with. Each agent's perception of the world would be perfectly compatible with all
knowledge (especially scientific knowledge) about the world. Every single observation and
every single interaction of an agent with nature (even with her/his/its own physical body)
would immediately contribute to the perception and, if necessary, to the adaptation of the
whole noosphere. The main goal of the noosphere would be to survive the challenges of
nature and the universe.

41 The term 'cloud' is used in information technology (IT) as a metaphor for computer networks like the Internet. 'Cloud computing' means that IT services can come from anywhere and that users do not have to care (and do not have the chance to find out) about the origin of the different services.


References

BARABASI, Albert-László (2004), Linked: How Everything Is Connected to Everything Else, ISBN 0-452-28439-2
BARABASI, Albert-László (2009), Scale-Free Networks: A Decade and Beyond,
http://www.barabasilab.com/pubs/CCNR-ALB_Publications/200907-24_Science-
Decade/200907-24_Science-Decade.pdf
BARABASI et al. (2009), Computational Social Science,
http://www.barabasilab.com/pubs/CCNR-ALB_Publications/200902-06_Science-
CompSocial/200902-06_Science-CompSocial.pdf
BERNERS-LEE, Tim (2004), Weaving the Semantic Web, by Andy Carvin, published by the
Digital Divide Network, October 2004.
BRADIE, Michael and HARMS, William F. (2008), Evolutionary Epistemology,
http://plato.stanford.edu/entries/epistemology-evolutionary/
CAMPBELL, Donald T. (1987), Evolutionary Epistemology, in Radnitzky, G. and Bartley, W. W. (eds.), Evolutionary Epistemology, Rationality, and the Sociology of Knowledge, LaSalle, Ill.: Open Court.
CAMPBELL, Donald T. (1988), Popper and Selection Theory, Social Epistemology: 371-377

DAVIES, John and STUDER, Rudi and WARREN, Paul (2006), Semantic Web
Technologies: Trends and Research in Ontology-based Systems, Wiley.
ISBN 978-0470025963
GETTIER, Edmund, (1963), Is Justified True Belief Knowledge?, in Analysis, Vol. 23,
pp. 121-123. Online text http://www.ditext.com/gettier/gettier.html
GOLDIN, Dina and WEGNER, Peter (2003), Computation Beyond Turing Machines.
Comm. ACM, Apr. 2003.
GOLDIN, Dina and WEGNER, Peter (2005), The Interactive Nature of Computing:
Refuting the Strong Church-Turing Thesis
GOLDMAN, Alvin (2006), Social Epistemology,
http://plato.stanford.edu/entries/epistemology-social/
GOTTSCHALK-MAZOUZ, Nils (2007), Internet and the flow of knowledge: Which ethical and political challenges will we face?, in Philosophy of the Information Society, Proceedings of the 30th International Wittgenstein Symposium, Kirchberg am Wechsel, Austria 2007, Volume 2.
KELLY, Kevin et al. (2009), The End of Theory: Will the Data Deluge Make the Scientific Method Obsolete?, originally published as the cover story 'The End of Science', Wired Magazine, Issue 16.07,
http://www.edge.org/documents/archive/edge248.html#feature
KITCHER, Philip (1990), The Division of Cognitive Labor, The Journal of Philosophy,
87: 5–22.
KITCHER, Philip (1993), The Advancement of Science, New York: Oxford University Press.
LANGDON, W. B. and POLI, R. (2002), Foundations of Genetic Programming,
Springer-Verlag, 2002. ISBN 3540424512
LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), Elements of the theory
of Computation, Prentice Hall, ISBN 0-13-273417-6.
LOECKX, Jacques and SIEBER, Kurt (1984), The Foundations of Program Verification,
2nd ed., Teubner ISBN 3 519 12101 8, Wiley ISBN 0 471 91282 4
MERTON, Robert K. (1957), Social Theory and Social Structure, The Free Press,
Glencoe, Ill., p. 12
POPPER, Karl (1963), Conjectures and Refutations: The Growth of Scientific Knowledge, ISBN 0415043182
POSLAD, Stefan (2009), Ubiquitous Computing: Smart Devices, Smart Environments and
Smart Interaction. Wiley. ISBN 978-0-470-03560-3.
http://www.elec.qmul.ac.uk/people/stefan/ubicom/index.html
PUTNAM, H. (1960), 'Minds and Machines', reprinted in Putnam 1975b, 362–385.

STEHR, Nico (1994), Knowledge Societies; ISBN 0-8039-7892-8


TEILHARD DE CHARDIN, Pierre (1959), The Phenomenon of Man, Harper Perennial 1976:
ISBN 0-06-090495-X. Reprint 2008: ISBN 978-0061632655.
TEILHARD DE CHARDIN, Pierre (1965), The Appearance of Man, Collins (UK),
Harper and Row (US).
TEILHARD DE CHARDIN, Pierre (1984), Die Entstehung des Menschen, Deutscher
Taschenbuch Verlag 1984, München. ISBN 3-423-01755-4
VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001), The Turing Machine Paradigm in
Contemporary Computing, in Mathematics Unlimited - 2001 and Beyond, eds.
B. Enquist and W. Schmidt, LNCS, Springer-Verlag, 2000.
VERBAAN, Peter (2005), The Computational Complexity of Evolving Systems,
http://igitur-archive.library.uu.nl/dissertations/2006-0202-200042/full.pdf
VOLLMER, Gerhard (2003), Was können wir wissen? 2 vols., Leipzig: Hirzel.

