
Physica A 348 (2005) 505–543

Econophysics: from Game Theory and Information Theory to Quantum Mechanics

Edward Jimenez^{a,b,c}, Douglas Moya^{d}

^{a} Experimental Economics, Todo1 Services Inc., Miami, FL 33126, USA
^{b} GATE, UMR 5824 CNRS, France
^{c} Research and Development Department, Petroecuador, Paul Rivet E11-134 y 6 de Diciembre, Quito, Ecuador
^{d} Physics Department, Escuela Politecnica Nacional, Ladron de Guevara E11-253, Quito, Ecuador

Received 11 March 2004; received in revised form 20 September 2004
Available online 27 October 2004
Abstract

Rationality is the universal invariant common to human behavior, the physical laws of the universe, and ordered, complex biological systems. Econophysics is both the use of physical concepts in Finance and Economics, and the use of Information Economics in Physics. In particular, we will show that it is possible to obtain the principles of Quantum Mechanics using Information Theory and Game Theory.
© 2004 Elsevier B.V. All rights reserved.

PACS: 02.30.Cj; 02.50.Le; 03.65.Ta

Keywords: Rationality; Optimal laws; Quantum mechanics laws; Energy; Information
0378-4371/$ - see front matter © 2004 Elsevier B.V. All rights reserved.
doi:10.1016/j.physa.2004.09.029
www.elsevier.com/locate/physa

Corresponding author: Research and Development Department, Petroecuador, Quito, Ecuador; Experimental Economics, Todo1 Services, Miami, USA.
E-mail address: ejimenez.twu@petroecuador.com.ec (E. Jimenez).

1. Introduction

At the moment, Information Theory is studied and utilized from multiple perspectives (Economics, Game Theory, Physics, Mathematics, and Computer Science). Our goal in this paper is to present the different approaches to information, to select the main ones, and to apply them in the Economics of Information and Game Theory.
Economics and Game Theory^1 are interested in the use of information and the ordering of states, in order to maximize the utility functions of rational^2 and intelligent^3 players who are part of a conflict of interest. The players gather and process information. The information can be perfect^4 and complete^5; see Refs. [1-4] and (Sorin, 1992). On the other hand, Mathematics, Physics and Computer Science are all interested in information representation, entropy (a measure of disorder), the optimality of physical laws, and the internal order of living beings; see Refs. [5-7]. Finally, information is stored, transmitted and processed by physical means. Thus, the concepts of information and computation can be formulated not only in the context of Economics, Game Theory and Mathematics, but also in the context of physical theory. Therefore, the study of information ultimately requires experimentation and multidisciplinary approaches, such as the introduction of the Optimality Concept [8].
The Optimality Concept is the essence of the economic and natural sciences [9,10]. Economics introduces the optimality concept (maximum utility and minimum risk) as the equivalent of rationality, and Physics understands the principle of minimum action and maximum entropy (maximum information) as the explanation of the laws of nature [11,36]. If the two sciences have a common backbone, then they should allow certain analogies and share other elements, such as equilibrium conditions, evolution, uncertainty measurement and the entropy concept. In this paper, the contributions of Physics (Quantum Information Theory)^6 and Mathematics (Classical Information Theory)^7 are used in Game Theory and Economics, making it possible to explain mixed
^1 Game Theory can be defined as the study of mathematical models of conflict and cooperation among intelligent, rational decision-makers.
^2 A decision-maker is rational if he makes decisions in pursuit of his own objectives. It is normal to assume that each player's objective is to maximize the expected utility value of his own payoff.
^3 A player in the game is intelligent if he knows everything that we know about the game and can make any inference about the situation that we can make.
^4 Perfect information means that at each time only one of the players moves, that the game depends only on their choices, that they remember the past information (utilities, strategies), and that in principle they know all possible futures of the game [1].
^5 In games of incomplete information the state of nature is fixed but not known to all players. In repeated games of incomplete information, what changes in time is each player's knowledge about the other players' past actions, which affects his beliefs about the (fixed) state of nature. Games of incomplete information are usually classified according to the nature of the three important elements of the model, namely players and payoffs (within two-person games: zero-sum games and non-zero-sum games), prior information on the state of nature (within two-person games: incomplete information on one side and incomplete information on two sides), and signaling structure (full monitoring and state-independent signals) [2,3,37].
^6 Quantum Information Theory is fundamentally richer than Classical Information Theory, because quantum mechanics includes many more elementary classes of static and dynamic resources: not only does it support all the familiar classical types, but there are entirely new classes, such as the static resource of entanglement (correlated equilibria), that make life even more interesting than it is classically.
^7 Classical Information Theory is mostly concerned with the problems of sending classical information (letters in an alphabet, speech, strings of bits) over communication channels which operate within the laws of classical physics.
strategy Nash equilibria using Shannon's entropy [12-15]. According to [16, p. 11],

"quantities of the form $H = -\sum_i p_i \log p_i$ play a central role in information theory as measures of information, choice and uncertainty. The form of H will be recognized as that of entropy as defined in certain formulations of statistical mechanics, where $p_i$ is the probability of a system being in cell i of its phase space; ..."
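As a minimal numerical sketch of this definition (our own illustration, not from the paper), Shannon's entropy of a discrete distribution can be computed directly:

```python
import math

def shannon_entropy(probs, base=2):
    """H = -sum_i p_i log p_i, in bits by default; terms with p_i = 0
    contribute nothing (the limit of p log p as p -> 0 is 0)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin carries 1 bit; a certain outcome carries 0 bits.
h_fair = shannon_entropy([0.5, 0.5])
h_sure = shannon_entropy([1.0, 0.0])
h_biased = shannon_entropy([0.9, 0.1])  # strictly between 0 and 1 bit
```

A biased distribution always carries less entropy than the uniform one, which is the fact the maximization arguments below rely on.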
In Quantum Information Theory, a correlated equilibrium in a two-player game means that the associated probabilities of each player's strategies are functions of a correlation matrix. Entanglement, which according to the Austrian physicist Erwin Schrödinger is the essence of Quantum Mechanics, has long been known to be the source of a number of paradoxical and counterintuitive phenomena. Of those, the most remarkable one is the so-called non-locality, which is at the heart of the Einstein-Podolsky-Rosen paradox (EPR); see Ref. [17, p. 12]. Einstein et al. [18] considered a quantum system consisting of two particles separated by a long distance.

EPR suggests that a measurement on particle 1 cannot have any actual influence on particle 2 (locality condition); thus the properties of particle 2 must be independent of the measurement performed on particle 1.

Experiments have verified that the two particles in the EPR setting are always part of one quantum system, and thus a measurement on one particle changes the possible predictions that can be made for the whole system, and therefore for the other particle [5].
It is evident that physical and mathematical theories have been of great utility to the economic sciences, but it is necessary to highlight that Information Theory and Economics also contribute to the explanation of the laws of Quantum Mechanics. Will the strict incorporation of elements of Classical and Quantum Information Theory allow the development of Economics and Game Theory? The definitive answer is yes. Economics has carried out its own developments around information theory; in particular, it has demonstrated both that asymmetry of information causes errors in the definition of an optimal negotiation, and that the assumption of perfect markets is untenable in the presence of asymmetric information; see Refs. [19,20]. According to the formalism of Game Theory, the asymmetry of information can have two causes: incomplete information and imperfect information. As we will see in the development of this paper, Information Economics does not yet formally incorporate either elements of Classical Information Theory or Quantum Information concepts.
The creators of Information Theory are Shannon and von Neumann; see Ref. [15, Chapter 11]. Shannon, the creator of Classical Information Theory, introduced entropy as the heart of his theory, endowing it with probabilistic characteristics. On
(footnote 7 continued) The key concept of Classical Information Theory is Shannon entropy. Suppose we learn the value of a random variable X. The Shannon entropy of X quantifies how much information we gain, on average, when we learn the value of X. An alternative view is that the entropy of X measures the amount of uncertainty about X before we learn its value. These two views are complementary: we can view the entropy either as a measure of our uncertainty before we learn the value of X, or as a measure of how much information we have gained after we learn the value of X [15, p. 500].
the other hand, von Neumann, also a creator of Game Theory, used the probabilistic elements taken into account by Shannon but defined a new mathematical formulation of entropy using the density matrix of Quantum Mechanics. The entropy formulations developed by Shannon and von Neumann, respectively, permit us to model pure states (strategies) and mixed states (mixed strategies). Eisert et al. [21,35] not only give a physical model of quantum strategies but also express the idea of identifying moves with quantum operations and quantum properties. This approach appears to be fruitful in at least two ways. On the one hand, several recently proposed applications of quantum information theory can already be conceived as competitive situations in which several parties with opposing motives interact; these parties may apply quantum operations using a bipartite quantum system. On the other hand, generalizing decision theory to the domain of quantum probabilities seems interesting, as game theory is partly rooted in probability theory. In this context, it is of interest to investigate what solutions are attainable if superpositions of strategies are allowed [22-24].
As we have seen, from a historical perspective we can affirm that the advances of Game Theory and Information Theory are related to Quantum Mechanics, especially regarding nanotechnology. Therefore, it is necessary to use Quantum Mechanics for five reasons:

+ The origin of quantum information and its potential applications (encryption, quantum networks and error correction) has awakened great interest in the scientific community, especially among physicists, mathematicians and economists.
+ Quantum Game Theory is the first proposal for unifying Game Theory and Quantum Mechanics with the objective of finding synergies between both.
+ In this paper we present an immediate result, a product of using these synergies (the possibility theorem). The possibility theorem allows us to introduce the concept of rationality in time series.
+ A perfect analogy exists between correlated equilibria, which fall inside the domain of Game Theory, and entanglement, which falls inside the domain of Quantum Information; see Refs. [25,15].
+ The raison d'être of this paper is to demonstrate, theoretically and practically, that Information Theory and Game Theory permit us to obtain the principles of Quantum Mechanics.

This paper is organized as follows. Section 1 is a review of the existing bibliography. In Section 2, we show the main theorems of quantum games. Section 3 is the core of this paper; here we present the Quantum Mechanics Principles as a consequence of the maximum entropy and minimum action principles. In Section 4 we present the conclusions of this research.
2. Elements of Quantum Game Theory
Let $G = (K, S, v)$ be an $n$-player game, with $K$ the set of players, $k = 1, \ldots, n$. The finite set $S_k$ of cardinality $l_k \in \mathbb{N}$ is the set of pure strategies of each player,
where $k \in K$, $s_{k j_k} \in S_k$, $j_k = 1, \ldots, l_k$, and $S = \prod_{k \in K} S_k$ is the set of pure-strategy profiles, with $s \in S$ an element of that set; $l = (l_1, l_2, \ldots, l_n)$ represents the cardinality of $S$ [26,4,9]. Furthermore, in physics the word strategy represents a state of the system.

The vector function $v : S \to \mathbb{R}^n$ associates with every profile $s \in S$ the vector of utilities $v(s) = (v_1(s), \ldots, v_n(s))^T$, where $v_k(s)$ designates the utility of player $k$ facing the profile $s$. For ease of calculation we write the function $v_k(s)$ in an explicit way: $v_k(s) = v_k(j_1, j_2, \ldots, j_n)$. The matrix $v_{n,l}$ represents all points of the Cartesian product $\prod_{k \in K} S_k$. The vector $v_k(s)$ is the $k$-th column of $v$.
If mixed strategies are allowed, then we have

$$\Delta(S_k) = \left\{ p_k \in \mathbb{R}^{l_k} : \sum_{j_k = 1}^{l_k} p_{k j_k} = 1 \right\},$$

the unit simplex of the mixed strategies of player $k \in K$, where $p_k = (p_{k j_k})$ is the probability vector. The set of profiles in mixed strategies is the polyhedron $\Delta = \prod_{k \in K} \Delta(S_k)$, where $p = (p_{1 j_1}, p_{2 j_2}, \ldots, p_{n j_n})$ and $p_k = (p_{k 1}, p_{k 2}, \ldots, p_{k l_k})^T$. Using the
Kronecker product $\otimes$ it is possible to write:^8

$$p = p_1 \otimes p_2 \otimes \cdots \otimes p_{k-1} \otimes p_k \otimes p_{k+1} \otimes \cdots \otimes p_n,$$
$$p^{(-k)} = p_1 \otimes p_2 \otimes \cdots \otimes p_{k-1} \otimes \mathbf{1}_k \otimes p_{k+1} \otimes \cdots \otimes p_n,$$
$$\mathbf{1}_k = (1, 1, \ldots, 1)^T, \quad [\mathbf{1}_k]_{l_k, 1},$$
$$o_k = (0, 0, \ldots, 0)^T, \quad [o_k]_{l_k, 1}.$$
The $n$-dimensional function $u : \Delta \to \mathbb{R}^n$ associates with every profile in mixed strategies the vector of expected utilities $u(p) = (u_1(p, v(s)), \ldots, u_n(p, v(s)))^T$, where $u_k(p, v(s))$ is the expected utility of each player $k$. Every $u_{k j_k} = u_{k j_k}(p^{(-k)}, v(s))$ represents the expected utility of each player's strategy, and the vector $u_k$ is written $u_k = (u_{k 1}, u_{k 2}, \ldots, u_{k l_k})^T$:

$$u_k = \sum_{j_k = 1}^{l_k} u_{k j_k}(p^{(-k)}, v(s))\, p_{k j_k}, \qquad u = v' p, \qquad u_k = (\mathbf{1}_k \otimes v_k)\, p^{(-k)}.$$
The triplet $(K, \Delta, u(p))$ designates the extension of the game $G$ with mixed strategies. We get a Nash equilibrium (the maximization of utility) if and only if, $\forall k, p$, the inequality $u_k(p^*) \geq u_k(p_k, p^{*(-k)})$ is respected. Moreover, we can write the original problem as a decision problem in which $u_k(p^*)$ dominates $u_k(p_k, p^{*(-k)})$.
^8 We use bold fonts to represent vectors and matrices.

Another way to calculate the Nash equilibrium [27; 9, pp. 96-104] is to equate the values of the expected utilities of each strategy when this is possible:
$$u_{k 1}(p^{(-k)}, v(s)) = u_{k 2}(p^{(-k)}, v(s)) = \cdots = u_{k j_k}(p^{(-k)}, v(s)),$$
$$\sum_{j_k = 1}^{l_k} p_{k j_k} = 1 \quad \forall k = 1, \ldots, n,$$
$$\sigma_k^2 = \sum_{j_k = 1}^{l_k} \left[ u_{k j_k}(p^{(-k)}, v(s)) - u_k \right]^2 p_{k j_k} = 0.$$
If the resulting system of equations does not have a solution $(p^{(-k)})^*$, then we propose the minimum entropy theorem. This method is expressed as $\min_p \left( \sum_k H_k(p) \right)$, where $\sigma_k^2(p^*)$ is the standard deviation and $H_k(p^*)$ the entropy of each player $k$:

$$\sigma_k^2(p^*) \leq \sigma_k^2(p_k, p^{*(-k)}) \quad \text{or} \quad H_k(p^*) \leq H_k(p_k, p^{*(-k)}).$$
Theorem 1 (Minimum entropy theorem). The game entropy is minimum only in a mixed-strategy Nash equilibrium. The entropy minimization program $\min_p \left( \sum_k H_k(p) \right)$ is equal to the standard-deviation minimization program $\min_p \left( \sum_k \sigma_k(p) \right)$ when $(u_{k j_k})$ has a Gaussian density function or multinomial logit; see the proof in Ref. [14].
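The equal-expected-utility condition above can be sketched numerically. The snippet below (our own illustration, not from the paper; the function names are ours) solves a 2x2 bimatrix game by the indifference method and reports the Shannon entropy of the resulting mixed strategies:

```python
import numpy as np

def mixed_equilibrium_2x2(A, B):
    """Interior mixed equilibrium of a 2x2 bimatrix game by the
    indifference method: each player's mix makes the opponent's
    expected utilities equal across his two pure strategies."""
    # Player 2 mixes (q, 1-q) so that player 1 is indifferent:
    # A[0,0]q + A[0,1](1-q) = A[1,0]q + A[1,1](1-q)
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    # Player 1 mixes (p, 1-p) so that player 2 is indifferent.
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
    return p, q

def entropy_bits(dist):
    """Shannon entropy H = -sum p log2 p of a mixed strategy."""
    d = np.asarray(dist)
    d = d[d > 0]
    return float(-(d * np.log2(d)).sum())

# Matching pennies: the unique equilibrium mixes 50/50.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # player 1 payoffs
B = -A                                     # zero-sum game
p, q = mixed_equilibrium_2x2(A, B)
H1 = entropy_bits([p, 1 - p])
```

For games without an interior mixed equilibrium, the division in `mixed_equilibrium_2x2` fails, which is exactly the case where the minimization programs of Theorem 1 would be invoked instead.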
Theorem 2 (Minimum dispersion). The Gaussian density function $f(x) = |\phi(x)|^2$ is the solution of the differential equation

$$\left( \frac{x - \bar{x}}{2\sigma_x^2} + \frac{\partial}{\partial x} \right) f(x) = 0$$

related to the minimum dispersion of a linear combination of the variable $x$ and the Hermitian operator $\left( -i \frac{\partial}{\partial x} \right)$; see the proof in Ref. [14].
3. Information Theory towards Quantum Mechanics

In this section we present the elements of Information Theory and study how this subject, together with the variational principles of Theoretical Mechanics, is used as a frame to construct the principles of Quantum Mechanics. We also find a relation of equivalence between variation of information and mass-energy.
3.1. Information Theory elements

Let $n$ be the length of a binary word with symbols $w_i \in \{1, 0\}$. The total number of words which can be built with $n$ symbols is $2^n$. Each word can be an instruction, a number, an alphanumeric character, and so on. Therefore, the number $2^n$ indicates the total amount of information, which is

$$N = 2^n. \qquad (1)$$
$n$ can be a very large number, so we choose (following Shannon [16]) to count logarithmically. We define the amount of information

$$I_i = -\log_2 P_i, \qquad (2)$$

where $P_i$ is the probability of finding the word $i$. Mean information is defined as

$$I_m = -\sum_i P_i \log_2 P_i \quad \text{or, up to a constant factor,} \quad I_m = -\sum_i P_i \ln P_i. \qquad (3)$$

Now, our purpose is to find the maximum amount of information, which implies that

$$\delta I_m = 0 \qquad (4)$$

with the constraint

$$\sum_i P_i = 1. \qquad (5)$$
In order to maximize Eq. (3), we use Lagrange multipliers:

$$\delta I_m = -\sum_i [\ln P_i + 1]\, \delta P_i = 0, \qquad \lambda \sum_i \delta P_i = 0. \qquad (6)$$

Using Eq. (6), we obtain

$$\sum_i [\ln P_i + 1 + \lambda]\, \delta P_i = 0, \qquad (7)$$

whose solution is

$$P_i = e^{-(1 + \lambda)}, \qquad (8)$$

i.e., each word has the same probability of being chosen. A process of this type is called a random process. Such a process allows one to save or load information in an efficient or optimal manner; this is why computers use random-access memories (RAM). Considering

$$\sum_i P_i = \sum_i e^{-(1 + \lambda)} = 2^n e^{-(1 + \lambda)} = 1,$$

we have

$$P_i = \frac{1}{2^n} \qquad (9)$$

and

$$\lambda = n \ln 2 - 1. \qquad (10)$$
Using logarithms of base 2, the amount of information is

$$I = -\log_2 P = -\log_2 \frac{1}{2^n} = n \text{ bits}. \qquad (11)$$

Consequently, the mean information is

$$I_m = -\sum_i P_i \log_2 P_i, \qquad (12)$$

and when the process is completely random we can write

$$I_m = -\sum_i \frac{1}{2^n} \log_2 \frac{1}{2^n} = \sum_i \frac{n}{2^n} = \frac{n\, 2^n}{2^n} = n \text{ bits}. \qquad (13)$$

Example 1. Let us consider a two-state system:

$$I_m = -P \log_2 P - (1 - P) \log_2 (1 - P). \qquad (14)$$

The maximum is obtained when $P = \frac{1}{2}$, giving $I_m = 1$ bit; see Fig. 1.

If we have a multiplet of spin $s$, then it has a degeneracy of $2s + 1$. We can form words of length $2s + 1$ symbols; therefore, the total number of words must be

$$N = 2^{(2s + 1)} \qquad (15)$$

and the amount of information will be $2s + 1$ bits.
Fig. 1. Behavior of a two-state system described by (14).
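A quick numerical check of Eq. (14) (our own sketch, not part of the paper) confirms that the binary entropy peaks at 1 bit when P = 1/2:

```python
import math

def binary_entropy(P):
    """I_m = -P log2 P - (1-P) log2 (1-P), the entropy of a
    two-state system as a function of P; zero at P = 0 or 1."""
    if P in (0.0, 1.0):
        return 0.0
    return -P * math.log2(P) - (1 - P) * math.log2(1 - P)

# Scan P on a grid and locate the maximum.
grid = [i / 1000 for i in range(1001)]
best_P = max(grid, key=binary_entropy)
peak = binary_entropy(best_P)
```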
Because of Mixed State Spin Theory, spin would naturally fluctuate randomly. In a quantum computer, therefore, we would not have to program complex random-number generation algorithms (which are in fact only quasi-random).
3.2. Measurement of physical magnitudes

Let us consider an experiment in which we obtain data $u_i$ for $i = 1, 2, \ldots, n$. The mean value of $u_i$ is given by

$$\bar{u} = \sum_i u_i P_i. \qquad (16)$$

We are interested in the error. Evidently, $\Delta u_i = u_i - \bar{u}$ has mean value

$$\overline{\Delta u_i} = \sum_i (u_i - \bar{u}) P_i = 0,$$

which is not interesting at all. A valuable and interesting definition, in this case, is the quadratic mean value

$$\overline{\Delta u_i^2} = \sum_i (u_i - \bar{u})^2 P_i \geq 0, \qquad (17)$$

which is greater than or at least equal to zero.
Now we face the problem of how to measure a physical parameter exactly, when we want both the minimum possible quadratic error (minimum standard deviation, or minimum dispersion theorem) and maximum information (or maximum average). The maximum average is in accordance with von Neumann's formalization of Game Theory; see Ref. [26]. We have to maximize $I_m$ under the constraints $\sum_i P_i = 1$ and $\overline{\Delta u_i^2}$; the function $\overline{\Delta u_i^2}$ must take its minimum value according to the minimum dispersion theorem.

Theorem 3. The Gaussian density function permits us both to maximize $I_m$ and to minimize $\overline{\Delta u_i^2}$.
Proof. Consider the maximization program

$$\max_{P_i} I_m = \max_{P_i} \left( -\sum_i P_i \ln P_i \right),$$

subject to

$$1 = \sum_i P_i \quad \text{and} \quad \overline{\Delta u^2} = \sum_i \Delta u_i^2\, P_i,$$

evaluating and setting to zero the first variations to find the maximum:

$$\delta I_m = -\sum_i (\ln P_i + 1)\, \delta P_i = 0, \qquad \alpha \sum_i \delta P_i = 0, \qquad \beta \sum_i \Delta u_i^2\, \delta P_i = 0, \qquad (18)$$

which brings us to

$$\delta I_m = -\sum_i (\ln P_i + 1 + \alpha + \beta \Delta u_i^2)\, \delta P_i = 0, \qquad (19)$$

and this has the solution

$$P_i = A e^{-\beta \Delta u_i^2}. \qquad (20)$$
In order to determine the constants $A$ and $\beta$, let us consider the corresponding continuous distribution:

$$\int_{-\infty}^{\infty} A e^{-\beta \Delta u^2}\, d\Delta u = 1, \qquad \int_{-\infty}^{\infty} \Delta u^2\, A e^{-\beta \Delta u^2}\, d\Delta u = \sigma_u^2. \qquad (21)$$

Integrating, we find

$$P = \frac{1}{\sqrt{2\pi}\, \sigma_u}\, e^{-\Delta u^2 / 2\sigma_u^2}. \qquad (22)$$

Eq. (22) is a Gaussian density function, which also obeys the minimum dispersion theorem; see Ref. [14]. □
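The claim that the Gaussian maximizes entropy at fixed variance can be checked against closed-form differential entropies (our own sketch, not from the paper): among the Gaussian, Laplace and uniform densities with the same variance, the Gaussian has the largest entropy.

```python
import math

def entropy_gaussian(sigma):
    """Differential entropy of N(0, sigma^2): (1/2) ln(2 pi e sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def entropy_uniform(sigma):
    """Uniform on [-a, a] with variance sigma^2 has a = sqrt(3) sigma
    and entropy ln(2a)."""
    return math.log(2 * math.sqrt(3) * sigma)

def entropy_laplace(sigma):
    """Laplace with variance sigma^2 has scale b = sigma / sqrt(2)
    and entropy 1 + ln(2b)."""
    return 1 + math.log(2 * sigma / math.sqrt(2))

sigma = 1.7  # any common standard deviation
hg = entropy_gaussian(sigma)
hl = entropy_laplace(sigma)
hu = entropy_uniform(sigma)
# hg > hl > hu: the Gaussian is the maximum entropy density.
```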
3.3. Information Theory and uncertainty relations

The action is the most important magnitude in Physics. It satisfies Hamilton's principle (see Ref. [28, p. 38]):

$$\delta S = 0. \qquad (23)$$

This equation deserves additional analysis. Let $E$ be a physical state of a particular system,

$$E = (q_i, \dot{q}_i, t), \qquad (24)$$

where $q_i$ are the generalized coordinates, $\dot{q}_i$ the generalized velocities, $t$ the instant at which they are determined, and $i$ runs over the generalized coordinates.
Let us define the configuration space $C = \{q_i\}$. A point in that space represents the system's physical configuration. Considering time evolution, we observe that this point describes a trajectory in the configuration space. If we fix instants $t_1$ and $t_2$, with $t_2 > t_1$, between $q_i(t_1)$ and $q_i(t_2)$, then the system's time evolution describes its history over $[t_1, t_2]$.
Let us define a dynamic function of the state of the system,

$$L = L(q_i, \dot{q}_i, t). \qquad (25)$$

The Lagrangian $L$ is different for each possible trajectory. Let us postulate that knowing this function allows us to know the set of differential equations that describes the system's real history, because using the virtual work method we can build $L$. It is clear that for every trajectory between the points $A(q_i(t_1))$ and $B(q_i(t_2))$ in the configuration space, the action integral has the form

$$S = \int_{t_1}^{t_2} L(q_i, \dot{q}_i, t)\, dt, \qquad (26)$$

which takes a different numeric value as a function of the chosen trajectory. $S$ is called the system's action.
What we know are only the initial configuration $A$ and the final configuration $B$ of the system. Therefore, there exists an infinite number of trajectories that begin in $A$ and reach $B$, but only one of them is real. The question is how to find it. Let us suppose that the real trajectory has an action $S$. Any other trajectory is a virtual one, even if it is infinitesimally close; such trajectories have an action $S + \delta S$. In this way, the real trajectory satisfies

$$\delta S = 0. \qquad (27)$$

Eq. (27) represents an optimum in the functional field of all trajectories; it is also stationary, because $t_2$ and $t_1$ are fixed numbers and the $\delta$ operator is time independent.
This brings us to deduce that $L(q_i, \dot{q}_i, t)$ satisfies the Euler-Lagrange equation

$$\frac{\partial L}{\partial q_i} - \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) = 0, \qquad (28)$$

where it can be shown (see Ref. [28]) that

$$L = T - U, \qquad (29)$$

$T$ being the kinetic energy and $U$ the potential energy. When the systems are non-conservative, Eq. (28) takes the form

$$\frac{\partial L}{\partial q_i} - \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) = \frac{\partial F}{\partial \dot{q}_i}, \qquad (30)$$

where $F$ is the Rayleigh dissipation function. Eq. (30) can be deduced from Eq. (28) by considering the universe as a whole.
In fact, total energy is conserved in the universe. Furthermore, we can consider a subsystem described by a Lagrangian $L$. Let $L_{AB}$ be the Lagrangian of interaction between the borders of the system, and let $L_0$ be the Lagrangian of the rest of the universe. Because the Lagrangian is a scalar, it satisfies the addition property. Therefore, the Lagrangian of the whole universe is the sum of $L$, $L_{AB}$ and $L_0$:

$$L_U = L + L_{AB} + L_0, \qquad (31)$$

where $L_U$ is the Lagrangian of the universe, and it satisfies

$$\frac{\partial L_U}{\partial q_i} - \frac{d}{dt} \left( \frac{\partial L_U}{\partial \dot{q}_i} \right) = 0, \qquad (32)$$

$$\frac{\partial L}{\partial q_i} - \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) = -\left[ \frac{\partial}{\partial q_i} (L_0 + L_{AB}) - \frac{d}{dt} \frac{\partial}{\partial \dot{q}_i} (L_0 + L_{AB}) \right]; \qquad (33)$$

the right-hand term represents an interaction with the rest of the universe, and we will call it the generalized force $Q_i$:

$$\frac{\partial L}{\partial q_i} - \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) = Q_i. \qquad (34)$$
$Q_i$ can represent dissipative processes as well as energy sources, so more generally (see Ref. [28])

$$Q_i = \frac{\partial F}{\partial \dot{q}_i} \qquad (35)$$

with

$$F = \frac{1}{2} f_i \dot{q}_i^2 + \dot{q}_i \tilde{Q}_i. \qquad (36)$$

The first right-hand term represents dissipative processes and the other represents energy sources or external energy supplies. In short, it is clear that isolated systems cannot exist in the universe. In fact, let us consider a so-called thermodynamically isolated system, which means that its energy is constant. Due to the Hamilton-Jacobi equation, although its energy is constant, its action is not:

$$E = -\frac{\partial S}{\partial t} = \text{const}. \qquad (37)$$

Therefore, the system is interchanging action with the rest of the universe. Action is a physical magnitude whose importance theoretical physicists, with the exception of R. Feynman, have tended to overlook. But it is the most fundamental physical magnitude, because it is the origin of all other magnitudes, for instance the energy (37), or

$$p_i = \frac{\partial S}{\partial q_i}, \qquad (38)$$

where $p_i$ and $q_i$ are conjugate variables.
It is known that interactions travel at the maximum speed (that of light). For this type of interaction with our system, such as radiation or heat,

$$E = pc, \qquad (39)$$

but

$$E = -\frac{\partial S}{\partial t} \quad \text{and} \quad \vec{p} = \nabla S. \qquad (40)$$

Because $\vec{c}$ is perpendicular to the surfaces $S = \text{const}$ and parallel to $\vec{p}$, Eq. (39) can be written as

$$-\frac{\partial S}{\partial t} = \nabla S \cdot \vec{c}, \quad \text{or equivalently} \quad \nabla S \cdot \vec{c} + \frac{\partial S}{\partial t} = 0, \qquad (41)$$

which is a conservation law. Therefore, if we consider a region $R$ of the universe, action will be a random input/output flux through this region, because natural phenomena satisfy the condition of maximum mean information.
Also, Eq. (39) can be written as

$$\frac{E^2}{c^2} - p^2 = 0, \qquad (42)$$

$$\frac{1}{c^2} \left( \frac{\partial S}{\partial t} \right)^2 - (\nabla S)^2 = 0, \qquad (43)$$

which enables us to define the Lagrangian density

$$\mathcal{L} = \frac{1}{c^2} \left( \frac{\partial S}{\partial t} \right)^2 - (\nabla S)^2, \qquad (44)$$

which satisfies the Euler-Lagrange equation

$$\frac{1}{c^2} \frac{\partial^2 S}{\partial t^2} = \nabla^2 S \qquad (45)$$

with solution

$$S(\vec{x}, t) = S_0\, e^{i(\vec{k} \cdot \vec{x} - \omega t)}, \qquad (46)$$

whose mean value is

$$\bar{S} = 0 \qquad (47)$$

and whose quadratic mean value is

$$\overline{S S^*} = |S_0|^2 \neq 0. \qquad (48)$$

This completely justifies Eq. (56).
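Eq. (45) and its plane-wave solution (46) can be checked numerically with finite differences (our own sketch, arbitrary units; the dispersion relation omega = c k follows from substituting (46) into (45)):

```python
import cmath

c = 2.0    # wave speed (arbitrary units)
k = 1.3    # wave number
w = c * k  # dispersion implied by Eq. (45)
S0 = 0.7

def S(x, t):
    """Plane-wave action, Eq. (46): S = S0 exp(i(kx - wt))."""
    return S0 * cmath.exp(1j * (k * x - w * t))

# Second derivatives by central finite differences.
h = 1e-4
x0, t0 = 0.37, 0.11
d2S_dt2 = (S(x0, t0 + h) - 2 * S(x0, t0) + S(x0, t0 - h)) / h ** 2
d2S_dx2 = (S(x0 + h, t0) - 2 * S(x0, t0) + S(x0 - h, t0)) / h ** 2

residual = d2S_dt2 / c ** 2 - d2S_dx2  # Eq. (45): should vanish
```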
On the other hand, the Lagrange equation has the form

$$\frac{\partial L}{\partial q_i} - \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) = 0,$$

where

$$L = \sum_i \frac{1}{2} m_i \dot{q}_i^2 - U(q_i) \qquad (49)$$

satisfies the condition

$$\frac{\partial^2 L}{\partial \dot{q}_i^2} \geq 0. \qquad (50)$$

In the book by Elsgoltz [29], Eq. (50) is the condition under which $\delta S = 0$ implies a strong minimum. Hamilton's principle is another optimality principle that natural laws must satisfy, and it is the origin of the following physical magnitudes:
$$H = -\frac{\partial S}{\partial t}, \qquad p_i = \frac{\partial S}{\partial q_i}. \qquad (51)$$

In a free-particle system, classical energy and momentum are conserved; $H = E$ and $q_i = x$:

$$E = -\frac{\partial S}{\partial t}, \qquad S = -Et + W(x), \qquad (52)$$

$$\frac{\partial S}{\partial x} = \frac{dW}{dx} = p \qquad (53)$$

and

$$W = \int p\, dx + S_0; \qquad (54)$$

therefore $S = px - Et + S_0$ and

$$\Delta S = px - Et. \qquad (55)$$
From Eq. (22), considering all the arguments given and the fact that the action is not a directly measurable quantity, we postulate

$$P(S) = \frac{1}{\sqrt{2\pi}\, \sigma}\, e^{-\Delta S^2 / 2\sigma^2}, \qquad (56)$$

which contains all the information of the system.
From Hamilton's equations, where there is an interaction we can write

$$\dot{p} = -\frac{\partial H}{\partial x} = -\frac{\partial E}{\partial x}. \qquad (57)$$

Considering $p, x, E, t$ as independent conjugate variables, from Eq. (57) we have

$$dp\, dx = dE\, dt, \qquad (58)$$

$$\int_0^p dp \int_0^x dx = \int_0^E dE \int_0^t dt,$$

$$px = Et \qquad (59)$$

and

$$|\Delta S| = px + Et = 2px. \qquad (60)$$

$px$ is an area in phase space (see Fig. 2), and it is a Poincaré invariant; therefore it is conserved: $px = \Delta p\, \Delta x$. From this equation we obtain $\Delta S = 2\, \Delta p\, \Delta x$.
Now let us maximize the information

$$\delta I_m = -\delta \sum_i P_i \ln P_i = 0 \qquad (61)$$

with the constraints

$$\delta \overline{\Delta x^2} = \sum_i \Delta x_i^2\, \delta P_i = 0,$$
Fig. 2. Evolution of a Poincaré invariant.
$$\delta \overline{\Delta p^2} = \sum_i \Delta p_i^2\, \delta P_i = 0,$$

$$\delta \overline{\Delta S^2} = \sum_i \Delta S_i^2\, \delta P_i = 0,$$

$$\sum_i \delta P_i = 0. \qquad (62)$$

Using Lagrange multipliers we obtain

$$\delta I_m = -\sum_i [\ln P_i + 1 + \lambda + \alpha \Delta x_i^2 + \beta \Delta p_i^2 + \gamma \Delta S_i^2]\, \delta P_i = 0. \qquad (63)$$
Let us represent the universe as a set (see Fig. 3), and let us consider a region $R$ where we perform physical measurements. Information satisfies the addition property, so we have

$$I_{mU} = I_{mR} + I_{mRU}, \qquad (64)$$

where $I_{mU}$ is the total amount of information of the universe, $I_{mR}$ is the information in the place where the measurement is performed, and $I_{mRU}$ represents the information of the rest of the universe. Because the total information is optimal,

$$\Delta I_{mU} = 0, \qquad (65)$$
Fig. 3. The universe and a region R where we perform physical measurements.
which brings us to

$$\Delta I_{mR} + \Delta I_{mRU} = 0, \qquad (66)$$

so

$$\Delta I_{mR} = -\Delta I_{mRU}, \qquad (67)$$

$$-\Delta \sum_i P_i \ln P_i = \Delta \sum_{j \neq i} P_j \ln P_j, \qquad (68)$$

$$\sum_i [\ln P_i + 1]\, \Delta P_i = -\sum_j [\ln P_j + 1]\, \Delta P_j,$$

$$\sum_i \ln P_i\, \Delta P_i + \sum_i \Delta P_i = -\sum_j \ln P_j\, \Delta P_j - \sum_j \Delta P_j. \qquad (69)$$

Taking into account

$$\sum_i P_i = 1, \qquad \sum_j P_j = 1, \qquad (70)$$

$$\sum_i \Delta P_i = 0, \qquad \sum_j \Delta P_j = 0, \qquad (71)$$

we get

$$\sum_i \ln P_i = -\sum_j \ln P_j, \qquad (72)$$

which will be satisfied only if

$$\sum_i \ln P_i = \sum_j \ln P_j = 0, \qquad (73)$$

in other words,

$$P_i = P_j = 1. \qquad (74)$$
The last equations indicate that the perturbation induced by an experiment would be felt at every point in space instantaneously. This would break the randomness of the universe, which makes no sense because it contradicts the Special Theory of Relativity. Therefore, there must exist an additional term of information:

$$I_{mU} = I_{mR} + I_{mRU} + I_{md}, \qquad (75)$$

from which

$$0 = \Delta I_{mR} + \Delta I_{mRU} + \Delta I_{md}. \qquad (76)$$

The term

$$\Delta I_{mRU} = 0 \qquad (77)$$

and

$$\Delta I_{mR} + \Delta I_{md} = 0. \qquad (78)$$

In this way, the rest of the universe preserves its randomness and the theory of relativity is still satisfied, because information cannot be transmitted faster than the speed of light.
To sum up, if we compute the probabilities over the whole universe and consider that space and time are globally homogeneous, we can infer that the probability does not depend on $x_i, p_i, t_i, E_i$. Therefore

$$\alpha \Delta x_i^2 + \beta \Delta p_i^2 = \gamma \Delta S_i^2 \qquad (79)$$

with

$$P_i = e^{-(1 + \lambda)}. \qquad (80)$$

Eq. (79) implies that statistical interaction processes cover all of phase space.

Theorem 4. Planck's constant has a statistical nature. Furthermore, $\sigma = \hbar$ and $\Delta p\, \Delta x \geq \frac{\hbar}{2}$.
Proof. The action distribution function, which is local due to Eq. (56), is

$$e^{-\gamma \Delta S^2} = e^{-\alpha \Delta x^2}\, e^{-\beta \Delta p^2}. \qquad (81)$$

Using the conditions

$$\sigma^2 = \frac{\int_{-\infty}^{\infty} \Delta S^2\, e^{-\gamma \Delta S^2}\, d\Delta S}{\int_{-\infty}^{\infty} e^{-\gamma \Delta S^2}\, d\Delta S} = \frac{\iint_{-\infty}^{\infty} 4\, \Delta p^2 \Delta x^2\, e^{-\alpha \Delta x^2}\, e^{-\beta \Delta p^2}\, d\Delta x\, d\Delta p}{\iint_{-\infty}^{\infty} e^{-\alpha \Delta x^2}\, e^{-\beta \Delta p^2}\, d\Delta x\, d\Delta p}, \qquad (82)$$

$$\sigma^2 = 4\, \frac{\int_{-\infty}^{\infty} \Delta x^2\, e^{-\alpha \Delta x^2}\, d\Delta x}{\int_{-\infty}^{\infty} e^{-\alpha \Delta x^2}\, d\Delta x} \cdot \frac{\int_{-\infty}^{\infty} \Delta p^2\, e^{-\beta \Delta p^2}\, d\Delta p}{\int_{-\infty}^{\infty} e^{-\beta \Delta p^2}\, d\Delta p}, \qquad (83)$$

$$\sigma^2 = 4 \sigma_x^2 \sigma_p^2 \;\Rightarrow\; \sigma_x \sigma_p = \frac{\sigma}{2}, \qquad (84)$$
we obtain the minimum Heisenberg uncertainty relation with

$$\sigma = \hbar. \qquad (85)$$

$\sigma_x \sigma_p$ represents the minimum area in phase space; more generally speaking, any other area satisfies

$$\Delta p\, \Delta x \geq \frac{\hbar}{2}. \qquad (86)$$

We can conclude this because $\delta \overline{\Delta x^2} = 0$ and $\overline{\Delta x^2} \geq 0$ are true only for a minimum. The same arguments are valid for $\overline{\Delta p^2}$. □
More generally, any action can be written as in (79):

$$\Delta S^2 = (px - Et)^2 = p^2 x^2 + E^2 t^2 - 2pE\, xt. \qquad (87)$$

We can perform a coordinate rotation, so that we obtain

$$\Delta S^2 = \lambda_1 x^2 + \lambda_2 \bar{t}^2. \qquad (88)$$

We can see that the action is a quadratic function of $x, \bar{t}$. In fact, it is proportional to the relativistic interval, $\Delta S^2 \propto c^2 \bar{t}^2 - x^2$, which represents a wave front.
3.4. Quantum mechanics

The analysis we have made is very general, and we want to insist on this idea in a different way. In order to do that, it is necessary to recall

$$E^2 = p^2 c^2 + m_0^2 c^4, \qquad (89)$$

$$E = c \sqrt{p^2 + m_0^2 c^2} = cP, \qquad (90)$$

which has the same form as a photon equation. We also know that

$$\Delta S = \vec{p} \cdot \vec{x} - Et, \qquad (91)$$

which shows that at $t = 0$, $\Delta S = \vec{p} \cdot \vec{x}$ represents the equation of a plane when $\Delta S = \text{const}$. Every point in this plane has the same time, $t = 0$; this is a characteristic of a wave front. Moreover, $\Delta S = px - Et = \text{const}$ brings us to

$$p \dot{x} - E = 0 \;\Rightarrow\; \dot{x} = \frac{E}{p} = v_f, \qquad (92)$$

where $v_f$ is the phase velocity.
Likewise, $x\,dp - t\,dE = 0$ gives

$\dot{x} = \dfrac{dE}{dp} = v_g$ ,   (93)

the group speed. Given that $E = \sqrt{p^2c^2 + m_0^2c^4}$,

$\dfrac{dE}{dp} = \dfrac{pc^2}{E}$ .   (94)

If we multiply Eqs. (92) and (93), we have

$v_g v_f = \dfrac{E}{p}\,\dfrac{pc^2}{E} = c^2$ ,   (95)

which is the dispersion relation of an electromagnetic wave in Optics.
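A quick numerical check of Eqs. (92)-(95), with illustrative SI values (the particular mass and momentum are assumed, chosen only for the example):

```python
import math

c, m0 = 3.0e8, 9.1e-31   # speed of light, electron-like rest mass (assumed)
p = 2.0e-22              # an arbitrary momentum
E = math.sqrt(p**2 * c**2 + m0**2 * c**4)   # Eq. (89)

v_f = E / p              # phase speed, Eq. (92)
v_g = p * c**2 / E       # group speed, Eqs. (93)-(94)

assert v_f > c > v_g                          # phase speed exceeds c; group speed does not
assert abs(v_f * v_g - c**2) / c**2 < 1e-12   # dispersion relation, Eq. (95)
```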
Therefore, it is possible to think of action waves. We propose the following integral equation:

$e^{-\frac{\Delta S^2}{2\sigma^2}} = \int A(p, E, k, \omega)\, f(kx - \omega t)\, dk\, d\omega$ .   (96)

We do not assume any relation between $k$ and $\omega$ beforehand; we will find it from Eq. (96).

Theorem 5. The optimal wave superposition (Eq. (96)), which satisfies the Gaussian density function (Eq. (22)), permits us to obtain Planck's and De Broglie's equations.
Proof. Let $d_c$ denote the derivative with respect to the space coordinates. Then

$d_c\!\left(-\dfrac{\Delta S^2}{2\sigma^2}\right) e^{-\frac{\Delta S^2}{2\sigma^2}} = \int A(p,E,k,\omega)\, d_c f\, dk\, d\omega$ ,

$d_c\!\left(-\dfrac{\Delta S^2}{2\sigma^2}\right) \int A(p,E,k,\omega)\, f\, dk\, d\omega = \int A(p,E,k,\omega)\, d_c f\, dk\, d\omega$ .

From here,

$\int A(p,E,k,\omega) \left[ f\, d_c\!\left(-\dfrac{\Delta S^2}{2\sigma^2}\right) - d_c f \right] dk\, d\omega = 0$ ,

which implies the differential equation

$f\, d_c\!\left(-\dfrac{\Delta S^2}{2\sigma^2}\right) - d_c f = 0$ ,   (97)

$-\dfrac{\Delta S^2}{2\sigma^2} + \ln A = \ln f \;\Rightarrow\; f = A e^{-\frac{\Delta S^2}{2\sigma^2}}$ ,   (98)

where $f$ is an exponential function of $x$ and $t$,

$f = e^{F(kx - \omega t)}$ .   (99)

From this,

$F(kx - \omega t) = -\dfrac{\Delta S^2}{2\sigma^2} + \ln A$ .   (100)
Expanding $F(kx - \omega t)$ in a Taylor series, we have

$F(kx - \omega t) = F(0) + \left.\dfrac{dF}{du}\right|_0 (kx - \omega t) + \dfrac{1}{2} \left.\dfrac{d^2F}{du^2}\right|_0 (kx - \omega t)^2 = -\dfrac{\Delta S^2}{2\sigma^2} + \ln A$ .   (101)

All terms vanish except $F(0)$ and the quadratic one:

$F(0) = \ln A$ , $\qquad -\dfrac{a^2}{2}(kx - \omega t)^2 = -\dfrac{(px - Et)^2}{2\sigma^2}$ ,   (102)

and we obtain

$p = a\sigma k = \sigma k'$ , $\qquad E = a\sigma\omega = \sigma\omega'$ .   (103)
The integral equation is

$e^{-\frac{\Delta S^2}{2\sigma^2}} = \dfrac{1}{a^2} \int A(p, E, k', \omega')\, e^{-\frac{(k'x - \omega' t)^2}{2}}\, dk'\, d\omega'$ ;   (104)

$k'$ and $\omega'$ are dummy variables, so we can set $k' \to k$ and $\omega' \to \omega$:

$e^{-\frac{\Delta S^2}{2\sigma^2}} = \dfrac{1}{a^2} \int A(p, E, k, \omega)\, e^{-\frac{(kx - \omega t)^2}{2}}\, dk\, d\omega$ ;

therefore $a = 1$ and

$p = \sigma k$ , $\qquad E = \sigma\omega$ ,   (105)

which are De Broglie's and Planck's equations, respectively. Again we obtain

$\sigma = \hbar$ . □   (106)
The new integral equation is

$e^{-\frac{(px - Et)^2}{2\sigma^2}} = \int A(p, E, k, \omega)\, e^{-\frac{(kx - \omega t)^2}{2}}\, dk\, d\omega$ ;

this implies that

$A(p, E, k, \omega) = \delta\!\left(\dfrac{E}{\sigma} - \omega\right) \delta\!\left(\dfrac{p}{\sigma} - k\right)$   (107)

and obviously

$e^{-\frac{(px - Et)^2}{2\sigma^2}} = \int \delta\!\left(\dfrac{E}{\sigma} - \omega\right) \delta\!\left(\dfrac{p}{\sigma} - k\right) e^{-\frac{(kx - \omega t)^2}{2}}\, dk\, d\omega$ .   (108)
Let us analyze every term in Eq. (108):

$\delta\!\left(\dfrac{E}{\sigma} - \omega\right) = \dfrac{1}{2\pi\sigma} \int_{-\infty}^{\infty} e^{i\frac{(E - \sigma\omega)t}{\sigma}}\, dt$ , $\qquad \delta\!\left(\dfrac{p}{\sigma} - k\right) = \dfrac{1}{2\pi\sigma} \int_{-\infty}^{\infty} e^{i\frac{(p - \sigma k)x}{\sigma}}\, dx$ ;   (109)

from there,

$\dfrac{1}{(2\pi\sigma)^2} \int e^{i\frac{(px - Et)}{\sigma}}\, e^{-i\frac{(\sigma k x - \sigma\omega t)}{\sigma}}\, dx\, dt = A(p, E, k, \omega)$ .   (110)
Now we are able to define the wave functions

$\Psi_{p,E} = \dfrac{1}{2\pi\sigma}\, e^{i\frac{(px - Et)}{\sigma}}$ , $\qquad \Psi_{p',E'} = \dfrac{1}{2\pi\sigma}\, e^{i\frac{(p'x - E't)}{\sigma}}$ .   (111)

Or, considering only $p$ (and the fact that $\sigma = \hbar$),

$\Psi_p = \dfrac{1}{\sqrt{2\pi\hbar}}\, e^{i\frac{px}{\hbar}}$ ,   (112)

$\int \Psi_p \Psi_{p'}^{*}\, dx = \delta(p - p')$ ,   (113)

which is Dirac's normalization condition. If we consider only $E$, we have

$\Psi_E = \dfrac{1}{\sqrt{2\pi\hbar}}\, e^{-i\frac{Et}{\hbar}}$ ,   (114)

$\int \Psi_E \Psi_{E'}^{*}\, dt = \delta(E - E')$ ,   (115)

which is an energy conservation law. From Eq. (112) we see that

$-i\hbar \dfrac{\partial}{\partial x} \Psi_p = p\, \Psi_p$   (116)

and from Eq. (114) we obtain

$i\hbar \dfrac{\partial}{\partial t} \Psi_E = E\, \Psi_E$ .   (117)
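Dirac's normalization condition (113) can be made concrete on a finite window: the overlap integral grows without bound on the diagonal $p = p'$ and stays bounded off it, which is the hallmark of $\delta(p - p')$. A minimal sketch, assuming units with $\hbar = 1$:

```python
import math

hbar = 1.0                      # assumed units with ℏ = 1

def overlap(p, p_prime, L):
    """Value of ∫_{-L}^{L} Ψ_p Ψ*_{p'} dx with Ψ_p = e^{ipx/ℏ}/√(2πℏ)."""
    dp = p - p_prime
    if dp == 0:
        return L / (math.pi * hbar)          # diagonal term, grows linearly in L
    return math.sin(dp * L / hbar) / (math.pi * dp)   # bounded off-diagonal term

# As L grows the diagonal diverges while off-diagonal values stay bounded.
assert overlap(1.0, 1.0, 1e4) > 1e3
assert abs(overlap(1.0, 1.3, 1e4)) < 5.0
```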
We see the way in which all the concepts of Quantum Mechanics appear naturally. In general, the wave function is

$\Psi = A e^{i\frac{\Delta S}{\hbar}}$ .   (118)

Given that

$\Delta S = \sum_i p_i q_i - Et$ ,   (119)

where $p_i$ and $q_i$ are conjugate physical observables,

$-i\hbar \dfrac{\partial \Psi}{\partial q_i} = p_i \Psi$ ;   (120)

in this way, every physical observable $p_i$ has an associated Hermitian operator ($\hat{p}_i = -i\hbar \frac{\partial}{\partial q_i}$), such that its mean values are the eigenvalues of that operator.
Let us consider a photon:

$E = pc$ , $\qquad \dfrac{E^2}{c^2} - p^2 = 0$ ;   (121)

given that $E = -\frac{\partial S}{\partial t}$ and $p = \frac{\partial S}{\partial x}$, then

$\dfrac{1}{c^2}\left(\dfrac{\partial S}{\partial t}\right)^2 - \left(\dfrac{\partial S}{\partial x}\right)^2 = 0$ .   (122)

This brings us to define the Lagrangian density

$\mathcal{L} = \left(\dfrac{\partial S}{\partial x}\right)^2 - \dfrac{1}{c^2}\left(\dfrac{\partial S}{\partial t}\right)^2$ .   (123)
From the Euler-Lagrange equation,

$\dfrac{\partial}{\partial x}\left(\dfrac{\partial \mathcal{L}}{\partial(\partial S/\partial x)}\right) - \dfrac{\partial}{\partial t}\left(\dfrac{\partial \mathcal{L}}{\partial(\partial S/\partial t)}\right) = 0$ ,   (124)

$\dfrac{\partial^2 S}{\partial x^2} - \dfrac{1}{c^2}\dfrac{\partial^2 S}{\partial t^2} = 0$ ,   (125)

from which we obtain

$S = S_0\, e^{i(kx - \omega t)}$ ;   (126)

the energy is

$E = -\dfrac{\partial S}{\partial t} = i\omega S_0\, e^{i(kx - \omega t)}$ .   (127)

The square modulus is

$|E|^2 = \omega^2 |S_0|^2$ .   (128)
The average is

$\langle |E|^2 \rangle = \dfrac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} \omega^2 |S_0|^2\, e^{-\frac{\Delta S_0^2}{2\sigma^2}}\, d\Delta S_0 = \omega^2 \hbar^2$ ,   (129)

i.e.,

$E_{\text{photon}} = \sqrt{\langle |E|^2 \rangle} = \omega\hbar$ .   (130)

What we measure, in fact, is the mean quadratic value of the random fluctuations of energy.
Given that

$\dfrac{\Delta S_0^2}{2\sigma^2} = \dfrac{\Delta E^2}{2\sigma_E^2} + \dfrac{\Delta t^2}{2\sigma_t^2}$ ,   (131)

we find

$\sigma_E \sigma_t = \dfrac{\sigma}{2} = \dfrac{\hbar}{2}$ ,   (132)

or, in general,

$\Delta E\, \Delta t \geq \dfrac{\hbar}{2}$ ,   (133)

which is the other Heisenberg uncertainty relation. Eq. (131) explains why there is a Gaussian dispersion of energy in a monochromatic LASER and not a Dirac delta (see Fig. 4).
Remark 1. Every eigenvalue of a physical observable is the mean quadratic value of its random fluctuations,

$p_i = \sqrt{\langle p_i^2 \rangle}$ ;   (134)

if we measure its conjugate observables,

$\dfrac{\Delta S^2}{2\sigma^2} = \dfrac{\Delta p_i^2}{2\sigma_p^2} + \dfrac{\Delta q_i^2}{2\sigma_q^2}$   (135)

and

$\Delta p_i\, \Delta q_i \geq \dfrac{\hbar}{2}$ ,   (136)
Fig. 4. Spectrum of dispersion of a minimum uncertainty LASER.
$\langle p_i^2 \rangle = \dfrac{\int p_i^2\, e^{-\frac{\Delta S^2}{2\sigma^2}}\, d\Delta S}{\int e^{-\frac{\Delta S^2}{2\sigma^2}}\, d\Delta S} = \dfrac{\int p_i^2\, e^{-\frac{\Delta p_i^2}{2\sigma_p^2}}\, d\Delta p}{\int e^{-\frac{\Delta p_i^2}{2\sigma_p^2}}\, d\Delta p}$ ;   (137)

on the other hand,

$\langle p_i \rangle = 0$ .   (138)

Therefore, $p_i = \sqrt{\langle p_i^2 \rangle}$ is the only magnitude we can measure. Quantum Mechanics regards it as the eigenvalue of the observable $p$. Because $p$ is a number, it has standard deviation zero.
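Eqs. (134) and (138) are easy to verify on sampled Gaussian fluctuations (the width below is an assumed value for the example): the mean vanishes while the root-mean-square recovers $\sigma_p$.

```python
import random

random.seed(1)
sigma_p = 2.5                       # assumed width of the momentum fluctuations
samples = [random.gauss(0, sigma_p) for _ in range(400_000)]

mean = sum(samples) / len(samples)
rms = (sum(s * s for s in samples) / len(samples)) ** 0.5

assert abs(mean) < 0.02             # ⟨p_i⟩ = 0, Eq. (138)
assert abs(rms - sigma_p) < 0.02    # p_i = √⟨p_i²⟩ is what is measured, Eq. (134)
```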
$\sigma_p^2 = \langle p_i^2 \rangle - \langle p_i \rangle^2 = \int \Psi^{*} \hat{p}_i^2 \Psi\, d^3x - \left( \int \Psi^{*} \hat{p}_i \Psi\, d^3x \right)^2$ ,

$\sigma_p^2 = p_i^2 - p_i^2 = 0$ .   (139)

The Copenhagen school concludes that eigenvalues do not have a standard deviation, and so it must find other arguments (for example, the uncertainty principle) to explain the standard deviation observed in experiments. That is not scientifically acceptable, because Occam's razor (see [30,31]) exhorts us to introduce the minimum number of arbitrary postulates when choosing among competing theories. We maintain that in experimental measurements a maximum will be obtained at $p_i = \sqrt{\langle p_i^2 \rangle}$, and around this value there will be a statistical dispersion.
3.4.1. Spontaneous decay transitions
Optimal configurations are those that satisfy the condition of maximum mean information quantity. Let us consider a number of atoms in their stationary states.
Let $E_0$ be a stationary state, i.e., one which satisfies

$I_{m\,op} = I_m(E_0)$ .
Let us suppose a transition $E_0 \to E_0 + \Delta E$:

$I_m(E_0 + \Delta E) = I_m(E_0) + \left.\dfrac{dI_m}{dE}\right|_{E_0} \Delta E + \dfrac{1}{2} \left.\dfrac{d^2 I_m}{dE^2}\right|_{E_0} \Delta E^2 + O(\Delta E^3)$ ,

where $O(\Delta E^3)$ is negligible: if $\Delta E = 10\ \text{eV}$, then $\Delta E$ is of the order of $10^{-18}\ \text{J}$, so $\Delta E^2$ is of the order of $10^{-36}\ \text{J}^2$, a very small quantity. Since $E_0$ is a maximum of the information,

$\left.\dfrac{dI_m}{dE}\right|_{E_0} = 0$ (maximum) , $\qquad -a^2 = \left.\dfrac{d^2 I_m}{dE^2}\right|_{E_0} < 0$ (maximum) ,

$I_m(E_0 + \Delta E) = I_m(E_0) - \dfrac{a^2}{2} \Delta E^2$ .

Therefore, the amount of information decreases from its maximum value, which is not a stable condition. The system will evolve in such a way as to remove the energy $\Delta E$ and recover the maximum amount of information:

from $\Delta I_m = -\dfrac{a^2}{2} \Delta E^2$ to $\Delta I_m = 0$ .   (140)
Now, let us suppose that the probability varies by $\epsilon \ll 1$ around $P = \frac{1}{2}$, where the maximum of the distribution is produced. Varying $P$ by a small quantity $\epsilon$ is equivalent to studying the replicator dynamics; see Ref. [32]. With

$I_m = -P \ln P - (1 - P) \ln(1 - P)$ ,

$I_m = -\left(\dfrac{1}{2} + \epsilon\right) \ln\left(\dfrac{1}{2} + \epsilon\right) - \left(\dfrac{1}{2} - \epsilon\right) \ln\left(\dfrac{1}{2} - \epsilon\right) = \ln 2 - 4\epsilon^2$ .   (141)
In the next sections we will show (Eq. (187)) that

$\Delta I_m = -\left(\dfrac{2\Delta S}{\sqrt{2\pi}\,\sigma}\right)^2$ ;   (142)

therefore,

$\epsilon = \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}$ ,   (143)

but $\epsilon = \dfrac{\Delta N}{N}$, where $N$ is the number of excited atoms and $\Delta S = \Delta E\, \Delta t$:

$\dfrac{\Delta N}{N} = \dfrac{\Delta E\, \Delta t}{\sqrt{2\pi}\,\sigma}$ .
In order to give these arguments physical sense, we define the decay speed as

$\dfrac{dN}{dt} = -\dfrac{N \Delta E}{\sqrt{2\pi}\,\sigma}$ or $\dfrac{dN}{N} = -\dfrac{\Delta E\, dt}{\sqrt{2\pi}\,\sigma}$ ,   (144)

$\ln\left(\dfrac{N}{N_0}\right) = -\dfrac{\Delta E\, t}{\sqrt{2\pi}\,\sigma}$ ,   (145)

$N = N_0\, e^{-\frac{\Delta E\, t}{\sqrt{2\pi}\,\sigma}}$ .   (146)

Defining

$\tau = \dfrac{\sqrt{2\pi}\,\sigma}{\Delta E}$ ,   (147)

$N = N_0\, e^{-\frac{t}{\tau}}$ .   (148)
Therefore, the number of excited atoms decreases exponentially. We cannot know which atom will be affected by a transition $E_0 + \Delta E \to E_0$, emitting a photon during the process; the only thing we can say is that the number of atoms will evolve as stated in (148).

$\tau$ measures the mean de-excitation time of the population of atoms,

$\langle t \rangle = \dfrac{\int_0^{\infty} t\, e^{-\frac{t}{\tau}}\, dt}{\int_0^{\infty} e^{-\frac{t}{\tau}}\, dt} = \tau$ ;   (149)

because $\Delta E = \hbar\omega$ and $\sigma = \hbar$, we have

$\tau = \dfrac{\sqrt{2\pi}\,\hbar}{\hbar\omega} \sim \dfrac{1}{\omega}$ ,   (150)

which is the expected result. This theoretical argumentation gives a solid base to Fermi's study (the calculation of the transition rate of an excited atom).
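The exponential decay law (148) and the mean de-excitation time (149) can be simulated directly (the value of $\tau$ below is assumed for the example): sampled exponential lifetimes have mean $\tau$, and a fraction $e^{-1}$ of atoms survives past one lifetime.

```python
import math
import random

random.seed(2)
tau = 3.0                             # assumed mean lifetime τ, Eq. (147)
lifetimes = [random.expovariate(1 / tau) for _ in range(200_000)]

# Mean de-excitation time, Eq. (149): ⟨t⟩ = τ
mean_t = sum(lifetimes) / len(lifetimes)
assert abs(mean_t - tau) < 0.05

# Survival fraction after one lifetime: N/N0 = e^(-1), Eq. (148)
survivors = sum(1 for t in lifetimes if t > tau) / len(lifetimes)
assert abs(survivors - math.exp(-1)) < 0.01
```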
3.4.2. Schrödinger's equation
Let a quantum mechanical system be confined in a potential $U$. We can expect the wave function to be scattered an infinite number of times by the potential, forming a wave packet described by

$\Psi(\vec{x}, t) = \int_{-\infty}^{\infty} A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\, d^3k$   (151)

with

$\omega = \omega(k)$   (152)

and

$p = \hbar k$ , $\qquad E = \hbar\omega$ ,   (153)

the momentum and the total energy. Applying the Laplace operator to Eq. (151),

$\nabla^2 \Psi(\vec{x}, t) = -\int_{-\infty}^{\infty} k^2 A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\, d^3k$ ,

$-\dfrac{\hbar^2 \nabla^2 \Psi}{2m_0} = \int_{-\infty}^{\infty} \dfrac{p^2}{2m_0}\, A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\, d^3k$ .   (154)
In the same way,

$\dfrac{\partial \Psi}{\partial t} = -\int_{-\infty}^{\infty} i\omega A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\, d^3k$ ,

$i\hbar \dfrac{\partial \Psi}{\partial t} = \int_{-\infty}^{\infty} E\, A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\, d^3k$ ,   (155)

but

$E = \dfrac{p^2}{2m_0} + U$ ,   (156)

$i\hbar \dfrac{\partial \Psi}{\partial t} = \int_{-\infty}^{\infty} \dfrac{p^2}{2m_0}\, A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\, d^3k + \int_{-\infty}^{\infty} U(\vec{x})\, A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\, d^3k$ .

Thus, we obtain Schrödinger's equation,

$i\hbar \dfrac{\partial \Psi}{\partial t} = -\dfrac{\hbar^2}{2m_0} \nabla^2 \Psi + U(\vec{x}) \Psi$ ;   (157)

because $U(\vec{x})$ depends only on the position, it factors out of the integral over $k$. Eq. (157) is the non-relativistic Schrödinger equation.
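A single plane-wave component already satisfies Eq. (157) when its frequency obeys $\hbar\omega = p^2/2m_0 + U$. A minimal finite-difference sketch, assuming units with $\hbar = m_0 = 1$ and an arbitrary constant potential:

```python
import cmath

hbar, m0, U, k = 1.0, 1.0, 0.5, 2.0          # assumed units and constants
omega = hbar * k**2 / (2 * m0) + U / hbar    # E = ℏω = p²/2m0 + U, Eq. (156)

def psi(x, t):
    return cmath.exp(1j * (k * x - omega * t))   # one component of Eq. (151)

x0, t0, h = 0.3, 0.1, 1e-4
dpsi_dt = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)
d2psi_dx2 = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h**2

lhs = 1j * hbar * dpsi_dt                                  # iℏ ∂Ψ/∂t
rhs = -hbar**2 / (2 * m0) * d2psi_dx2 + U * psi(x0, t0)    # Schrödinger RHS, Eq. (157)
assert abs(lhs - rhs) < 1e-6
```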
3.4.3. Quantum mechanics postulates
1. We have seen that to every physical observable there corresponds a Hermitian operator. The measured values are the eigenvalues of this operator:

$\hat{p}_i \Psi = p_i \Psi$   (158)

with

$\hat{p}_i = -i\hbar \dfrac{\partial}{\partial q_i}$ ,   (159)

and also

$p_i = \sqrt{\langle p_i^2 \rangle}$ .   (160)
2. Every physical system is described by a complex function $\Psi$ which satisfies Schrödinger's equation,

$i\hbar \dfrac{\partial \Psi}{\partial t} = -\dfrac{\hbar^2}{2m} \nabla^2 \Psi + U(\vec{x}) \Psi$ .   (161)
3. The wave function $\Psi$ contains all the information about the physical system (because it is a function of the action of the system), and its square modulus is the probability density of measuring an interaction in the element $d^3x$. In fact, multiplying (161) by $\Psi^{*}$, and taking the complex conjugate of (161) and multiplying it by $\Psi$, we have

$i\hbar \Psi^{*} \dfrac{\partial \Psi}{\partial t} = -\dfrac{\hbar^2}{2m} \Psi^{*} \nabla^2 \Psi + U(\vec{x}) \Psi^{*} \Psi$ ,

$-i\hbar \Psi \dfrac{\partial \Psi^{*}}{\partial t} = -\dfrac{\hbar^2}{2m} \Psi \nabla^2 \Psi^{*} + U(\vec{x}) \Psi^{*} \Psi$ .

Subtracting them,

$i\hbar \Psi^{*} \dfrac{\partial \Psi}{\partial t} + i\hbar \Psi \dfrac{\partial \Psi^{*}}{\partial t} = -\dfrac{\hbar^2}{2m} (\Psi^{*} \nabla^2 \Psi - \Psi \nabla^2 \Psi^{*})$ ,   (162)

$i\hbar \dfrac{\partial}{\partial t}(\Psi^{*} \Psi) = -\dfrac{\hbar^2}{2m} (\Psi^{*} \nabla^2 \Psi - \Psi \nabla^2 \Psi^{*})$

or

$\dfrac{\partial}{\partial t} |\Psi|^2 = \dfrac{i\hbar}{2m}\, \nabla\cdot(\Psi^{*} \nabla \Psi - \Psi \nabla \Psi^{*})$ .   (163)

Integrating over the volume and applying Gauss's theorem, we have

$\dfrac{d}{dt} \int |\Psi|^2\, d^3x = \dfrac{i\hbar}{2m} \oint_s (\Psi^{*} \nabla \Psi - \Psi \nabla \Psi^{*}) \cdot \vec{n}\, ds$ .   (164)
The system is confined in the potential, with $\Psi(\infty) = 0$, so the value of the surface integral at infinity is zero:

$\dfrac{d}{dt} \int_{U} |\Psi|^2\, d^3x = 0$ ,   (165)

where $U$ represents the whole space. Therefore, this integral does not depend on time; it is a constant whose value we set to 1:

$\int_{U} |\Psi|^2\, d^3x = 1$ .   (166)

Therefore, $|\Psi|^2$ is a probability density. On the other hand, if we define

$\vec{J} = -\dfrac{i\hbar}{2m} (\Psi^{*} \nabla \Psi - \Psi \nabla \Psi^{*})$ ,   (167)

then, for a volume $V$ limited by a surface $A$,

$\dfrac{d}{dt} \int_{V} |\Psi|^2\, d^3x + \oint_{A} \vec{J} \cdot \vec{n}\, ds = 0$   (168)

or

$\dfrac{\partial}{\partial t} |\Psi|^2 + \nabla\cdot\vec{J} = 0$ ,   (169)

which is the law of conservation of probability. Because probability is conserved, information is conserved too.

Conservation of the amount of information is associated with physical symmetries. If a symmetry is broken, the amount of information decreases and mass is generated, as we will see later in this paper.
4. Given that the wave functions are eigenfunctions of a Hermitian operator, they form a Hilbert space, and if they are not degenerate the space is complete. Any other function can be expressed as a linear combination of the elements of the basis,

$\Psi = \sum_i C_i \Psi_i$ .   (170)

This is what is called the superposition principle.
5. The mean value of a physical magnitude is

$\bar{f} = \sum_i f_i P_i$ ,

where $P_i$ is the probability of measuring $f_i$. Given that

$\hat{f} \Psi_i = f_i \Psi_i$   (171)

and

$\int \Psi_i^{*} \Psi_j\, d^3x = \delta_{ij}$ ,   (172)

and also

$P_i = C_i C_j^{*} \delta_{ij}$ ,   (173)

therefore

$\bar{f} = \sum_i f_i C_i C_j^{*} \delta_{ij} = \sum_{i,j} f_i C_i C_j^{*} \int \Psi_j^{*} \Psi_i\, d^3x$ ,   (174)

$\bar{f} = \sum_{i,j} C_i C_j^{*} \int \Psi_j^{*} \hat{f} \Psi_i\, d^3x$ ,

$\bar{f} = \int \sum_j C_j^{*} \Psi_j^{*}\; \hat{f} \sum_i C_i \Psi_i\, d^3x$ ,   (175)

$\bar{f} = \int \Psi^{*} \hat{f} \Psi\, d^3x$ .   (176)

This is called the expected or mean value. All of this is in perfect agreement with Landau's hypothesis (see Ref. [33]).
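The reduction of Eq. (176) to the weighted mean $\sum_i |C_i|^2 f_i$ can be checked on a hypothetical two-state example (the eigenvalues and amplitudes below are assumed, chosen so the state is normalized):

```python
# Hypothetical two-state system: eigenvalues f1, f2 and amplitudes c1, c2.
f1, f2 = 1.5, -0.5
c1, c2 = 0.6, 0.8j                   # |c1|² + |c2|² = 1

basis = [[1, 0], [0, 1]]             # orthonormal eigenvectors of f̂
psi = [c1 * basis[0][i] + c2 * basis[1][i] for i in range(2)]
f_op = [[f1, 0], [0, f2]]            # f̂ is diagonal in its own eigenbasis

# ⟨Ψ|f̂|Ψ⟩ computed as an explicit inner product
f_psi = [sum(f_op[i][j] * psi[j] for j in range(2)) for i in range(2)]
f_bar = sum(psi[i].conjugate() * f_psi[i] for i in range(2))

# Eq. (176) reduces to Σ_i |C_i|² f_i
assert abs(f_bar - (abs(c1) ** 2 * f1 + abs(c2) ** 2 * f2)) < 1e-12
```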
3.5. Information and matter

Theorem 6. If symmetry is broken, then the variation of information implies a variation of action according to Eq. (187).

Proof. The distribution function of the action is

$F(S) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{\Delta S^2}{2\sigma^2}}$

and the probability is

$P(|S| \leq S_0) = \int_{-S_0}^{S_0} F(S)\, d\Delta S$ ,

whose diagrams are represented in Fig. 5.

$F(S)$ reaches its maximum value $\frac{1}{\sqrt{2\pi}\,\sigma}$ when $\Delta S = 0$, which corresponds to $P = \frac{1}{2}$. So

$P(c) = P(-c) = \dfrac{1}{2}$ ;   (177)
considering that $\Delta S \propto \Delta s$ (the relativistic interval) and $\Delta s^2 = c^2\Delta t^2 - \Delta x^2 = 0$ for light, $\Delta S = 0$ corresponds to the action for light; therefore,

$v = c$ .   (178)

When matter exists, $\Delta S \neq 0$ and $P \neq \frac{1}{2}$: the symmetry is broken.

Let us consider

$0 < \dfrac{\Delta S}{\sigma} \ll 1$ ;   (179)
$P(\Delta S) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \int_0^{\infty} e^{-\frac{\Delta S'^2}{2\sigma^2}}\, d\Delta S' - \dfrac{1}{\sqrt{2\pi}\,\sigma} \int_0^{\Delta S} e^{-\frac{\Delta S'^2}{2\sigma^2}}\, d\Delta S'$ ,

$P(\Delta S) = \dfrac{1}{2} - \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}$ .   (180)

Fig. 5. Distribution function and accumulated probability of action.
In the same way,

$P(-\Delta S) = \dfrac{1}{2} + \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}$ ,

$P(\Delta S) + P(-\Delta S) = 1$ ,   (181)
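The linearization of the Gaussian tail in Eq. (180) can be checked against the exact error-function value; a minimal sketch, assuming $\sigma = 1$ (the paper's units with $\sigma = \hbar$):

```python
import math

sigma = 1.0                  # assumed σ (= ℏ in the paper's units)
dS = 0.01 * sigma            # a small broken-symmetry action, ΔS/σ ≪ 1

# Exact tail probability P(ΔS) = ∫_{ΔS}^{∞} F(S) dS for the Gaussian F
exact = 0.5 * (1 - math.erf(dS / (sigma * math.sqrt(2))))
# Linearized form of Eq. (180)
approx = 0.5 - dS / (math.sqrt(2 * math.pi) * sigma)

assert abs(exact - approx) < 1e-6
assert exact < 0.5           # symmetry is broken: P ≠ 1/2 when ΔS ≠ 0
```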
and the amount of information is

$I_m = -\left(\dfrac{1}{2} - \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}\right) \ln\left(\dfrac{1}{2} - \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}\right) - \left(\dfrac{1}{2} + \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}\right) \ln\left(\dfrac{1}{2} + \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}\right)$ .   (182)

Let us analyze the terms

$\ln\left(\dfrac{1}{2} - \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}\right) = \ln\left[\dfrac{1}{2}\left(1 - \dfrac{2\Delta S}{\sqrt{2\pi}\,\sigma}\right)\right] = \ln\dfrac{1}{2} + \ln\left(1 - \dfrac{2\Delta S}{\sqrt{2\pi}\,\sigma}\right) = \ln\dfrac{1}{2} - \dfrac{2\Delta S}{\sqrt{2\pi}\,\sigma}$ ,   (183)

$\ln\left(\dfrac{1}{2} + \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}\right) = \ln\dfrac{1}{2} + \dfrac{2\Delta S}{\sqrt{2\pi}\,\sigma}$ .   (184)
From here,

$I_m = -\left(\dfrac{1}{2} - \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}\right)\left[\ln\dfrac{1}{2} - \dfrac{2\Delta S}{\sqrt{2\pi}\,\sigma}\right] - \left(\dfrac{1}{2} + \dfrac{\Delta S}{\sqrt{2\pi}\,\sigma}\right)\left[\ln\dfrac{1}{2} + \dfrac{2\Delta S}{\sqrt{2\pi}\,\sigma}\right]$ ,

$I_m = -\ln\dfrac{1}{2} - \dfrac{4\Delta S^2}{2\pi\sigma^2} = \ln 2 - \dfrac{4\Delta S^2}{2\pi\sigma^2}$ .   (185)

The optimal information is

$I_{m\,op} = -\dfrac{1}{2}\ln\dfrac{1}{2} - \dfrac{1}{2}\ln\dfrac{1}{2} = \ln 2$ ;   (186)
from here we obtain

$\Delta I_m = -\dfrac{4\Delta S^2}{2\pi\sigma^2}$   (187)

and

$\sqrt{\Delta I_m} = i\,\dfrac{2\Delta S}{\sqrt{2\pi}\,\sigma}$ or $i\,\dfrac{\Delta S}{\sigma} = \sqrt{\dfrac{\pi \Delta I_m}{2}}$ . □   (188)

Mass is generated due to the interaction

$\dot{p} = -\dfrac{dE}{dx} \;\Rightarrow\; Et = -px \;\Rightarrow\; \Delta S = px - Et = 2px$ .   (189)
For a wave that travels at light speed, we have $\Delta S = 0 = px - Et$,

$\Psi_{p\,\text{light}} = A e^{i\frac{px}{\hbar}}$ ,   (190)

$\Delta \Psi_{p\,\text{light}} = i\,\dfrac{\Delta(px)}{\hbar}\, \Psi_{p\,\text{light}}$   (191)

and the dispersion relation

$\Delta \Psi_{p\,\text{mass}} = i\,\dfrac{\Delta(2px)}{\hbar}\, \Psi_p = i\,\dfrac{\Delta S}{\hbar}\, \Psi_p$   (192)

with

$i\,\dfrac{\Delta S}{\hbar} = \sqrt{\dfrac{\pi \Delta I_m}{2}}$ .   (193)

For the wave that travels at the speed of light, we have to consider the statistical weight $\frac{1}{2}$, which is the probability $P(c) = P(-c) = \frac{1}{2}$:

$\Delta \Psi_{p\,\text{light}} = \dfrac{1}{2}\, i\,\dfrac{\Delta S}{\hbar}\, \Psi_p$ ,   (194)

$\Delta \Psi_{p\,\text{light}} = i\,\dfrac{\Delta S_L}{\hbar}\, \Psi_p$   (195)

with

$\Delta S = 2\Delta S_L$ .   (196)
Note that the angular momentum of the interaction that generates mass is

$\dfrac{\Delta S}{\Delta \phi} = 2\, \dfrac{\Delta S_L}{\Delta \phi}$ ,   (197)

$L_{z\,\text{grav}} = 2\hbar$ ,   (198)

which is the spin of the graviton.

Let us consider the first problem proposed in Feynman's book on the addition of paths and Quantum Mechanics (see Fig. 6).
A free Dirac particle satisfies the following Hamiltonian,

$\hat{H} = c\,(\hat{\alpha} \cdot \hat{p}) + m_0 c^2 \hat{\beta}$   (199)

with

$\hat{\alpha} = \begin{pmatrix} 0 & \hat{\sigma} \\ \hat{\sigma} & 0 \end{pmatrix}$   (200)

and

$\hat{\beta} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ ,   (201)

where $1$ is the identity matrix and the $\hat{\sigma}_i$ are the Pauli spinors. Calculating the speed in one direction, for instance $x$, we have

$\dfrac{i}{\hbar} [\hat{H}, \hat{x}] = \dot{\hat{x}}$ ,   (202)

$\dfrac{i}{\hbar} [c(\hat{\alpha}\cdot\hat{p}) + m_0 c^2 \hat{\beta},\, \hat{x}] = \dfrac{i}{\hbar} [c(\hat{\alpha}\cdot\hat{p}),\, \hat{x}]$ ,

$c\,\dfrac{i}{\hbar} [\hat{\alpha}_x \hat{p}_x + \hat{\alpha}_y \hat{p}_y + \hat{\alpha}_z \hat{p}_z,\, \hat{x}] = c\,\dfrac{i}{\hbar} [\hat{\alpha}_x \hat{p}_x,\, \hat{x}] = c\,\dfrac{i}{\hbar}\, \hat{\alpha}_x [\hat{p}_x,\, \hat{x}] = c\,\dfrac{i}{\hbar}\, \hat{\alpha}_x (-i\hbar) = \hat{\alpha}_x c$ ;   (203)
Fig. 6. Contribution of virtual Dirac particles to the process of generation of mass (paths along $x = ct$ and $x = -ct$ in the $(x, ct)$ plane).
$\dot{\hat{x}} = \hat{\alpha}_x c$, which, applied to a state function, gives

$\dot{\hat{x}} \begin{pmatrix} \Psi_1 \\ \Psi_2 \end{pmatrix} = c\, \hat{\alpha}_x \begin{pmatrix} \Psi_1 \\ \Psi_2 \end{pmatrix}$ .   (204)

Let us calculate the eigenvalues of $\hat{\alpha}_x$:

$\hat{\alpha}_x \begin{pmatrix} \Psi_1 \\ \Psi_2 \end{pmatrix} = \lambda \begin{pmatrix} \Psi_1 \\ \Psi_2 \end{pmatrix}$ ,   (205)

given that $\hat{\alpha}_x^{\dagger} = \hat{\alpha}_x$:

$(\hat{\alpha}_x - \lambda) \begin{pmatrix} \Psi_1 \\ \Psi_2 \end{pmatrix} = 0$ .   (206)

Its determinant is

$\lambda^2 \mathbf{1} - \hat{\sigma}_x^2 = 0$ , $\qquad \hat{\sigma}_x^2 = \mathbf{1}$ ,   (207)

$(\lambda^2 - 1)\mathbf{1} = 0$ ,   (208)

$\lambda = \pm 1$ ;

therefore

$\dot{\hat{x}} \Psi = \pm c\, \Psi$   (209)

and the eigenvalue of the speed is $\pm c$.
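This eigenvalue result can be verified directly on the explicit $4\times 4$ matrix $\hat{\alpha}_x$ of the standard representation assumed in Eqs. (200)-(201): since $\hat{\alpha}_x^2 = \mathbf{1}$ and $\operatorname{tr}\hat{\alpha}_x = 0$, its eigenvalues are $\pm 1$ with equal multiplicity, so the measured speed is $\pm c$.

```python
# α_x in the standard Dirac representation: off-diagonal Pauli σ_x blocks.
alpha_x = [[0, 0, 0, 1],
           [0, 0, 1, 0],
           [0, 1, 0, 0],
           [1, 0, 0, 0]]

# α_x² computed by plain matrix multiplication
square = [[sum(alpha_x[i][k] * alpha_x[k][j] for k in range(4))
           for j in range(4)] for i in range(4)]

identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
assert square == identity                          # α_x² = 1, Eq. (207)
assert sum(alpha_x[i][i] for i in range(4)) == 0   # tr α_x = 0 ⇒ eigenvalues ±1
```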
Feynman calls this particle a Dirac particle and gives it the propagator $i\frac{\Delta S_L}{\hbar}$, so that

$\Delta \Psi_p = i\,\dfrac{\Delta S_L}{\hbar}\, \Psi_p$ .

He divides the segment $OA$ into $n$ parts and builds different paths for the propagator, as shown in Fig. 6, which represents a particle propagating along the $x$-axis with speed $+c$ or $-c$.

Actually, this is a very clever application of Huygens' principle, because every change of direction represents a point from which a wave propagates, as indicated at point $C$ in Fig. 6.

The propagator which goes from $O$ to $A$ is

$K(A, 0) = \sum_{p=0}^{n} \binom{n}{p} \left( \dfrac{i\Delta S_L}{\hbar} \right)^p K(0)$ .   (210)
The number of changes of direction is $N = 2^n$, because of the sum

$\sum_{p=0}^{n} \binom{n}{p} = 2^n$ ,   (211)

which is deduced from the binomial

$(1 + x)^n = \sum_{p=0}^{n} \binom{n}{p} x^p$   (212)

for $x = 1$. The derivative of the binomial is

$n(1 + x)^{n-1} = \sum_{p=0}^{n} p \binom{n}{p} x^{p-1}$ ,

so that

$n\, 2^{n-1} = \sum_{p=0}^{n} p \binom{n}{p}$ .   (213)
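The two counting identities (211) and (213) are easy to confirm exhaustively for a few values of $n$:

```python
from math import comb

# Checks of Eqs. (211) and (213) at several n
for n in (1, 5, 12, 20):
    assert sum(comb(n, p) for p in range(n + 1)) == 2 ** n                 # Eq. (211)
    assert sum(p * comb(n, p) for p in range(n + 1)) == n * 2 ** (n - 1)   # Eq. (213)
```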
The action gained in $p$ changes of direction is

$S_p = 2p \binom{n}{p} \Delta S_L$ ,   (214)

in order to preserve the mirror symmetry between $x$ and $-x$ at $0$. The total gained action is

$S_{\text{total}} = \sum_{p=0}^{n} 2p \binom{n}{p} \Delta S_L = 2n\, 2^{n-1} \Delta S_L = n\, 2^n \Delta S_L$ .   (215)

The mean action through the $ct$ axis is

$\bar{S} = \dfrac{S_{\text{total}}}{N} = \dfrac{n\, 2^n}{2^n} \Delta S_L = n\, \Delta S_L$ ;   (216)

therefore

$\Delta S_L = \dfrac{\bar{S}}{n}$   (217)
and

$K(A, 0) = \left( 1 + \dfrac{i\bar{S}}{\hbar n} \right)^n K(0)$ .   (218)

When $n \to \infty$,

$K(A, 0) = K(0)\, e^{i\frac{\bar{S}}{\hbar}}$   (219)

with

$\bar{S} = m_0 c^2 \Delta t_p$ ,   (220)

where $\Delta t_p$ is the proper time.
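The limit taken from Eq. (218) to Eq. (219) is the standard compound-interest limit in the complex plane; a quick numerical sketch (the value of $\bar{S}/\hbar$ is assumed for the example):

```python
import cmath

S_over_hbar = 0.7                     # assumed value of S̄/ℏ
exact = cmath.exp(1j * S_over_hbar)   # Eq. (219)

n = 10**6
approx = (1 + 1j * S_over_hbar / n) ** n   # Eq. (218) at large n

assert abs(approx - exact) < 1e-5
```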
Therefore, particles do not exist; they are dispersion processes of waves.

Consider an observer moving with speed $v$; then

$\bar{S} = \dfrac{m_0 c^2}{\sqrt{1 - \frac{v^2}{c^2}}}\, \Delta t$ ,   (221)

where $\Delta t$ is not the proper time.

For $v \ll c$,

$\bar{S} \to m_0 c^2 \Delta t + \dfrac{m_0 v^2}{2}\, \Delta t$

with

$v = \dfrac{x_A - x_0}{t_A - t_0}$ , $\qquad \Delta t = t_A - t_0$ .
Therefore,

$K(A, 0) = K(0)\, e^{i\frac{m_0 c^2}{\hbar}\Delta t}\, e^{i\frac{m_0 (x_A - x_0)^2}{2\hbar(t_A - t_0)}}$ ,   (222)

and, smoothing the function because $\frac{m_0 c^2}{\hbar} = \omega$ is a high frequency,

$K(A, 0) = K(0)\, e^{i\frac{m_0 (x_A - x_0)^2}{2\hbar(t_A - t_0)}}$   (223)

or

$K(A, 0) = K(0)\, e^{-\frac{(x_A - x_0)^2}{2\sigma^2}}$ with $\sigma^2 = \dfrac{i\hbar(t_A - t_0)}{m_0}$ .   (224)

Given that

$\int_{-\infty}^{\infty} K(A, 0)\, dx = 1$ ,   (225)

$K(0) \sqrt{\dfrac{2\pi\hbar(t_A - t_0)\,i}{m_0}} = 1$   (226)

and

$K(0) = \sqrt{\dfrac{m_0}{2\pi\hbar i (t_A - t_0)}}$ ,

which gives Feynman's non-relativistic propagator,

$K(A, 0) = \sqrt{\dfrac{m_0}{2\pi\hbar i (t_A - t_0)}}\; e^{i\frac{m_0 (x_A - x_0)^2}{2\hbar(t_A - t_0)}}$ .   (227)
4. Conclusions
1. The Universe is structured by optimal laws. Random processes are those that maximize the mean information; they are strongly related to symmetries and, therefore, to conservation laws. Random processes concern the optimal management of information, but they have nothing to do with its contents. This is the anthropic principle: laws and physical constants are designed to produce life and consciousness; see Ref. [34].
2. Every physical process satisfies the condition that the action is a local minimum. The action is the most important physical magnitude after information, because through it one can obtain the energy, angular momentum, momentum, charge, etc. Every physical theory must satisfy the necessary (but not sufficient) condition $\delta S = 0$, because it is an objective principle.
3. We have demonstrated that the concept of information and the principle of minimum action $\delta S = 0$ lead us to develop the concepts of Quantum Mechanics and to explain spontaneous decay transitions. We have also understood that all physical magnitudes are quadratic mean values of random fluctuations.
4. Information connects everything in nature; each phenomenon is an expression of the totality. Moreover, in quantum gravity this fact is expressed by the Wheeler-DeWitt equation.
5. Mass appears where certain symmetries are broken or, equivalently, when the mean information decreases. A very small decrease in information produces enormous amounts of energy. We have also shown that there exists a relation between the amount of information and the energy of a system (Eq. (187)). Information, mass and energy are conserved quantities which can transform into one another.
6. An interdisciplinary scope is possible only if we suppose that General System Theory is valid, because games and information can be seen as particular cases of complex systems. Quantum Mechanics is obtained in the form of a generalized complex system in which entities, properties and relations conserve their primitive form, with some others added that are related to the nature of the physical system.
References
[1] J. Mycielski, Games with perfect information, in: Handbook of Game Theory, vol. I, 1992 (Chapter 3).
[2] S. Zamir, Repeated games of incomplete information: zero-sum, in: Handbook of Game Theory, vol. I, Wiley, New York, 1992 (Chapter 5).
[3] F. Forges, Repeated games of incomplete information: non-zero-sum, in: Handbook of Game Theory, vol. I, 1992 (Chapter 6).
[4] H. Varian, Analyse Microéconomique, troisième édition, De Boeck Université, Bruxelles, 1995.
[5] S. Braunstein, Quantum Computing, Wiley-VCH, Weinheim, Germany, 1999.
[6] N. Boccara, Modeling Complex Systems, Springer, Heidelberg, 2004.
[7] Y. Bar-Yam, Dynamics of Complex Systems, Addison-Wesley, Reading, MA, 1997.
[8] D. Bouwmeester, A. Eckert, A. Zeilinger, The Physics of Quantum Information, Springer, London, UK, 2001.
[9] R. Myerson, Game Theory: Analysis of Conflict, Harvard University Press, Cambridge, MA, 1991.
[10] L. Landau, Mecánica Cuántica, Editorial Reverté, 1978.
[11] D. Moya, El Campo de Acción, una nueva interpretación de la Mecánica Cuántica, Memorias de Encuentros de Física IV, V, VI, VII y VIII, Politécnica Nacional, 1993.
[12] E. Jiménez, Preemption and attrition in credit/debit cards incentives: models and experiments, in: Proceedings, 2003 IEEE International Conference on Computational Intelligence for Financial Engineering, Hong Kong, 2003a.
[13] E. Jiménez, Quantum games and minimum entropy, in: Lecture Notes in Computer Science, vol. 2669, Springer, Canada, 2003b, pp. 216-225.
[14] E. Jiménez, Quantum games: mixed strategy Nash's equilibrium represents minimum entropy, J. Entropy 5 (4) (2003c) 313-347.
[15] M. Nielsen, I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, Cambridge, United Kingdom, 2000.
[16] C.E. Shannon, A mathematical theory of communication, Bell System Tech. J. 27 (1948) 379-656.
[17] C. Machiavello, G. Palma, A. Zeilinger, Quantum Computation and Quantum Information Theory, World Scientific, London, UK, 2000.
[18] A. Einstein, B. Podolsky, N. Rosen, Phys. Rev. 47 (1935) 777.
[19] J. Stiglitz, La Grande Désillusion, Fayard, Le Livre de Poche, Paris, 2003.
[20] J. Stiglitz, Quand le Capitalisme Perd la Tête, Fayard, Paris, 2003.
[21] J. Eisert, M. Wilkens, M. Lewenstein, Quantum games and quantum strategies, working paper, University of Potsdam, Germany, 2001.
[22] Du Jiangfeng et al., Experimental realizations of quantum games on a quantum computer, mimeo, University of Science and Technology of China, 2002.
[23] D. Meyer, Quantum strategies, Phys. Rev. Lett. 82 (1999) 1052-1055.
[24] D. Meyer, Quantum games and quantum algorithms, in: AMS Contemporary Mathematics Volume: Quantum Computation and Quantum Information Science, USA, 2000.
[25] R. Myerson, Communication, correlated equilibria and incentive compatibility, in: Handbook of Game Theory, vol. II, 1994 (Chapter 24).
[26] D. Fudenberg, J. Tirole, Game Theory, The MIT Press, Cambridge, MA, 1991.
[27] E. Rasmusen, Games and Economic Information: An Introduction to Game Theory, Basil Blackwell, Oxford, 1989.
[28] H. Goldstein, Mecánica Clásica, Editorial Aguilar, 1966, Sections 1.4, p. 25; 2.3, p. 48.
[29] L. Elsgoltz, Ecuaciones Diferenciales y Cálculo Variacional, Editorial MIR, Moscú, 1969, p. 370.
[30] F. Mora, Diccionario de Filosofía, Occam - Cuchilla o Navaja de Occam, Tomo III, p. 2615, Editorial Ariel, Barcelona, 1994a.
[31] F. Mora, Diccionario de Filosofía, Economía, Tomo II E-J, p. 967, Editorial Ariel, Barcelona, 1994b.
[32] P. Hammerstein, R. Selten, Game theory and evolutionary biology, in: Handbook of Game Theory, vol. 2, Elsevier BV, Amsterdam, 1994.
[33] L. Landau, Mecánica Cuántica no Relativista, Reverté SA, 1967, Capítulo I, p. 3.
[34] S. Hawking, El Universo en una Cáscara de Nuez, Crítica/Planeta, Barcelona, edición española, 2002, p. 86.
[35] J. Eisert, S. Scheel, M.B. Plenio, Distilling Gaussian states with Gaussian operations is impossible, Phys. Rev. Lett. 89 (2002) 137903.
[36] D. Moya, P. Alvarez, A. Cruz, Elementos de la Computación Cuántica, I+D Innovación, vol. 7(13), Quito, Ecuador, 2004.
[37] S. Sorin, Repeated games with complete information, in: Handbook of Game Theory, vol. I, 1992 (Chapter 4).