Didier Gonze

October 5, 2012

Introduction

The logistic equation (sometimes called the Verhulst model or logistic growth curve) is a model of population growth first published by Pierre-François Verhulst (1845, 1847). The model is continuous in time, but a modification of the continuous equation to a discrete quadratic recurrence equation, known as the logistic map, is also widely studied.

The continuous version of the logistic model is described by the differential equation:

dN/dt = r N (1 − N/K)    (1)

where r is the Malthusian parameter (rate of maximum population growth) and K is the

carrying capacity (i.e. the maximum sustainable population). Dividing both sides by K

and defining X = N/K then gives the differential equation

dX/dt = r X (1 − X)    (2)

The discrete analogue of this equation, the logistic map, reads

Xn+1 = r Xn (1 − Xn)    (3)

Here we will first describe the fascinating properties of the discrete version of the logistic

equation and then present the continuous form of the equation.

Before reading this section, you are invited to do some exploration using the calculator.

The problem is to understand the behavior of the following innocent-looking difference

equation:

Xn+1 = f(Xn) = r Xn (1 − Xn)    (4)

Let r = 0.5 and X0 = 0.1, and compute X1, X2, ..., X30 using equation (4). Now repeat the process for r = 2.0, r = 2.7, r = 3.2, r = 3.5, and r = 3.8. As r increases you should observe some changes in the type of solution you get.

n    r = 0.5   r = 2.0   r = 2.7   r = 3.2   r = 3.5   r = 3.8
0    0.1000    0.1000    0.1000    0.1000    0.1000    0.1000
1    0.0450    0.1800    0.2430    0.2880    0.3150    0.3420
2    0.0215    0.2952    0.4967    0.6562    0.7552    0.8551
3    0.0105    0.4161    0.6750    0.7219    0.6470    0.4707
4    0.0052    0.4859    0.5923    0.6424    0.7993    0.9467
5    0.0026    0.4996    0.6520    0.7351    0.5614    0.1916
6    0.0013    0.5000    0.6126    0.6231    0.8618    0.5886
7    0.0006    0.5000    0.6407    0.7515    0.4168    0.9202
8    0.0003    0.5000    0.6215    0.5975    0.8508    0.2790
9    0.0002    0.5000    0.6351    0.7696    0.4443    0.7645
10   0.0001    0.5000    0.6257    0.5675    0.8641    0.6842
11   0.0000    0.5000    0.6323    0.7854    0.4109    0.8211
12   0.0000    0.5000    0.6277    0.5393    0.8472    0.5583
13   0.0000    0.5000    0.6310    0.7951    0.4531    0.9371
14   0.0000    0.5000    0.6287    0.5214    0.8673    0.2240
15   0.0000    0.5000    0.6303    0.7985    0.4029    0.6606
16   0.0000    0.5000    0.6292    0.5148    0.8420    0.8519
17   0.0000    0.5000    0.6300    0.7993    0.4657    0.4793
18   0.0000    0.5000    0.6294    0.5133    0.8709    0.9484
19   0.0000    0.5000    0.6298    0.7994    0.3936    0.1861
20   0.0000    0.5000    0.6295    0.5131    0.8353    0.5755
21   0.0000    0.5000    0.6297    0.7995    0.4814    0.9284
22   0.0000    0.5000    0.6296    0.5131    0.8738    0.2527
23   0.0000    0.5000    0.6297    0.7995    0.3860    0.7177
24   0.0000    0.5000    0.6296    0.5130    0.8295    0.7699
25   0.0000    0.5000    0.6296    0.7995    0.4950    0.6731
26   0.0000    0.5000    0.6296    0.5130    0.8749    0.8362
27   0.0000    0.5000    0.6296    0.7995    0.3830    0.5206
28   0.0000    0.5000    0.6296    0.5130    0.8271    0.9484
29   0.0000    0.5000    0.6296    0.7995    0.5005    0.1860
30   0.0000    0.5000    0.6296    0.5130    0.8750    0.5753
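The iterations above are easy to reproduce; here is a minimal Python sketch (the function name is mine):

```python
def logistic_iterates(r, x0, n):
    """Iterate the logistic map X_{n+1} = r * X_n * (1 - X_n), eq. (4)."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Reproduce the first entries of the r = 2.0 column of the table:
seq = logistic_iterates(2.0, 0.1, 30)
print([round(x, 4) for x in seq[:5]])  # [0.1, 0.18, 0.2952, 0.4161, 0.4859]
```

Running it for the other values of r reproduces the remaining columns.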

The first thing you should notice about eq. (4) is that it is non-linear, since it involves a term Xn². Because of this non-linearity, the equation has remarkable, non-trivial properties, but it cannot be solved analytically; we must therefore resort to other methods. This equation and its variants still puzzle mathematicians. Why then should we study non-linear difference (and continuous) equations? Perhaps because mathematicians like to be puzzled, but mainly because biological processes are truly non-linear.

The concept of steady state (or equilibrium) relates to the absence of changes in a system.

In the context of difference equations, the steady state Xss is defined by

Xn+1 = Xn = Xss

(5)

Applying this definition to eq. (4) gives

Xss = r Xss (1 − Xss)    (6)

r Xss² − Xss (r − 1) = 0    (7)

whose solutions are Xss1 = 0 and Xss2 = 1 − 1/r.    (8)

By definition, a stable steady state is a state that can be reached from neighbouring states, whereas an unstable steady state is a state that the system leaves as soon as a small perturbation moves it out of this state. The notion of stability is schematized here. Stability is a local property. It can be assessed by applying a small perturbation xn to the steady state and following the evolution of this perturbation. If it decreases (|xn| > |xn+1| > |xn+2| > ...), the perturbation is damped out and the steady state is stable. Otherwise (|xn| < |xn+1| < |xn+2| < ...), the perturbation is amplified, the system leaves its steady state, and the latter is unstable.

Let's apply this to the logistic equation (for the steady states Xss1 and Xss2). Consider a small perturbation xn that moves the system out of its steady state:

Xss → Xss + xn    (9)

Xn = Xss + xn    (10)

Combining eq (4) and eq (10) we get

xn+1 = Xn+1 − Xss = f(Xn) − Xss = f(Xss + xn) − Xss    (11)

Unfortunately, eq (11) is still not usable because it involves the evaluation of the function f at Xss + xn, which is unknown. Fortunately, there is a trick to overcome this difficulty.


We can indeed exploit the fact that xn is small compared to Xss and develop the function as a Taylor expansion around Xss:

f(Xss + xn) = f(Xss) + (df/dX)|X=Xss · xn + O(xn²)    (12)

The very small terms O(xn²) can be neglected, at least close to the steady state (i.e. when xn is small). This approximation results in some cancellation of terms in eq. (11) because f(Xss) = Xss. Thus the approximation

xn+1 = f(Xss) − Xss + (df/dX)|X=Xss · xn = (df/dX)|X=Xss · xn    (13)

can be written as

xn+1 = a xn    (14)

where

a = (df/dX)|X=Xss    (15)

Clearly, if |a| < 1, the steady state is stable (the perturbation xn tends to 0 as n increases),

while if |a| > 1, the steady state is unstable (the perturbation xn increases as n increases).

In the case of the logistic equation, we have for the steady state Xss1

a = (df/dX)|X=Xss1 = (r − 2rX)|X=Xss1=0 = r    (16)

Similarly, for the second steady state, Xss2, we have

a = (df/dX)|X=Xss2 = (r − 2rX)|X=Xss2=1−1/r = 2 − r    (17)

Since a = r at Xss1, this steady state is stable for r < 1; and since a = 2 − r at Xss2, we conclude that the steady state Xss2 of the logistic equation is stable when 1 < r < 3. The steady state Xss1 thus becomes unstable precisely when the second steady state Xss2 starts to exist and is stable. Now the $1000 question is: what happens when r > 3?
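The stability criterion is easy to check numerically; a minimal sketch (the function name is mine), comparing the multiplier a = 2 − r at Xss2 with the threshold |a| = 1:

```python
def multiplier_at_xss2(r):
    """Evaluate a = df/dX = r - 2 r X at the steady state X_ss2 = 1 - 1/r."""
    x_ss = 1.0 - 1.0 / r
    return r - 2.0 * r * x_ss   # algebraically equal to 2 - r

for r in (2.0, 2.7, 3.2):
    a = multiplier_at_xss2(r)
    print(r, round(a, 4), "stable" if abs(a) < 1 else "unstable")
```

For r = 2.7 this gives a ≈ −0.7 (stable), while r = 3.2 gives a ≈ −1.2 (unstable), consistent with the convergent and oscillating columns of the table.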

[Figure: Steady states of the logistic map as a function of r (0 < r < 4). The branch Xss1 = 0 is stable for r < 1 and unstable beyond; the branch Xss2 = 1 − 1/r is stable for 1 < r < 3 and unstable for r > 3.]

Graphical method

In this section we examine a simple technique to visualize the solutions of a first-order difference equation such as the logistic equation. First, let us draw the graph of f(X), the next-generation function. In our case, f(X) = rX(1 − X), so that f(X) is a parabola passing through 0 at X = 0 and X = 1, and with a maximum at X = 1/2 (red curve in fig. 2).

Choosing an initial value X0 , we can read X1 = f (X0 ) directly from the parabolic curve.

To continue finding X2 = f (X1 ), X3 = f (X2 ), and so on, we need to similarly evaluate

f (X) at each succeeding value of Xn . One way of achieving this is to use the line Xn+1 =

Xn to reflect each value of Xn+1 back to the Xn axis (blue trajectory in fig. 2). This

process, which is equivalent to bouncing between the curves Xn+1 = Xn (diagonal line) and

Xn+1 = f (X) (parabola) is a recursive graphical method for determining the population

level at each iterative step n.

As we can see in figure 2 (for r = 2.8), the sequence of points converges to a single point at the intersection of the parabola with the diagonal line. This point satisfies Xn+1 = Xn and is, by definition, the steady state of the equation. Recall that the condition for stability is |a| = |(df/dX)|X=Xss| < 1. Graphically, this condition means that the tangent line to f(X) at the steady state must have a slope not steeper than 1 (in absolute value).

[Figure 2: Graphical (cobweb) iteration of the logistic map for r = 2.8. Left: Xn+1 versus Xn, with the parabola, the diagonal, and the trajectory starting at X0; right: Xn versus the step n.]

In figure 3, cobweb diagrams and the corresponding time series are shown for increasing values of r. When the parameter r increases, the parabola becomes steeper, which makes the slope of the tangent at the steady state steeper, so that eventually the stability condition is violated. The steady state then becomes unstable and the system undergoes oscillations. When r increases further, the periodic solution becomes unstable and higher-period oscillations are observed. When all the cycles have become unstable, chaos is observed. In the next section, we discuss the period-2 oscillations observed just beyond r = 3 and their stability.

[Figure 3: Cobweb diagrams (Xn+1 versus Xn) and time series (Xn versus step n) for r = 2, r = 3.2, r = 3.5 and r = 3.8.]

Beyond r=3...

We shall resort to the trick proposed by May (1976) to prove that as r increases slightly

beyond r = 3, stable oscillations of period 2 appear. A stable oscillation is a periodic

behavior that is maintained despite small perturbations. Period 2 implies that successive

generations alternate between two fixed values of X, which we will call X1 and X2 . Thus

period 2 oscillations (sometimes called two-point cycles) simultaneously satisfy the two

equations:

Xn+1 = f(Xn)    (18)

Xn+2 = Xn    (19)

Combining these two conditions gives

Xn+2 = f(Xn+1) = f(f(Xn)) = Xn    (20)

Let us define the composite function

g(X) = f(f(X))    (21)

and let k be a new index that counts every other generation:

k = n/2    (22)

With these notations, eq. (20) becomes

Xk+1 = g(Xk)    (23)

The steady state X* of this equation, i.e. the fixed point of g(X), is the period-2 solution of equation (4). Note that there must be two such values, X1 and X2, since by assumption X oscillates between two fixed values.

By this trick, we have reduced the new problem to one with which we are already familiar. Indeed, the stability of a period-2 oscillation can be determined using the method described above. Briefly, suppose an initial small perturbation x: X* → X* + x. Stability implies that the periodic behaviour will be re-established, i.e. that the deviation x from this behaviour will decrease. This will happen when:

|(dg/dX)|X=X*| < 1    (24)

Since g(X) = f(f(X)), the chain rule gives (dg/dX)|X=X1 = (df/dX)|X=X1 · (df/dX)|X=X2, so that the condition becomes

|(df/dX)|X=X1 · (df/dX)|X=X2| < 1    (25)

From this equation, we conclude that the stability of the period-2 oscillations depends on the magnitude of df/dX at the two points X1 and X2.

We will now apply this approach to the logistic equation. First we have to determine the

two fixed points X1 and X2 of equation (4).

To do so, we first make the composite function g(X) = f (f (X)) explicit:

g(X) = f(f(X)) = r (rX(1 − X)) (1 − rX(1 − X)) = r² X (1 − X)(1 − rX(1 − X))    (26)

The fixed points X* of g(X) satisfy

X* = r² X* (1 − X*)(1 − rX*(1 − X*))

Dividing by X* (to exclude the trivial solution X* = 0):

1 = r² (1 − X*)(1 − rX*(1 − X*))

0 = r² (1 − X*)(1 − rX*(1 − X*)) − 1    (27)

In order to solve this third-order polynomial equation, we will use the fact that any solution of eq. (7) is also a solution of eq. (27). Indeed, a steady state of f is a fortiori a fixed point of g:

Xss = Xn = Xn+1 = Xn+2    (28)

g(Xss) = Xss    (29)

As we have seen above, a solution of eq. (7) is X = 1 − 1/r, and this must also be a solution of eq. (27). This enables us to factor the polynomial, so that the problem is reduced to solving a quadratic equation. To do this, we expand eq. (27):

X³ − 2X² + (1 + 1/r) X + 1/r³ − 1/r = 0    (30)

Factoring out (X − 1 + 1/r), we get

(X − 1 + 1/r) (X² − (1 + 1/r) X + 1/r + 1/r²) = 0    (31)

The second factor is a quadratic expression whose roots are solutions of the equation

X² − ((r + 1)/r) X + (r + 1)/r² = 0    (32)

Hence

X* = (1/2) [ (r + 1)/r ± √( ((r + 1)/r)² − 4(r + 1)/r² ) ]    (33)

X1, X2 = ( (r + 1) ± √((r − 3)(r + 1)) ) / (2r)    (34)

The two roots, denoted X1 and X2, are real if r < −1 or r > 3. Thus, for positive values of r, the fixed points of the two-generation map f(f(Xn)) exist only when r > 3. Note that this occurs precisely when Xss = 1 − 1/r ceases to be stable.
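Equation (34) can be checked numerically; a small sketch (the function name is mine), verifying for r = 3.2 that the two roots are mapped onto each other by f and match the values seen in the table:

```python
import math

def period2_points(r):
    """Period-2 points of the logistic map, eq. (34):
    X = ((r + 1) +/- sqrt((r - 3)(r + 1))) / (2 r), real for r > 3."""
    d = math.sqrt((r - 3.0) * (r + 1.0))
    return (r + 1.0 - d) / (2.0 * r), (r + 1.0 + d) / (2.0 * r)

r = 3.2
x1, x2 = period2_points(r)
f = lambda x: r * x * (1 - x)
print(round(x1, 4), round(x2, 4))  # 0.513 0.7995 (cf. the r = 3.2 column)
print(abs(f(x1) - x2) < 1e-12)     # f maps each point onto the other
```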

With X1 and X2 computed, it is possible (albeit algebraically messy) to test their stability. To do so, it is necessary to compute dg/dX and to evaluate this derivative at X1 and X2. When this is done, we obtain a second range of behaviour: the two-point cycle is stable for 3 < r < 1 + √6 ≈ 3.449. Again, we could ask a $1000 question: what happens beyond this value?

In fig. 4, the fourth-order function g(X) (eq. 26) is shown in red. The steady states correspond to the intersections of this function with the diagonal line. Their stability is determined by the slope dg/dX at the steady states.

It should be emphasized that the trick used here to explore period-2 oscillations could be used for any higher period n: n = 3, 4, ... Because the analysis becomes increasingly cumbersome, this method will not be applied further here.

[Figure 4: The second-iterate map Xn+2 = g(Xn) (red curve) and the diagonal, for r = 2, r = 2.8 and r = 3.5.]

Bifurcation diagram

One way of summarizing the range of behaviours encountered as r increases is to construct a bifurcation diagram. Such a diagram gives the value and stability of the steady state and of the periodic orbits (fig. 5). In this diagram, for each value of r, the local maxima and minima of Xn are reported. The transition from one regime to another is called a bifurcation.


Figure 5: Bifurcation diagram. The inset is a zoom on the right part of the diagram. This

diagram is obtained by computing for each value of r the steady state or the maxima and

minima of Xn after the transients, e.g. from X100 to X1000 .
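The procedure described in the caption can be sketched as follows (the function name is mine); each call returns the vertical slice of the diagram at a given r:

```python
def attractor(r, x0=0.1, transient=500, keep=100):
    """Iterate the logistic map, discard the transient, and return the
    sorted set of (rounded) values visited on the attractor."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    values = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        values.add(round(x, 4))
    return sorted(values)

print(len(attractor(2.0)))  # 1 value: the stable steady state
print(attractor(3.2))       # [0.513, 0.7995]: the period-2 cycle
print(len(attractor(3.8)))  # many values: chaos
```

Sweeping r and plotting each returned set against r reproduces Figure 5.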


The schematic representation shown in fig. 6 highlights the structure of the bifurcation diagram: as r increases, the system successively undergoes cycles of period 2, 4, 8, 16, ... Such a sequence is called a period-doubling cascade. It ultimately leads to a chaotic attractor, which is reached after an infinite number of period doublings. Note that on a chaotic attractor, Xn never takes the same value twice. Feigenbaum (1978) studied the behaviour of the ratio

δn = (rn − rn−1) / (rn+1 − rn)    (35)

where rn is the value of r at the nth period-doubling bifurcation, and found that

lim n→∞ δn = 4.66920160910299067185320382...    (36)

This limit is known as the Feigenbaum constant.


Chaotic behaviours are characterized by a high sensitivity to initial conditions: starting from initial conditions arbitrarily close to each other, the trajectories rapidly diverge (Fig. 7). In other words, a small difference in the initial condition produces large differences in the long-term behaviour of the system. This property is sometimes called the butterfly effect.


Figure 7: Sensitivity to initial conditions. Both curves have been obtained for r = 3.8

but differ by their initial conditions: x0 = 0.4 for the blue curve and x0 = 0.41 for the

red curve.
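The divergence shown in Fig. 7 can be reproduced directly; a short sketch (the function name is mine) comparing the two trajectories:

```python
def iterate(r, x0, n):
    """Return the trajectory x0, x1, ..., xn of the logistic map."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Same r = 3.8 and initial conditions as in Fig. 7:
a = iterate(3.8, 0.40, 30)
b = iterate(3.8, 0.41, 30)
gaps = [abs(u - v) for u, v in zip(a, b)]
print(round(gaps[0], 3), round(max(gaps), 3))  # initial gap 0.01, later gaps much larger
```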

Another way to appreciate the sensitivity to initial condition is to observe the evolution

of a small interval of initial conditions (Fig. 8).


Figure 8: Evolution of the interval [0.47, 0.48]. The red dots indicate the initial boundaries of the interval (i.e. x0 = 0.47 and x0 = 0.48). These results have been obtained for r = 4.


The sensitivity to initial conditions can be quantified by the Lyapunov exponent. Given an initial condition x0, consider a nearby point x0 + δ0, where the initial separation δ0 is extremely small. Let δn be the separation after n iterations. If

|δn| ≈ |δ0| e^(λn)    (37)

then λ is called the Lyapunov exponent.


A precise and computationally useful formula for λ can be derived. By definition:

δn = f^n(x0 + δ0) − f^n(x0)    (38)

so that

λ = (1/n) ln |δn / δ0|
  = (1/n) ln |(f^n(x0 + δ0) − f^n(x0)) / δ0|
  = (1/n) ln |(f^n)'(x0)|    (39)

where we have taken the limit δ0 → 0 and applied the definition of the derivative:

f'(x) = lim Δx→0 (f(x + Δx) − f(x)) / Δx    (40)

The term inside the logarithm can be expanded by the chain rule:

(f^n)'(x0) = (f(f(f(...f(x0)))))'
           = f'(x0) · f'(f(x0)) · f'(f(f(x0))) · ...
           = f'(x0) · f'(x1) · f'(x2) ... f'(xn−1)
           = ∏ i=0..n−1 f'(xi)    (41)

Hence:

λ = (1/n) ln | ∏ i=0..n−1 f'(xi) | = (1/n) ∑ i=0..n−1 ln |f'(xi)|    (42)

If this expression has a limit as n → ∞, we define that limit as the Lyapunov exponent for the trajectory starting at x0:

λ = lim n→∞ [ (1/n) ∑ i=0..n−1 ln |f'(xi)| ]    (43)

Note that λ depends on x0. However, it is the same for all x0 in the basin of attraction of a given attractor. The sign of λ is characteristic of the attractor type: for stable fixed points (steady states) and (limit) cycles, λ is negative; for chaotic attractors, λ is positive.

For the logistic map,

f(x) = r x (1 − x)    (44)

f'(x) = r − 2rx,    (45)

we have, by definition:

λ(r) = lim n→∞ [ (1/n) ∑ i=0..n−1 ln |r − 2rxi| ]    (46)

In practice, the limit is approximated by a finite sum, e.g.

λ(r) ≈ (1/1000) ∑ i=0..999 ln |r − 2rxi|    (47)
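Equation (47) translates directly into code; a minimal sketch (the function name is mine), with transients discarded as in Fig. 10:

```python
import math

def lyapunov(r, x0=0.1, transient=200, n=5000):
    """Estimate the Lyapunov exponent of the logistic map, eq. (47):
    the average of ln|f'(x_i)| = ln|r - 2 r x_i| along the orbit."""
    x = x0
    for _ in range(transient):       # discard transients
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r - 2.0 * r * x))
        x = r * x * (1 - x)
    return total / n

print(round(lyapunov(2.9), 3))  # negative: stable fixed point
print(round(lyapunov(3.9), 3))  # positive: chaos
```

Sweeping r over (3, 4) with this function reproduces Figure 10.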

Figure 10 shows the Lyapunov exponent computed for the logistic map, for 3 < r < 4. We notice that λ remains negative for r < r∞ ≈ 3.57 and approaches 0 at the period-doubling bifurcations. The negative spikes correspond to the 2^n cycles. The onset of chaos is visible near r = r∞, where λ becomes positive. For r > r∞, windows of periodic behaviour are clearly visible (spikes with λ < 0).


Figure 10: Lyapunov exponent for 3 < r < 4 (transients = 200; number of iterations =

5000).


Marotto (1982) studied the following discrete map:

Xn+1 = r Xn² (1 − Xn)    (48)

The logistic equation can be generalized as:

Xn+1 = r Xn^p (1 − Xn)^q    (49)

The dynamical properties of such equations have been investigated by Levin & May (1976), Hernandez-Bermejo & Brenig (2006), Briden & Zhang (1994, 1995), and others.

A delayed version of the discrete logistic equation was proposed by Maynard Smith (1968):

Xn+1 = r Xn (1 − Xn−1)    (50)

Note that if we define the new variable Yn = Xn−1, this second-order equation can be converted into a system of two first-order equations:

Xn+1 = r Xn (1 − Yn)
Yn+1 = Xn    (51)

The tent map is defined by:

Xn+1 = r xn          if xn < 1/2
Xn+1 = r (1 − xn)    if xn ≥ 1/2    (52)
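A minimal sketch of the tent map (the function name is mine); for the rational initial condition 0.1 and r = 2, the orbit quickly falls onto the values 0.4 and 0.8:

```python
def tent(r, x):
    """One iteration of the tent map, eq. (52)."""
    return r * x if x < 0.5 else r * (1.0 - x)

x, orbit = 0.1, []
for _ in range(5):
    x = tent(2.0, x)
    orbit.append(round(x, 4))
print(orbit)  # [0.2, 0.4, 0.8, 0.4, 0.8]
```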

The continuous form of the logistic equation is written

dX/dt = r X (1 − X)    (53)

This equation can be solved either numerically, using a standard integration algorithm, or analytically.

Let's apply Euler's method to the differential equation

dX/dt = X (1 − X)    (54)

The Euler scheme with time step Δt reads

Xn+1 − Xn = Xn (1 − Xn) Δt    (55)

Xn+1 = Xn + Xn (1 − Xn) Δt = Xn (1 + Δt − Xn Δt)    (56)

If we define Yn = (Δt / (1 + Δt)) Xn and r = 1 + Δt, then we find

Yn+1 = (Δt / (1 + Δt)) Xn (1 + Δt − Xn Δt) = Yn (r − r Yn) = r Yn (1 − Yn)    (57)

In other words, the Euler iterates of the continuous logistic equation follow the discrete logistic map.

The Euler method therefore does not work well for solving the continuous logistic equation. Other integration methods (such as Runge-Kutta methods), however, give reliable solutions (Fig. 11).
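The correspondence in eq. (57) explains the failure: with r = 1 + Δt, a large step Δt puts the Euler iterates in the oscillatory regime of the logistic map, even though the continuous solution converges monotonically to X = 1. A small sketch (the function name is mine):

```python
def euler_logistic(dt, x0, n):
    """Euler iterates X_{n+1} = X_n + X_n (1 - X_n) dt, eq. (56)."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x + x * (1.0 - x) * dt)
    return xs

print(round(euler_logistic(0.1, 0.1, 200)[-1], 4))   # 1.0: correct steady state
tail = euler_logistic(2.5, 0.1, 200)[-6:]            # dt = 2.5, i.e. r = 3.5
print([round(x, 3) for x in tail])                   # spurious sustained oscillation
```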



Figure 11: Numerical solution of the logistic equation obtained for r = 0.5, for various initial conditions X(0).

The logistic equation (53) can be solved analytically and the solution is

X(t) = 1 / (1 + (1/X0 − 1) e^(−rt))    (58)

where X0 = X(0). The demonstration is left as an exercise.
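The solution (58) can be checked against a numerical integration; a sketch using a hand-rolled classical Runge-Kutta step (all function names are mine):

```python
import math

def exact(t, x0, r=0.5):
    """Closed-form solution of dX/dt = r X (1 - X), eq. (58)."""
    return 1.0 / (1.0 + (1.0 / x0 - 1.0) * math.exp(-r * t))

def rk4(x0, r=0.5, t_end=10.0, dt=0.01):
    """Classical 4th-order Runge-Kutta integration of the same equation."""
    f = lambda x: r * x * (1.0 - x)
    x = x0
    for _ in range(int(round(t_end / dt))):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return x

print(abs(rk4(0.1) - exact(10.0, 0.1)))  # tiny: the two solutions agree
```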

It sometimes occurs that we have a differential equation for a system, but we are only

interested in the behavior at fixed time intervals. For instance, the logistic differential

equation is sometimes used to model population growth, but we might only have census

data at intervals of five or ten years. It then makes little sense to look at the whole

continuous solution.

Suppose for instance that we want solutions of the logistic differential equation (not solutions of some numerical approximation like the Euler method iterates) at fixed intervals

T . The solution of the logistic differential equation is (cf. eq (58)).

X(t) = 1 / (1 − ((X0 − 1)/X0) e^(−rt))    (59)

X(t) = X0 / (X0 − (X0 − 1) e^(−rt))    (60)

Solving eq. (60) for the exponential gives

e^(−rt) = X0 (X(t) − 1) / (X(t) (X0 − 1))    (61)

X(t + T) = X0 / (X0 − (X0 − 1) e^(−r(t+T)))    (62)

If we substitute for the exponential in eq (62) using eq (61) we get, after a little rearranging,

X(t + T) = X(t) / (X(t) − (X(t) − 1) e^(−rT))    (63)

This last equation is a solution map. It lets us calculate X(t + T ) knowing only X(t) and

some parameters.

Unlike equation (57), which gives approximations to the solution of the logistic differential equation at fixed intervals Δt, equation (63) is exact. We were able to obtain this equation because we were able to solve the differential equation. In general, of course, we can't do that, but we can still obtain numerical representations of the solution map by sampling the numerical solution (obtained with a good numerical method, of course) at fixed time intervals and plotting X(t + T) versus X(t). This is sometimes called a Ruelle plot.
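Equation (63) in code (the function name is mine); iterating it samples the exact solution at t = 0, T, 2T, ...:

```python
import math

def solution_map(x, r=0.5, T=1.0):
    """Exact solution map of the logistic equation, eq. (63):
    advances the solution by a fixed time T in one step."""
    return x / (x - (x - 1.0) * math.exp(-r * T))

x, samples = 0.1, [0.1]
for _ in range(10):
    x = solution_map(x)
    samples.append(x)
print([round(s, 4) for s in samples[:4]])  # [0.1, 0.1548, 0.232, 0.3324]
```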

[Figure: Continuous solution of the logistic equation, solution-map samples (T = 1), and Euler iterates (dt = 0.5), as functions of time.]

As for the discrete version, the continuous logistic equation can be generalized:

dX/dt = r X^p (1 − X/K)^q    (64)

A delay variant of the logistic equation was studied by Cunningham (1954), Wangersky

& Cunningham (1956), and more recently by Arino et al (2006). It can be formulated as:

dX/dt = r X (1 − X(t − τ)/K)    (65)

References

Text books

Edelstein-Keshet L (2005; originally 1988) Mathematical Models in Biology. SIAM Editions.

Glass L & Mackey MC (1988) From Clocks to Chaos. Princeton Univ. Press.

Murray JD (1989) Mathematical Biology. Springer, Berlin.

Nicolis G (1995) Introduction to Nonlinear Science. Cambridge Univ. Press.

Original papers

Verhulst PF (1845) Recherches mathématiques sur la loi d'accroissement de la population. Nouv. Mém. de l'Académie Royale des Sci. et Belles-Lettres de Bruxelles 18:1-41.

Verhulst PF (1847) Deuxième mémoire sur la loi d'accroissement de la population. Mém. de l'Académie Royale des Sci., des Lettres et des Beaux-Arts de Belgique 20:1-32.

Cunningham WJ (1954) A nonlinear differential-difference equation of growth. Proc Natl Acad Sci USA 40:708-13.

Wangersky PJ, Cunningham WJ (1956) On time lags in equations of growth. Proc Natl Acad Sci USA 42:699-702.

Maynard Smith J (1968) Mathematical Ideas in Biology. Cambridge University Press. (p. 23)

May RM (1974) Biological populations with nonoverlapping generations: stable points, stable cycles, and chaos. Science 186:645-7.

May RM (1975) Biological populations obeying difference equations: stable points, stable cycles, and chaos. J Theor Biol 51:511-24.

May RM (1976) Simple mathematical models with very complicated dynamics. Nature 261:459-467.

Levin SA, May RM (1976) A note on difference-delay equations. Theor Popul Biol 9:178-87.

Feigenbaum MJ (1978) Quantitative universality for a class of non-linear transformations. J. Stat. Phys. 19:25-52.

Marotto FR (1982) The dynamics of a discrete population model with threshold. Math. Biosci. 58:123-128.

Briden W, Zhang S (1994) Stability of solutions of generalized logistic difference equations. Periodica Mathematica Hungarica 9:81-87.

Arino J, Wang L, Wolkowicz GS (2006) An alternative formulation for a delayed logistic equation. J Theor Biol 241:109-19.

Hernandez-Bermejo B, Brenig L (2006) Some global results on quasipolynomial discrete systems. Nonlin Anal-Real World Applic 7:486-496.
