
Modeling and Simulation in Science,

Engineering and Technology

Alberto d'Onofrio
Editor

Bounded Noises
in Physics,
Biology, and
Engineering
Modeling and Simulation in Science, Engineering and Technology
Series Editor
Nicola Bellomo
Politecnico di Torino
Torino, Italy

Editorial Advisory Board


K.J. Bathe, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
M. Chaplain, Division of Mathematics, University of Dundee, Dundee, Scotland, UK
P. Degond, Institut de Mathématiques de Toulouse, CNRS and Université Paul Sabatier, Toulouse, France
A. Deutsch, Center for Information Services and High-Performance Computing, Technische Universität Dresden, Dresden, Germany
M.A. Herrero García, Departamento de Matemática Aplicada, Universidad Complutense de Madrid, Madrid, Spain
P. Koumoutsakos, Computational Science & Engineering Laboratory, ETH Zürich, Zürich, Switzerland
H.G. Othmer, Department of Mathematics, University of Minnesota, Minneapolis, MN, USA
K.R. Rajagopal, Department of Mechanical Engineering, Texas A&M University, College Station, TX, USA
T. Tezduyar, Department of Mechanical Engineering & Materials Science, Rice University, Houston, TX, USA
A. Tosin, Istituto per le Applicazioni del Calcolo "M. Picone", Consiglio Nazionale delle Ricerche, Roma, Italy

For further volumes:


http://www.springer.com/series/4960
Alberto d'Onofrio
Editor

Bounded Noises in Physics,


Biology, and Engineering
Editor
Alberto d'Onofrio
Department of Experimental Oncology
European Institute of Oncology
Milan, Italy

ISSN 2164-3679 ISSN 2164-3725 (electronic)


ISBN 978-1-4614-7384-8 ISBN 978-1-4614-7385-5 (eBook)
DOI 10.1007/978-1-4614-7385-5
Springer New York Heidelberg Dordrecht London
Library of Congress Control Number: 2013944907

Mathematics Subject Classification (2010): 60Gxx, 60H10, 82C31, 37Hxx, 60H15, 82Cxx, 92-XX,
92C40, 34K18, 34A08, 93-XX

© Springer Science+Business Media New York 2013


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered
and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of
this publication or parts thereof is permitted only under the provisions of the Copyright Law of the
Publisher's location, in its current version, and permission for use must always be obtained from Springer.
Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations
are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.birkhauser-science.com)


To Ninuccia Sabelli (1928–2010), my beloved
mom: a courageous captain in the stormy
sea of life.
Preface

Since the hallmark seminal works on Brownian motion by Einstein and Langevin,
Gaussian noises (GNs) have been one of the main concepts used in non-equilibrium
statistical physics and one of the main tools of its applications, from engineering
to biology. The later, and quite dichotomic, mathematical works by Itô and
Stratonovich laid a firm theoretical basis for the mathematical theory of stochastic
differential equations, as well as a long-lasting, and currently unresolved,
controversy on which of the two approaches is best suited for describing
mathematical models of the real world. Other hallmarks in stochastic physics were,
in the 1970s, the birth, in the framework of the Ilya Prigogine school, of the theory
of noise-induced transitions by Horsthemke and Lefever, and, in the early 1980s, in
the framework of the Rome school, the introduction of the concept of stochastic
resonance, first proposed by Benzi, Parisi, Sutera, and Vulpiani to model climatic
changes. Finally, in nonlinear analysis, starting from the 1990s a rigorous theory of
stochastic bifurcations, both phenomenological and dynamical, has been and is
being developed.
As far as the many applications of stochastic dynamical systems are concerned,
in biology and biochemistry noise and noise-induced phenomena are acquiring a
(somewhat unforeseen) fundamental relevance, due to recent discoveries showing
the constructive role of noise in some biological functions, for example cellular
differentiation. The increasing importance of noise in understanding intra- and
intercellular mechanisms can indeed be summarized with the motto "noise is
not a nuisance".
The above-summarized body of research is essentially based on the use of GNs,
which is grounded in the Central Limit Theorem and which is, it must be
clearly said here, the best approximation of reality in many cases. However, since
the 1960s an increasing number of experimental data have motivated theoretical
studies stressing that many real-life stochastic processes do not follow white or
colored Gaussian laws, but other densities such as fat-tailed power laws. Although
this is not the topic of this book, it is important to recall the pioneering studies by
Benoit Mandelbrot and his introduction of the concept of fractional Brownian motion.


More recently, a vast body of research focused on another important class of


non-Gaussian stochastic processes: the bounded noises. Previously, in the above-
summarized historical framework, studies on bounded noises, apart from some
sporadic exceptions, were mainly confined to applications of telegraph noise,
nowadays better known as dichotomous Markov noise (DMN). In the last 20 years,
together with a renewal of theoretical interest in DMN, other classes of bounded
noises have been defined and intensively studied in the statistical physics and
stochastic engineering communities, and, to a lesser degree, in mathematics and
quantitative biology.
The rise of scientific interest in bounded noises is motivated by the fact that in
many applications both GNs and fat-tailed stochastic processes are an inadequate
mathematical model of the physical world because they are unbounded. This should
preclude their use to model stochastic fluctuations affecting parameters of linear or
nonlinear dynamical systems, which must be bounded. Moreover, in many relevant
cases, especially in biology, the parameters must also be strictly positive. As a
consequence, not taking into account the bounded nature of stochastic fluctuations
may lead to unrealistic model-based inferences. For example, in many cases the
onset of noise-induced transitions depends on the variance of the noise crossing a
threshold. In the case of GNs this often means making a parameter negative or
excessively large. To give an example taken from real life, a GN-based model
of the unavoidable fluctuations affecting the pharmacokinetics of an antitumor drug
delivered by continuous infusion leads to the paradox that the probability
that the drug increases the number of tumor cells may become nonzero, which
is absurd. The problems sometimes induced by the scientific artifacts caused by
a bona fide but uncritical use of GN-based models of noise may go beyond the
purely scientific framework, especially in engineering and other applications, where
the economic side effects of bad modeling are a relevant issue. For example, in
probabilistic engineering the use of unbounded noises leads to overconservative
design, which induces a remarkable increase in costs. To avoid these
problems, stochastic models should in these cases be built on bounded noises.
The deepening and development of theoretical studies on bounded noises have
brought to the attention of a vast readership new phenomena, such as the dependence
of transitions or of stochastic resonance on the specific model of noise that has
been adopted. This means that, in the absence of experimental data on the density
and spectrum of the stochastic fluctuations for the problem under study, a scientific
work should often compare multiple kinds of possible stochastic perturbations.
Moreover, the bounded noise approach currently also implies that the possibility of
obtaining analytical results is remarkably reduced or sometimes lost entirely.
Indeed, in this field models are never based on a single scalar stochastic equation,
since even the simplest problems under study are at least two-dimensional, with one
or more additional equations devoted to modeling the bounded stochastic
processes.
The aim of this collective book is to give, through a series of contributions by
world-leading scientists, an overview of the state of the art of the theory of
bounded noises and of its applications in the domains of statistical physics,
biology, and engineering.
Quite surprisingly, given that in the last 15 years an increasingly large body of
research has been and is being published on the subtle effects of bounded noises
on dynamical systems, this volume is probably the first book really devoted to the
general theory and applications of bounded noises. It is a pleasure to remind the
reader that a single monographic volume was published in 2000, by Springer, on
a similar topic: Bounded Dynamic Stochastic Systems: Modeling and Control by
Prof. Hong Wang, which was focused on industrial applications and was mainly
devoted to some innovative approximation methods introduced by its author. By
contrast, our collective work is a basic science book.
This volume is organized into four parts.
The first part is entitled "Modeling of Bounded Noises and Their Applications
in Physics", and it includes contributions both on the definition of the main kinds
of bounded noises and on their applications in theoretical physics. Indeed, at this
moment the theory of bounded stochastic processes is intimately linked to its
applications to mathematical and statistical physics, and it would be extremely
difficult and unnatural to separate theory from physical applications. In the first
contribution of the book, Zhu and Cai illustrate two major classes of bounded
noises, the randomized harmonic model and the nonlinear filter model, together
with their statistical properties, as well as effective algorithms to simulate them
numerically. The second contribution is written by the pioneer of the theory of
bounded stochastic processes, Prof. Dimentberg, who first introduced the
randomized harmonic model in 1988 as a representation of a periodic process with
randomized phase modulation. In his contribution, Prof. Dimentberg focuses on the
dynamics of the classical linear oscillator under external or parametric bounded
excitations, with an excursus into an important nonlinear case. Another major
example of bounded noise is the one based on Tsallis statistics (aka the
Tsallis–Borland noise). This noise is introduced here in the contribution by Wio
and Deza, who also illustrate its effects in the most important noise-induced
phenomena, such as stochastic resonance and noise-induced transitions. Properties
of dynamical systems driven by DMN are investigated in the contribution by
Ridolfi and Laio, who also focus on the application of DMN in environmental
sciences.
Stochastic oscillators are a central topic in statistical physics, as confirmed
by the next two chapters. The first, by Gitterman, is devoted to the study of Brownian
motion with adhesion, i.e., an oscillator with a random mass, for which the particles
of the surrounding medium adhere to the oscillator for some random time after the
collision. The second, by Bobryk, is devoted to the numerical study of energetic
stability for a harmonic oscillator with a fluctuating damping parameter, where the
stochastic perturbation is modeled by means of the sine-Wiener noise, a particular
case of the above-mentioned randomized harmonic model. In the next chapter
Hasegawa applies a moment method (MM) to the Langevin model for a Brownian
particle subjected to the above-mentioned Tsallis–Borland noise.
The previous chapters focused on nonspatial problems. Space is introduced
in the last paper of this part, written by de Franciscis and d'Onofrio, where the
Tsallis–Borland, Cai–Lin, and sine-Wiener noises are extended to the spatiotemporal
setting. Phase transitions induced by these noises are considered for the case of
the real Ginzburg–Landau model with additive perturbations.
The second part, "Bounded Noises in the Framework of Discrete and Continuous
Random Dynamical Systems", is devoted to framing bounded noises within the theory
of random dynamical systems and random bifurcations. The first article, by Homburg,
Young, and Gharaei, is a mathematically very rigorous report on changes of
stationary measures of random differential equations with bounded noise and, in
particular, on Hopf–Andronov bifurcations. Indeed, if the perturbation is bounded,
one can consider nonlinear stochastic equations far more general than the Langevin
equations, where the dependence on the noise is linear. Moreover, an important
difference between Gaussian and bounded non-Gaussian perturbations is that in the
latter case the stationary measure associated with the noisy system may be
non-unique. Then Rodrigues, de Moura, and Grebogi propose an article on the
effects of bounded random perturbations on discrete dynamical systems. This class
of dynamical systems, even when unperturbed, easily exhibits very complex dynamics,
which become even richer in the presence of bounded noises.
The third part, "Bounded Stochastic Fluctuations in Biology", is devoted to the
application of bounded stochastic processes in biology, one of the major areas of
potential applications of this subject. The first two works are devoted to mathemat-
ical oncology, whereas the third is on cellular biochemistry (aka systems biology).
The first chapter, by d'Onofrio and Gandolfi, shows that bounded realistic changes
in the pharmacokinetics and pharmacodynamics of antitumor drugs may induce a
form of nongenetic resistance to therapies. The second paper, by Guo and Mei,
is on the interplay of noise cross-correlations and unavoidable biological delays
in the dynamics of an immunogenic tumor. Indeed, in biological systems matter
transport (e.g., due to intercellular chemical or electrical signaling propagation) and
more complex phenomena (such as cellular maturation) induce delays that must
be included in their mathematical models, resulting in delay-differential equations.
The third paper, by Caravagna, Mauri, and d'Onofrio, is on the dynamics of
biomolecular networks in the presence of both intrinsic noise and extrinsic bounded
fluctuations. From the mathematical point of view, these networks of chemical
reactions are modeled by doubly stochastic processes, i.e., time-inhomogeneous
birth–death nonlinear processes with randomly varying rate constants.
Finally, the last (but not least!) section, "Bounded Noises: Applications in
Engineering", concerns applications of bounded stochastic processes in mechanical
engineering, the area where the renewal of interest in non-Gaussian bounded noises
started, and in control theory. The first paper, by Deng, Xie, and Pandey, considers
the stochastic stability of fractional viscoelastic systems driven by bounded noises.
Indeed, the mechanical (or dielectric) behavior of many materials seems endowed
with memory, so that their dynamics cannot be described by ordinary equations: they
require the use of fractional differential equations. In this way one can fully take into
account the non-exponential, long-tailed relaxations that are found experimentally. The
second article is by Field and Grigoriu, who illustrate the problem of model selection
for random functions with bounded range. This is an intriguing problem because
the available information on input and system properties is typically limited, and
as a consequence there may be more than one model consistent with the
available information. Finally, Milanese, Ruiz, and Taragna examine the filter design
problem for linear time-invariant dynamical systems when no mathematical model
is available, but a set of initial experiments can be performed in which the variable
to be estimated is also measured.
The above division of the present volume into four parts has, however, to be
understood as loose and partially artificial, since the vast majority of the articles
published here are interdisciplinary.
We hope that this volume may trigger new studies in the field of bounded
stochastic processes and that it may be read by an interdisciplinary audience, or
by readers who are willing to extend their expertise to new domains.
I finally thank Prof. Nicola Bellomo and Birkhäuser Science for having allowed
this book to exist, and for their cooperative attitude (and remarkable patience!)
during the development of this volume.

Milan, Italy Alberto d'Onofrio


Contents

Part I Modeling of Bounded Noises and Their Applications


in Physics

1 On Bounded Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3


W.Q. Zhu and G.Q. Cai
2 Dynamics of Systems with Randomly Disordered Periodic
Excitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
M. Dimentberg
3 Noise-Induced Phenomena: Effects of Noises Based
on Tsallis Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Horacio S. Wio and Roberto R. Deza
4 Dynamical Systems Driven by Dichotomous Noise . . . . . . . . . . . . . . . . . . . . . 59
Luca Ridolfi and Francesco Laio
5 Stochastic Oscillator: Brownian Motion with Adhesion . . . . . . . . . . . . . . . 79
M. Gitterman
6 Numerical Study of Energetic Stability for Harmonic
Oscillator with Fluctuating Damping Parameter . . . . . . . . . . . . . . . . . . . . . . . 99
Roman V. Bobryk
7 A Moment-Based Approach to Bounded Non-Gaussian
Colored Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Hideo Hasegawa
8 Spatiotemporal Bounded Noises and Their Application
to the Ginzburg–Landau Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Sebastiano de Franciscis and Alberto d'Onofrio


Part II Bounded Noises in the Framework of Discrete


and Continuous Random Dynamical Systems

9 Bifurcations of Random Differential Equations


with Bounded Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Ale Jan Homburg, Todd R. Young, and Masoumeh Gharaei
10 Effects of Bounded Random Perturbations on Discrete
Dynamical Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Christian S. Rodrigues, Alessandro P.S. de Moura,
and Celso Grebogi

Part III Bounded Stochastic Fluctuations in Biology

11 Bounded Stochastic Perturbations May Induce


Nongenetic Resistance to Antitumor Chemotherapy . . . . . . . . . . . . . . . . . . . 171
Alberto d'Onofrio and Alberto Gandolfi
12 Interplay Between Cross Correlation and Delays
in the Sine-Wiener Noise-Induced Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Wei Guo and Dong-Cheng Mei
13 Bounded Extrinsic Noises Affecting Biochemical
Networks with Low Molecule Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Giulio Caravagna, Giancarlo Mauri, and Alberto d'Onofrio

Part IV Bounded Noises: Applications in Engineering

14 Almost-Sure Stability of Fractional Viscoelastic Systems


Driven by Bounded Noises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Jian Deng, Wei-Chau Xie, and Mahesh D. Pandey
15 Model Selection for Random Functions with Bounded
Range: Applications in Science and Engineering . . . . . . . . . . . . . . . . . . . . . . . 247
R.V. Field, Jr. and M. Grigoriu
16 From Model-Based to Data-Driven Filter Design . . . . . . . . . . . . . . . . . . . . . . . 273
M. Milanese, F. Ruiz, and M. Taragna
Contributors

Roman V. Bobryk Institute of Mathematics, Jan Kochanowski University, Kielce,


Poland
G.Q. Cai Department of Ocean and Mechanical Engineering, Florida Atlantic
University, Boca Raton, FL, USA
Giulio Caravagna Dipartimento di Informatica, Sistemistica e Comunicazione,
Università degli Studi Milano-Bicocca, Milan, Italy
Sebastiano de Franciscis Department of Experimental Oncology, European
Institute of Oncology, Milan, Italy
Alessandro P.S. de Moura Department of Physics and Institute for Complex
Systems and Mathematical Biology, King's College, University of Aberdeen,
Aberdeen, UK
Jian Deng Department of Civil and Environmental Engineering, University of
Waterloo, Waterloo, ON, Canada
Roberto R. Deza Instituto de Física de Cantabria, Universidad de Cantabria and
CSIC, Santander, Spain

IFIMAR, Universidad Nacional de Mar del Plata and CONICET, Mar del Plata,
Argentina
M. Dimentberg Department of Mechanical Engineering, Worcester Polytechnic
Institute, Worcester, MA, USA
Alberto d'Onofrio Department of Experimental Oncology, European Institute of
Oncology, Milan, Italy
R.V. Field Sandia National Laboratories, Albuquerque, NM, USA
Alberto Gandolfi Istituto di Analisi dei Sistemi ed Informatica "A. Ruberti", CNR,
Viale Manzoni 30, Roma, Italy


Masoumeh Gharaei KdV Institute for Mathematics, University of Amsterdam,


Amsterdam, The Netherlands
M. Gitterman Department of Physics, Bar Ilan University, Ramat Gan, Israel
Celso Grebogi Department of Physics and Institute for Complex Systems and
Mathematical Biology, King's College, University of Aberdeen, Aberdeen, UK
M. Grigoriu Cornell University, Ithaca, NY, USA
Wei Guo Department of Physics, Yunnan University, Kunming, China
H. Hasegawa Tokyo Gakugei University, Tokyo, Japan
Ale Jan Homburg KdV Institute for Mathematics, University of Amsterdam,
Amsterdam, The Netherlands

Department of Mathematics, VU University Amsterdam, Amsterdam, The Netherlands
Francesco Laio DIATI, Politecnico di Torino, Torino, Italy
Giancarlo Mauri Dipartimento di Informatica, Sistemistica e Comunicazione,
Università degli Studi Milano-Bicocca, Milan, Italy
Dong-Cheng Mei Department of Physics, Yunnan University, Kunming, China
Mario Milanese Politecnico di Torino, Dipartimento di Automatica e Informatica,
Corso Duca degli Abruzzi 24, Torino, Italy
Mahesh D. Pandey Department of Civil and Environmental Engineering, Univer-
sity of Waterloo, Waterloo, ON, Canada
Luca Ridolfi DIATI, Politecnico di Torino, Torino, Italy
Christian S. Rodrigues Max Planck Institute for Mathematics in the Sciences,
Leipzig, Germany
Fredy Ruiz Pontificia Universidad Javeriana, Jefe Sección de Control Automático,
Departamento de Electrónica, D.C., Colombia
Michele Taragna Politecnico di Torino, Dipartimento di Automatica e Informat-
ica, Corso Duca degli Abruzzi 24, Torino, Italy
Horacio S. Wio Instituto de Física de Cantabria, Universidad de Cantabria and
CSIC, Santander, Spain
Wei-Chau Xie Department of Civil and Environmental Engineering, University of
Waterloo, Waterloo, ON, Canada
Todd R. Young Department of Mathematics, Ohio University, Morton Hall,
Athens, OH, USA
W.Q. Zhu Department of Mechanics, State Key Laboratory of Fluid Transmission
and Control, Zhejiang University, Hangzhou, China
Part I
Modeling of Bounded Noises and Their
Applications in Physics
Chapter 1
On Bounded Stochastic Processes

W.Q. Zhu and G.Q. Cai

Abstract Stochastic processes of bounded variation are generated based on their


two most important characteristics: spectral density functions and probability den-
sity functions. Two models are presented for the purpose: the randomized harmonic
model and the nonlinear filter model. In the randomized harmonic model, a random
noise is introduced in the phase angle; while in the nonlinear filter model, a set
of nonlinear Ito differential equations are employed. In both methods, the spectral
density of a stochastic process to be modeled, either with one peak or with multiple
peaks, can be matched by adjusting model parameters. However, the probability
density of the process generated by the randomized harmonic model has a fixed
shape and cannot be adjusted. On the other hand, the nonlinear filter model covers a
variety of profiles of probability distributions. For the Monte Carlo simulation using
these two models, equivalent and alternative expressions are proposed, which make
the simulation more effective and efficient.

Keywords Bounded noise · Non-Gaussian processes · Stochastic differential
equations

W.Q. Zhu
Department of Mechanics, State Key Laboratory of Fluid Transmission and Control,
Zhejiang University, Hangzhou 310027, China
e-mail: wqzhu@yahoo.com
G.Q. Cai ()
Department of Ocean and Mechanical Engineering, Florida Atlantic University,
Boca Raton, FL 33431, USA
e-mail: caig@fau.edu

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering,
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_1, © Springer Science+Business Media New York 2013
1.1 Introduction

Stochastic processes are involved in many areas, such as physics, engineering,


ecology, biology, medicine, psychology, and other disciplines. For the purposes of
analysis and simulation, stochastic processes are required to be properly modeled
and generated mathematically. It is important that a generated stochastic process
resemble the measured or estimated statistical and probabilistic properties of the
process it represents. Two important measures have been used for this purpose: the
probability distribution and the power spectral density. Conventionally, the probability
distribution is the one at a single time instant, and it is the first-order property of the process.
The power spectral density, on the other hand, is a statistical property involving
two different time instants, and hence is a second-order property. For a stationary
stochastic process, both the probability distribution and the power spectral density
are invariant with time.
To describe the probability distribution, many mathematical models have been
proposed, such as the normal (Gaussian), log-normal, exponential, Gamma, Chi-
square, Weibull, and Rayleigh. However, the ranges of these distributions are
unbounded; namely, there is a nonzero probability that the random quantity takes very
large values. This violates the very nature of a real physical quantity, which is
always bounded. In the reliability analysis of a physical system, the allowable failure
probability is usually very low. Thus, adoption of these distributions may affect the
reliability estimation significantly.
Another important property of a stochastic process, the power spectral density,
describes the energy distribution of the process in the frequency domain. It may be
more important than the probability distribution [19]. For a Gaussian process,
mathematical models can be obtained to match any given spectral density [15].
However, generation of a stochastic process with a non-Gaussian distribution and
a given spectral density is much more complicated [6, 10, 16]. This is one of the
reasons for the popularity of the Gaussian distribution, besides its simplicity in
mathematical treatment.
Based on the above considerations, versatile models for bounded stochastic
processes are needed, with both the probability distribution and the spectral density
available. Among the various known models of probability distributions, the
uniform distribution describes a bounded process defined on a finite interval, but it
is difficult to match a given spectral density with it. A model for bounded processes has
been proposed [7, 17] by using a harmonic function with a constant amplitude, a
constant average frequency, a random initial phase, and a random noise in the phase
angle. Such a stochastic process is bounded by the constant amplitude assigned
in the model. It has been used, for example, to investigate a straight pipe with a
slug flow of a two-phase fluid [8], a structure with a spatially disordered traveling
parametric excitation [9], and long-span bridges in turbulent winds [14].
Another type of bounded stochastic process was generated using nonlinear
filters [1, 4], in which Itô-type stochastic differential equations are employed,
with the drift coefficient adjusted to match the spectral density and the diffusion
coefficient adjusted to match the probability density. Although the procedure
is capable of generating both unbounded and bounded stochastic processes, it is
especially suitable for modeling bounded stochastic processes with different types
of probability distributions, including the uniform distribution. Both types of
bounded process models, the randomized harmonic model and the nonlinear filter
model, are able to match a one-peak or multi-peak spectrum, which can be either
broadband or narrowband.
This paper presents the two types of bounded stochastic processes using the
randomized harmonic model and the nonlinear filter model. Both the probability
distribution and the spectral density of the two models are investigated in detail.
Selection schemes for the model parameters are suggested to match both the
probability distribution and the spectral density of the stochastic processes. To
carry out Monte Carlo simulation more effectively and efficiently, equivalent models
and alternative procedures are proposed for the two types of bounded stochastic
processes.

1.2 Randomized Harmonic Model

1.2.1 Randomized Harmonic Processes with One Spectrum Peak

The randomized harmonic process is modeled as

$$X(t) = A\sin\bigl(\omega_0 t + \sigma B(t) + U\bigr) \tag{1.1}$$

where $A$ is a positive constant specifying the magnitude of the process, $\omega_0$ and $\sigma$
are also positive constants representing the mean frequency and the randomness
level in the phase, respectively, $B(t)$ is a unit Wiener process, and $U$ is a random
variable uniformly distributed in $[0, 2\pi]$ and independent of $B(t)$. The inclusion of
the random variable $U$ in (1.1) indicates that the initial phase is random, and also
renders the process $X(t)$ weakly stationary.
Model (1.1) was proposed independently by [7] and [17], respectively. In the
following, we give some mathematical details of the model and possible extension
to multiple spectrum peaks. Applications of the randomized harmonic process will
be discussed in another chapter of this book by Prof. Dimentberg.
Taking into consideration that (i) the Wiener process $B(t)$ is Gaussian distributed
with zero mean, (ii) the random variable $U$ is uniformly distributed, and (iii) $X(t)$ is
periodic with respect to $U$, we have

$$E[X(t)] = \int_{-\infty}^{+\infty} p_B(b)\,db\,\frac{A}{2\pi}\int_0^{2\pi} \sin(\omega_0 t + \sigma b + u)\,du = 0 \tag{1.2}$$

$$E[X^2(t)] = \int_{-\infty}^{+\infty} p_B(b)\,db\,\frac{A^2}{2\pi}\int_0^{2\pi} \sin^2(\omega_0 t + \sigma b + u)\,du = \frac{1}{2}A^2 \tag{1.3}$$
E[X(t1)X(t2)] = A² E[sin(ω0 t1 + σB(t1) + U) sin(ω0 t2 + σB(t2) + U)]
             = (A²/2) E[cos(ω0(t2 − t1) + σ(B(t2) − B(t1)))]   (1.4)

where b and u are state variables for the stochastic process B(t) and the random
variable U, respectively. The convention of using a lowercase letter to represent the
state variable of an uppercase random quantity will be followed hereafter. Denote

Z = B(t2) − B(t1)   (1.5)

According to [11], its mean and variance are

E[Z(t1, t2)] = 0,  E[Z²(t1, t2)] = E[(B(t2) − B(t1))²] = t2 − t1,  t2 ≥ t1   (1.6)

Since the Wiener process B(t) is Gaussian distributed, its increment Z is also Gaussian distributed with

pZ(z) = [1/√(2π(t2 − t1))] exp[−z²/(2(t2 − t1))]   (1.7)

Continuing the calculation of (1.4), we have

E[X(t1)X(t2)] = (A²/2) E[cos(ω0(t2 − t1)) cos(σZ) − sin(ω0(t2 − t1)) sin(σZ)]
             = (A²/2) cos(ω0(t2 − t1)) ∫_{−∞}^{+∞} [cos(σz)/√(2π(t2 − t1))] exp[−z²/(2(t2 − t1))] dz
             = (A²/2) cos(ω0(t2 − t1)) exp[−σ²(t2 − t1)/2].   (1.8)

Equation (1.8) shows that X(t) is a weakly stationary process with an autocorrela-
tion function
 
RXX(τ) = (A²/2) cos(ω0 τ) exp(−σ²|τ|/2),  τ = t2 − t1   (1.9)

Carrying out the Fourier transform, we obtain the power spectral density

ΦXX(ω) = A²σ²(ω² + ω0² + σ⁴/4) / {4π[(ω² − ω0² − σ⁴/4)² + σ⁴ω²]}.   (1.10)

Figure 1.1 depicts the spectral densities in the positive ω range for the case of ω0 = 3 and several values of σ. It is seen that the spectral densities reach their peaks near ω = ω0 and exhibit different bandwidths for different σ values.
1 On Bounded Stochastic Processes 7

Fig. 1.1 Spectral densities of randomized harmonic process X(t) with ω0 = 3 and different σ values

The stochastic process X(t) reduces to a pure harmonic process with a random initial phase when σ = 0. As σ increases, the bandwidth of the process becomes broader, indicating increasing randomness.
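As a quick illustration, the process (1.1) can be simulated directly; the NumPy sketch below (the values A = 1, ω0 = 3, σ = 1 are illustrative, not taken from the figures) confirms that the sample path never leaves [−A, A] and that its mean square is close to A²/2, as in (1.3).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the text): amplitude A,
# mean frequency w0, phase-randomness level sigma.
A, w0, sigma = 1.0, 3.0, 1.0
dt, T = 0.01, 400.0
n = int(T / dt)

# Unit Wiener process B(t) and random initial phase U ~ Uniform[0, 2*pi].
B = np.cumsum(np.sqrt(dt) * rng.standard_normal(n))
U = rng.uniform(0.0, 2.0 * np.pi)
t = dt * np.arange(n)

# Randomized harmonic process of Eq. (1.1).
X = A * np.sin(w0 * t + sigma * B + U)

print(np.max(np.abs(X)))  # never exceeds A: the process is bounded
print(np.mean(X**2))      # close to A**2/2 = 0.5, cf. Eq. (1.3)
```

Boundedness here is exact by construction, which is the defining feature of the model.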
To find the probability density of X(t), denote

X(t) = A sin(Θ(t)),  Θ = Y + U,  Y = ω0 t + σB(t)   (1.11)

The probability density of Θ as a function of Y and U can be calculated from

pΘ(θ) = ∫_{−∞}^{+∞} pYU(y, u) |∂y/∂θ| du   (1.12)

Since Y and U are independent, and

pU(u) = 1/(2π),  0 ≤ u ≤ 2π   (1.13)

we have from (1.12)

pΘ(θ) = ∫_0^{2π} pU(u) pY(θ − u) du = (1/2π) ∫_0^{2π} pY(θ − u) du = (1/2π) ∫_{θ−2π}^{θ} pY(y) dy   (1.14)

Note that Θ ∈ (−∞, +∞) according to (1.11). Since the harmonic sine function in (1.11) is periodic with period 2π, we can limit the phase angle to [0, 2π), mark it as Θ1, and convert X(t) from A sin(Θ) to A sin(Θ1), i.e.,

Fig. 1.2 Probability density of randomized harmonic process X(t)

X(t) = A sin(Θ1(t)),  0 ≤ Θ1 < 2π   (1.15)

The value of the probability density pΘ1(θ1), θ1 ∈ [0, 2π), is obtained by summing up all values of pΘ(θ) at θ1 + 2kπ, where k runs over all integers. Therefore

pΘ1(θ1) = Σ_{k=−∞}^{+∞} pΘ(θ1 + 2kπ) = (1/2π) Σ_{k=−∞}^{+∞} ∫_{θ1+2(k−1)π}^{θ1+2kπ} pY(y) dy
        = (1/2π) ∫_{−∞}^{+∞} pY(y) dy = 1/(2π)   (1.16)

In deriving (1.16), use has been made of (1.14). Equation (1.16) shows that Θ1 is uniformly distributed in [0, 2π). According to the transformation rule of probability density functions,

pX(x) = Σ_{θ1: A sin θ1 = x} pΘ1(θ1) |dθ1/dx| = 1/(π√(A² − x²)),  −A < x < A.   (1.17)

Figure 1.2 depicts the probability density of X(t). It takes very large values near the two boundaries. Note that the probability distribution depends only on A, which is determined by the physical boundary of the underlying phenomenon; thus, the probability distribution is not adjustable. The parameters ω0 and σ have no effect on the probability distribution; however, they can be adjusted to match the spectral density of the process to be modeled, according to the peak magnitude, peak location, and bandwidth.
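Because Θ1 is uniform on [0, 2π), the density (1.17) is easy to check by direct sampling; the sketch below (with the illustrative value A = 2) compares the empirical distribution of A sin Θ1 with the analytic CDF F(x) = 1/2 + arcsin(x/A)/π implied by (1.17).

```python
import numpy as np

rng = np.random.default_rng(1)
A = 2.0  # illustrative amplitude

# Theta_1 is uniform on [0, 2*pi), so X = A*sin(Theta_1) should follow
# the density of Eq. (1.17), whose CDF is F(x) = 1/2 + arcsin(x/A)/pi.
x = A * np.sin(rng.uniform(0.0, 2.0 * np.pi, size=200_000))

grid = np.linspace(-0.95 * A, 0.95 * A, 41)
ecdf = np.array([np.mean(x <= g) for g in grid])
cdf = 0.5 + np.arcsin(grid / A) / np.pi
print(np.max(np.abs(ecdf - cdf)))  # small sampling error
```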

1.2.2 Randomized Harmonic Processes with Multiple Spectrum Peaks

The randomized harmonic model can be extended to include more terms, as given by

X(t) = Σ_{i=1}^{n} Ai cos(ωi t + σi Bi(t) + Ui)   (1.18)

where the Ai are positive constants, the Bi(t) are mutually independent unit Wiener processes, and the Ui are mutually independent random variables uniformly distributed in [0, 2π]. The spectral density of X(t) is now

ΦXX(ω) = Σ_{i=1}^{n} Ai²σi²(ω² + ωi² + σi⁴/4) / {4π[(ω² − ωi² − σi⁴/4)² + σi⁴ω²]}   (1.19)

and the probability density can be calculated from

pX(x) = ∫_D pY1(y1) pY2(y2) ⋯ pY_{n−1}(y_{n−1}) pYn(x − y1 − y2 − ⋯ − y_{n−1}) dy1 dy2 ⋯ dy_{n−1}   (1.20)

where

pYi(yi) = 1/(π√(Ai² − yi²))   (1.21)

and the integration domain D is (n − 1)-dimensional and determined according to x and the Ai. For the case of two terms (n = 2), (1.20) takes the form

pX(x) = (1/π²) ∫_a^b dy / √((A2² − y²)(A1² − (x − y)²))   (1.22)

where the integration limits a and b are determined as follows:

(a, b) = (−A2, x + A1)  for −(A1 + A2) ≤ x ≤ −(A1 − A2)
(a, b) = (−A2, A2)      for −(A1 − A2) ≤ x ≤ A1 − A2   (1.23)
(a, b) = (x − A1, A2)   for A1 − A2 ≤ x ≤ A1 + A2

In deriving (1.22) and (1.23), it is assumed, without loss of generality, that A1 ≥ A2.


Figure 1.3 shows the spectral density calculated from (1.19) for the cases of A1 = 2, A2 = 0.8, ω1 = 3, ω2 = 6, σ1 = 1.2, and four different values σ2 = 0.6, 0.8, 1.0, and 1.2, while Fig. 1.4 is for the cases of A1 = 1.4, A2 = 1.4, ω1 = 3, ω2 = 6, σ1 = 1.0, and σ2 = 0.8, 1.0, 1.2, and 1.4. The figures show that two peaks are located near ω1 and ω2, respectively, and that the bandwidths of the two peaks are controlled by σ1

Fig. 1.3 Spectral densities of X(t) generated from randomized harmonic model (1.18) with two terms for the case of A1 = 2, A2 = 0.8, ω1 = 3, ω2 = 6, σ1 = 1.2. Taken from Ref. [4] © Elsevier Science Ltd (2004)

and σ2, and that their magnitudes depend on A1, A2, σ1, and σ2. Thus, for a process with a two-peak spectral density, the parameters in the model can be adjusted to match the targeted spectral density. Since the probability density depends only on A1 and A2, an identical one is found for the four cases in Fig. 1.3, and another one for the four cases in Fig. 1.4. They are drawn as a solid line and a dashed line in Fig. 1.5, respectively. Although the boundaries for the two probability distributions are the same, they have different shapes. The probability density is of a singular shape in the sense that it is infinite at ±(A1 − A2).
The randomized harmonic model is simple to apply and versatile in matching a spectral density by adjusting the model parameters. However, the probability distribution of the modeled process is of a singular shape and cannot be adjusted. For cases in which the probability distributions of the excitations have insignificant effects on system behaviors, for example, when stationary responses of linear or weakly nonlinear systems are of interest [2, 5], the randomized harmonic model is an advantageous choice for excitation processes.

1.2.3 Monte Carlo Simulation

As shown above, the randomized harmonic process modeled in (1.1) is a stationary


process due to the introduction of the random variable U as a random initial phase.

Fig. 1.4 Spectral densities of X(t) generated from randomized harmonic model (1.18) with two terms for the case of A1 = 1.4, A2 = 1.4, ω1 = 3, ω2 = 6, σ1 = 1.0. Taken from Ref. [4] © Elsevier Science Ltd (2004)

Fig. 1.5 Probability densities of X(t) generated from randomized harmonic model (1.18) with two terms. Taken from Ref. [4] © Elsevier Science Ltd (2004)

But it is this random variable U that renders the process not ergodic, and a large
number of samples are required in Monte Carlo simulation. If the system under
investigation is complex with many degrees of freedom, the computational time for
the simulation may be prohibitively long. To reduce the computational burden, an
equivalent representation is proposed below.
Let

X(t) = A sin(Θ(t)),  Y(t) = A cos(Θ(t)),  Θ(t) = ω0 t + σB(t) + U   (1.24)

Applying the Ito differential rule [12], we obtain the following Ito differential equations [13] from (1.24)

dX = (ω0 Y − (1/2)σ²X) dt + σY dB(t)
dY = −(ω0 X + (1/2)σ²Y) dt − σX dB(t)   (1.25)

Equation set (1.25) is equivalent to the stochastic differential equations in the Stratonovich sense, obtained by taking account of the Wong–Zakai correction [18],

Ẋ = ω0 Y + YW(t),  Ẏ = −ω0 X − XW(t)   (1.26)

where W(t) is a Gaussian white noise with spectral density K = σ²/(2π). It can be shown that the stochastic process X(t) modeled in (1.25) and (1.26) is equivalent to the one in (1.1), with the same probability density and spectral density. One advantage of the representations (1.25) and (1.26) is that the stochastic process X(t) is ergodic: when using either (1.25) or (1.26), only one sample is needed for simulation, which reduces the computational time significantly. Another important factor reducing the computational time is that the calculation of trigonometric functions, which is much more time consuming than additions and multiplications, is avoided.

1.3 Nonlinear Filter Model

1.3.1 Low-Pass Bounded Processes

Consider a stationary stochastic process X(t) defined on a bounded interval [xl, xr]. Without loss of generality, assume that X(t) has a zero mean; therefore, xl < 0 and xr > 0. Let X(t) be a diffusive Markov process governed by the following Ito stochastic differential equation [13]

dX = −αX dt + D(X) dB(t)   (1.27)



where α is a positive constant and B(t) is a unit Wiener process. Multiplying (1.27) by X(t − τ) and taking the ensemble average, we obtain

dRXX(τ)/dτ = −αRXX(τ)   (1.28)

where RXX(τ) = E[X(t)X(t + τ)] is the correlation function of X(t). Let the mean-square value of X(t) be

RXX(0) = E[X²(t)] = σ²   (1.29)

which is the initial condition for (1.28). The solution of Eq. (1.28) is then given by

RXX(τ) = σ² exp(−α|τ|)   (1.30)

The corresponding spectral density of X(t), i.e., the Fourier transform of RXX(τ), is of the low-pass type

ΦXX(ω) = (1/2π) ∫_{−∞}^{+∞} RXX(τ) e^{−iωτ} dτ = ασ²/[π(ω² + α²)]   (1.31)

Equation (1.31) shows that the central frequency is ω = 0 and the bandwidth is controlled by the parameter α.
The stationary probability density pX(x) of X(t) is governed by the reduced Fokker–Planck equation

(d/dx)[αx pX(x)] + (1/2)(d²/dx²)[D²(x) pX(x)] = 0   (1.32)

If pX(x) is known, (1.32) leads to [1]

D²(x) = −[2α/pX(x)] ∫_{xl}^{x} u pX(u) du   (1.33)

Thus the stochastic process X(t) generated from (1.27), with D(X) given by (1.33), possesses the given stationary probability density and the low-pass spectral density (1.31). The parameter α can be used to adjust the spectral density, and the function D(X) is used to match any valid probability distribution.
Consider a bounded stochastic process with the following probability density

pX(x) = C(Δ² − x²)^δ = [Γ(2δ + 2)/(Δ^{2δ+1} 2^{2δ+1} (Γ(δ + 1))²)] (Δ² − x²)^δ,  δ > −1   (1.34)

where Γ(·) is the Gamma function, and Δ and δ are two parameters. It is clear from (1.34) that |X| ≤ Δ, and δ is the single parameter which determines the shape

Fig. 1.6 Stationary probability densities of X(t) generated from nonlinear filter (1.27)

of pX(x). Since the mean-square value σ² in (1.29) is uniquely determined by Δ and δ, it is not an independent parameter. Substitution of (1.34) into (1.33) leads to

D²(X) = [α/(δ + 1)](Δ² − X²)   (1.35)
The stationary probability densities of stochastic processes generated from (1.27) are depicted in Fig. 1.6 for several δ values. It is seen that the shapes of the probability densities are diverse for different δ values. For δ < 0, the shape of the probability density is similar to that of the randomized harmonic process shown in Fig. 1.2: it reaches its minimum at x = 0 and approaches infinity at the two boundaries. The case δ = 0 corresponds to a uniform distribution. For δ > 0, the probability density functions reach their maxima at zero. For a fixed Δ value, the shapes of the probability densities for different δ values are diverse, yet they may share the same spectral density (1.31).
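The closed form (1.35) can be verified numerically against the general relation (1.33); the sketch below does this with simple trapezoidal quadrature (the parameter values are arbitrary test values).

```python
import numpy as np

# Illustrative test values.
alpha, Delta, delta = 1.5, 2.0, 0.7

x = np.linspace(-Delta, Delta, 200_001)
h = x[1] - x[0]
p = (Delta**2 - x**2) ** delta
p /= np.sum((p[:-1] + p[1:]) / 2) * h  # trapezoidal normalization of p_X

# Eq. (1.33): D^2(x) = -(2*alpha/p(x)) * integral_{-Delta}^{x} u p(u) du
f = x * p
cum = np.concatenate(([0.0], np.cumsum((f[:-1] + f[1:]) / 2) * h))

inner = slice(1000, -1000)  # avoid the 0/0 ratio at the endpoints
D2_num = -2.0 * alpha * cum[inner] / p[inner]
# Eq. (1.35): D^2(x) = alpha*(Delta**2 - x**2)/(delta + 1)
D2_exact = alpha * (Delta**2 - x[inner] ** 2) / (delta + 1.0)
print(np.max(np.abs(D2_num - D2_exact)))  # small discretization error
```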

1.3.2 Bounded Processes with Spectrum Peaks at Nonzero Frequencies

Consider the following Ito stochastic differential equations



dX1 = −(a11 X1 + a12 X2) dt + D1(X1, X2) dB1(t)
dX2 = −(a21 X1 + a22 X2) dt + D2(X1, X2) dB2(t)   (1.36)

where the aij are parameters and B1(t) and B2(t) are independent unit Wiener processes. Multiplying the two equations in (1.36) by X1(t), taking the ensemble average, and denoting Rij(τ) = E[Xi(t)Xj(t + τ)], we obtain

dR11(τ)/dτ = −a11 R11(τ) − a12 R12(τ)
dR12(τ)/dτ = −a21 R11(τ) − a22 R12(τ)   (1.37)

Subject to the initial conditions

R11(0) = E[X1²] = σ²,  R12(0) = E[X1 X2]   (1.38)

(1.37) can be solved for the correlation functions. In modeling a stochastic process, its spectral density is usually of interest. Following a procedure proposed in [3], the spectral densities can be obtained directly, without solving (1.37) and performing a Fourier transform. Define the following integral transformation

Ψij(ω) = F[Rij(τ)] = (1/π) ∫_0^{+∞} Rij(τ) e^{−iωτ} dτ   (1.39)

It can be shown that

F[dRij(τ)/dτ] = iωΨij(ω) − (1/π)E[Xi Xj]   (1.40)

and

Φij(ω) = Re[Ψij(ω)] = (1/2)[Ψij(ω) + Ψij*(ω)]   (1.41)

Using (1.39) and (1.40), (1.37) can be transformed to

iωΨ11 − (1/π)E[X1²] = −a11 Ψ11 − a12 Ψ12
iωΨ12 − (1/π)E[X1 X2] = −a21 Ψ11 − a22 Ψ12   (1.42)

Solutions are readily obtained from the complex linear algebraic equation set (1.42), leading to

Φ11(ω) = {(a11 ω² + A2 a22)E[X1²] + a12(ω² − A2)E[X1 X2]} / {π[(A2 − ω²)² + A1²ω²]}   (1.43)

where A1 = a11 + a22 and A2 = a11 a22 − a12 a21. By adjusting the parameters aij, (1.43) can represent a spectral density with a peak at a specified location and a given bandwidth.
The Fokker–Planck equation for the joint stationary probability density pX1X2(x1, x2) of X1(t) and X2(t) corresponding to (1.36) is given by

(∂/∂x1)[(a11 x1 + a12 x2)p] + (∂/∂x2)[(a21 x1 + a22 x2)p]
+ (1/2)(∂²/∂x1²)[D1²(x1, x2)p] + (1/2)(∂²/∂x2²)[D2²(x1, x2)p] = 0   (1.44)

Equation (1.44) is satisfied if the following three conditions are met

a12 x2 (∂p/∂x1) + a21 x1 (∂p/∂x2) = 0   (1.45)

a11 x1 p + (1/2)(∂/∂x1)[D1²(x1, x2)p] = 0   (1.46)

a22 x2 p + (1/2)(∂/∂x2)[D2²(x1, x2)p] = 0   (1.47)

indicating that the system belongs to the case of detailed balance [11]. The general solution of Eq. (1.45) is given by

p = φ(λ),  λ = k1 x1² + k2 x2²   (1.48)

where φ is an arbitrary function of λ, and k1 and k2 are positive constants satisfying the condition

k1 a12 + k2 a21 = 0   (1.49)

Substituting (1.48) into (1.46) and (1.47), we obtain

D1²(x1, x2) = −[2a11/pX1X2(x1, x2)] ∫_{x1l}^{x1} u pX1X2(u, x2) du = [a11/(k1 φ(λ))] ∫_λ^{λm} φ(λ′) dλ′   (1.50)

D2²(x1, x2) = −[2a22/pX1X2(x1, x2)] ∫_{x2l}^{x2} v pX1X2(x1, v) dv = [a22/(k2 φ(λ))] ∫_λ^{λm} φ(λ′) dλ′   (1.51)

where λm is the maximum value of λ.
Consider the case in which x1 and x2 are bounded by

k1 x1² + k2 x2² ≤ k1 Δ²   (1.52)

and

φ(λ) = C(k1 Δ² − λ)^{δ−1/2},  δ > −1/2   (1.53)

The joint stationary probability density is

pX1X2(x1, x2) = C(k1 Δ² − k1 x1² − k2 x2²)^{δ−1/2}   (1.54)

and the marginal density of X1(t) is

pX1(x1) = ∫_{−√(k1(Δ²−x1²)/k2)}^{√(k1(Δ²−x1²)/k2)} pX1X2(x1, x2) dx2 = C1(Δ² − x1²)^δ   (1.55)

where C1 is a normalization constant. Substituting (1.54) into (1.50) and (1.51), we obtain

D1²(x1, x2) = [2a11/(k1(2δ + 1))](k1 Δ² − k1 x1² − k2 x2²)   (1.56)

D2²(x1, x2) = [2a22/(k2(2δ + 1))](k1 Δ² − k1 x1² − k2 x2²)   (1.57)

The probability density (1.55) has the same form as (1.34), but with a more restrictive range for the parameter δ, due to the validity of the joint probability density (1.54) and the positivity requirement of (1.56) and (1.57). Thus, equation set (1.36), with D1(X1, X2) and D2(X1, X2) given by (1.56) and (1.57), respectively, can be used to generate a stochastic process X1(t) with the spectral density (1.43) and the probability density (1.55). The parameters aij (i, j = 1, 2) are used to adjust the spectral density, Δ is determined by the allowable range of the process X1(t), and δ is used to match the shape of its probability distribution.
Two examples are listed below for illustration.

Example 1. a11 = 0, a12 = −1, a21 = ω0², a22 = 2ζω0, D1² = 0, D2² = [4ζω0³/(2δ + 1)](Δ² − X1² − X2²/ω0²)

Φ11(ω) = 2ζω0³σ²/{π[(ω0² − ω²)² + 4ζ²ω0²ω²]},  pX1(x1) = C1(Δ² − x1²)^δ

Example 2. a11 = 2ζω0, a12 = ω0², a21 = −1, a22 = 0, D1² = [4ζω0/(2δ + 1)](Δ² − X1² − ω0²X2²), D2² = 0

Φ11(ω) = 2ζω0 ω²σ²/{π[(ω0² − ω²)² + 4ζ²ω0²ω²]},  pX1(x1) = C1(Δ² − x1²)^δ

In both cases, ζ and ω0 can be used to adjust the spectral density, and Δ and δ are used to match the probability density.
Figures 1.7 and 1.8 show the spectral density functions for the two examples with ω0 = 3 and several different values of ζ. It is seen that the two example models yield

spectral densities of different shapes. The spectral density vanishes at zero frequency in Example 2, while it does not in Example 1. In both cases, ω0 determines the peak location and ζ controls the bandwidth.
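The closed-form spectral density of Example 1 is easy to inspect numerically; the sketch below (with illustrative ζ, ω0, and σ² values) locates its peak, which lies near ω0 for small ζ, and checks that the two-sided PSD integrates to E[X1²].

```python
import numpy as np

# Illustrative parameter values.
zeta, w0, sigma2 = 0.1, 3.0, 1.0

w = np.linspace(0.0, 10.0, 10_001)
phi = 2 * zeta * w0**3 * sigma2 / (
    np.pi * ((w0**2 - w**2) ** 2 + 4 * zeta**2 * w0**2 * w**2))

w_peak = w[np.argmax(phi)]
print(w_peak)  # close to w0 = 3 for small zeta

# The two-sided PSD should integrate to R_XX(0) = E[X1^2] = sigma2.
area = 2.0 * np.sum((phi[:-1] + phi[1:]) / 2) * (w[1] - w[0])
print(area)    # close to sigma2 = 1
```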

1.3.3 Bounded Processes with Multiple Spectrum Peaks

The nonlinear filter model can also be extended to cases with multiple peaks in the spectra. Consider the following governing equations

dXi = −Σ_{j=1}^{n} aij Xj dt + Di(X) dBi(t),  i = 1, …, n   (1.58)

where X = {X1, …, Xn}ᵀ, and the Bi(t) are unit Wiener processes, mutually independent for different i. Following the same procedure as in the preceding section, we can model a bounded stochastic process X1(t) with a probability density

pX1(x1) = C1(Δ² − x1²)^δ,  δ > (n − 3)/2   (1.59)

and a spectral density obtained from solving the equations

iωΨ1i − (1/π)E[X1 Xi] = −Σ_{j=1}^{n} aij Ψ1j,  i = 1, …, n   (1.60)

In constructing equation set (1.58),

Di²(X) = [aii/(ki(δ − (n − 3)/2))] (k1 Δ² − Σ_{j=1}^{n} kj Xj²),  i = 1, …, n   (1.61)

and the aij in (1.58) and the ki in (1.61) should satisfy

ki aij + kj aji = 0,  i ≠ j,  i, j = 1, …, n   (1.62)

It can be shown that the spectral density Φ11(ω) has multiple peaks if n > 2. The locations of the peaks and the bandwidth of each peak are adjustable by selecting the coefficients aij. The low-pass case n = 1 and the case of a single peak at a nonzero frequency, n = 2, are special cases of (1.58).
An example of the case n = 4 is given below for illustration. The nonlinear filter model is governed by

dX1 = X2 dt

Fig. 1.7 Spectral densities of X1(t) generated from 2-D nonlinear filter model (1.36) for Example 1 with ω0 = 3. Taken from Ref. [4] © Elsevier Science Ltd (2004)

Fig. 1.8 Spectral densities of X1(t) generated from 2-D nonlinear filter model (1.36) for Example 2 with ω0 = 3. Taken from Ref. [4] © Elsevier Science Ltd (2004)
20 W.Q. Zhu and G.Q. Cai

dX2 = −(ω1²X1 + 2ζ1ω1 X2 + a24 X4) dt + D2(X) dB2(t)
dX3 = X4 dt
dX4 = −(a42 X2 + ω2²X3 + 2ζ2ω2 X4) dt + D4(X) dB4(t)   (1.63)

where ω1, ω2, ζ1, and ζ2 are positive parameters, a24 and a42 are coupling parameters with opposite signs, and

D2²(X) = [4ζ1ω1³/(2δ − 1)] (Δ² − X1² − X2²/ω1² + [a24 ω2²/(a42 ω1²)]X3² + [a24/(a42 ω1²)]X4²)

D4²(X) = −[ζ2ω2 a42/(ζ1ω1 a24)] D2²(X)   (1.64)
The process X1(t) possesses a spectral density determined by (1.60) and a probability density

pX1(x1) = C1(Δ² − x1²)^δ,  δ > 1/2   (1.65)

Thus, the parameters Δ and δ can be used to adjust the probability density, while ω1, ω2, ζ1, ζ2, a24, and a42 can be used to match the spectral density. Figure 1.9 shows the spectral density functions for three cases of ω1 = 6, ω2 = 2, ζ1 = ζ2 = 0.05, and a24 = −a42 = 1, 3, 4. By changing the single parameter a24, the spectral density takes different shapes. For a more complicated shape of a spectral density, optimization may be needed to select a set of aij parameters in the model (1.58).
It may be noted that, in all the examples given above, the bounded processes are defined on a symmetric interval [−Δ, Δ]. This interval can be shifted to an asymmetric one simply by adding a constant to the process. The terms Di²(X) in these examples are polynomials up to the second order, although other nonnegative expressions are also admissible. In passing, we note that if one of the two spectrum peaks is located at ω = 0, then only a three-dimensional filter is required.
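The spectral density defined implicitly by (1.60) can be evaluated by solving the complex linear system frequency by frequency. The sketch below does this for the four-dimensional example (1.63) with a24 = −a42 = 1, as in one of the cases of Fig. 1.9; σ² = E[X1²] is an arbitrary normalization here, and E[X1 Xi] = 0 for i ≠ 1 follows from the even symmetry of the stationary density.

```python
import numpy as np

# Parameters of the first case in Fig. 1.9, with a24 = -a42 = 1;
# sigma2 = E[X1^2] is an arbitrary normalization here.
w1, w2, z1, z2 = 6.0, 2.0, 0.05, 0.05
a24, a42 = 1.0, -1.0
sigma2 = 1.0

# Drift matrix of dX = -A X dt + ... for system (1.63).
A = np.array([[0.0, -1.0, 0.0, 0.0],
              [w1**2, 2 * z1 * w1, 0.0, a24],
              [0.0, 0.0, 0.0, -1.0],
              [0.0, a42, w2**2, 2 * z2 * w2]])
# RHS of (1.60): E[X1*Xi]/pi; the cross moments vanish by symmetry.
c = np.array([sigma2 / np.pi, 0.0, 0.0, 0.0])

ws = np.linspace(0.05, 10.0, 2_000)
phi = np.array([np.linalg.solve(A + 1j * w * np.eye(4), c)[0].real
                for w in ws])

peaks = [ws[k] for k in range(1, len(ws) - 1)
         if phi[k] > phi[k - 1] and phi[k] > phi[k + 1]]
print(peaks)  # two maxima, near w2 = 2 and w1 = 6
```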

1.3.4 Monte Carlo Simulation

The bounded processes modeled by the nonlinear filters (1.27), (1.36), and (1.58) with the same probability distribution (1.34) have their diffusion coefficients given by (1.35), (1.56), (1.57), and (1.61), respectively. They are not suitable for carrying out Monte Carlo simulation directly, since the state variables may exceed their respective boundaries during the numerical calculations. Taking the one-dimensional nonlinear filter as an example, D²(X) in (1.35) becomes negative during the simulation if |X| > Δ. To overcome this difficulty, transformations are proposed to obtain sets of Ito stochastic differential equations for new variables. Two cases are considered below for illustration.

Fig. 1.9 Spectral densities of X1(t) generated from 4-D nonlinear filter model (1.58) for ω1 = 6, ω2 = 2, ζ1 = ζ2 = 0.05. Taken from Ref. [4] © Elsevier Science Ltd (2004)

First we consider the low-pass nonlinear filter given by (1.27) and (1.35). Making the transformation

X(t) = Δ sin(Θ(t))   (1.66)

we obtain

dΘ/dX = 1/(Δ cos Θ),  d²Θ/dX² = sin Θ/(Δ² cos³Θ)   (1.67)

Applying the Ito differential rule [12] and using (1.27) and (1.35), we obtain an Ito equation for the new variable Θ

dΘ = −[(2δ + 1)α/(2(δ + 1))] tan Θ dt + √(α/(δ + 1)) sgn(cos Θ) dB(t)   (1.68)

where sgn(·) denotes the sign function. The Ito equation (1.27) is equivalent to a stochastic differential equation in the Stratonovich sense

Ẋ = −[(2δ + 1)α/(2(δ + 1))] X + √(Δ² − X²) W(t)   (1.69)

where W(t) is a Gaussian white noise with spectral density α/[2π(δ + 1)]. We then have from (1.69)

Θ̇ = −[(2δ + 1)α/(2(δ + 1))] tan Θ + sgn(cos Θ) W(t)   (1.70)

Either (1.69) or (1.70) can be used conveniently and effectively for simulation.
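A simple Euler–Maruyama scheme for the transformed equation (1.68) can be sketched as follows (the parameter values are illustrative). Since X = Δ sin Θ, the generated sample is bounded by construction, regardless of any discretization error in Θ; the sample variance can be compared with the exact stationary value Δ²/(2δ + 3) implied by (1.34).

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative parameters; the exact stationary variance is
# Delta**2/(2*delta + 3) for the density (1.34).
alpha, Delta, delta = 1.0, 1.0, 1.0
dt, T = 0.001, 200.0
n = int(T / dt)

drift_c = (2 * delta + 1) * alpha / (2 * (delta + 1))
diff_c = np.sqrt(alpha / (delta + 1))

theta = 0.0
xs = np.empty(n)
for k in range(n):
    dB = np.sqrt(dt) * rng.standard_normal()
    theta += -drift_c * np.tan(theta) * dt \
             + diff_c * np.sign(np.cos(theta)) * dB
    xs[k] = Delta * np.sin(theta)

print(np.max(np.abs(xs)))  # never exceeds Delta, by construction
print(np.var(xs))          # near Delta**2/(2*delta + 3) = 0.2
```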
For the two-dimensional nonlinear filter in Example 1 of Sect. 1.3.2, i.e., the system

dX1 = X2 dt
dX2 = −(ω0²X1 + 2ζω0 X2) dt + √([4ζω0³/(2δ + 1)](Δ² − X1² − X2²/ω0²)) dB(t)   (1.71)

consider the transformations

X1 = Δ sin(Φ) cos(Θ),  X2 = ω0 Δ sin(Φ) sin(Θ)   (1.72)

The following partial derivatives can be obtained from (1.72):

∂Φ/∂X1 = cos Θ/(Δ cos Φ),  ∂Φ/∂X2 = sin Θ/(ω0 Δ cos Φ)
∂Θ/∂X1 = −sin Θ/(Δ sin Φ),  ∂Θ/∂X2 = cos Θ/(ω0 Δ sin Φ)
∂²Φ/∂X2² = [1/(ω0²Δ²)] [cos²Θ/(sin Φ cos Φ) + sin²Θ sin Φ/cos³Φ]
∂²Θ/∂X2² = −2 sin Θ cos Θ/(ω0²Δ² sin²Φ)   (1.73)

The Ito differential equations for the new processes Φ(t) and Θ(t) can be derived using the Ito differential rule:

dΦ = [−(2ζω0 − h) tan Φ sin²Θ + h cot Φ cos²Θ] dt + √(2h) sgn(cos Φ) sin Θ dB(t)   (1.74)

dΘ = −[ω0 + (2ζω0 + 2h cot²Φ) sin Θ cos Θ] dt + √(2h) [|cos Φ|/sin Φ] cos Θ dB(t)   (1.75)

where h = 2ζω0/(2δ + 1). On the other hand, taking into account the Wong–Zakai correction terms [18], the two Ito equations in (1.71) are equivalent to the following two Stratonovich stochastic differential equations

Ẋ1 = X2
Ẋ2 = −ω0²X1 − (2ζω0 − h)X2 + √(Δ² − X1² − X2²/ω0²) W(t)   (1.76)

where W(t) is a Gaussian white noise with spectral density ω0²h/π. The corresponding equations for the new variables are

Φ̇ = −(2ζω0 − h) tan Φ sin²Θ + (1/ω0) sgn(cos Φ) sin Θ W(t)   (1.77)

Θ̇ = −ω0 − (2ζω0 − h) sin Θ cos Θ + [|cos Φ|/(ω0 sin Φ)] cos Θ W(t)   (1.78)

Either the set of Ito equations (1.74) and (1.75) or the set of Stratonovich equations (1.77) and (1.78) can be used for simulation.

1.4 Conclusions

Two different models can be used for generating bounded stochastic processes: the randomized harmonic model and the nonlinear filter model. Both models are capable of generating processes whose spectra have single or multiple peaks and either narrow or broad bandwidths. The randomized harmonic model is simple to implement, introducing a random noise in the phase angle, but the probability distributions of the generated processes are of singular shape and cannot be adjusted. Thus it is suitable for cases in which the effects of the probability distribution are not important. In the nonlinear filter model, the drift terms in the Ito equations are adjusted to match the spectral density, and the diffusion terms are determined according to the boundary of the stochastic process and the shape of its probability density. Since it can cover a variety of probability distribution profiles, it may be used for cases in which the probability distribution plays an important role, such as when system transient behaviors are relevant. It should be noted that the computational effort may be moderately greater for the nonlinear filter processes than for the randomized harmonic processes.

Acknowledgments The first author acknowledges the support of the National Natural Science Foundation of China under Key Grant No. 10932009, No. 11072212, and No. 11272279. The second author contributed to this work during his stay at Zhejiang University as a visiting professor. The financial support from Zhejiang University is gratefully acknowledged.

References

1. Cai, G.Q., Lin, Y.K.: Phys. Rev. E 54(1), 299 (1996)


2. Cai, G.Q., Lin, Y.K.: Reliability of dynamical systems under non-Gaussian random excitations. In: Shirarishi, N., et al. (eds.) Structural Safety and Reliability. Proceedings of the 7th International Conference on Structural Safety and Reliability, Kyoto, Japan, pp. 819–826 (1997)
3. Cai, G.Q., Lin, Y.K.: Probabilist. Eng. Mech. 12(1), 41 (1997)
4. Cai, G.Q., Wu, C.: Probabilist. Eng. Mech. 19(2), 197 (2004)
5. Cai, G.Q., Lin, Y.K., Xu, W.: Response and reliability of nonlinear systems under stationary non-Gaussian excitations. In: Spencer, B.F., Johnson, E.A. (eds.) Structural Stochastic Dynamics. Proceedings of the 4th International Conference on Structural Stochastic Dynamics, Notre Dame, Indiana, pp. 17–22 (1998)
6. Deodatis, G., Micaletti, R.C.: J. Eng. Mech. 127(12), 1284 (2001)
7. Dimentberg, M.F.: Statistical Dynamics of Nonlinear and Time-Varying Systems. Wiley,
New York (1988)
8. Dimentberg, M.F.: A stochastic model of parametric excitation of a straight pipe due to slug flow of a two-phase fluid. In: Proceedings of the 5th International Conference on Flow-Induced Vibrations, Brighton, UK, pp. 207–209. Mechanical Engineering Publications, Suffolk (1991)
9. Dimentberg, M.F.: Probabilist. Eng. Mech. 7(2), 131 (1992)
10. Grigoriu, M.: J. Eng. Mech. 124(2), 121 (1998)
11. Lin, Y.K., Cai, G.Q.: Probabilistic Structural Dynamics, Advanced Theory and Applications.
McGraw-Hill, New York (1995)
12. Ito, K.: Nagoya Math. J. 3, 55 (1951)
13. Ito, K.: Memoir Am. Math. Soc. 4, 289 (1951)
14. Li, Q.C., Lin, Y.K.: J. Eng. Mech. 121(1), 102 (1995)
15. Shinozuka, M., Deodatis, G.: Appl. Mech. Rev. 44(4), 191 (1991)
16. Winterstein, S.R.: J. Eng. Mech. 114(10), 1772 (1988)
17. Wedig, W.V.: Analysis and simulation of nonlinear stochastic systems. In: Schiehlen, W. (ed.) Nonlinear Dynamics in Engineering Systems, pp. 337–344. Springer, Berlin (1989)
18. Wong, E., Zakai, M.: On the relation between ordinary and stochastic equations. Int. J. Eng.
Sci. 3, 213 (1965)
19. Wu, C., Cai, G.Q.: Effects of excitation probability distribution on system responses. Int.
J. Nonlinear Mech. 39(9), 1463 (2004)
Chapter 2
Dynamics of Systems with Randomly Disordered
Periodic Excitations

M. Dimentberg

Abstract A model of a periodic process with random phase modulation, or


disorder, is described. It can be easily incorporated into Stochastic Differential
Equations Calculus, thereby providing potential for analytical solution to dynamic
problems where it represents the forcing function, or excitation.
Thus, the corresponding method of moments has been applied to a linear system subject to external and parametric excitation, with preliminary reduction of the equation of motion by asymptotic stochastic averaging in the latter case; boundaries for parametric instability have been derived both in the mean-square and in the almost-sure sense. A solution for a strongly nonlinear system with impacts has also been obtained, illustrating the potentially strong influence of imperfect periodicity of the excitation on response subharmonics. Examples of application from engineering mechanics are presented.

Keywords Bounded noise · Non-Gaussian processes · Stochastic differential equations · Stochastic mechanics · Engineering mechanics · Parametric resonance · Subharmonics

2.1 Introduction

The present survey covers response studies for systems subject to randomly disor-
dered periodic excitations using the following basic model of temporal variations
in the applied force h(t) as suggested for Engineering Mechanics independently
in [1, 22]

M. Dimentberg ()
Department of Mechanical Engineering, Worcester Polytechnic Institute,
100 Institute Road Worcester, MA 01609, USA
e-mail: diment@wpi.edu

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 25
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_2, © Springer Science+Business Media New York 2013

h(t) = cos(q(t)),  q̇ = ν + ξ(t),   (2.1)

where

⟨ξ(t)⟩ = 0,  ⟨ξ(t)ξ(t + τ)⟩ = Dδ(τ)

Here angular brackets denote probabilistic averaging and δ is the Dirac delta, so that ξ is a stationary zero-mean Gaussian white noise with intensity D. Thus, the instantaneous frequency of the random process h(t) has a mean value ν and an intensity D of white-noise fluctuations. Its power spectral density (PSD) Φhh(ω) is [6, 17]

Φhh(ω) = (D/4π)(ω² + ν² + D²/4)/[(ω² − ν² − D²/4)² + ω²D²]   (2.2)

and thus is similar to that of a Gaussian white noise passed through a second-order shaping filter with bandwidth D. On the other hand, the probability density function (PDF) p(h) of the sinusoid with random phase h(t) is drastically different from Gaussian:

p(h) = 1/(π√(1 − h²))   (2.3)
The model (2.1) may also be called the PERPM (Periodic Excitation with Random Phase Modulation) model. Obviously it should be more accurate than, say, a Gaussian model for applications where temporal variations in the amplitudes of loads, if present at all, are of secondary importance compared with those in phase (frequency). These applications may include cases of excitation due to spatially periodic travelling dynamic loads and/or travelling structures (e.g., traffic loads on bridges), where imperfect periodicity should be accounted for. Thus, the first reported case of such an application [1] was the classical problem of parametric resonance in coal mine cages with potential random scatter in the distance between neighboring supports. It should also
be emphasized that in the vicinity of resonances disregarding amplitude variations
in excitation in the framework of the PERPM model may be warranted even if
these variations are not very small because of higher sensitivity of the response
to variations in the frequency of excitation. Thus the model had been used, say,
for structural buffeting in a turbulent flow [17] and may be used for ship rolling in
rough seas.
It may be added that the basic PERPM model (2.1) has proved to be simpler for analytical studies of sophisticated parametric random vibration problems with narrow-band random excitations than the model of a filtered Gaussian white-noise excitation. In particular, it can be easily incorporated into the SDE (Stochastic Differential Equations) Calculus by using the following equivalent autonomous description of the trigonometric functions

h = z1,  ż1 = −(ν + ξ(t))z2,  ż2 = (ν + ξ(t))z1   (2.4)

Applying the expectation operator, denoted by angular brackets, to the SDEs (2.4), one may obtain two ODEs for the mean values mi = ⟨zi⟩, i = 1, 2. This is done through the use of expressions for the so-called Wong–Zakai corrections [1, 17] for cross-correlations between Gaussian white-noise excitations and state variables governed by a set of SDEs for the components of an n-dimensional state-space vector X. The general rule is as follows: if

Ẋi = gi(X)ξi(t);  ⟨ξi(t)ξj(t + τ)⟩ = Dξ,ij δ(τ),  i, j = 1, …, n

then

⟨gi(X)ξi(t)⟩ = (1/2) Σ_{k=1}^{n} ⟨[∂gi(X)/∂Xk] gk(X)⟩ Dξ,ik   (2.5)

In the case of linear functions gi, the expected values appearing in the RHSs of the deterministic equations (ODEs) for the expectations of the components of the state vector X are seen to be linear in these components. Thus, for the SDEs (2.4) the resulting ODEs are

ṁ1 = −νm2 − (D/2)m1,  ṁ2 = νm1 − (D/2)m2   (2.6)

and they have the asymptotically stable steady-state solution m1 = m2 = 0. We may
then introduce three second-order state variables uij = zi zj , i, j = 1, 2 and, upon
applying the usual differentiation rules to obtain three SDEs for the uij, similarly derive
three ODEs for their expected values using expression (2.5). This set of ODEs has
the asymptotically stable steady-state solution ⟨z1²⟩ = ⟨z2²⟩ = 1/2, ⟨z1z2⟩ = 0 (here
the basic trigonometric relation z1² + z2² = 1 had been applied to get the integration constant).
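The decay rate D/2 of the mean values predicted by (2.6) can be checked directly, since q(t) is a Brownian motion with drift: q(T) is Gaussian with mean νT and variance DT, so ⟨z1(T)⟩ = exp(−DT/2) sin(νT). A Monte-Carlo sketch (all parameter values below are illustrative choices, not taken from the text):

```python
import math, random

def mean_z1(nu, D, T, n_samples, seed=0):
    """Monte-Carlo estimate of <sin q(T)> for the phase SDE dq = nu dt + xi(t) dt,
    where xi is Gaussian white noise of intensity D and q(0) = 0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # q(T) is exactly Gaussian with mean nu*T and variance D*T
        q = nu * T + math.sqrt(D * T) * rng.gauss(0.0, 1.0)
        total += math.sin(q)
    return total / n_samples

# Theory from the moment ODEs (2.6) with m1(0) = 0, m2(0) = 1:
# <z1(T)> = exp(-D*T/2) * sin(nu*T)
nu, D, T = 2.0, 1.0, 1.0
estimate = mean_z1(nu, D, T, n_samples=200_000)
theory = math.exp(-D * T / 2) * math.sin(nu * T)
print(estimate, theory)  # agreement to Monte-Carlo accuracy
```

The exponential factor exp(−DT/2) is exactly the Wong–Zakai correction term −(D/2)m1 at work.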
An instructive example of application of the method of moments is the
derivation of Eq. (2.2) [6]. The process h(t) is applied to a second-
order measuring filter with natural frequency Ω, so that

z̈ + 2αż + Ω²z = h(t) (2.7)

The mean square response analysis is then performed for the combined SDE
set (2.1) and (2.7) (with the latter written in state-space form). The desired PSD of
h(t) can then be found from the basic relation between the PSDs of the excitation h(t)
and the response z(t) of the shaping filter, which results in

Φhh(Ω) = lim_{α→0+} (2αΩ²/π) ⟨z²⟩.

In the following, Sects. 2.2 and 2.4 use the basic model (2.1) to describe an external
force applied, respectively, to linear and nonlinear systems, and the corresponding response
analyses are presented, whereas in Sect. 2.3 the model describes the parametric excitation of a
linear system and results of stochastic stability analyses are presented; the subcritical
response to an external excitation can also be studied by the method of moments.
28 M. Dimentberg

2.2 Linear Systems Subject to External Excitation

The simplest of the considered cases is that of a purely external excitation of a linear
system. Thus, the SDE of motion as considered in [5, 12] is

Ẍ + 2αẊ + Ω²X = λh(t) (2.8)

It goes without saying that whenever only second-order moments of the displacement
and velocity response are of interest one can just use the basic excitation/response
relation for PSDs [1, 17], with the PSD of the RHS of Eq. (2.8) being λ²Φhh(ω)
(see Eq. (2.2)). However, with the PDF (2.3) of the (scaled) excitation the response
may (although not necessarily!) be strongly non-Gaussian. Regretfully, no analytical
solutions for response PDFs are known for the corresponding random vibration
problems (except for the case of broadband excitation, or D → ∞, where X(t)
becomes asymptotically normal). This lack of a benchmark analytical result may
bring difficulties with reliability evaluations for those applications where the
PERPM is indeed the appropriate model for dynamic loads. Two basic options for
further analytical studies are then the method of moments [5] and the path integration
method [12].
The first of these approaches may be greatly simplified for an important special
case where the system (2.8) is lightly damped and the excitation is narrow-band
with small detuning, so that α, D, and |ν − Ω| are all proportional to a small parameter,
with α ≪ Ω, D ≪ Ω and |ν − Ω| ≪ Ω. Under these conditions the response X(t)
is narrow-band indeed. This case permits analytical study by stochastic averaging
approach [1, 17, 20] with subsequent direct application of the method of moments.
To apply the averaging method to the system (2.8), (2.5), introduce first two new
state variables Xc(t) and Xs(t) as

X(t) = Xc cos(q) + Xs sin(q),  Ẋ(t) = Ω(−Xc sin(q) + Xs cos(q)) (2.9)

The relations (2.9) are then resolved for Xc(t), Xs(t) and differentiated over time.
Using Eq. (2.8) we then obtain a pair of first-order SDEs with their RHSs
proportional to the small parameter. Then, upon application of averaging over the
period 2π/ν, which ultimately implies neglecting the oscillating terms with sin(q) and cos(q)
[5, 12, 19], this set is reduced to

Ẋc = −αXc − ΔXs − Xs ξ(t),  Ẋs = −αXs + ΔXc + Xc ξ(t) + λ/2Ω (2.10)

where Δ = (ν² − Ω²)/2Ω ≈ ν − Ω.
The linear system (2.10) permits straightforward analysis by the method of
moments. This reduction to just a pair of SDEs is especially important whenever
high-order moments are sought. But it seems of importance also to derive
a simple analytical expression for the mean square amplitude [12]. Firstly, direct
calculation of the Wong–Zakai corrections brings the additional terms −(D/2)Xc
and −(D/2)Xs to the first and second of the equations (2.10), respectively.
Applying then unconditional probabilistic averaging yields a set of two deterministic
ODEs for the expected values mc,s = ⟨Xc,s⟩, which has the constant steady-state solution

mc = −λΔ/(2ΩQ),  ms = λ(α + D/2)/(2ΩQ),  Q = (α + D/2)² + Δ². (2.11)
Introduce now a new state variable H(t) which may be identified as the squared
response amplitude as long as the detuning |ν − Ω| is proportional to the small
parameter:

H = Xc² + Xs² = A² = X² + Ẋ²/Ω² (2.12)

Differentiating (2.12) over time and substituting the RHSs of the SDEs (2.10) then yields

Ḣ = 2XcẊc + 2XsẊs = −2αH + (λ/Ω)Xs (2.13)

Applying to (2.13) the probabilistic averaging we obtain a first-order ODE for the
mean square amplitude. As long as a stationary response with constant values of the
moments is sought, using (2.11) results in

⟨A²⟩ = ⟨H⟩ = (λ/2αΩ)ms = (λ/2αΩ)² (1 + D/2α)/[(1 + D/2α)² + (Δ/α)²] (2.14)

This result clearly shows how the imperfect periodicity of the excitation leads to a reduction
of the response as long as the detuning 2|Δ| does not exceed the apparent response
bandwidth 2α + D. Indeed, as long as the mean excitation frequency lies
within this resonant range, increasing the excitation bandwidth implies removal of
excitation energy out of the resonant domain, as long as the total excitation energy is fixed.
On the other hand, in the case of higher detunings such an increase should bring more
energy into the resonant domain, as can be seen from the fact that the mean square
response amplitude increases with D if 1 + D/2α < |Δ|/α. This means that
neglecting random imperfections in the periodicity of a nominally periodic excitation
may not necessarily lead to conservative estimates of reliability.
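The way (2.14) depends on the bandwidth D can be checked directly; the sketch below (illustrative parameter values) confirms the two regimes just discussed: a monotone decrease with D at zero detuning, and an initial growth with D when |Δ|/α is large.

```python
def msq_amplitude(D, alpha, delta, lam=1.0, omega=1.0):
    """Mean square response amplitude <A^2> from Eq. (2.14)."""
    x = 1.0 + D / (2.0 * alpha)   # scaled apparent half-bandwidth
    r = delta / alpha             # scaled detuning
    return (lam / (2.0 * alpha * omega)) ** 2 * x / (x * x + r * r)

alpha = 1.0
# Zero detuning: imperfect periodicity only removes energy from resonance
small = [msq_amplitude(D, alpha, delta=0.0) for D in (0.0, 1.0, 2.0, 4.0)]
assert all(a > b for a, b in zip(small, small[1:]))   # monotone decrease

# Large detuning (|delta|/alpha = 3): a moderate bandwidth first feeds
# energy into the resonant domain, so <A^2> initially increases with D
large = [msq_amplitude(D, alpha, delta=3.0) for D in (0.0, 1.0, 2.0)]
assert large[0] < large[1] < large[2]
print(small[0], large[0])
```

The crossover between the two regimes occurs exactly where 1 + D/2α = |Δ|/α, as stated above.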
The method of moments had been applied in [5] to both the exact SDE
set (2.4), (2.5) (with the latter equation rewritten as the equivalent pair of first-
order SDEs) and the approximate set (2.4), (2.10) to predict fourth-order moments
of the response X(t) (25 independent ODEs were derived by the exact analysis
upon adding 10 additional trigonometric relations). Results for the (constant in
time) stationary fourth-order moment were presented as curves of the excess factor
of the steady-state displacement X(t), that is, the quantity γ = ⟨X⁴⟩/⟨X²⟩² − 3, as
functions of the excitation/system bandwidth ratio D/α for various values of the
scaled detuning [5]. (The above ratio emerged as an important nondimensional
parameter in all studies based on the model (2.1).) The value of γ was found to
be −1.5 for D/α → 0; this should be expected for the almost sinusoidal response,
that is, for the process with a PDF of the same general shape as (2.3) but with the
singularities smeared, or transformed to finite peaks, which may be very sharp
for small D/α. On the other hand, the magnitude of the excess factor was found
to be monotonously decreasing with D/α, reducing completely to zero
as D/α → ∞. This is a manifestation of the well-known normalization effect for the
response of a linear system to a broadband random excitation with an arbitrary
PDF; however, the corresponding convergence rate was found to be rather slow,
particularly at zero detuning (values γ = 0 were obtained up to four significant
digits only at D/α = 50).
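The limiting value γ = −1.5 is simply the excess factor of a pure sinusoid, since ⟨sin²⟩ = 1/2 and ⟨sin⁴⟩ = 3/8, so that ⟨X⁴⟩/⟨X²⟩² − 3 = (3/8)/(1/4) − 3 = −1.5; a short numerical check:

```python
import math

# Excess factor gamma = <X^4>/<X^2>^2 - 3 for X = sin(phase),
# with the phase averaged uniformly over one period
N = 1000
xs = [math.sin(2 * math.pi * k / N) for k in range(N)]
m2 = sum(x * x for x in xs) / N     # -> 1/2
m4 = sum(x ** 4 for x in xs) / N    # -> 3/8
gamma = m4 / m2 ** 2 - 3.0
print(gamma)  # -1.5, the almost sinusoidal limit D/alpha -> 0
```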
Results of predictions for a limited set of response moments of several orders may
be sufficient for some engineering applications. For example, the expected fatigue life
of a structural component in case of narrow-band random vibrations is governed
by the m-th-order moment of the stress amplitude S if the linear damage accumulation
rule is used [1]; here m is the exponent in the relation for the fatigue life N (the so-called
Woehler curve), namely NSᵐ = const. However, information on the response PDF may
be required for other applications, such as for the solution of the first-passage problem
(e.g., for evaluating reliability with respect to brittle fracture). Whilst procedures
are available for evaluating the PDF of a random variable from a known limited set
of its moments, and they were used in [15], their accuracy may not be adequate in
general, and the path integration (PI) method may be used as an alternative [12].
The PI method is based on time discretization with a constant step Δt, whereby a
fourth-order Runge–Kutta scheme can be used. The law of total probability is then
applied, expressing the joint PDF of the state variables at instant t as an integral of that at the
instant t − Δt with the transition PDF as weighting function; the latter is approximately
Gaussian for sufficiently small time steps. Convergence to a stationary PDF of the
response had been observed in all cases studied; thus the PI approach is free
from the scatter which is inherent in results of estimating PDFs by direct Monte-Carlo
simulation.
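The PI iteration can be sketched on the simplest possible example, a scalar linear SDE Ẋ = −aX + ξ(t) with ⟨ξ(t)ξ(t + τ)⟩ = Dδ(τ), whose exact stationary variance D/2a provides a benchmark. The code below (illustrative parameters; not the computation of [12], which treated the full system) iterates the law-of-total-probability update on a grid; for this linear SDE the Gaussian transition PDF is exact, so only the discretization contributes error.

```python
import math

def pi_stationary_variance(a=1.0, D=1.0, dt=0.25, L=5.0, n=81, steps=80):
    """Path-integration iteration for dX = -a X dt + xi dt, <xi xi> = D delta:
    p_t(x) = integral of K(x | y) p_{t-dt}(y) dy, evaluated on a uniform grid."""
    dx = 2 * L / (n - 1)
    grid = [-L + i * dx for i in range(n)]
    m = math.exp(-a * dt)                            # mean contraction per step
    v = D / (2 * a) * (1 - math.exp(-2 * a * dt))    # exact transition variance
    p = [1.0 / (2 * L)] * n                          # start from a uniform PDF
    for _ in range(steps):
        q = [sum(math.exp(-(x - m * y) ** 2 / (2 * v)) * py
                 for y, py in zip(grid, p)) * dx / math.sqrt(2 * math.pi * v)
             for x in grid]
        norm = sum(q) * dx                           # guard against quadrature drift
        p = [qi / norm for qi in q]
    return sum(x * x * pi for x, pi in zip(grid, p)) * dx

var = pi_stationary_variance()
print(var)  # converges to the exact stationary variance D/(2a) = 0.5
```

Starting from a deliberately wrong (uniform) initial PDF, the iteration settles on the stationary density, illustrating the convergence behavior noted above.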
Figure 2.1 illustrates examples of the calculated stationary PDF of the displacement
pX(x). All curves were obtained for the case of zero mean detuning
(ν = Ω) but different values of the excitation/system bandwidth ratio. The transition
from the almost sinusoidal case to the almost Gaussian one with increasing
ratio is obvious; whilst this qualitative trend has already been established from the
analysis of the second-order and fourth-order moments of the response [5], the
present quantitative data may be of direct use for engineering applications. The
PDF pX(x) is seen to be bimodal for values of D/2α equal to 0.40 and 1.60,
with the peaks at the smeared singularities being less sharp in the latter case. The
case D/2α = 3.60 corresponds to the transition between bimodal and unimodal PDFs,
whereas a clearly unimodal PDF is obtained at D/2α = 10.0.
The joint PDFs of the response displacement and velocity were used to calculate
an important reliability index: the expected number of upcrossings, per unit time, of a
given level u by X(t), according to the Rice formula [1, 17, 20]

nu = ∫_0^∞ Ẋ p(u, Ẋ) dẊ (2.15)

Fig. 2.1 Four examples of the stationary PDF p(x) of displacement. Full line: bimodal, for the
case D/2α = 0.40; dashed: bimodal, D/2α = 1.60; dash-dot: transitional, D/2α = 3.60;
dotted: unimodal, D/2α = 10.0. Expected excitation frequency ν = Ω in all cases. Taken from
Ref. [12]

The results were scaled with respect to the corresponding numbers of upcrossings
for a Gaussian process with the same PSD as X(t) (and therefore the same rms responses
σX and σV):

nuG = (σV/2πσX) exp(−u²/2σX²) (2.16)
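For a stationary Gaussian response with independent displacement and velocity, the Rice integral (2.15) evaluates in closed form to (2.16); a quick numerical cross-check (illustrative values σX = 1, σV = 2):

```python
import math

def rice_upcrossings_gaussian(u, sx, sv, n=4000, vmax=None):
    """Numerically evaluate n_u = int_0^inf v p(u, v) dv for a zero-mean
    Gaussian pair with independent displacement X and velocity V."""
    if vmax is None:
        vmax = 10 * sv
    dv = vmax / n
    total = 0.0
    for i in range(n):
        v = (i + 0.5) * dv                      # midpoint rule
        p = (math.exp(-u * u / (2 * sx * sx)) *
             math.exp(-v * v / (2 * sv * sv)) / (2 * math.pi * sx * sv))
        total += v * p * dv
    return total

u, sx, sv = 1.5, 1.0, 2.0
numeric = rice_upcrossings_gaussian(u, sx, sv)
closed_form = sv / (2 * math.pi * sx) * math.exp(-u * u / (2 * sx * sx))
print(numeric, closed_form)  # the two agree, reproducing Eq. (2.16)
```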

The calculated values of the ratio nu/nuG are presented in Fig. 2.2 as functions of the
excitation/system bandwidth ratio for the case of zero detuning and several values
of the ratio u/σX (equal to 2.0, 2.5, 3.0, 3.5, and 4.0, starting from the upper curve
downwards). Each of the curves exhibits a finite range of almost zero values at small
D/2α, this range expanding with increasing u/σX. At higher values of D/2α
the scaled number of upcrossings starts to increase and may eventually reach unity
(the normalization effect!) provided that the level of upcrossings u/σX is not very high.
Thus it can be seen how the convergence rate to the normal PDF of the response X(t) is
strongly reduced with increasing level u/σX; it may be very poor for the tails of the
response PDF. Qualitatively similar behavior of the ratio nu/nuG had been obtained
in [12] for other (nonzero) values of detuning; the convergence rate of the normalization
effect seems to increase in general with increasing detuning.
The joint PDFs of response displacement and velocity were also used to
calculate stationary PDFs of amplitude in [12], although direct use of the approxi-
mate SDEs (2.10) is possible as well. With increasing D /2 smooth transition had

Fig. 2.2 Expected numbers of upcrossings of several different levels X = u by the displacement
X(t) as functions of the excitation/system bandwidth ratio D/2α for the case ν = Ω. The numbers
are scaled with respect to the corresponding upcrossing numbers for the Gaussian process with
the same PSD as X(t). The levels shown correspond to values of the ratio u/σX equal to 2.0, 2.5,
3.0, 3.5, and 4.0 (starting from the upper curve downwards), where σX is the standard deviation of
the response X. Taken from Ref. [12]

been observed from a sharp peak at the value close to the square root of the value defined
by Eq. (2.14) to the Rayleigh PDF corresponding to the asymptotically Gaussian
response at high D/2α.
The above results may be of use for reliability evaluation, for example when
fatigue life is of concern. They show in particular that, whenever the admissible
safe level of vibration is assigned based on the endurance limit of the material, the
imperfect periodicity of the excitation should in general be accounted for, as long
as it may become a source of damage accumulation because of nonzero excursions
beyond the endurance limit.
Concluding this section, certain extensions of the basic model (2.1) of h(t)
should be mentioned. Firstly, as suggested in [14], the white noise in the RHS
of the equation for q̇ may be multiplied by a deterministic time-variant envelope
function. This makes the resulting random process nonstationary, thereby providing
potential for predicting transient effects. Analysis of a linear system's response
to such an extended external excitation h(t) can still be done by the method of
moments [14, 16]. Thus, second- and fourth-order moments had been calculated
in [14] as functions of time for envelopes being rectangular pulses of different
durations.

Another extension of the basic model (2.1), as introduced in [15], involves the addition
of a second Gaussian white noise according to the relations

h̃(t) = (λ + ξ1(t)) cos(q),  q̇ = ν + ξ2(t) (2.17)

where ⟨ξj(t)⟩ = 0, j = 1, 2 and ⟨ξ1(t)ξ1(t + τ)⟩ = D1 δ(τ), ⟨ξ2(t)ξ2(t + τ)⟩ =
D2 δ(τ), ⟨ξ1(t)ξ2(t + τ)⟩ = γ√(D1D2) δ(τ).
If D1 = 0, then the new excitation h̃(t) is reduced to the previously studied
excitation h(t), which appears in the RHS of Eq. (2.8), with ξ(t) = ξ2(t).
With the newly added white noise ξ1(t), potential temporal variations in the amplitude
of the excitation can be simulated. The response analysis by the method of moments
through the SDE calculus, as described above, can still be applied to the SDE (2.8) with
h(t) substituted by h̃(t). Results of calculations of second- and fourth-order
moments [15] illustrate the influence of the two new parameters D1 and γ.

2.3 Linear Systems with Parametric Excitation

The phenomenon of parametric resonance is well known in physics and engineering.
Examples in the latter field are the bending instability of a beam with periodically
varying axial force and the oscillations of a pendulum excited by periodic vertical
vibration of its support. In the latter example the famous Mathieu ODE is obtained
for the pendulum's inclination angle as long as it is small and a harmonic law of
temporal variations is imposed; in the former example a set of such equations may be
obtained for the beam's modal lateral displacements upon applying a modal expansion
procedure to the basic partial differential equation of bending motion (perhaps with
the use of some Galerkin-type approximation). The most favorable condition for
parametric instability is observed when the frequency of the excitation equals twice
the natural frequency of the system.
Instability due to Gaussian random parametric excitation has been rather extensively
analyzed for the case of a high excitation/system bandwidth ratio using the
stochastic averaging approach, which implies an asymptotic white-noise approximation
for the excitation. This approach, however, does not work for finite bandwidth
ratios; an attempt to consider an expanded system by adding a shaping filter relating
the given excitation PSD to a white noise makes the expanded system nonlinear. On the
contrary, the PERPM model has manifested itself as superior in this respect, as
long as it provides analytical solutions for arbitrary values of the excitation/system
bandwidth ratio. Examples of potential applications (besides the already mentioned coal
mine cage) are: a pipe with a slug flow of a two-phase fluid, with alternating slugs of,
say, water and steam, resulting in temporal variations of the mass of the fluid-filled span
between supports [2]; and a floating offshore windmill oscillating vertically under
ocean waves, with potential lateral instability of the slender structure.

Thus, consider the generalized Mathieu equation [3, 6]:

Ẍ + 2αẊ + Ω²X(1 + μ sin(2q(t))) = 0, (2.18)

where q̇ = ν + ξ(t) and ξ(t) is the same white noise as defined for Eq. (2.1),
whereas the factor 2 is added just for convenience in studying the principal instability
domain. The same change of variables (2.9) as in Sect. 2.2 is now applied to
Eq. (2.18), and the averaging over the response period is applied similarly to the new
(slowly varying) state variables Xc(t), Xs(t), assuming α, μ, D, and |ν − Ω| to be
proportional to a small parameter. This results in the following pair of SDEs

Ẋc = −(α − g)Xc + ΔXs − Xs ξ(t),  Ẋs = −(α + g)Xs − ΔXc + Xc ξ(t) (2.19)

where g = (Ω²/4ν)μ ≈ μΩ/4 and Δ = (ν² − Ω²)/2Ω is the same detuning parameter
as in (2.10). Stochastic stability of the SDE


set (2.18) had been analyzed in [3, 6], both in the mean square and in probability.
The first of these definitions implies direct application of the method of moments to
derive a set of three ODEs for the second-order moments

Ḋcc = −(2α − 2g + D)Dcc + 2Δ Dcs + D Dss
Ḋcs = −(2α + 2D)Dcs − Δ(Dcc − Dss)
Ḋss = −(2α + 2g + D)Dss − 2Δ Dcs + D Dcc , (2.20)

where Dij = ⟨XiXj⟩, i, j = c, s. Thus the mean-square stability of the system (2.18)
is governed by the stability of the asymptotic ODE set (2.20). The latter can be checked
just by the sign of the determinant of the coefficient matrix in the RHS, since the symmetry
of the matrix of moments (of any order) implies that only real eigenvalues are
possible, and therefore verification of the signs of the other Routh–Hurwitz determinants
is not required. The condition of zero value of the above determinant provides the
following expression for the critical excitation amplitude, denoted by a star (rewritten
here in an equivalent but more compact form than in [3, 6])

(g∗/α)² = 1 + D/α + (Δ/α)²/(1 + D/α) (2.21)

Therefore the system (2.18) is unstable in the mean square if g > g∗.


The boundary (2.21) for neutral mean square stability is seen to depend
strongly on the excitation/system bandwidth ratio D/α. Thus, the imperfection
in periodicity is seen to be stabilizing in the mean
square at small-in-magnitude
mean detunings, namely, provided that |Δ/α| < 1 + D/α, and destabilizing
at larger mean detunings; with increasing excitation/system bandwidth ratio this
transition between the stabilizing and destabilizing effects is shifted towards higher
mean detunings, and in the limit of a broadband excitation only stabilization
is possible. In general, however, neglecting imperfections in periodicity may
indeed lead to nonconservative estimates of reliability; thus, a 50% drop in μ∗ for
|Δ/α| = 5 is possible if D/α is increased from zero to two. On the other hand,
in some applications, for example when the mean excitation frequency is highly
uncertain, the worst case scenario may be used, whereby the potential for the exact
(parametric) resonance is considered (Δ = 0). Then the random disorder is definitely
stabilizing.
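The mean-square boundary (2.21) can be cross-checked numerically: it must be the root, in g, of the determinant of the coefficient matrix of the moment ODEs (2.20). A sketch with illustrative parameter values:

```python
def det_moment_matrix(g, alpha, D, delta):
    """Determinant of the coefficient matrix of the moment ODEs (2.20)."""
    m = [[-(2*alpha - 2*g + D),  2*delta,            D],
         [-delta,               -(2*alpha + 2*D),    delta],
         [D,                    -2*delta,           -(2*alpha + 2*g + D)]]
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def g_star(alpha, D, delta):
    """Critical amplitude from the closed form (2.21)."""
    x = 1.0 + D / alpha
    return alpha * (x + (delta / alpha) ** 2 / x) ** 0.5

alpha, D, delta = 1.0, 2.0, 1.5
gs = g_star(alpha, D, delta)
# Bisection on the determinant as a function of g
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if det_moment_matrix(lo, alpha, D, delta) * det_moment_matrix(mid, alpha, D, delta) <= 0:
        hi = mid
    else:
        lo = mid
print(gs, 0.5 * (lo + hi))  # the closed form matches the determinant root
```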
It may be added here that the case of an additional broadband external random
excitation of the system (2.18), with PSD Φ(ω) (added into the RHS), had also
been considered in [3]. Within the applied asymptotic approach this resulted in
the additional constant equivalent white-noise terms πΦ(ν)/Ω² added to the
RHSs of the first and third ODEs (2.20). The analytical solution to this expanded set
clearly illustrates the magnification of the mean square subcritical responses (μ < μ∗)
with increasing amplitude of the parametric excitation. Such a magnification had
been studied in [7] for the more sophisticated case of an external excitation with
arbitrary PSD, as described also by the PERPM model.
It should be emphasized here that instability in the mean square should not in
general be regarded as catastrophic: exceeding the threshold (2.21) by the excitation
amplitude implies just an increased sensitivity of the system to external excitation, and if the latter is a
stationary white noise a normalizable stationary PDF of the response would exist, but
with infinite mean square [1]. Still, the condition for instability in the mean square
provides a useful, and relatively easily predictable, margin with respect to the real,
or catastrophic, almost sure instability [17], which would actually imply infinite
growth of the response with time (see the following SDE (2.23)). The condition for the
latter type of instability is usually much harder to predict, but for the present case it
can be done [3, 6].
Introducing in the Stratonovich SDEs (2.19) the change of variables

Xc = A cos(ψ), Xs = A sin(ψ), u = ln(A) (2.22)

yields the following pair of SDEs

u̇ = −α + g cos(2ψ),  ψ̇ = −Δ − g sin(2ψ) + ξ(t) (2.23)

From the first Eq. (2.23) the condition for almost sure neutral stability is seen to be

α = g⟨cos(2ψ)⟩ = g ∫_0^{2π} cos(2ψ) w(ψ) dψ (2.24)

where w(ψ) is the stationary PDF of the phase ψ(t). This PDF satisfies the Fokker–
Planck–Kolmogorov (FPK) equation which corresponds to the second SDE (2.23).
The quadrature solution to this FPK equation has been obtained by Stratonovich
and Romanovsky [20] for the original SDOF system with a different type of random
parametric excitation (perfect sinusoid plus white noise). For the present case this
solution for w(ψ) yields the following relation for the critical value of the excitation
amplitude which satisfies the relation (2.24) and is denoted g∗∗:

α = (g∗∗/2) [ I_{iΔ/D+1}(g∗∗/D)/I_{iΔ/D}(g∗∗/D) + I_{−iΔ/D+1}(g∗∗/D)/I_{−iΔ/D}(g∗∗/D) ] (2.25)

Here the I's are modified Bessel functions. In the worst case of exact tuning to
resonance (Δ = 0) the critical excitation amplitude for almost sure instability
satisfies the relation

g∗∗/α = I0(g∗∗/D)/I1(g∗∗/D) (2.26)

Using in (2.26) the asymptotic expressions for the Bessel functions at high and small
values of the argument, and comparing the results with Eq. (2.21), shows that with
increasing D/α the ratio g∗∗/g∗ of the critical excitation amplitudes increases from
unity at D/α ≪ 1 (which should be expected with approaching the perfectly periodic
case) to √2 at D/α ≫ 1. Numerical results based on Eq. (2.25) are illustrated
in [6] in the form of generalized Ince–Strutt charts: sets of neutral stability curves
on the (Δ, μ) plane for various values of D/α. The full set of curves of g∗∗/g∗ vs.
D/α for various detunings can also be found in [6].
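At exact tuning, Eq. (2.26) is a transcendental equation for g∗∗ which is easy to solve with nothing but the power series of the modified Bessel functions; the sketch below (illustrative values, α = 1) also checks the two limits quoted above: g∗∗ → α as D/α → 0, and g∗∗/g∗ → √2 for D/α ≫ 1.

```python
def bessel_i(order, x, terms=300):
    """Modified Bessel function I0 or I1 via its power series."""
    t = 1.0 if order == 0 else x / 2.0
    s = t
    for m in range(1, terms):
        t *= (x / 2.0) ** 2 / (m * (m + order))
        s += t
    return s

def g_almost_sure(alpha, D):
    """Solve g * I1(g/D) / I0(g/D) = alpha for g (Eq. (2.26)) by bisection."""
    lo, hi = 1e-9, alpha + 3.0 * (2.0 * alpha * D) ** 0.5 + D
    f = lambda g: g * bessel_i(1, g / D) / bessel_i(0, g / D) - alpha
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

alpha = 1.0
# Narrow-band limit: the threshold approaches the deterministic value alpha
assert abs(g_almost_sure(alpha, D=0.01) - alpha) < 0.02
# Broadband limit: the ratio to the mean square threshold tends to sqrt(2)
D = 50.0
ratio = g_almost_sure(alpha, D) / (alpha * (1 + D / alpha)) ** 0.5
print(ratio)  # close to 2**0.5 = 1.414...
```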
Similar analyses have been performed recently [4] for the so-called sum combinational
resonance in a two-degrees-of-freedom system governed by the equations

Ẍ1 + 2α1Ẋ1 + Ω1²X1 + Ω1²μ12 X2 h(t) = 0
Ẍ2 + 2α2Ẋ2 + Ω2²X2 + Ω2²μ21 X1 h(t) = 0, (2.27)

where h(t) = sin(q(t)), q̇ = 2(ν + ξ(t)) and ξ(t) is the same as in Eq. (2.1).
Assuming the total detuning 2Δ = Ω1 + Ω2 − 2ν as well as the damping ratios αi/Ωi
and the coefficients μ12, μ21 to be proportional to a small parameter, the KB-averaging
can be applied [19]. This results in four shortened SDEs

Ẋ1c = −α1X1c + ΔX1s + g12X2c − X1s ξ(t)
Ẋ1s = −α1X1s − ΔX1c − g12X2s + X1c ξ(t)
Ẋ2c = −α2X2c + ΔX2s + g21X1c − X2s ξ(t)
Ẋ2s = −α2X2s − ΔX2c − g21X1s + X2c ξ(t), (2.28)

where gij ≈ μijΩi/4, i, j = 1, 2 and i ≠ j.


The method of moments is applied to derive 10 ODEs for the expected values of
the four squares and six products of Xic, Xis, Xjc, Xjs, i, j = 1, 2. Due to the fact that
this system of equations describes the temporal evolution of a positive definite
covariance matrix, the eigenvalue governing the transition from stable to unstable
states (which is the eigenvalue with the largest real part) is purely real. This
implies that the type of instability is divergence (as opposed to flutter) and, which
is computationally important, that the point of transition from stable to unstable states
corresponds to a zero value of the determinant of the coefficient matrix, for which an
analytical expression had been obtained. This results in the expression for the critical
excitation amplitude

(g∗/ᾱ)² = 1 + D/ᾱ + (Δ/ᾱ)²/(1 + D/ᾱ)

where

ᾱ = (1/2)(α1 + α2),  g² = g12 g21 ᾱ²/(ᾱ² − δ²),  δ = (α1 − α2)/2 (2.29)
2
It can be seen that in the special symmetric case α1 = α2 = α, μ12 = μ21 = μ, δ = 0,
g12 = g21 = g the solution (2.29) precisely coincides with the solution (2.21) for the principal
parametric resonance, with μ and α being the excitation amplitude and damping factor
of the single excited mode. But there is also something more in this case. Namely,
direct inspection of the SDE set (2.28) shows that it is equivalent to two uncoupled
pairs of first-order SDEs for the variables X+c = X1c + X2c, X+s = X1s + X2s and the
variables X−c = X1c − X2c, X−s = X1s − X2s. This means that the condition for almost
sure stability in this symmetric case is also the same as in the case of the principal
parametric resonance.
Concluding this section, the example of a coal mine cage mentioned in
the Introduction may be referred to: even a modest random scatter in the distances
between supports, resulting in just 3% for σ/ν, where σ is the standard deviation
of the excitation frequency, may provide a 50% increase in the critical excitation
amplitude μ∗.

2.4 Nonlinear Systems

Whenever the excitation model (2.1) is applied to a nonlinear system, the simplest
approach to analysis is a local one, based on a certain approximate replacement
of the given system by some equivalent linear one [1, 17]. Then the method of
moments may be applied to the latter as described in Sect. 2.2, perhaps with the
use of some moment closure rule. Thus, for an SDOF system with cubic nonlinearity
in stiffness (Duffing oscillator) subject to the excitation (2.1), second- and fourth-order
response moments had been evaluated in [13]. As for nonlocal studies of
strongly nonlinear systems, paper [18] may be referred to, where the path integration
method had been applied to a Lotka–Volterra system with temporal variations in some
coefficients. (This example is from population dynamics, where the model (2.1)
simulates the influence of imperfectly periodic seasonal environmental variations on
the behavior of a predator–prey pair.)

In the remaining part of this section a certain strongly nonlinear system is
considered: an SDOF system with a one-sided rigid barrier at its equilibrium
position, subject to an imperfectly periodic (harmonic) excitation. This system is
known for possible excitation of subharmonics of various orders in the case of perfect
periodicity [21, 23]. As suggested in [21] this effect may be of importance for moored
bodies excited by ocean waves, but as shown in [10] it should be of less concern for
ocean engineers if the imperfect periodicity of the ocean waves is taken into account.
Thus, let the vibroimpact system be excited by a sinusoidal-in-time force with
random white-noise temporal variations of the excitation frequency. The basic
equation of motion between impacts may be written as

Ÿ + 2αẎ + Ω²Y = f(t) for Y > 0,  f(t) = λ sin(q(t)), q̇ = ν + ξ(t) (2.30)

where Y(t) is the system's displacement from its equilibrium position. Equation (2.30)
is supplemented by the impact condition at the barrier at Y = 0

Ẏ(t∗ + 0) = −rẎ(t∗ − 0), Y(t∗) = 0;  0 < r ≤ 1 (2.31)

where t∗ is clearly seen to be the instant of impact, whereas r is the restitution coefficient.
The extreme case r = 1 corresponds to elastic impacts, i.e. ones without energy
losses.
The system (2.30), (2.31) had been studied in [10, 11] for the resonant case, i.e.
one where the mean excitation frequency is close to an even integer multiple of the
natural frequency of the system without barrier, so that |Δn| = O(ε), Δn = ν − 2nΩ,
where n is an integer and ε is a small parameter; the parameters λ and α were also
assumed to be proportional to ε. (Actually a more general case was considered in
[10], with a small offset of the barrier from the system's equilibrium position.) The
analytical study is facilitated by the piecewise-linear transformation [23]

Y = |X|,  Ẏ = Ẋ sgn(X) (2.32)

This transformation effectively reduces the system to a nonimpact one for the
case of the elastic impact, as long as the impact condition (2.31) is transformed
to just a continuity condition for X(t) if r = 1. This condition will be adopted
here for simplicity, with the understanding that in the case of small impact losses, with
1 − r = O(ε), the impacting system is asymptotically equivalent and the impact losses
may be accounted for through the use of an additional equivalent viscous damping
(1 − r)(Ω/π) [1, 8, 23]. The transformed equation (2.30) for the motion between
impacts is then found to be

Ẍ + 2αẊ + Ω²X = λ sgn(X) sin(q(t)) (2.33)

and it was analyzed in [10] by stochastic averaging. The procedure for the analysis
is very similar to the one described here in Sect. 2.2, with some adjustment needed to
handle the nonlinearity in the RHS. Two new slowly varying state variables A, ψ are
introduced to this end, and the solution is represented as X = A sin(Φ), Ẋ = AΩ
cos(Φ), with the phase taken as Φ = q/2n + ψ. Then the following Fourier series
expansion can be used in the RHS of Eq. (2.33)

sgn(X) = sgn(sin(Φ)) = (4/π) Σ_{k=1}^{∞} sin((2k − 1)Φ)/(2k − 1) (2.34)

and only the terms with 2k − 1 = 2n − 1 and 2k − 1 = 2n + 1 contribute to the resonant
response. The subsequent analysis can be made by the same method of moments
as in Sect. 2.2, using the state variables Xc = A cos(ψ), Xs = A sin(ψ). Upon applying
stochastic averaging, two SDEs similar to (2.10) are obtained for these variables, and
the method of moments is applied, resulting in the mean square response amplitude

⟨A²⟩ = (λn/2αΩ)² (1 + D/2α)/[(1 + D/2α)² + (Δn/α)²] (2.35)

where

λn = 4nλ/π(4n² − 1)
This expression contains the same second cofactor as (2.14), which describes the
influence of the excitation/system bandwidth ratio. Applying this result to
moored bodies excited by ocean waves (the problem considered in [21] under the
assumption of a perfectly sinusoidal excitation) we may expect that, for the worst-case
scenario Δn = 0, the mean square response amplitude may be up to several times less
than with perfect periodicity, since the bandwidth of ocean waves may be of the order of
10% or more of their mean frequency.
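The resonant coefficient λn above collects the two contributing terms of the expansion (2.34), using the identity 1/(2n − 1) + 1/(2n + 1) = 4n/(4n² − 1); both the identity and the truncated square-wave series are easy to verify (a sketch with an illustrative truncation):

```python
import math

# Identity behind the resonant coefficient lambda_n
for n in range(1, 10):
    lhs = 1.0 / (2 * n - 1) + 1.0 / (2 * n + 1)
    assert abs(lhs - 4 * n / (4 * n ** 2 - 1)) < 1e-12

# Truncated Fourier series (2.34) approximates sgn(sin(phi))
def square_wave(phi, K=2000):
    return (4 / math.pi) * sum(math.sin((2 * k - 1) * phi) / (2 * k - 1)
                               for k in range(1, K + 1))

phi = 0.7  # any point away from the discontinuities of sgn(sin(phi))
approx = square_wave(phi)
print(approx)  # close to sgn(sin(0.7)) = 1
```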
It should be emphasized that a closed set of ODEs for the moments could be obtained
only for this very special nonlinear system, which could be transformed to the
SDE (2.33) with a linear LHS. In general the nonlinearity does not disappear, which
implies the necessity of adopting some closure procedure for the infinite set of moment
equations. This was the case with the barrier offset from the equilibrium position [10, 11].
Concluding this section, we may consider the vibroimpact system (2.30), (2.31)
for a non-resonant case, whereby the mean excitation frequency is not close to any
even integer multiple of the natural frequency. This case had been studied in
[21] for perfectly periodic excitation, and a certain range of excitation
frequencies was found where the response becomes chaotic through "breeding" of
multiple frequencies; the potential application to moored bodies excited by ocean waves
was mentioned once again.
The case of non-resonant perfectly periodic excitation had been considered in
[9] using qualitative analysis based on an iterative scheme for the transformed
system (2.33). In this way a clear description of the frequency breeding phenomenon
was obtained through the use of the series expansion (2.34), with Φ replaced by Ωt
in the first approximation.
The influence of the imperfect periodicity had then been studied in [9] for the non-
resonant case by numerical (Monte-Carlo) simulation. It had been shown that
increasing the intensity of the white-noise variations in the excitation frequency leads to
a gradual reduction in the frequency content of the response, with a simultaneous increase
of the overall response level. In particular, chaos in the system was found to be
possible only when the excitation is perfectly, or almost perfectly, periodic. In other
words, even a small imperfection in the periodicity of the excitation may "kill" chaos
for the system considered, with the excitation/system bandwidth ratio once again
being a key governing parameter. This conclusion raises certain doubts about the
general relevance of the applicability of studies of chaos to moored vessels,
as claimed in [21]. Indeed, PSDs of typical random ocean waves have relative
bandwidths D/ν of the order 0.1 and higher, actually larger than the frequency
range of chaotic behavior as demonstrated in [21] (and reproduced in [9]). Therefore
vibroimpact motions of a moored vessel in many cases should be regarded as a
response to a random excitation rather than chaos. In other words, the imperfect
periodicity, or randomness, of ocean waves should not be ignored, whereas the
chaotic response pattern, one with random behavior due to intrinsic system
properties, triggered solely by small randomness in the initial conditions, may
be less relevant, being dominated by the temporal random variations in the excitation
frequency.
2.5 Conclusions

Deviations from perfect periodicity in a forcing function (excitation) may lead to
significant corresponding variations in the response. Thus, a significant stabilization
of a coal-mine cage with respect to its parametric instability had been demonstrated
as a potential result of even a relatively small random scatter (natural or artificial)
in the distance between neighboring supports of the cable carrying the cage;
likewise, the subharmonic response of a moored body to ocean waves was shown to be
significantly lower than predicted for a perfectly periodic sinusoidal excitation.
The predictions were made using the model of a sinusoid with randomly varying
phase for the excitation and highlighted an important nondimensional parameter: the
excitation/system bandwidth ratio. The model can be easily incorporated into the
stochastic differential equations calculus, and the corresponding methods for
response analysis of linear, parametric, and nonlinear systems subject to this
narrow-band random excitation had been described together with certain specific results.
References

1. Dimentberg, M.: Statistical Dynamics of Nonlinear and Time-Varying Systems. Research
Studies Press, Taunton (1988)
2. Dimentberg, M.: A stochastic model of parametric excitation of a straight pipe due to slug flow
of a two-phase fluid. In: Proceedings of the 5th International Symposium on Flow-Induced
Vibrations, pp. 207–209, Brighton, UK (1991)

3. Dimentberg, M.: Probab. Eng. Mech. 7, 131–134 (1992)
4. Dimentberg, M., Bucher, C.: J. Sound Vib. 331, 4373 (2012)
5. Dimentberg, M., Hou, Z., Noori, M., Zhang, W.: Non-Gaussian response of a single-degree-of-freedom
system to a periodic excitation with random phase modulation. In: ASME
Special Volume: Recent Developments in the Mechanics of Continua, ASME-AMD 160,
pp. 27–33 (1993)
6. Dimentberg, M., Hou, Z., Noori, M.: Stability of a SDOF system under periodic parametric
excitation with a white-noise phase modulation. In: Kliemann, W., Sri Namachivaya, N. (eds.)
Nonlinear Dynamics and Stochastic Mechanics. CRC Press (1995)
7. Dimentberg, M., Hou, Z., Noori, M., Zhang, W.: J. Sound Vib. 192(3), 621–627 (1996)
8. Dimentberg, M., Iourtchenko, D.: Probab. Eng. Mech. 14, 323–328 (1999)
9. Dimentberg, M., Iourtchenko, D.: Int. J. Bifurc. Chaos 15, 2057–2061 (2005)
10. Dimentberg, M., Iourtchenko, D., van Ewijk, O.: Nonlinear Dyn. 17, 173–186 (1998)
11. Dimentberg, M., Iourtchenko, D., van Ewijk, O.: Subharmonic response of moored systems to
ocean waves. In: Spencer, B., Johnson, E. (eds.) Stochastic Structural Dynamics, pp. 495–498.
Balkema, Rotterdam (1999)
12. Dimentberg, M., Mo, E., Naess, A.: J. Eng. Mech. 133, 1037–1041 (2007)
13. Hou, Z., Wang, Y., Dimentberg, M., Noori, M.: Probab. Eng. Mech. 14, 83–95 (1999)
14. Hou, Z., Zhou, Y., Dimentberg, M., Noori, M.: Probab. Eng. Mech. 10, 73–81 (1995)
15. Hou, Z., Zhou, Y., Dimentberg, M., Noori, M.: J. Eng. Mech. 122, 1101–1109 (1996)
16. Hou, Z., Zhou, Y., Dimentberg, M., Noori, M.: Stochastic models for disordered periodic
processes and their applications. In: Shlesinger, M., Swean, T. (eds.) Stochastically Excited
Nonlinear Ocean Structures, pp. 225–251. World Scientific, Singapore (1998)
17. Lin, Y.K., Cai, G.Q.: Probabilistic Structural Dynamics. Advanced Theory and Applications.
McGraw-Hill, New York (1995)
18. Naess, A., Dimentberg, M., Gaidai, O.: Phys. Rev. E 78, 021126 (2008)
19. Nayfeh, A., Mook, D.: Nonlinear Oscillations. Wiley, New York (1979)
20. Stratonovich, R.L.: Topics in the Theory of Random Noise, vol. II. Gordon and Breach, New
York (1967)
21. Thompson, J.M.T., Stewart, H.B.: Nonlinear Dynamics and Chaos (Chapters 14, 15). Wiley,
Chichester (1986)
22. Wedig, W.V.: Analysis and simulation of nonlinear stochastic systems. In: Schiehlen, W. (ed.)
Nonlinear Dynamics in Engineering Systems, pp. 337–344. Springer, New York (1989)
23. Zhuravlev, V.F., Klimov, D.M.: Applied Methods in Vibration Theory (in Russian). Nauka,
Moscow (1988)
Chapter 3
Noise-Induced Phenomena: Effects of Noises
Based on Tsallis Statistics

Horacio S. Wio and Roberto R. Deza

Abstract In this chapter, suitable tools are developed for dealing with the stochastic
dynamics of nonlinear systems submitted to noises which are neither white nor
Gaussian. These tools are then applied to some physical problems:
stochastic resonance
Brownian motors
resonant gated trapping
noise-induced transition
which, besides being highly relevant to biology and technology, are fine instances of
the fact that in nonlinear systems noise can (and hence does, often challenging our
intuition) have highly nontrivial constructive effects. In the first three examples,
the system's response is either optimized (signal enhancement) or becomes more
robust (spectral broadening) when the noise is non-Gaussian. In the last one, a shift
of the transition lines is observed, in the sense that order is enhanced.

Keywords Bounded noises · Tsallis statistics · Non-Gaussian stochastic processes ·
Stochastic resonance · Brownian motors · Resonant gated trapping ·
Noise-induced transition · Biophysics

H.S. Wio (✉)
Instituto de Física de Cantabria (Universidad de Cantabria and CSIC), Avda. Los Castros s/n,
39005 Santander, Spain
e-mail: wio@ifca.unican.es

R.R. Deza
Instituto de Física de Cantabria (Universidad de Cantabria and CSIC), Avda. Los Castros s/n,
39005 Santander, Spain. Permanent address: IFIMAR (Universidad Nacional de Mar del Plata
and CONICET), Deán Funes 3350, B7602AYL Mar del Plata, Argentina
e-mail: deza@mdp.edu.ar

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering,
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_3, © Springer Science+Business Media New York 2013

3.1 Introduction

During the last decades of the twentieth century, the scientific community has
recognized that noise or fluctuations can in many situations be (against everyday
intuition) the trigger of new phenomena or new forms of order. A few examples
are noise-induced phase transitions [1], noise-induced transport [2, 3], stochastic
resonance [4, 5], and noise-sustained patterns [6, 7].
Most studies of such noise-induced phenomena have assumed the noise source
to have a Gaussian distribution, either white (memoryless) or colored (i.e.,
with memory, a concept defined below). Although customarily accepted without
criticism on the basis of the central limit theorem, the true rationale behind this
assumption lies in the possibility of obtaining some analytical results, and avoiding
many difficulties arising in handling non-Gaussian noises. There is, however,
experimental evidence that at least in some cases (particularly in sensory and
biological systems) non-Gaussian noise sources may add desirable features (like
robustness or fault tolerance) to noise-induced phenomena. These findings add
practical interest to the intrinsic one of finding viable ways to deal with
non-Gaussian noises (or at least some classes thereof).
The present chapter is a brief review of recent results on some noise-induced
phenomena arising when the system is submitted to a colored (or time-correlated)
and non-Gaussian noise source whose statistics obeys the q-distribution found
within the framework of nonextensive statistical physics [8]. In all the phenomena
analyzed, the system's response is shown to be strongly affected by a departure of
the noise source from Gaussian behavior (corresponding to q = 1). This translates
into a shift of the transition lines, an enhancement of the system's response, or a marked
broadening thereof, according to the case. In most examples, the value of the
parameter q optimizing the system's response turns out to be q ≠ 1. Clearly, this
result is highly relevant for many technological applications, as well as for the
understanding of some situations of biological interest.

3.2 Non-Gaussian Noise

We start out by considering the following SDE (stochastic differential equation, a
differential equation with random coefficients), which is a generalization of the one
proposed by Langevin in 1908, but with multiplicative noise:

ẋ = f(x) + g(x) η(t).   (3.1)

For the time being, we disregard any explicit dependence on t of the functions f (the
drift) and g (the coefficient of the noise, which yields an x-dependent diffusion in
a Fokker–Planck description, both in Itô's and in Stratonovich's interpretations), but
keep of course the implicit one through the stochastic process x.
Our focus here is the stochastic or noise source η(t), which is called
multiplicative because it affects the values of g(x(t)). Usually, η is assumed to
be a Gaussian-distributed variable; this means that if we made a histogram of
the values taken by η at any fixed t but in different realizations, we would
be able to fit it with a Gaussian (albeit with a t-dependent variance). In fact
the variance D, the second moment of a Gaussian distribution and its defining
characteristic besides its mean, is nothing but the t = t′ case of the self-correlation
function C(t − t′) ≡ ⟨η(t) η(t′)⟩, where ⟨·⟩ represents an average over
realizations or ensemble average. Here we shall consider only Markovian processes,
namely those which completely lose their memory after a typical correlation
time τ. If τ → 0, the noise is white and C(t − t′) = 2D δ(t − t′), whereas for finite
τ we speak of a colored noise, strictly an Ornstein–Uhlenbeck (OU) process, for which
C(t − t′) = (D/τ) exp[−|t − t′|/τ].
An OU process can be obtained dynamically from an SDE of the type in Eq. (3.1),

ẋ = −(1/τ) x + (1/τ) ξ(t),

if ξ(t) is a Gaussian white noise with zero mean and variance D. Here we assume
the noise η(t) to be of the OU type, but obeying a particular class of non-Gaussian
distribution arising in nonextensive thermostatistics [8]. η(t) is a generalization of
the OU process and can be generated through the SDE

dη/dt = −(1/τ) dVq(η)/dη + (1/τ) ξ(t),   (3.2)

where ξ(t) is again a Gaussian white noise with variance D. The q- (and τ-)
dependent potential has the expression

Vq(η) = [D/(τ(q − 1))] ln[1 + τ(q − 1) η²/(2D)],

and lim_{q→1} Vq(η) = η²/2. Since this article is not the appropriate space to elaborate
on all the properties of the process η, we refer to [9] for details. However, it is
instructive to display the stationary probability density function (pdf) Pq^st(η), which
can be normalized only for q < 3 and is given by

Pq^st(η) = (1/Zq) exp_q[−(τ/(2D)) η²].   (3.3)

The function exp_q(x) is defined by

exp_q(x) = [1 + (1 − q) x]^{1/(1−q)},   (3.4)

and the normalization constant Zq has the expression

Zq = [2πD/(τ(1 − q))]^{1/2} Γ(1/(1−q) + 1) / Γ(1/(1−q) + 3/2)   for −∞ < q < 1,
Zq = [2πD/τ]^{1/2}   for q = 1,
Zq = [2πD/(τ(q − 1))]^{1/2} Γ(1/(q−1) − 1/2) / Γ(1/(q−1))   for 1 < q < 3

(Γ indicates the Gamma function). The first moment of Pq^st(η) is ⟨η⟩ = 0, while the
second,

⟨η²⟩ = ∫ η² Pq^st(η) dη = 2D/[τ(5 − 3q)] ≡ Dq,

is finite only for q < 5/3. Also, the η-process correlation time τq diverges near q = 5/3
and can be approximated over the whole range of q values by

τq ≈ 2τ/(5 − 3q).
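The closed-form moments above lend themselves to a quick numerical cross-check. The sketch below is not from the chapter (the parameter values τ = D = 1 and the two q values are illustrative): it evaluates the stationary pdf of Eqs. (3.3)–(3.4) on a grid and compares the numerically integrated normalization and second moment with the Γ-function expression for Zq and with ⟨η²⟩ = 2D/[τ(5 − 3q)]:

```python
import math
import numpy as np

def trapz(y, x):
    """Plain trapezoidal quadrature (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def pq_unnormalized(eta, q, tau, D):
    """exp_q(-(tau/(2D)) eta^2), Eqs. (3.3)-(3.4); valid for q != 1."""
    base = 1.0 - (1.0 - q) * (tau / (2.0 * D)) * eta ** 2
    # for q < 1 the pdf has compact support: zero wherever base <= 0
    return np.where(base > 0.0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)

def Zq_closed(q, tau, D):
    """Normalization constant Zq (Gamma-function expression; q != 1)."""
    if q < 1.0:
        n = 1.0 / (1.0 - q)
        return math.sqrt(2.0 * math.pi * D / (tau * (1.0 - q))) \
            * math.gamma(n + 1.0) / math.gamma(n + 1.5)
    n = 1.0 / (q - 1.0)
    return math.sqrt(2.0 * math.pi * D / (tau * (q - 1.0))) \
        * math.gamma(n - 0.5) / math.gamma(n)

tau = D = 1.0
results = {}
# q < 1: integrate over the full compact support; q > 1: truncate the far tails
for q, lim in [(0.5, 2.0), (1.5, 400.0)]:
    eta = np.linspace(-lim, lim, 800001)
    p = pq_unnormalized(eta, q, tau, D)
    Z_num = trapz(p, eta)
    var_num = trapz(eta ** 2 * p, eta) / Z_num
    results[q] = (Z_num, Zq_closed(q, tau, D), var_num,
                  2.0 * D / (tau * (5.0 - 3.0 * q)))
```

For q = 0.5 the support is |η| < 2 and the closed form reduces to Zq = 32/15; for q = 1.5 it reduces to Zq = π with ⟨η²⟩ = 4, and the two routes agree to quadrature accuracy.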

The limit of η being a (Gaussian) OU process with noise intensity D/τ and
correlation time τ is recovered for q → 1. Furthermore, for q < 1 the pdf has a
cutoff and is only defined for

|η| < [2D/(τ(1 − q))]^{1/2}.

The shape of the pdf as a function of η is shown in Fig. 3.1, for different values of q.
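A trajectory of the bounded (q < 1) or heavy-tailed (q > 1) process η itself can be generated directly from Eq. (3.2). The Euler–Maruyama sketch below is our own, not the chapter's code; in particular, the clipping safeguard for q < 1 is an assumption we add because discrete steps may overshoot the hard support boundary:

```python
import numpy as np

def generate_eta(q, tau, D, dt, n_steps, seed=0):
    """Euler-Maruyama integration of Eq. (3.2):
    d(eta)/dt = -(1/tau) V_q'(eta) + (1/tau) xi(t),
    with V_q'(eta) = eta / [1 + (tau/(2D)) (q - 1) eta^2] and xi a Gaussian
    white noise of intensity D."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_steps) * np.sqrt(2.0 * D * dt) / tau
    out = np.empty(n_steps)
    e = 0.0
    bound = np.sqrt(2.0 * D / (tau * (1.0 - q))) if q < 1.0 else np.inf
    for i in range(n_steps):
        denom = 1.0 + (tau / (2.0 * D)) * (q - 1.0) * e * e
        e += -(e / denom) * dt / tau + xi[i]
        if q < 1.0:
            # numerical safeguard: keep the state inside the support |eta| < bound
            e = min(max(e, -0.99 * bound), 0.99 * bound)
        out[i] = e
    return out

eta = generate_eta(q=0.5, tau=1.0, D=1.0, dt=0.002, n_steps=400_000)
var = eta[40_000:].var()   # stationary variance; theory: 2D/[tau(5-3q)] = 4/7
```

With these parameters the sample variance should come out near 4/7 ≈ 0.57 and the trajectory never leaves (−2, 2), in line with the bounded case of Fig. 3.1; running the same routine with q > 1 produces the wide, power-law-tailed regime.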
In the next section we outline the path-integral approach to obtain an effective
Markovian approximation, and in the following ones we briefly review a few non-
Gaussian noise-induced phenomena.

3.3 Effective Markovian Approximation

The joint process (x, η) in Eqs. (3.1)–(3.2) is Markovian. As a consequence, its
transition pdf Pq(x, η, t; τ) obeys a Fokker–Planck equation (FPE for short)

∂Pq/∂t = −(∂/∂x){[f(x) + g(x) η] Pq} + (1/τ)(∂/∂η)[(dVq/dη) Pq] + (D/τ²) ∂²Pq/∂η²,   (3.5)

and also admits a path-integral representation [10]

Fig. 3.1 The stationary pdf given by Eq. (3.3), for τ/D = 1. The solid line
indicates the Gaussian case (q = 1); the dashed line corresponds to a bounded
distribution (q = 0.5); the dashed-dotted line corresponds to a wide (Lévy-like)
distribution (q = 2)

Pq(xb, ηb, tb | xa, ηa, ta) = ∫ D[x(t)] D[η(t)] D[px(t)] D[pη(t)] e^{−Sq,1},

where the paths run from x(ta) = xa, η(ta) = ηa to x(tb) = xb, η(tb) = ηb.
Here, the variables px(t) and pη(t) are the canonically conjugate momenta of x(t)
and η(t), which in the path-integral context means ∫ (dpx/2π) e^{i px q} = δ(q), and
Sq,1 = ∫_{ta}^{tb} ds { i px(s) [ẋ(s) − f(x(s)) − g(x(s)) η(s)]
 + i pη(s) [η̇(s) + τ⁻¹ V′q(η(s))] + (D/τ²) [i pη(s)]² }   (3.6)

is the stochastic action, where the time derivatives are interpreted as the limits of
discrete differences.
In the following paragraphs we sketch the path integration over the dynamical
variables pη(s), px(s), and η(s), and the adiabatic-like elimination whereby we
retrieve an effective Markovian approximation. Gaussian integration over pη(s)
yields

Pq(xb, ηb, tb | xa, ηa, ta; τ) = ∫ D[x(t)] D[η(t)] D[px(t)] e^{−Sq,2},   (3.7)

with the same boundary conditions on the paths, and with
Sq,2 = ∫_{ta}^{tb} ds i px(s) [ẋ(s) − f(x(s)) − g(x(s)) η(s)]
 + (τ²/4D) ∫_{ta}^{tb} ds ds′ [η̇(s) + τ⁻¹ V′q(η(s))] δ(s − s′) [η̇(s′) + τ⁻¹ V′q(η(s′))].   (3.8)

The integration over px(s) is also immediate, yielding

Pq(xb, ηb, tb | xa, ηa, ta; τ) ∝ ∫ D[x(t)] D[η(t)] e^{−Sq,3} ∏_s δ[ẋ(s) − f(x(s)) − g(x(s)) η(s)],   (3.9)

with

Sq,3 = (τ²/4D) ∫_{ta}^{tb} ds [η̇(s) + τ⁻¹ V′q(η(s))]²,   (3.10)

and the functional delta in Eq. (3.9) indicates that

η(s) = [ẋ(s) − f(x(s))] / g(x(s))   (3.11)

is to be fulfilled at each instant s. With this condition, the integration over η(s) just
amounts to replacing η(s) by Eq. (3.11) and η̇(s) by the time derivative of this
equation, namely

η̇(s) = −(g′/g²) ẋ (ẋ − f) + (1/g)(ẍ − f′ ẋ),   (3.12)

where the prime is a shorthand for d/dx, and x ≡ x(s). The resulting stochastic
action corresponds to a non-Markovian description, as it involves ẍ(s). In order
to obtain an effective Markovian approximation we resort to approximations and
arguments used before in relation with colored Gaussian noise [11–13], whose
results resembled those of the unified colored noise approximation (UCNA)
[14, 15]. In short, we neglect all the contributions including ẍ(s) and/or ẋ(s)ⁿ with
n > 1, and get the approximate relation

τ η̇ + dVq/dη ≈ (1/g) { (ẋ − f) / [1 + (τ(q−1)/2D)(f/g)²]
 − τ g (f/g)′ ẋ − (τ(q−1)/D) f² ẋ / ( g² [1 + (τ(q−1)/2D)(f/g)²]² ) }.   (3.13)
As is the case for the UCNA, this approximation gives reliable results for small
values of τ.
The final result for the transition pdf is

Pq(xb, tb | xa, ta; τ) = ∫_{x(ta)=xa}^{x(tb)=xb} D[x(t)] e^{−Sq,4},   (3.14)

where, for the simple case g(x) = 1 and writing f(x) ≡ −U′(x),

Sq,4 = (1/4D) ∫_{ta}^{tb} ds { [ τ U″ + (1 − (τ(q−1)/2D) U′²) / (1 + (τ(q−1)/2D) U′²)² ] ẋ
 + U′ / (1 + (τ(q−1)/2D) U′²) }².   (3.15)

It is immediate to recover some known limits. For τ > 0 and q → 1 we get the known
Gaussian colored noise result (OU process), while for τ → 0 we retrieve a Gaussian
white noise, even for q ≠ 1.
The FPE for the evolution of P(x,t) in (3.14) is

∂t P(x,t) = −∂x [A(x) P(x,t)] + (1/2) ∂²x [B(x) P(x,t)],   (3.16)

where

A(x) = −U′ / { (1 − (τ(q−1)/2D) U′²) / (1 + (τ(q−1)/2D) U′²) + τ U″ [1 + (τ(q−1)/2D) U′²] }   (3.17)

and

B(x) = 2D { (1 + (τ(q−1)/2D) U′²)² / ( τ U″ [1 + (τ(q−1)/2D) U′²]² + [1 − (τ(q−1)/2D) U′²] ) }².   (3.18)

The stationary distribution of the FPE in Eq. (3.16) results in

Pst(x) = (N/B(x)) exp[−Φ(x)],   (3.19)

where N is the normalization factor and

Φ(x) = −2 ∫^x [A(y)/B(y)] dy.   (3.20)

The FPE (3.16)–(3.18) and its associated stationary distribution (3.19)–(3.20) allow
one to compute the mean first-passage time (MFPT) and other results through a
Kramers-like approximation. Their analytical dependence on the different parameters
in the case of a double-well potential agrees remarkably well with the results of
extensive numerical simulations.
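As an illustration of how Eqs. (3.16)–(3.20) are used in practice, the sketch below (our own construction, with arbitrary small-τ parameters) tabulates A(x) and B(x) for the double-well potential U(x) = x⁴/4 − x²/2 of the next section, and builds the stationary distribution by cumulative quadrature of Eq. (3.20):

```python
import numpy as np

def trapz(y, x):
    """Plain trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def stationary_pdf(x, q, tau, D):
    """P_st(x) of the effective FPE, Eqs. (3.16)-(3.20), for U = x^4/4 - x^2/2."""
    U1 = x ** 3 - x            # U'
    U2 = 3.0 * x ** 2 - 1.0    # U''
    a = (tau / (2.0 * D)) * (q - 1.0) * U1 ** 2
    A = -U1 / ((1.0 - a) / (1.0 + a) + tau * U2 * (1.0 + a))                  # Eq. (3.17)
    B = 2.0 * D * ((1.0 + a) ** 2 / (tau * U2 * (1.0 + a) ** 2 + (1.0 - a))) ** 2  # Eq. (3.18)
    f = A / B
    # Phi(x) = -2 * cumulative integral of A/B (trapezoid rule), Eq. (3.20)
    phi = -2.0 * np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))))
    p = np.exp(-phi) / B                                                       # Eq. (3.19)
    return p / trapz(p, x)

x = np.linspace(-1.5, 1.5, 3001)
p = stationary_pdf(x, q=0.75, tau=0.1, D=0.1)
```

For these (bounded-noise, q < 1) parameters the result is a symmetric bimodal distribution with maxima close to the wells at x = ±1, as expected from the τ → 0 limit Pst ∝ exp(−U/D).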

3.4 Stochastic Resonance

The phenomenon of stochastic resonance (SR) is but one example of the counterintuitive
role played by noise in nonlinear systems, as enhancing the response of
such a system to a weak external signal may require increasing the noise intensity.
The study of SR has attracted considerable interest since it was first introduced by Benzi
et al. to explain the periodicity of the Earth's ice ages (see [4, 5] and references
therein). Among the reasons are its potential technological applications in optimizing the
response of nonlinear dynamical systems, and its connection with some biological
mechanisms.
A large number of the studies on SR have been done analyzing a paradigmatic
bistable one-dimensional double-well potential

U0(x) = x⁴/4 − x²/2.   (3.21)

In almost all descriptions, the transition rates between the two wells were estimated
as the inverse of the Kramers time (or the typical mean first-passage time between
the wells), which was evaluated using standard techniques. Moreover, the noises
have been assumed to be Gaussian in almost all cases.
Let us return to Eqs. (3.1)–(3.2) and consider now an explicitly time-dependent
drift of the form

f(x,t) = −∂U(x,t)/∂x = −U0′(x) + S(t),

where S(t) ≡ A cos(ωt) is an external signal. For A = 0 and g = 1, this problem
describes the diffusion (induced by the colored non-Gaussian noise η) of a
hypothetical particle submitted to the potential U0(x). Although the details about
the effective Fokker–Planck equation will be omitted in this article (see [9, 16, 17]),
it is worth indicating that such a Markovian approximation allows one to obtain the pdf
of the process and to derive analytical expressions for the Kramers time. Another
useful approximation, the so-called two-state approach [4, 5], was also exploited in
order to obtain analytical expressions for the output power spectral density and the
signal-to-noise ratio at the input frequency (denoted by R). Figure 3.2 shows the
dependence of R on the noise intensity D.
In the upper row, the theoretical results are depicted: on the left, for fixed
correlation time τ and values of q both below and above 1, the general trend is
that R becomes higher as q decreases; on the right, for fixed q and several values
of τ, the general trend agrees with previous results for colored Gaussian noises
[4, 5]: as the correlation time τ increases, the maximum of R decreases and shifts
toward larger values of D (the latter fact being a consequence of the suppression of
the switching rate for increasing τ).
Both qualitative trends were confirmed by Monte Carlo simulations of Eqs. (3.1)
and (3.2), displayed in the lower row of Fig. 3.2. The lower left frame corresponds
to the same parameters as the upper left frame. In addition to the increase of the
maximum of R for q < 1, an aspect not very well reproduced or predicted by the
effective Markovian approximation can also be seen: the R curve flattens as q decreases,
which is a hint that maximizing the system's response to a weak external signal
does not require fine-tuning of the noise intensity. The lower right frame displays
simulation results for the same parameters as the upper right frame.

Fig. 3.2 Signal-to-noise ratio R vs. noise intensity D for the double-well potential
U0(x) = x⁴/4 − x²/2. Upper row: theoretical results; lower row: Monte Carlo results.
Left: τ = 0.1 and q = 0.25, 0.75, 1.0, 1.25 (from top to bottom); right: q = 0.75 and
τ = 0.25, 0.75, 1.5 (from top to bottom)

The numerical and theoretical results can be summarized as follows:

(a) for fixed τ, the maximum value of the signal-to-noise ratio increases with
decreasing q;
(b) for given q, the optimal noise intensity (the one maximizing the signal-to-noise
ratio) decreases with q, and its value is approximately independent of τ;
(c) for fixed noise intensity, the optimal value of q is independent of τ and in
general turns out to be qop ≠ 1.
The SR phenomenon under a non-Gaussian noise source of the form introduced
above was analyzed in [18] for the particular case of non-Gaussian white noise,
using a simple experimental setup. Those results confirmed most of the predictions
indicated above.
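The chapter does not reproduce the two-state formulas themselves, but the Gaussian white-noise baseline they generalize is standard and compact. The sketch below is our own (A, ΔU and the prefactor convention are illustrative, not the chapter's q-dependent expressions; prefactors do not affect the location of the maximum) and shows the characteristic non-monotonic R(D), whose maximum sits at D = ΔU/2 for this functional form:

```python
import numpy as np

def snr_two_state(D, A=0.05, dU=0.25, xm=1.0):
    """Two-state SNR for the symmetric quartic well (weak-signal limit):
    Kramers rate r_K = exp(-dU/D)/(sqrt(2)*pi); R = sqrt(2)*(A*xm/D)**2 * r_K."""
    r_k = np.exp(-dU / D) / (np.sqrt(2.0) * np.pi)
    return np.sqrt(2.0) * (A * xm / D) ** 2 * r_k

D = np.linspace(0.02, 0.6, 4000)
R = snr_two_state(D)
# analytic optimum: d/dD [-2 ln D - dU/D] = 0  ->  D = dU/2 = 0.125
D_opt = D[np.argmax(R)]
```

This D-dependence is the "resonance" in stochastic resonance; the non-Gaussian results of this section then modify both the height and the width of this curve as q departs from 1.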

3.5 Brownian Motors

Brownian motors or ratchets (nonequilibrium systems in which the breakdown of
spatial and/or temporal symmetry induces directional transport) are another class
of noise-induced phenomena that attract the attention of an increasing number of
researchers, again due to both potential technological applications and biological
interest [2, 3]. The transport properties of a typical Brownian motor are usually
studied by means of the following general stochastic differential or Langevin
equation:

m d²x/dt² = −γ dx/dt − V′(x) − F + ξ(t) + η(t),   (3.22)

where m is the mass of the particle, γ the friction constant, V(x) the (sawtooth-like)
ratchet potential, F a constant load force, and ξ(t) the thermal noise satisfying
⟨ξ(t) ξ(t′)⟩ = 2γT δ(t − t′). Finally, η(t) is the time-correlated forcing (with zero
mean) that keeps the system out of thermal equilibrium, allowing the motion to be
rectified. For this type of ratchet model, many different kinds of time-correlated
forcing have been considered in the literature [2, 3].
The effect of non-Gaussian noises of the class introduced before (with the
dynamics of η(t) described by the Langevin equation (3.2)) on the transport
properties of a typical Brownian motor has been analyzed in [19, 20]. As discussed
before, for 1 < q < 3 the probability distribution decays as a power law (much
more slowly than a Gaussian). Hence, keeping the noise intensity D constant, the
width or dispersion of the distribution increases with q, meaning that the higher q,
the stronger the kicks the particle will receive as compared with the Gaussian OU
process.
Our main objective will be to analyze the dependence, on the different parameters
(in particular on q), of the mean current J ≡ ⟨dx/dt⟩ and of the efficiency, defined
as the ratio of the work done per unit time by the particle against the load force F
to the mean power injected into the system through the external forcing.
Let us first look at the overdamped regime, by setting m = 0 and γ = 1. For large τ,
a closed expression was obtained in [19, 20] using an adiabatic approximation.
of qobtained through the adiabatic approximationtogether with results of
numerical simulations. The chosen parameter region is similar to others used in
previous studies, but now we consider a nonzero load force, so to have non-
vanishing efficiency. Although there is not quantitative agreement between theory
and simulations, the used adiabatic approximation predicts qualitatively very well
the behavior of J (and ) as q is varied. As shown in the figure, the current grows
monotonically with q (at least for q < 5/3) while there is an optimal value of q
(> 1) giving the maximum efficiency. These facts can be interpreted as follows:
as q increases, the width of the Pq ( ) distribution grows and large values of the
non-Gaussian noise become more frequent, leading to a monotonic increase of
J with q. However, since the fluctuations around the mean value become larger,
the efficiency ends up decaying for large values of q: in this region, in spite of
having a large (positive) mean value of the current for a given realization of the
process, the transport of the particle towards the desired direction is far from being
assured. Hence, the indicated results clearly show that the transport mechanism
becomes more efficient when the stochastic forcing has a non-Gaussian distribution
with q > 1.
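The adiabatic reasoning above can be made concrete with a deliberately simplified estimate, our own construction rather than the closed expression of [19, 20]: at zero temperature and without load, a slowly varying tilt η moves an overdamped particle through a sawtooth potential only when it exceeds a critical slope, and the adiabatic current is the noise-pdf-weighted average of the resulting deterministic velocity. All parameter values (V0, l1, l2, τ, D) are illustrative:

```python
import numpy as np

def trapz(y, x):
    """Plain trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sawtooth_velocity(eta, V0=1.0, l1=0.9, l2=0.1):
    """Deterministic overdamped velocity (gamma = 1, T = 0) in a sawtooth potential
    of barrier V0 and tooth lengths l1 (gentle side) and l2 (steep side), under a
    static tilt force eta. Running solutions need the tilt to beat both slopes."""
    L = l1 + l2
    v = np.zeros_like(eta)
    right = eta > V0 / l1
    v[right] = L / (l1 / (eta[right] - V0 / l1) + l2 / (eta[right] + V0 / l2))
    left = eta < -V0 / l2
    v[left] = -L / (l1 / (V0 / l1 - eta[left]) + l2 / (-eta[left] - V0 / l2))
    return v

def tsallis_pdf(eta, q, tau=1.0, D=1.0):
    """Normalized stationary pdf of eta, Eq. (3.3) (Gaussian branch at q = 1)."""
    if abs(q - 1.0) < 1e-12:
        p = np.exp(-tau * eta ** 2 / (2.0 * D))
    else:
        base = 1.0 - (1.0 - q) * (tau / (2.0 * D)) * eta ** 2
        p = np.where(base > 0.0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)
    return p / trapz(p, eta)

grid = np.linspace(-60.0, 60.0, 400001)
J = {q: trapz(tsallis_pdf(grid, q) * sawtooth_velocity(grid), grid)
     for q in (1.0, 1.3)}
```

Because the steep side (threshold V0/l2 = 10) is essentially never overcome while the gentle side (threshold V0/l1 ≈ 1.1) is, the current is positive; the heavier tails at q = 1.3 (same D) make it larger, reproducing the monotonic growth of J with q described in the text.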

Fig. 3.3 Current (a) and efficiency (b) as functions of q, for the Brownian motor
subject to non-Gaussian noise described by Eq. (3.22). Solid line: adiabatic
approximation; squares: Monte Carlo results. Calculations performed for m = 0,
γ = 1, T = 0.5, F = 0.1, D = 1, and τ = 100/(2π). Taken from Ref. [19] © Springer

For m ≠ 0, when inertia effects become relevant, it is reasonable to expect, on the
grounds of the results discussed above, that non-Gaussian noises might improve the
mass-separation capability of ratchets. Previous works have analyzed two species
of particles in ratchets with an OU noise as external forcing (corresponding to q = 1 in the
present case). Their dynamics were studied for different values of τ, and a parameter
region was found where mass separation occurs. This means that the direction of
the current becomes mass-dependent: the heavy species moves in one direction
while the light one does so in the opposite sense. When this system (in the
parameter region where mass separation occurs for q = 1) is submitted to non-Gaussian
forcing, we find that mass separation also occurs for q ≠ 1, that it happens
even in the absence of a load force, and that it is enhanced for q > 1. The current J is
plotted as a function of q in Fig. 3.4, for m1 = 0.5 and m2 = 1.5.
It is apparent that the current difference is maximized for some value (close to
q = 1.25) indicated with a vertical double arrow. Another double arrow indicates the
separation of masses occurring for q = 1 (Gaussian, OU forcing). If the load force
decreases (increases), both curves shift upward (downward) together, maintaining
their difference approximately constant. By controlling F, it is possible to achieve
the situation shown in part (b), where for the optimal value of q the heavy species
remains static on average (it has J = 0), while the light one has J > 0. In part (c),
both species move in opposite directions at equal speed for the optimal q.
In [21], a model (consisting of a random walker moving along a ratchet
potential) was set up to study the transport properties of motor proteins, like kinesin
and myosin. Whereas in that work the noises were assumed to be Gaussian and
white, they could be generally expected to be self-correlated and non-Gaussian

Fig. 3.4 Mass separation: Monte Carlo results for the current as a function of q, for
particles of masses m = 0.5 (hollow circles) and m = 1.5 (solid squares). Calculations
performed for γ = 2, T = 0.1, τ = 0.75, and D = 0.1875, with F = 0.025 (a),
F = 0.02 (b), and F = 0.03 (c). Taken from Ref. [19] © Springer

in real situations. Hence, the effect on the model of [21] of a noise of the
class described in Sect. 3.2 was analyzed in [22], showing the relevant effects
that arise when departing from Gaussian behavior (particularly related to current
enhancement) and their relevance for both biological and technological situations.
Among other aspects, a value of q ≠ 1 optimizing the current was found, in addition
to the already known maximum of J as a function of the noise intensity.
Also, the combination of two different enhancing mechanisms was analyzed.
Besides non-Gaussian noises (whose effects on current and efficiency have been
described above), time-asymmetric forcing can separately enhance the efficiency
and current of a Brownian motor [23]. In [24], the effects of subjecting a Brownian
motor to both mechanisms simultaneously were studied. The results were compared with
those obtained in [23] for the Gaussian white-noise regime in the adiabatic limit,
finding that although the inclusion of the time-asymmetry parameter increases the
efficiency to a certain extent, for the mixed case this increase is much less
appreciable than in the white-noise case.

3.6 Resonant Gated Trapping

As commented before, SR has been found to play a relevant role in several
biology-related problems, in particular in ionic transport through cell membranes. These
possess voltage-sensitive ion channels that switch (randomly) between open and
closed states, thus controlling the ion current. Experiments measuring the current

through these channels have shown that ion transport depends (among other factors)
on the electric potential of the membrane, which plays the role of the barrier
height, and can be stimulated by both dc and ac external fields. Together with
related phenomena, these experiments have stimulated several theoretical studies.
Different approaches have been used, as well as different ways of characterizing SR
in ionic transport through biological cell membranes. A toy model considering the
simultaneous action of a deterministic and a stochastic external field on the trapping
rate of a gated imperfect trap was studied in [25, 26]. The main result was that even
such a simple model of a gated trapping process showed an SR-like behavior.
The study was based on the so-called stochastic model for chemical reactions,
properly generalized in order to include the traps' internal dynamics. The dynamical
process consists in the opening or closing of the traps according to an external
field that has two contributions: one periodic with a small amplitude, and another
stochastic, whose intensity is (as usual) the tuning parameter. The absorption
contribution is approximately modeled as

−γ(t) δ(x) ρ(x,t),

with ρ the density of the not-yet-trapped particles, and

γ(t) = γ Θ[B sin(ωt) + η − ηc],

where Θ(x), the Heaviside function, determines whether the trap is open or
closed: if B sin(ωt) + η ≥ ηc the trap opens; otherwise it is closed. The interesting
case is when ηc > B, i.e., the trap is always closed in the absence of noise. When
the trap is open, the particles are trapped with a probability γ per unit time (i.e., the
open trap is imperfect). Finally, η was dynamically generated through Eq. (3.2).
The SR-like phenomenon was quantified by computing the amplitude of the
oscillating part of the absorption current, indicated by J. The resulting qualitative
behavior was as follows: for small noise intensities, the trapping current
was low (as ηc > B), implying that J was small too; for a large noise intensity,
the deterministic (harmonic) part of the signal became irrelevant and J was again
small. Hence, there had to be a maximum at some intermediate value of the noise
intensity. When compared against the white-noise case, an increase in the system response
was apparent, together with a spectral broadening of the SR curve (which, as
discussed in Sect. 3.4, eases the need of tuning the noise): the bounded character
of the pdf for q < 1 contributed positively to the rate of overcoming the threshold
ηc, and such a rate remained of about the same order within a larger range of noise
values than if η had been a white noise [16, 17].
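The role of the bounded support can be seen without simulation. In the adiabatic caricature below (ours, not the computation of [25, 26]), η is treated as quasi-static on the scale of the forcing period, so the average fraction of time the trap is open is the period average of P(η > ηc − B sin ωt); for q < 1 this vanishes identically once the threshold lies beyond the support edge plus B, while a heavy-tailed q > 1 noise always opens the trap occasionally. Parameter values (τ = D = 1, B = 0.5) are illustrative:

```python
import numpy as np

def trapz(y, x):
    """Plain trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def tsallis_pdf(eta, q, tau=1.0, D=1.0):
    """Normalized stationary pdf of eta, Eq. (3.3); q != 1 here."""
    base = 1.0 - (1.0 - q) * (tau / (2.0 * D)) * eta ** 2
    p = np.where(base > 0.0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)
    return p / trapz(p, eta)

def open_fraction(q, eta_c, B=0.5, n_phase=200):
    """Period-averaged probability that B*sin(wt) + eta exceeds eta_c."""
    eta = np.linspace(-60.0, 60.0, 200001)
    p = tsallis_pdf(eta, q)
    acc = 0.0
    for ph in np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False):
        acc += trapz(p * (eta > eta_c - B * np.sin(ph)), eta)
    return acc / n_phase

f_bounded = open_fraction(0.5, eta_c=3.0)   # support is |eta| < 2: never opens
f_heavy = open_fraction(1.5, eta_c=3.0)     # power-law tail: opens sometimes
```

Here the q = 0.5 pdf is confined to |η| < 2, so f_bounded is exactly zero, whereas f_heavy is small but finite; lowering ηc below the support edge plus B makes both positive, with the bounded noise opening the trap only near the maxima of the sine.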
The dependence of the maximum of J on the parameter q was also analyzed,
and the existence of another resonant-like maximum as a function of q was
observed, implying that it is possible to find a region of values of q where the
maximum of J reaches optimal values (corresponding to a non-Gaussian and
bounded pdf), yielding the largest system response (see Fig. 3.5). That is, a double
stochastic resonance effect exists, as a function of both the noise intensity and q.

Fig. 3.5 Value of J (amplitude of the oscillating part of the absorption current) as a
function of the noise intensity, for a given observational time (t = 1,140). Different
values of q (triangles: q = 0.5; crosses: q = 1.0; squares: q = 1.5) and a fixed value
of τ (τ = 0.1). Taken from Ref. [26] © Elsevier Science Ltd (2002)

3.7 Noise-Induced Transition

As a last example, we consider a system which exhibits a noise-induced transition when submitted to a Gaussian white noise, namely the genetic model of [27, 28], described by
$$\dot{x} = \frac{1}{2} - x + \lambda\, x(1-x) + x(1-x)\,\eta(t). \tag{3.23}$$
In previous work [28], in which the system was submitted to OU noise, a reentrance effect (from a disordered state to an ordered one, and finally again to a disordered state) arose as the noise correlation time was varied from 0 to $\infty$. The treatment of the system simplifies with the change of variables
$$y = \ln\left(\frac{x}{1-x}\right),$$
that gives
$$\dot{y} = -\sinh(y) + \lambda + \eta(t). \tag{3.24}$$
Also, in order to simplify the algebra we choose $\lambda = 0$.
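The change of variables can be checked directly; a brief sketch of the algebra (our addition, using the notation of Eqs. (3.23) and (3.24), with $\eta(t)$ the noise and $\lambda$ the parameter):

```latex
% From y = ln[x/(1-x)] one has dy/dx = 1/[x(1-x)], so Eq. (3.23) gives
\dot{y} = \frac{\dot{x}}{x(1-x)}
        = \frac{\tfrac{1}{2}-x}{x(1-x)} + \lambda + \eta(t).
% Writing x = e^{y}/(1+e^{y}) yields
% 1/2 - x = -\tfrac{1}{2}\tanh(y/2) and x(1-x) = \tfrac{1}{4}\cosh^{-2}(y/2), hence
\frac{\tfrac{1}{2}-x}{x(1-x)}
   = -2\sinh\!\frac{y}{2}\,\cosh\!\frac{y}{2}
   = -\sinh(y),
% which is exactly Eq. (3.24).
```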


3 Noise-Induced Phenomena: Effects of Noises Based on Tsallis Statistics 57

In [29] the same system was studied, but with $\eta$ dynamically generated through Eq. (3.2). The main result showed the persistence of the indicated reentrance effect, together with a strong shift in the transition line as $q$ departed from $q = 1$: the transition was anticipated for $q > 1$, while it was retarded for $q < 1$.
In order to obtain some analytic results, a strong approximation, valid for $|q - 1| < 1$ (both for $q < 1$ and $q > 1$), was derived within a path-integral description. Its comparison with simulations yielded a good agreement even beyond its theoretical validity range, indicating that (at least for this case) such an approximation turns out to be robust. Finally, a conjecture about a possible reentrance effect with $q$ was shown to be false.

3.8 Final Comments

The results discussed above clearly show that non-Gaussian noises can significantly change the system's response in many noise-induced phenomena, as compared with the Gaussian case. Moreover, in all the cases presented here, the system's response was either enhanced or altered in a relevant way for values of $q$ departing from Gaussian behavior. In other words, the optimal response occurs for $q \neq 1$. Clearly, the study of the change in the response of other related noise-induced phenomena when subject to such a kind of non-Gaussian noise will be of great interest.
Other recent related works are, for instance, studies of
(a). the stationary properties of a single-mode laser system [30],
(b). the effect of non-Gaussian noise and system-size-induced coherence resonance
of calcium oscillations in arrays of coupled cells [31],
(c). work fluctuation theorems for colored-noise driven open systems [32],
(d). multiple resonances with time delays and enhancement by non-Gaussian noise in Newman-Watts networks of Hodgkin-Huxley neurons [33],
(e). effects of non-Gaussian noise and coupling-induced firing transitions of Newman-Watts neuronal networks [34],
(f). non-Gaussian noise-optimized intracellular cytosolic calcium oscillations [35],
(g). effects of non-Gaussian noise near supercritical Hopf bifurcation [36],
(h). a model of irreversible thermal Brownian refrigerator and its performance [37],
among many others.
An extremely relevant point is related to some recent work [38, 39] where the algebra and calculus associated with nonextensive statistical mechanics have been studied. It is expected that the use of such a formalism could help to directly study Eq. (3.1), without the need to resort to Eq. (3.2), and also to build up a nonextensive path-integral framework for this kind of stochastic process.

Acknowledgments The authors thank S. Bouzat, F. Castro, M. Fuentes, M. Kuperman, S. Mangioni, J. Revelli, A. Sánchez, C. Tessone, R. Toral, and C. Tsallis for their collaboration and/or useful discussions and comments. HSW acknowledges financial support from MICINN (Spain) through Project FIS2010-18023, and RRD from MECD (Spain) through Sabbatical SAB2011-0079. RRD also acknowledges financial support from the National University of Mar del Plata (Argentina) through Project EXA544/11.

References

1. Sagués, F., Sancho, J.M., García-Ojalvo, J.: Rev. Mod. Phys. 79, 829 (2007)
2. Astumian, R.D., Hänggi, P.: Phys. Today 55(11), 33 (2002)
3. Reimann, P.: Phys. Rep. 361, 57 (2002)
4. Bulsara, A., Gammaitoni, L.: Phys. Today 49(3), 39 (1996)
5. Gammaitoni, L., Hänggi, P., Jung, P., Marchesoni, F.: Rev. Mod. Phys. 70, 223 (1998)
6. Izús, G.G., Deza, R.R., Sánchez, A.D.: J. Chem. Phys. 132, 234112 (2010)
7. Izús, G.G., Sánchez, A.D., Deza, R.R.: Phys. A 391, 4070 (2012)
8. Gell-Mann, M., Tsallis, C. (eds.): Nonextensive Entropy: Interdisciplinary Applications. Oxford University Press, New York (2004)
9. Fuentes, M.A., Wio, H.S., Toral, R.: Phys. A 303, 91 (2002)
10. Colet, P., Wio, H.S., San Miguel, M.: Phys. Rev. A 39, 6094 (1989)
11. Wio, H.S., Colet, P., Pesquera, L., Rodríguez, M.A., San Miguel, M.: Phys. Rev. A 40, 7312 (1989)
12. Castro, F., Wio, H.S., Abramson, G.: Phys. Rev. E 52, 159 (1995)
13. Abramson, G., Wio, H.S., Salem, L.D.: In: Cordero, P., Nachtergaele, B. (eds.) Nonlinear Phenomena in Fluids, Solids, and Other Complex Systems. North-Holland, Amsterdam (1991)
14. Jung, P., Hänggi, P.: Phys. Rev. A 35, 4464 (1987)
15. Jung, P., Hänggi, P.: J. Opt. Soc. Am. B 5, 979 (1988)
16. Fuentes, M.A., Toral, R., Wio, H.S.: Phys. A 295, 114 (2001)
17. Fuentes, M.A., Tessone, C., Wio, H.S., Toral, R.: Fluct. Noise Lett. 3, 365 (2003)
18. Castro, F.J., Kuperman, M.N., Fuentes, M.A., Wio, H.S.: Phys. Rev. E 64, 051105 (2001)
19. Bouzat, S., Wio, H.S.: Eur. Phys. J. B 41, 97 (2004)
20. Bouzat, S., Wio, H.S.: Phys. A 351, 69 (2005)
21. Mateos, J.L.: Phys. A 351, 79 (2005)
22. Mangioni, S.E., Wio, H.S.: Eur. Phys. J. B 61, 67 (2008)
23. Krishnan, R., Mahato, M.C., Jayannavar, A.M.: Phys. Rev. E 70, 021102 (2004)
24. Krishnan, R., Wio, H.S.: Phys. A 389, 5563 (2010)
25. Sánchez, A.D., Revelli, J.A., Wio, H.S.: Phys. Lett. A 277, 304 (2000)
26. Wio, H.S., Revelli, J.A., Sánchez, A.D.: Phys. D 168-169, 165 (2002)
27. Horsthemke, W., Lefever, R.: Noise-Induced Transitions, 2nd printing. Springer, Berlin (2006)
28. Castro, F., Sánchez, A.D., Wio, H.S.: Phys. Rev. Lett. 75, 1691 (1995)
29. Wio, H.S., Toral, R.: Phys. D 193, 161 (2004)
30. Bing, W., Xiu-Qing, W.: Chin. Phys. B 20, 114207 (2011)
31. Yubing, G.: Phys. A 390, 3662 (2011)
32. Sen, M.K., Baura, A., Bag, B.C.: Eur. Phys. J. B 83, 381 (2011)
33. Yinghang, H., Yubing, G., Xiu, L.: Neurocomputing 74, 1748 (2011)
34. Yubing, G., Xiu, L., et al.: Fluct. Noise Lett. 10, 1 (2011)
35. Yubing, G., Yinghang, H., et al.: Biosystems 103, 13 (2011)
36. Ruiting, Z., Zhonghuai, H., Houwen, X.: Phys. A 390, 147 (2011)
37. Lingen, C., Zemin, D., Fengrui, S.: Appl. Math. Model. 35, 2945 (2011)
38. Borges, E.P.: Phys. A 340, 95 (2004)
39. Suyari, H.: In: Beck, C., et al. (eds.) Complexity, Metastability and Nonextensivity. World Scientific, Singapore (2005)
Chapter 4
Dynamical Systems Driven by Dichotomous Noise

Luca Ridolfi and Francesco Laio

Abstract Dichotomous Markov noise is a two-state bounded stochastic process.


Its simple structure allows exact expressions for the steady-state probability density function to be obtained analytically in one-dimensional differential models. It is
used to describe the random switching between two deterministic dynamics and to
investigate the effect of the noise correlation.
This chapter describes the fundamental properties of the dichotomous noise and
the main analytical results about one-dimensional stochastic differential equations
forced by additive and multiplicative dichotomous noise. Noise-induced transitions
(i.e., structural changes of the system behavior) in systems driven by such type of
noise are also recalled; finally, some emblematic examples of use of dichotomous
noise in the environmental sciences are described.

Keywords Bounded noises · Dichotomous Markov noise · Noise-induced transitions · Environmental sciences

4.1 General Framework

In this chapter, we will focus on dynamical systems which fall in the class of one-dimensional stochastic differential equations
$$\frac{d\phi}{dt} = f(\phi) + g(\phi)\,\xi_{dn}(t), \tag{4.1}$$
where $\phi$ is the state variable, $t$ is time, $f(\phi)$ and $g(\phi)$ are deterministic functions, and $\xi_{dn}(t)$ is the noise term, modelled as a dichotomous Markov noise (DMN).

L. Ridolfi (✉) · F. Laio
DIATI, Politecnico di Torino, Italy
e-mail: luca.ridolfi@polito.it; francesco.laio@polito.it

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 59
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_4, © Springer Science+Business Media New York 2013
Fig. 4.1 Parameters of the dichotomous noise ($\Delta_1$, $\Delta_2$, $k_1$, $k_2$) and an example of time series of $\xi_{dn}(t)$. Taken from Ref. [4] © Cambridge University Press (2011). Reprinted with permission

The dichotomous Markov process is a stochastic process described by a state variable, $\xi_{dn}(t)$, that can take only two values, $\xi_{dn} = \Delta_1$ and $\xi_{dn} = \Delta_2$, with rate $k_1$ for the transition $\Delta_1 \to \Delta_2$, and $k_2$ for $\Delta_2 \to \Delta_1$. The noise path (see Fig. 4.1) is a step function with instantaneous jumps between the two states and random permanence times, $t_1$ and $t_2$, in the two states. The expected values of such times are $\langle t_1 \rangle = \tau_1 = 1/k_1$ and $\langle t_2 \rangle = \tau_2 = 1/k_2$. In particular, when the transition rates $k_1$ and $k_2$ are constant in time, the permanence times are exponentially distributed random variables [1]. If $\Delta_1 = |\Delta_2|$, the noise is called symmetric dichotomous Markov noise, otherwise it is called asymmetric dichotomous Markov noise [2].
The steady state probability distribution of the variable $\xi_{dn}$ is a discrete-valued distribution that can take only two values, $\Delta_1$ and $\Delta_2$, with probability $P_1$ and $P_2$, respectively. These latter read [3]
$$P_1 = \frac{k_2}{k_1 + k_2}, \qquad P_2 = \frac{k_1}{k_1 + k_2}, \tag{4.2}$$
while the mean, the variance, and the autocovariance function of the dichotomous process are [1, 4]
$$\langle \xi_{dn} \rangle = \frac{k_2 \Delta_1 + k_1 \Delta_2}{k_1 + k_2}, \tag{4.3}$$
$$\langle (\xi_{dn} - \langle \xi_{dn} \rangle)^2 \rangle = \frac{k_1 k_2 (\Delta_2 - \Delta_1)^2}{(k_1 + k_2)^2} = -\Delta_1 \Delta_2, \tag{4.4}$$
and
$$\langle \xi_{dn}(t)\, \xi_{dn}(t') \rangle = \frac{k_1 k_2 (\Delta_2 - \Delta_1)^2}{(k_1 + k_2)^2}\, e^{-|t-t'|(k_1+k_2)} = -\Delta_1 \Delta_2\, e^{-|t-t'|(k_1+k_2)}, \tag{4.5}$$
where the last equalities in (4.4) and (4.5) hold in the zero-mean case introduced below (Eq. (4.7)).

The autocovariance function does not vanish for $t \neq t'$, which entails that the dichotomous noise is a colored noise. A typical temporal scale of a correlated process is the integral scale, $\tau_I$, defined as the ratio between the integral of the autocovariance function with respect to the time lag and the variance of the process. The integral scale is a measure of the memory of the process, and in the case of dichotomous noise it reads
$$\tau_I = \frac{1}{k_1 + k_2} = \tau_c. \tag{4.6}$$
Equation (4.1) is commonly written by assuming a zero-average noise process. In this case, Eq. (4.3) gives
$$\Delta_1 k_2 + \Delta_2 k_1 = \frac{\Delta_1}{\tau_2} + \frac{\Delta_2}{\tau_1} = 0, \tag{4.7}$$
and the (stationary) dichotomous Markov process results characterized by three independent parameters. For example, one can choose the mean durations, $\tau_1$ and $\tau_2$ (i.e., the two transition rates $k_1$ and $k_2$) and the value of one of the states of $\xi_{dn}$, say $\Delta_2$, and obtain the other value (i.e., $\Delta_1$) using Eq. (4.7). In what follows we will refer to the case of zero-mean (Eq. (4.7)) dichotomous Markov noise.
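As an illustration (our own sketch, not part of the original text), the statistics above can be checked by sampling a DMN path numerically. The sketch assumes zero-mean noise, exponentially distributed permanence times, and variance $-\Delta_1\Delta_2$; function and parameter names are ours.

```python
import numpy as np

def dmn_path(delta2, tau1, tau2, t_max, dt, rng):
    """Sample a zero-mean dichotomous Markov noise on a regular time grid.

    delta2 is the (negative) value of state 2; delta1 follows from the
    zero-mean condition Delta_1/tau_2 + Delta_2/tau_1 = 0 (Eq. (4.7)).
    tau1 and tau2 are the mean permanence times (tau_i = 1/k_i).
    """
    delta1 = -delta2 * tau2 / tau1          # zero-mean condition
    values = (delta1, delta2)
    n = int(round(t_max / dt))
    xi = np.empty(n)
    state = 0                               # start in state Delta_1
    t_next = rng.exponential(tau1)          # exponential permanence times
    t = 0.0
    for i in range(n):
        while t >= t_next:                  # jump to the other state
            state = 1 - state
            t_next += rng.exponential((tau1, tau2)[state])
        xi[i] = values[state]
        t += dt
    return xi

rng = np.random.default_rng(1)
xi = dmn_path(delta2=-1.0, tau1=2.0, tau2=1.0, t_max=5e3, dt=0.01, rng=rng)
mean_err = abs(xi.mean())                   # should be close to 0
var_err = abs(xi.var() - 0.5)               # -Delta_1*Delta_2 = 0.5 here
```

Here $\Delta_1 = 0.5$, $\Delta_2 = -1$, so the sample mean should vanish and the sample variance should approach $-\Delta_1\Delta_2 = 0.5$.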
The dichotomous noise is generally used in the scientific modelling in two
different ways: the mechanistic usage and the functional one. In the first case,
dichotomous noise is introduced for its ability to model systems that randomly
switch between two deterministic dynamics, while in the functional usage the DMN
is adopted to suitably represent a colored random forcing.
The mechanistic approach applies to a class of processes characterized by two alternating dynamics of the state variable, $\phi(t)$, that can grow or decay depending on a random driver, $q(t)$, being greater or lower than a threshold value, $\theta$. When $q$ is a resource for the state variable, the growth and decay are modelled by two functions, $f_1(\phi)$ and $f_2(\phi)$, respectively,
$$\frac{d\phi}{dt} = \begin{cases} f_1(\phi) & \text{if } q(t) \geq \theta, \quad (4.8a)\\ f_2(\phi) & \text{if } q(t) < \theta, \quad (4.8b)\end{cases}$$
with $f_1(\phi) > 0$ and $f_2(\phi) < 0$. If the random driver is a stressor, the conditions in (4.8a, b) are reversed. The overall dynamics of the variable $\phi$ can then be expressed by a stochastic differential equation forced by a dichotomous Markov noise, $\xi_{dn}(t)$, assuming (constant) values $\Delta_1$ and $\Delta_2$,
$$\frac{d\phi}{dt} = f(\phi) + g(\phi)\,\xi_{dn}(t) \tag{4.9}$$
with
$$f(\phi) = \frac{\Delta_1 f_2(\phi) - \Delta_2 f_1(\phi)}{\Delta_1 - \Delta_2}, \qquad g(\phi) = \frac{f_1(\phi) - f_2(\phi)}{\Delta_1 - \Delta_2}. \tag{4.10}$$
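The pair in Eq. (4.10) is simply the inversion of the two branch dynamics $f_1 = f + g\Delta_1$ and $f_2 = f + g\Delta_2$ (cf. Eq. (4.13)); subtracting and substituting back gives:

```latex
f_1(\phi) - f_2(\phi) = (\Delta_1 - \Delta_2)\, g(\phi)
\;\Rightarrow\;
g(\phi) = \frac{f_1(\phi) - f_2(\phi)}{\Delta_1 - \Delta_2},
\qquad
f(\phi) = f_1(\phi) - \Delta_1\, g(\phi)
        = \frac{\Delta_1 f_2(\phi) - \Delta_2 f_1(\phi)}{\Delta_1 - \Delta_2}.
```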
The transition rates between dynamics $f_1$ and $f_2$ are $k_1 = P_Q(\theta)$ and $k_2 = 1 - k_1 = 1 - P_Q(\theta)$, where $P_Q(\cdot)$ is the cumulative distribution function of the random

Fig. 4.2 Noise path and the corresponding evolution of the $\phi(t)$ variable for Example I (panel a, $\alpha = 1$) and Example II (panel b), described by Eq. (4.9), with functions (4.11), and Eq. (4.14), respectively. Taken from Ref. [4] © Cambridge University Press (2011). Reprinted with permission

forcing $q$. Notice that in this mechanistic usage, the rates $k_1$ and $k_2$ are the only relevant characteristics of the DMN, while the other noise characteristics (e.g., its mean, $(\Delta_1 k_2 + \Delta_2 k_1)/(k_1 + k_2)$, and variance, $-\Delta_1 \Delta_2$) are uninfluential to the representation of the dynamics. In fact, in this case $\phi$ switches between two states ($f_1(\phi)$ and $f_2(\phi)$) that are independent of $\Delta_1$ and $\Delta_2$. As a consequence, $\Delta_1$ and $\Delta_2$ may assume arbitrary values.
A simple example of the mechanistic approach (in the following we refer to it as Example I) is when DMN is used to switch between the two dynamics
$$f_1(\phi) = \alpha(1 - \phi) \qquad \text{and} \qquad f_2(\phi) = -\alpha\phi, \tag{4.11}$$
where $\alpha$ determines the rates of growth and decay. Therefore, $\phi(t)$ exponentially increases (decreases) toward the asymptote $\phi = 1$ ($\phi = 0$) when the noise is in the $\Delta_1$ ($\Delta_2$) state. A realization of the corresponding $\phi(t)$ dynamics is shown in Fig. 4.2a.
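A minimal numerical sketch of these switching dynamics (our own illustration, with hypothetical function names): Euler integration of Eq. (4.9) for Example I, alternating between the growth and decay branches with exponentially distributed permanence times. For $k_1 = k_2$ the stationary mean should approach $k_2/(k_1+k_2) = 1/2$.

```python
import numpy as np

def simulate_example_I(alpha, k1, k2, t_max, dt, rng):
    """Euler integration of the switching dynamics of Example I:
    growth f1 = alpha*(1 - phi) (state 1, mean duration 1/k1) and
    decay f2 = -alpha*phi (state 2, mean duration 1/k2)."""
    n = int(round(t_max / dt))
    phi = np.empty(n)
    x = 0.5
    grow = True
    t_next = rng.exponential(1.0 / k1)
    t = 0.0
    for i in range(n):
        while t >= t_next:                  # switch branch
            grow = not grow
            t_next += rng.exponential(1.0 / (k1 if grow else k2))
        x += (alpha * (1.0 - x) if grow else -alpha * x) * dt
        phi[i] = x
        t += dt
    return phi

rng = np.random.default_rng(2)
phi = simulate_example_I(alpha=1.0, k1=2.0, k2=2.0, t_max=2e3, dt=0.005, rng=rng)
# the trajectory remains inside (0, 1), the natural boundaries of the dynamics
```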
Different from the mechanistic usage, the functional interpretation of the DMN is commonly introduced to investigate how an autocorrelated random forcing, $\xi_{dn}(t)$ (whose effect on the dynamics can be in general modulated by a function $g(\phi)$ of the state variable), affects the dynamics of a deterministic system, $d\phi/dt = f(\phi)$. The temporal dynamics are therefore modelled by the stochastic differential equation
$$d\phi/dt = f(\phi) + g(\phi)\,\xi_{dn}(t), \tag{4.12}$$
and in this case none of the parameters $k_1$, $k_2$, $\Delta_1$, and $\Delta_2$ has an arbitrary value. These parameters need to be determined by adapting the DMN to the characteristics of the driving noise: for example, by matching the mean, variance, skewness, and correlation scale. Moreover, the functions $f(\phi)$ and $g(\phi)$ are in this case assigned a priori, while $f_1(\phi)$ and $f_2(\phi)$ are obtained from (4.10) and depend on the noise characteristics
$$f_1(\phi) = f(\phi) + g(\phi)\,\Delta_1, \qquad f_2(\phi) = f(\phi) + g(\phi)\,\Delta_2. \tag{4.13}$$

An example frequently used to illustrate the impact of the noise correlation on the dynamics [5] is the logistic-type (or Verhulst) deterministic dynamics, $f(\phi) = \phi(\beta - \phi)$, where $\beta > 0$ is a parameter (Example II). When it is perturbed by a dichotomous noise modulated by the linear function $g(\phi) = \phi$, one obtains
$$\frac{d\phi}{dt} = \phi(\beta - \phi) + \phi\,\xi_{dn}. \tag{4.14}$$
An example of the resulting $\phi(t)$ dynamics, with $\beta = 1$, is shown in Fig. 4.2b.

4.2 Probability Density Function

The steady state probability density function for the process described by the Langevin equation (4.1) can be obtained by taking the limit as $t \to \infty$ in the master equation for the process (i.e., the forward differential equations that relate the state probabilities at different points in time) and by solving the resulting forward differential equation to find the steady state probability density function (a less rigorous but simpler approach is described in [4]). The steady state probability density function, $p(\phi)$, for the state variable, $\phi$, reads [3, 6, 7]
$$p(\phi) = C \left(\frac{1}{f_1(\phi)} - \frac{1}{f_2(\phi)}\right) \exp\left[-\int \left(\frac{k_1}{f_1(\phi)} + \frac{k_2}{f_2(\phi)}\right) d\phi\right], \tag{4.15}$$
where the integration constant, $C$, can be determined as a normalization constant by imposing that the integral of $p(\phi)$ over its domain is equal to one.
Using the definitions of $f_1(\phi)$ and $f_2(\phi)$ given in Eq. (4.13), and the zero mean condition (4.7), one also obtains
$$p(\phi) = C\, (\Delta_2 - \Delta_1)\, \frac{g(\phi)}{\mathcal{D}(\phi)}\, \exp\left[-\frac{1}{\tau_c}\int \frac{f(\phi)}{\mathcal{D}(\phi)}\, d\phi\right], \tag{4.16}$$
where
$$\mathcal{D}(\phi) = [f(\phi) + \Delta_1\, g(\phi)]\,[f(\phi) + \Delta_2\, g(\phi)]. \tag{4.17}$$

By referring to the two examples introduced in the previous section, for Example I one obtains
$$p(\phi) = \frac{\Gamma\!\left[\frac{k_1+k_2}{\alpha}\right]}{\Gamma\!\left[\frac{k_1}{\alpha}\right]\Gamma\!\left[\frac{k_2}{\alpha}\right]}\, (1-\phi)^{\frac{k_1}{\alpha}-1}\, \phi^{\frac{k_2}{\alpha}-1}, \tag{4.18}$$
where $\Gamma[\cdot]$ is the Gamma function. It is immediate to recognize that relation (4.18) is the standard Beta distribution [8] with parameters $k_1/\alpha$ and $k_2/\alpha$.
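As a consistency check (an illustration of ours, not part of the original text), Eq. (4.15) can be evaluated numerically for Example I and compared with the closed-form Beta density of Eq. (4.18); with $k_1 = k_2 = 2$ the pdf has no boundary singularities, which keeps the quadrature well behaved.

```python
import numpy as np
from math import gamma

# Example I (alpha = 1) with k1 = k2 = 2: f1 = 1 - phi, f2 = -phi.
alpha, k1, k2 = 1.0, 2.0, 2.0
phi = np.linspace(1e-6, 1.0 - 1e-6, 100001)
f1 = alpha * (1.0 - phi)
f2 = -alpha * phi

# Unnormalized pdf from Eq. (4.15); the integral in the exponent is
# evaluated with a cumulative trapezoidal rule.
integrand = k1 / f1 + k2 / f2
cum = np.concatenate(
    ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(phi))))
p = (1.0 / f1 - 1.0 / f2) * np.exp(-(cum - cum[len(phi) // 2]))
p /= np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(phi))      # normalize

# Closed form, Eq. (4.18): here it reduces to Beta(2, 2) = 6*phi*(1 - phi).
beta_pdf = (gamma((k1 + k2) / alpha)
            / (gamma(k1 / alpha) * gamma(k2 / alpha))
            * phi ** (k2 / alpha - 1.0) * (1.0 - phi) ** (k1 / alpha - 1.0))
max_err = float(np.max(np.abs(p - beta_pdf)))
```

The pointwise deviation between the quadrature result and the analytic Beta density stays small away from the (mild) quadrature error at the boundaries.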
In the case of Example II, $f(\phi)$ and $g(\phi)$ are defined as in Eq. (4.14) and the steady state pdf results in
$$p(\phi) \propto \phi^{-1-\frac{2k\beta}{\beta^2-\Delta^2}}\, (\beta + \Delta - \phi)^{\frac{k}{\beta+\Delta}-1}\, (\phi - \beta + \Delta)^{\frac{k}{\beta-\Delta}-1}, \tag{4.19}$$
where a symmetric noise (i.e., $\Delta_1 = -\Delta_2 = \Delta$ and $k_1 = k_2 = k$) has been assumed.


In order to define completely the pdfs, it is necessary to specify their domains. This deserves a bit of attention, as the domain depends on the stationary points of the functions $f_{1,2}(\phi)$ and on their stability. Recall that a stationary point, $\phi_s$, of a generic dynamics $h(\phi)$ is stable if $h(\phi_s) = 0$ and $dh(\phi)/d\phi|_{\phi=\phi_s} < 0$, while it is unstable if $h(\phi_s) = 0$ and $dh(\phi)/d\phi|_{\phi=\phi_s} > 0$.
The characteristics of the stationary points can be easily understood by referring to the potentials $V_1(\phi)$ and $V_2(\phi)$ defined as
$$f_1(\phi) = -\frac{dV_1(\phi)}{d\phi}, \qquad f_2(\phi) = -\frac{dV_2(\phi)}{d\phi}; \tag{4.20}$$
the stable (unstable) stationary points correspond in fact to the minima (maxima) of the potentials. The dynamics of $\phi$ can then be represented as those of a particle moving along the $\phi$-axis driven by the switching between the two potentials. It is evident that the particle remains trapped between any pair of nearby stable points (minima of the potentials $V_1(\phi)$ and $V_2(\phi)$) that are not separated by an unstable point (i.e., a maximum of either $V_1(\phi)$ or $V_2(\phi)$). These pairs of minima define the domain, $[\phi_{inf}, \phi_{sup}]$, of the steady state pdf. Note that the same criteria for the determination of the extremes of the steady state domain apply when the minima of the potential are at $\pm\infty$. Finally, if the stable points are coincident, the pdf reduces to a Dirac delta function centered at the two overlapping stable points.
Boundaries can also be externally imposed. For example, a frequent case in the bio-geosciences is when the variable $\phi$ is positive-valued, or it has a boundary at a certain threshold value, $\phi_{th}$. This corresponds to changing the potential of the deterministic dynamics by setting $\tilde{V}_1(\phi) = V_1(\phi)$ and $\tilde{V}_2(\phi) = V_2(\phi)$ for $\phi \leq \phi_{th}$, and $\tilde{V}_1(\phi) = \tilde{V}_2(\phi) = \infty$ for $\phi > \phi_{th}$ (if $\phi_{th}$ is assumed to be an upper bound). The general rule described in the previous paragraph to determine the boundaries of the domain can now be applied to the modified potentials $\tilde{V}_1(\phi)$ and $\tilde{V}_2(\phi)$. Notice that the external bound may create a new minimum in the potential and affect the original boundaries of the domain if the sign of the derivative of $V_1(\phi)$ and $V_2(\phi)$ is negative at $\phi_{th}$.
Fig. 4.3 Steady state probability density functions for Example I with $\alpha = 1$ (panel a; $k_1 = k_2 = 0.5$, $k_1 = k_2 = 5$, and $k_1 = 0.8$, $k_2 = 2$) and Example II with $\beta = 1$ and $\Delta = 0.5$ (panel b; $k_1 = k_2 = 0.5$, $k_1 = k_2 = 1.5$, and $k_1 = k_2 = 5$). Taken from Ref. [4] © Cambridge University Press (2011). Reprinted with permission

We have now the elements (i.e., the general expression of the pdf and the conditions for the determination of the boundaries) to obtain the pdfs for our examples. In the case of Example I (with $\alpha = 1$), both $f_1(\phi) = 1 - \phi$ and $f_2(\phi) = -\phi$ have a single stable fixed point. The boundaries of the domain correspond to the minima of the two potentials, $V_1(\phi) = -\phi + \phi^2/2$ and $V_2(\phi) = \phi^2/2$, i.e., $\phi_{inf} = 0$ and $\phi_{sup} = 1$. The expression for the steady state pdf (4.18) is therefore valid for $\phi \in [0, 1]$ (see Fig. 4.3a).
In Example II, and in the case of a symmetric dichotomous noise ($\Delta_1 = -\Delta_2 = \Delta$), one has $f_{1,2}(\phi) = \phi(\beta \pm \Delta - \phi)$ and $V_{1,2}(\phi) = \phi^3/3 - (\beta \pm \Delta)\,\phi^2/2$. If $\beta > \Delta$, the domain is therefore $[\beta - \Delta, \beta + \Delta]$, while the domain is $[0, \beta + \Delta]$ in the reverse case. An example of pdf with $\beta = 1$ and $\Delta = 0.5$ is reported in Fig. 4.3b.
We conclude this analysis of the pdf by recalling some tools for investigating the behavior of the steady state pdf near the boundaries of the domain. Assume that the boundary $\phi_i$ (with $i = inf$ or $i = sup$) is a stable point of the $f_1(\phi)$ dynamics, i.e., $f_1(\phi_i) = 0$. If $f_2(\phi_i) \neq 0$, the steady state pdf in the vicinity of $\phi_i$ is determined as a limit of Eq. (4.15) for $f_1(\phi) \to 0$,
$$p(\phi) \approx \frac{1}{f_1(\phi)}\, \exp\left[-\int \frac{k_1}{f_1(\phi)}\, d\phi\right]. \tag{4.21}$$
If $f_1(\phi)$ is expanded around $\phi_i$ and the expansion is truncated to the first order (i.e., $f_1(\phi) = (\phi - \phi_i)\, df_1(\phi)/d\phi|_{\phi=\phi_i}$), using Eq. (4.21) the pdf can be represented as
$$p(\phi \to \phi_i) \propto |\phi - \phi_i|^{\textstyle -1 - \frac{k_1}{df_1(\phi)/d\phi|_{\phi=\phi_i}}}. \tag{4.22}$$

This limit behavior makes evident the competition between the time scale characteristic of the switching between the two deterministic dynamics and the time scale of the deterministic dynamics $f_1(\phi)$ near the attractor. In fact, when the random switching (i.e., the transition rate) is relatively slow with respect to the deterministic dynamics for $\phi \to \phi_i$, the particle tends to spend much time near the boundary and the pdf diverges at the boundary $\phi_i$ (being $|df_1(\phi)/d\phi|_{\phi_i}| > k_1$). Vice versa, when the switching between the two dynamics is sufficiently fast to prevent $\phi$ from remaining much time near the attractors, the pdf becomes null at the boundary because $|df_1(\phi)/d\phi|_{\phi_i}| < k_1$.
Notice that these results are valid only when $f_1(\phi_i) = 0$ and $f_2(\phi_i) \neq 0$, which excludes the cases when the bound is externally imposed. Moreover, Eq. (4.21) is not valid for the cases when $\phi_i$ is also an unstable stationary point of $f_2(\phi)$.
A particular case refers to the state-dependent DMN [9], where one (or more) of the parameters ($k_1$, $k_2$, $\Delta_1$, and $\Delta_2$) depends on the state variable, $\phi$. While a possible $\phi$-dependency of $\Delta_1$ and/or $\Delta_2$ can be accounted for easily through a suitable modification of the $g(\phi)$ function, the state-dependency in $k_1$ and $k_2$ profoundly affects the dynamics. The solution in this state-dependent case is simply obtained from Eq. (4.15) by setting $k_1 = k_1(\phi)$ and $k_2 = k_2(\phi)$,
$$p(\phi) = C \left(\frac{1}{f_1(\phi)} - \frac{1}{f_2(\phi)}\right) \exp\left[-\int \left(\frac{k_1(\phi)}{f_1(\phi)} + \frac{k_2(\phi)}{f_2(\phi)}\right) d\phi\right], \tag{4.23}$$
where $C$ is the usual normalization constant calculated by imposing that the integral of $p(\phi)$ over the pdf domain is equal to 1. The zeros of $f_1(\phi)$ and $f_2(\phi)$ are the natural boundaries for the dynamics and represent the limits of the domain.

4.3 Noise-Induced Transitions

The presence of noise in a stochastic dynamical system is generally associated


with disorganized random fluctuations around the stable states of the underlying
deterministic system. However, there are also systems in which suitable noise
components can give rise to new dynamical behaviors and new ordered states that
did not exist in the deterministic counterpart of the dynamics. Known as noise-
induced transitions [5], these entail a constructive role by noise in the dynamics,
associated with structural changes in the steady-state probability distribution of the
process.
Modes and antimodes of the probability density function of the state variable are considered the most important indicators of noise-induced transitions. In fact, modes and antimodes provide important information about the shape of the pdf and the preferential states of the system. For this reason, it is worth focusing on the dependence of modes/antimodes on the properties of the random forcing in order to detect noise-induced phenomena.
The first step to investigate noise-induced transitions is to analyze the deterministic counterpart of the dynamics. To this aim it is important to distinguish the two different usages of DMN. In the functional approach, the deterministic counterpart of the process is easily found by setting $\xi_{dn}(t) = 0$ in Eq. (4.12). Thus, the deterministic steady states, $\phi_{st}$, are the zeroes of $f(\phi)$, i.e., $f(\phi_{st}) = 0$.

In the mechanistic approach, the dynamics switch between the two deterministic processes,
$$\frac{d\phi}{dt} = f_1(\phi) > 0 \qquad \text{and} \qquad \frac{d\phi}{dt} = f_2(\phi) < 0, \tag{4.24}$$
depending on whether the value of a stochastic external driver, $q$, is greater or smaller than a given threshold, $\theta$, respectively. If the variance of the driving force, $q$, is decreased while maintaining its mean, $\langle q \rangle$, constant, in the zero-variance limit $q$ becomes a constant deterministic value, $q = \langle q \rangle$. The corresponding deterministic stationary state is determined by the position of $\langle q \rangle$ relative to $\theta$. If $\langle q \rangle > \theta$, the deterministic steady state, $\phi_{st,1}$, is obtained as a solution of the first of equations (4.24), i.e., $f_1(\phi_{st,1}) = 0$. Instead, if $\langle q \rangle < \theta$ the deterministic steady state, $\phi_{st,2}$, is obtained by setting $f_2(\phi_{st,2}) = 0$.
Once the deterministic counterpart of the dynamics is identified, it is possible to investigate how the noise modifies the modes and antimodes, $\phi_m$, of the pdf of the process. These are obtained by setting equal to zero the first-order derivative of (4.16) or (4.15), depending on the interpretation adopted for the DMN. In the functional interpretation, the modes and antimodes are the solutions of the following equation
$$f(\phi_m) + \tau_c\, \Delta_1 \Delta_2\, g(\phi_m)\, g'(\phi_m) + \tau_c\, (\Delta_1 + \Delta_2)\, f'(\phi_m)\, g(\phi_m) + \tau_c \left[2 f(\phi_m)\, f'(\phi_m) - \frac{f^2(\phi_m)\, g'(\phi_m)}{g(\phi_m)}\right] = 0, \tag{4.25}$$
where
$$g'(\phi_m) = \left.\frac{dg(\phi)}{d\phi}\right|_{\phi=\phi_m} \qquad \text{and} \qquad f'(\phi_m) = \left.\frac{df(\phi)}{d\phi}\right|_{\phi=\phi_m}. \tag{4.26}$$

The impact of noise properties on the shape of the pdf is evident from Eq. (4.25). In fact, apart from the first term, which is independent of the noise parameters, the second term expresses the effect of the multiplicative nature of the noise (i.e., the fact that $g(\phi) \neq$ const), the third term results from the asymmetry of the noise (i.e., $\Delta_1 \neq -\Delta_2$), while the fourth term is due to the noise autocorrelation.
If the mechanistic interpretation is adopted, it is convenient to rewrite Eq. (4.25) in terms of the functions $f_1(\phi)$ and $f_2(\phi)$,
$$\frac{f_1^2(\phi_m)\, f_2'(\phi_m) - f_2^2(\phi_m)\, f_1'(\phi_m)}{f_2(\phi_m) - f_1(\phi_m)} - k_1 f_2(\phi_m) - k_2 f_1(\phi_m) = 0, \tag{4.27}$$
where
$$f_1'(\phi_m) = \left.\frac{df_1(\phi)}{d\phi}\right|_{\phi=\phi_m} \qquad \text{and} \qquad f_2'(\phi_m) = \left.\frac{df_2(\phi)}{d\phi}\right|_{\phi=\phi_m}, \tag{4.28}$$

Fig. 4.4 Possible shapes of the steady state pdf for the case described in Example I ($f_1(\phi) = 1 - \phi$ and $f_2(\phi) = -\phi$), in the $(k_1, k_2)$ parameter plane. Taken from Ref. [4] © Cambridge University Press (2011). Reprinted with permission
that clearly shows how the stable points of the noisy dynamics, $\phi_m$, can be very different from their deterministic counterparts, $\phi_{st,1}$ and $\phi_{st,2}$.
To show an example of how noise may profoundly affect the dynamical properties of a system through noise-induced transitions, one can consider the dynamics described in Example I. In this case (with $\alpha = 1$) Eq. (4.27) becomes
$$\phi_m = \frac{1 - k_2}{2 - k_1 - k_2}. \tag{4.29}$$

Thus, the mode or antimode, $\phi_m$, is comprised between the boundaries of the interval $]0, 1[$ if $k_1 < 1$ and $k_2 < 1$, or $k_1 > 1$ and $k_2 > 1$. In the first case $\phi_m$ is an antimode, while in the second case $\phi_m$ is a mode. It is useful also to explore the behavior of the pdf close to the boundaries. Using Eq. (4.22) we obtain
$$\lim_{\phi \to 0} p(\phi) \propto \phi^{k_2 - 1}, \qquad \lim_{\phi \to 1} p(\phi) \propto (1 - \phi)^{k_1 - 1}; \tag{4.30}$$
when $k_1 < 1$ ($k_2 < 1$) the pdf has a vertical asymptote at $\phi = 1$ ($\phi = 0$).
Figure 4.4 collects the possible shapes of the pdf as a function of the parameters $k_1$ and $k_2$. When $k_1 < 1$ and $k_2 > 1$, or $k_1 > 1$ and $k_2 < 1$, the noise is unable to create new states, in that the preferential state of the stochastic system coincides with a stable state of the underlying deterministic dynamics. In this case noise creates only disorder in the form of random fluctuations about the stable deterministic state. Conversely, when the switching rates, $k_1$ and $k_2$, exceed the threshold $k_1 = k_2 = 1$, a new noise-induced state exists at $\phi_m$ and, then, a noise-induced transition emerges. Finally, when $k_1 < 1$ and $k_2 < 1$ the noise allows for the coexistence of the two steady states of the underlying deterministic dynamics. Thus, noise induces a bistable (i.e., bimodal) behavior that is not observed in the deterministic counterpart of the process, where only one steady state can exist for a given set of parameters.
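The classification above can be encoded compactly; a sketch of ours (informal names), combining the interior extremum of Eq. (4.29) with the boundary exponents of Eq. (4.30):

```python
def pdf_shape_example_I(k1, k2):
    """Qualitative shape of the Example I pdf (alpha = 1):
    p ~ phi**(k2 - 1) near 0 and (1 - phi)**(k1 - 1) near 1."""
    if k1 < 1 and k2 < 1:
        return "bimodal"        # asymptotes at both boundaries, interior antimode
    if k1 > 1 and k2 > 1:
        return "unimodal"       # interior mode
    return "boundary peak"      # single preferential state at phi = 0 or phi = 1

def interior_extremum_example_I(k1, k2):
    """Mode (k1, k2 > 1) or antimode (k1, k2 < 1) from Eq. (4.29)."""
    return (1.0 - k2) / (2.0 - k1 - k2)
```

For instance, with $k_1 = 2$ and $k_2 = 5$ (faster escape from the decay state, hence more time spent growing) the interior mode sits at $\phi_m = 0.8$, close to the upper boundary, as intuition suggests.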
Fig. 4.5 Scenario of the steady state pdfs for the Verhulst model driven by a symmetric multiplicative noise, in the $(k, \Delta)$ plane (axis ticks in units of $\beta$). Taken from Ref. [4] © Cambridge University Press (2011). Reprinted with permission

It should be noted that in this example the noise is additive, being
$$g(\phi) = \frac{f_1(\phi) - f_2(\phi)}{\Delta_1 - \Delta_2} = \frac{1}{\Delta_1 - \Delta_2} \tag{4.31}$$
a constant. It follows that noise-induced transitions can emerge even with this simple form of dichotomous noise. In this case, the noise being symmetric and additive, transitions are due to the autocorrelation of the dichotomous noise.
It is instructive to describe also a case of transitions induced by a multiplicative noise. To this aim, let us consider the case of the Verhulst model and concentrate on the case in which the noise term is a linear function of $\phi$ (Example II), i.e.,
$$\frac{d\phi}{dt} = \phi(\beta - \phi) + \phi\,\xi_{dn} = \phi\left[(\beta + \xi_{dn}) - \phi\right]. \tag{4.32}$$
The deterministic steady state is $\phi_{st} = \beta$, while the modes and antimodes are found from Eq. (4.25),
$$\frac{\phi_m \left[(\beta - \phi_m)(2k + \beta - 3\phi_m) - \Delta^2\right]}{2k} = 0, \tag{4.33}$$
with solutions
$$\phi_{m,1} = 0, \qquad \phi_{m,2,3} = \frac{1}{3}\left(k + 2\beta \mp \sqrt{(k - \beta)^2 + 3\Delta^2}\right). \tag{4.34}$$

Depending on the autocorrelation scale, $\tau_c = 1/(2k)$, and amplitude, $\Delta$, of the noise, a remarkable scenario of possible behaviors of $p(\phi)$ occurs (see Fig. 4.5). In particular, $\phi = \beta$ is never a mode/antimode of the pdf, because $\phi = \beta$ is a solution of Eq. (4.33) only if $\Delta = 0$, i.e., in the absence of noise. In the case of asymmetric dichotomous noise (i.e., $\Delta_1 \neq -\Delta_2$) the third term in Eq. (4.25) also plays a role, further increasing the variety of possible noise-induced transitions.
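A quick check of Eq. (4.34) (our own illustration): the two non-trivial extrema satisfy the factor $(\beta - \phi_m)(2k + \beta - 3\phi_m) = \Delta^2$ appearing in Eq. (4.33), and $\phi_m = \beta$ is recovered only in the noiseless limit $\Delta \to 0$.

```python
import numpy as np

def verhulst_extrema(beta, k, delta):
    """Extrema of the stationary pdf of Eq. (4.32), from Eq. (4.34)."""
    root = np.sqrt((k - beta) ** 2 + 3.0 * delta ** 2)
    return 0.0, (k + 2.0 * beta - root) / 3.0, (k + 2.0 * beta + root) / 3.0

beta, k, delta = 1.0, 4.0, 0.5
_, p2, p3 = verhulst_extrema(beta, k, delta)
# both non-trivial extrema satisfy (beta - phi)(2k + beta - 3 phi) = Delta^2
res2 = (beta - p2) * (2 * k + beta - 3 * p2) - delta ** 2
res3 = (beta - p3) * (2 * k + beta - 3 * p3) - delta ** 2
# with Delta = 0, one extremum collapses onto the deterministic state phi = beta
_, q2, _ = verhulst_extrema(beta, k, 0.0)
```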

4.4 Examples of Environmental Systems Driven by Dichotomous Noise

A number of natural and anthropogenic drivers affect the dynamics of environmental systems. Such drivers generally exhibit a significant random component, e.g., due to weather patterns or climate fluctuations. This motivates the study of how a stochastic environment may affect the dynamics of natural systems [10-12]. In fact, random environmental drivers may either cause stochastic fluctuations of the system around the stable state(s) of the underlying deterministic dynamics or induce new dynamical behaviors and new ordered states [4, 5, 13], which do not exist in the deterministic counterpart of the process, such as, for example, new stable states and new bifurcations.
In this section, some examples of noise-induced transitions in environmental zero-dimensional systems driven by dichotomous noise are described. To this aim, we capitalize on the theoretical results presented in the previous sections.

4.4.1 Random Shifting Between Stressed/Unstressed Conditions in Ecosystems

Consider the case of an ecosystem in which the biomass, $B$, randomly switches between a growth and a decay state, depending on whether the level, $q(t)$, of fluctuating resources is above or below a certain threshold, $\theta$. We assume that both the growth and the decay rates are expressed by linear functions
$$f_1(B) = a_1 (1 - B), \qquad f_2(B) = -a_2 B, \tag{4.35}$$
where $a_1$ and $a_2$ are two positive coefficients determining the rates of growth and decay, respectively. With probability $P_1$ the dynamics are in state 1 (i.e., $q(t) \geq \theta$) with $dB/dt = f_1(B)$, while with probability $1 - P_1$ the dynamics are in state 2 (i.e., $q(t) < \theta$) with $dB/dt = f_2(B)$. Dichotomous noise determines the rate of switching between these two states. The probability density function of $B$ reads
$$p(B) = C\, \left[a_1(1 - B) + a_2 B\right]\, (1 - B)^{-1 + \frac{1 - P_1}{a_1}}\, B^{-1 + \frac{P_1}{a_2}}, \tag{4.36}$$
where $C$ is the normalization constant and $B \in [0, 1]$, the roots of $f_1(B) = 0$ and $f_2(B) = 0$ (i.e., $B = 1$ and $B = 0$) being the natural boundaries of the dynamics.
Fig. 4.6 Qualitative behavior of the probability distributions of biomass, $B$, in the parameter space $\{P_1, a_2\}$ ($a_1$ is constant and equal to 0.2); the regions are delimited by the curves $a_2 = P_1$, $P_1 = 1 - a_1$, and $a_2 = (4 a_1 P_1 - 1)/[4(P_1 - 1 + a_1)]$. A variety of shapes emerges: L-shaped distributions with preferential state at $B = 0$ (case I); J-shaped distributions with preferential state at $B = 1$ (case II); bistable dynamics with bimodal (U-shaped) distribution (case III); dynamics with only one stable state located between the extremes of the domain of $B$ (case IV); bimodal distributions with a preferential state at $B = 0$ and the other for $B < 1$ (case V). Taken from Ref. [4] © Cambridge University Press (2011). Reprinted with permission

Figure 4.6 shows how the probability distribution of B changes in the parameter
space. For a2 > P1 the distribution, p(B), has a singularity at B = 0 and p(B) is
L-shaped (Fig. 4.6, case I). Similarly, p(B) is J-shaped (i.e., it has a singularity in
B = 1) for P1 > 1 a1 (Fig. 4.6, case II). When both conditions are met, p(B) is
U-shaped and two spikes of probability at B = 0 and B = 1 occur (Fig. 4.6, case III).
Differently, when these conditions are not met, the probability distribution of B has
only one mode within the interval [0, 1] and no spikes of probability at B = 0 and
at B = 1 (Fig. 4.6, case IV). When p(B) has a singularity at B = 0 (but not at B = 1)
and a2 < (4a1 P1 1)/[4(P1 1 a1 )], p(B) has both a mode and an antimode in
[0, 1] as in Fig. 4.6 (case V).
Figure 4.6 demonstrates that the preferential states of B vary across the parameter
space. For relatively low (high) rates of decay, a2 , and high (low) probability, P1 ,
of occurrence of unstressed conditions the dynamics have a preferential state (i.e.,
spike of probability) in B = 1 (B = 0). In intermediate conditions the system may
show either one (case IV) or two (case III and V) statistically stable states. This
bistability (i.e., bimodality in p(B)) emerges as a noise-induced effect and is a
clear example of the ability of noise to induce new states, which do not exist in
the underlying deterministic system [4, 5]. The deterministic counterpart of these
dynamics is in fact a system that is either always unstressed (Bm = 1) or always
stressed (Bm = 0), depending on whether the constant level, q, of available resources
is greater or smaller than the minimum value, , required for survival. Thus, the
deterministic dynamics are not bistable and it is the random driver that induces
bistability in the stochastic dynamics of B.
72 L. Ridolfi and F. Laio

The occurrence of bistable dynamics is important to understand the way


ecosystems respond to changes in environmental conditions [14, 15]. In fact, the
existence of alternative stable states is associated with possible abrupt and highly
irreversible shifts in the state of the system [4], with consequent remarkable
limitations to its resilience [16].

4.4.2 Noise-Induced Stability in Dryland Plant Ecosystems

Different from the case described in the previous subsection, let us now consider
the case where the noise is able to stabilize the system around an intermediate state
between two deterministically stable states. We refer to the case of dryland plant
ecosystems that can exhibit a bistable behavior with two stable states corresponding
to unvegetated (desert) and vegetated land surface conditions [17, 18]. The
existence of these two stable states is usually due to positive feedbacks between
vegetation and water availability [17, 1921].
Natural and anthropogenic disturbances acting on bistable dynamics may induce
abrupt transitions from the stable vegetated state to the unvegetated one [22]. When
this transition occurs, a significant increase in water availability (i.e., rainfall) is
necessary to destabilize the desert state and reestablish a vegetation cover. This
picture of drylands as deterministic bistable systems contrasts with the existence of
intermediate states between desert and completely vegetated landscapes. Spatial
heterogeneities and lateral redistribution of resources can explain the emergence of
patchy distributions of vegetation [2325], but a similar result can be induced also
by temporal fluctuations in environmental conditions [26], like random interannual
rainfall fluctuations typical of arid climates. In order to show this constructive action
by noise, let us express the dynamics of dryland vegetation as [26, 27]
 3
dv v (if R < R1 ) (4.37a)
=
dt v(1 v)(v c) (if R R1 ) (4.37b)

where v is the normalized vegetation biomass (0 v 1 and v = 1 in the case of


completely vegetated land) and R is the fluctuating annual precipitation. Depending
on the value of annual precipitation, dynamics can be either monostable or bistable.
For small values of R, lower than a threshold R1 , vegetation establishment is
inhibited and only the bare-soil state (i.e., v = 0) is stable. In contrast, for large
values of R, larger than another threshold R2 , prolonged periods of water stress do
not occur, the state v = 0 is unstable, and v = 1 is stable. In intermediate conditions
(i.e., R1 R R2 ) the system becomes bistable and both bare and completely
vegetated soils are stable states of the system. In this condition, soil moisture is too
low for the establishment of vegetation in bare soil, but in a completely vegetated
region (v = 1) the water available in the soil is sufficient to maintain vegetation
cover. This feedback effect is modeled by a coefficient c in Eq. (4.37b) equal to
(R2 R)/(R2 R1 ) for R > R1 .
4 Dynamical Systems Driven by Dichotomous Noise 73

0.8

0.6

0.4

0.2
R1 R2
0 R
100 200 300 400 500

Fig. 4.7 Deterministic stable (solid thick lines) and unstable (dashed thick lines) states of
Eq. (4.37) (R1 =260 mm and R2 =360 mm). Dotted line shows (analytically calculated) noise-
induced statistically stable states of the stochastic dynamics, while crosses (R = 0.4R) and
squares (R = 0.6R) correspond to numerically evaluated values of the modes of v. Taken from
Ref. [32] (C) Elsevier Science Ltd (2008)

To show the effect of interannual rainfall fluctuations on vegetation dynamics,


R is treated as an uncorrelated random variable, with mean R, standard deviation
R , and gamma distribution, p(R) (but other choices do not alter the dynamical
behavior). These fluctuations induce vegetation dynamics to alternate between two
different regimes: when R < R1 (this happens with probability P1 = 0R1 p(R) dR)
dynamics are expressed by Eq. (4.37a), while when R exceeds R1 the process is
described by Eq. (4.37b) with c depending on R.
Numerical integration of Eq. (4.37) shows that random interannual fluctuations
of R stabilize the system around an unstable state of the underlying deterministic
dynamics (region II in Fig. 4.7). A range of values of R does exist where the
probability distribution of v exhibits only one mode between 0 and 1, and this stable
state would not occur without the random forcing.
The modes of v (squares and crosses in the Fig. 4.7) can be also determined
analytically by a simplified stochastic model, where c is assumed constant and
equal to its average value, c+ , conditioned on R > R1 . In this case, the temporal
dynamics of vegetation can be modeled by a stochastic differential equation driven
by dichotomous Markov noise
dv v(c+ v c+ v)1 v(c+ v + v c+ )
= f (v) + g(v) d = v3 + + d (4.38)
d 2 1 2 1
where d is a zero-mean dichotomous Markov process. The two functions g(v) and
f (v) are determined so that dv/dt=v3 when d = 1 , and dv/dt=v(1 v)(v c+ )
when d = 2 , while the transition probabilities of the noise are assumed k1 =
(1 P1 ) and k2 = P1 . The analytical solution of (4.38) shows the existence of a
range of values of R in which the stochastic dynamics have only one preferential
state, while the stable states, v = 0 and v = 1, of the deterministic dynamics become
unstable (see Fig. 4.7).
74 L. Ridolfi and F. Laio

Climate fluctuations are generally considered as a source of disturbance that


induce random transitions between preferential states in bistable dynamics. The
previously described model suggests instead that rainfall fluctuations unlock the
system from these preferential states and stabilize the dynamics at half-way between
bare soil and full vegetation cover conditions, so demonstrating the constructive role
of the noise.

4.4.3 Impact of Environmental Noise on Biodiversity

A number of studies demonstrate how biodiversity can be enhanced by environmen-


tal variance [2831]. DOdorico et al. [32] proposed a dichotomous noise-driven
stochastic differential model able to describe such noise-induced effect. Consider a
system in which species are controlled by the same environmental random variable,
R > 0 (e.g., water, energy, light, or nutrients), that we assume to have a gamma
distribution, p(R) (though other distributions do not alter the results), mean, R,
and standard deviation, R . Each species is unstressed when R remains within a
certain niche, I , while its biomass decays when R is outside of this interval. For the
sake of simplicity, we assume that all niches have the same amplitude and that no
mutual interaction (e.g., competition/facilitation) exists among species. Fluctuations
in R determine the switching between growth (unstressed conditions) and decay
(stressed conditions) in species biomass, depending on whether R falls within or
outside of the interval I of that species. We use a linear decay and a logistic growth
for the stressed and unstressed conditions, respectively

dB a1 B( B) if R I (4.39a)
=
dt a2 B otherwise (4.39b)

where B is the species biomass, is the carrying capacity (i.e., the maximum
sustainable value of B), and the coefficients a1 and a2 give the decay and growth
rates, respectively.
The stochastic dynamics resulting from the random switching between the two
Eqs. (4.39) are modelled as a dichotomous Markov process. When the environ-
mental variable, R, is comprised within the nichethis happens with probability
 R +
P1 = R00 p(R) dR, where R0 is the lower limit of the niche, I the species is
not stressed and its growth is expressed by (4.39a). Vice versa, with probability
1 P1 the species is stressed and its dynamics are modelled by (4.39b). The solution
of the stochastic differential equation associated with these dynamics provides the
probability distribution, p(B). In particular, when [33]
a a2
P1 Plim = a= (4.40)
a+ a1
B is zero with probability tending to one and species goes extinct. In fact, low values
of P1 correspond to conditions in which the environmental variable remains too often
4 Dynamical Systems Driven by Dichotomous Noise 75

p(R)

PPlim
PPlim
P=Plim PPlim
P=Plim
PPlim
PPlim

d d d d d d R
R*I Ru*

Fig. 4.8 Probability distribution of the resource, R. The biodiversity potential = [Rl , Ru ] defines
the interval where species with niche range remain unstressed for a sufficient fraction of time to
avoid extinction (after [32]). Taken from Ref. [32] (C) Elsevier Science Ltd (2008)

outside the niche to allow for the survival of that species. Therefore, a species can
survive only when P1 Plim . Given a distribution of resources p(R) and a niche
interval , Fig. 4.8 shows that there are two limit positions Rl and Ru in which the
condition P1 = Plim is found. They correspond to the conditions
 R +  R
l u
p(R) dR = Plim and p(R) dR = Plim . (4.41)
Rl Ru

For a given distribution, p(R), of the environmental variable (Fig. 4.8) one can
determine the interval [Rl , Ru ] on the R-axis, in which species with niche range, ,
remain unstressed for a sufficient fraction of time to avoid extinction. Being =
Ru Rl the interval width, / is a proxy of the biodiversity potential that could be
sustained in the ecosystem: large values of / are associated with a broader range
of species that are be able to have access to favorable environmental conditions.
When the variance of R is zero the process becomes deterministic, with R = R and
= 2 .
In order to investigate the effect of environmental variability on biodiversity,
the values of the parameters a, , and R, are kept constant and the dependence
of biodiversity potential on the standard deviation R is investigated. The results
are shown in Fig. 4.9 for different values of the niche range, . Two effects of
the environmental noise are evident. Firstly, moderate levels of environmental
fluctuations enhance the biodiversity potential with respect to the deterministic case
(i.e., > 2 ). In this case, noise plays a constructive role on the dynamics by
favoring biodiversity. Secondly, relatively large noise intensities limit the ability of
the system to support diverse communities of individuals (i.e., < 2 ). In this
second case, noise has a destructive effect, namely noise-induced extinctions
occur. These results are consistent with the so-called intermediate disturbance hy-
pothesis [34, 35], i.e., that moderate disturbances can be beneficial to an ecosystem.
76 L. Ridolfi and F. Laio

Fig. 4.9 Biodiversity


potential as a function of the 2
coefficient of variation, 3 =1.5
CV = R /R, of
2.5 =1.0
environmental noise. = 1, =0.5
R is gamma distributed with 2
mean, R = 3, and a = 0.11.
(after [32]) 1.5
1
0.5
R
0.25 0.5 0.75 1 1.25 1.5 1.75 2 R

Figure 4.9 shows also that generalist speciesi.e., species with high are
better adapted than specialists species with low to benefit from environmental
fluctuations.

References

1. Bena, I.: Int. J. Mod. Phys. 20(20), 2825 (2006)


2. Honger, M.O.: Helv. Phys. Acta 52, 280 (1979)
3. Kitahara, K., Horsthemke, W., Lefever, R., Inaba, Y.: Progr. Theor. Phys. 64(4), 1233 (1980)
4. Ridolfi, L., DOdorico, P., Laio, F.: Noise-Induced Phenomena in Environmental Sciences.
Cambridge University Press, New York (2011)
5. Horsthemke, W., Lefever, R.: Noise-Induced Transitions: Theory and Applications in Physics,
Chemestry and Biology, 322 pp. Springer, Berlin (1984)
6. Pawula, R.F.: Int. J. Contr 25(2), 283 (1977)
7. Van den Broeck, C.: J. Stat. Phys. 31(3), 467 (1983)
8. Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions. Wiley,
New York (1994)
9. Laio, F., Ridolfi, L., DOdorico, P.: Phys. Rev. E 78, 031137 (2008)
10. Ludwig, D., Jones, D.D., Holling, C.S.: J. Anim. Ecol. 47, 315 (1978)
11. Benda, L., Dunne, T.: Water Resour. Res. 33(12), 2849 (1997)
12. DOdorico, P., Laio, F., Ridolfi, L.: Am. Nat. 167(3), E79 (2006)
13. May, R.M.: Stability and Complexity in Model Ecosystems, 270 pp. Princeton University
Press, Princeton (1973)
14. Holling, C.S.: Ann. Rev. Ecol. Syst. 4, 1 (1973)
15. Gunderson, L.H.: Ann. Rev. Ecol. Syst. 31, 425 (2000)
16. Walker, B.H., Salt, D.: Resilience Thinking: Sustaining Ecosystems and People in a Changing
World, 175 pp. Island Press, Washington, D.C. (2006)
17. Walker, B.H., Ludwig, D., Holling, C.S., Peterman, R.M.: J. Ecol. 69, 473 (1981)
18. Zeng, N., Neelin, J.D.: J. Clim. 13, 2665 (2000)
19. Rietkerk, M., van de Koppel, J.: Oikos 79(1), 69 (1997)
20. Zeng, X., Shen, S.S.P., Zeng, X., Dickinson, R.E.: Geophys. Res. Lett. 31,
10.129/2003GL018910 (2004)
21. DOdorico, P., Caylor, K., Okin, G.S., Scanlon, T.M.: J. Geophys. Res. 112, G04010 (2007).
Doi:10.1029/2006JG000379
22. Scheffer, M., Carpenter, S., Foley, J.A., Folke, C., Walker, B.: Nature 413, 591 (2001)
4 Dynamical Systems Driven by Dichotomous Noise 77

23. von Hardenberg, J., Meron, E., Shachak, M., Zarmi, Y.: Phys. Rev. Lett. 87, 198101 (2001)
24. Rietkerk, M., Boerlijst, M.C., van Langevelde, F., HilleRisLambers, R., van de koppel, J.,
Kumar, L., Klausmeier, C.A., Prins, H.H.T., de Roos, A.M.: Am. Nat. 160, 524 (2002)
25. van de Koppel, J., Rietkerk, M.: Am. Nat. 163, 113 (2004)
26. DOdorico, P., Laio, F., Ridolfi, L.: Proc. Natl. Acad. Sci. USA 102, 10819 (2005)
27. Borgogno, F., DOdorico, P., Laio, F., Ridolfi, L.: Water Resour. Res. 43(6), W06411 (2007)
28. Chesson, P.L.: Theor. Popul. Biol. 45, 227 (1994)
29. Yachi, S., Loreau, M.: Proc. Natl. Acad. Sci. USA 96, 1463 (1999)
30. Mackey, R.L., Currie, D.J.: Ecology 82(12), 3479 (2001)
31. Hughes, A.R., Byrnes, J.E., Kimbro, D.L., Stachowicz, J.J.: Ecol. Lett. 10, 849 (2007)
32. DOdorico, P., Laio, F., Ridolfi, L., Lerdau, M.T.: J. Theor. Biol. 255, 332 (2008)
33. Camporeale, C., Ridolfi, L.: Water Resour. Res. 42, W10415 (2006)
34. Connell, J.H.: Science 199, 1302 (1978)
35. Huston, M.A.: Am. Nat. 113(1), 81 (1979)
Chapter 5
Stochastic Oscillator: Brownian Motion
with Adhesion

M. Gitterman

Abstract We consider an oscillator with a random mass for which the particles of
the surrounding medium adhere to the oscillator for some random time after the
collision (Brownian motion with adhesion). This is another form of a stochastic
oscillator, different from oscillator usually studied that is subject to a random force
or having random frequency or random damping. A comparison is performed for the
first two moments, stability analysis and different resonance phenomena (stochastic
resonance, vibration resonance) for stochastic oscillators subject to external periodic
force as well as to linear and quadratic, white, dichotomous, and trichotomous
noises.

Keywords Bounded noises Brownian motion with adhesion Resonance


Stochastic oscillators Dichotomous and trichotomous noises

5.1 Introduction

Brownian motion of a particle located in a parabolic potential U = 2 x2 /2


is described by the dynamic equation of a harmonic oscillator (with m = 1)
supplemented by thermal noise (t) ,

d2x dx
2
+ + 2 x = (t) (5.1)
dt dt
with the correlation function

< t1 ) (t2 ) >= D (t2 t1 ) (5.2)

M. Gitterman ()
Department of Physics, Bar Ilan University, Ramat Gan 52900, Israel
e-mail: gittem@mail.biu.ac.il

A. dOnofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 79


Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5 5, Springer Science+Business Media New York 2013
80 M. Gitterman

Usually one considers the Brownian motion of a free particle ( 2 = 0). Our analysis
covers the more general problem of a stochastic harmonic oscillator. The random
force (t) enters Eq. (5.1) additively. Another forms of a stochastic oscillator
contain the multiplicative random forces, which connected with the fluctuations of
the potential energy or damping [1]. These models have been applied in physics,
chemistry, biology, sociology, etc., everywhere from quarks to cosmology. In fact,
a person who is worried by oscillations of prices in the stock market (described by
the stochastic oscillator model) can be relaxed by classical music produced by the
oscillations of string instruments!
We consider an oscillator with random mass [2]. Such model describes, among
another phenomena, the Brownian motion with adhesion, where the molecules of the
surrounding medium not only randomly collide with the Brownian particle, which
produces its well-known zigzag motion, but they also stick to the Brownian particle
for some (random) time, thereby changing its mass. The appropriate equation
of motion of the Brownian particle subject to an external periodic field has the
following form,

d2x dx
[1 + (t)] 2
+ + 2 x = (t) + A sin ( t) (5.3)
dt dt
Since the same molecules take part in colliding and adhering to the Brownian
particle, we assume that (t) and (t) are delta correlated,

< (t1 ) (t2 ) > = R (t2 t1 ) (5.4)

There are many applications of an oscillator with a random mass such as ionion
reactions, electrodeposition, granular flow, cosmology, film deposition, traffic jams,
and the stock market. Specific to these fluctuations, as distinct from other noise
in oscillator equations, is their restriction to large negative values, which would
lead to the negative mass. Therefore, the simplest form of these fluctuations is not
white noise, but the so-called dichotomous (or trichotomous) noise which jumps
randomly between two (three) different restricted values. Its correlation function
has an exponential OrnsteinUhlenbeck form,

2
 (t1 ) (t2 ) = exp [ |t1 t2 |] , (5.5)

where is the inverse correlation time of fluctuations. For symmetric dichotomous


noise, (t) = while for nonsymmetric dichotomous noise (t) = A or B with
2 = AB and = A B. For thichotomous noise (t) = a and 0. In order to
assure the positivity of the mass, one has to assume that both | |and | B | are
smaller than one.
Multiplying Eq. (5.3) by 1 (t) , one obtains
5 Stochastic Oscillator: Brownian Motion with Adhesion 81

 
d 2 x 1 (t) d [1 (t)] (t) A [1 (t)]
+ + x =
2
+ sin ( t) (5.6)
dt 2 1 2 dt 12 12

Therefore, the small dichotomous fluctuations of mass are equivalent to simultane-


ous fluctuations of the frequency and the damping coefficient.
Another possibility to retain the mass positive is to replace the linear noise (t)
in Eq. (5.3) by a positive nonsymmetric random force 2 (t),

  d2x dx
1 + 2 (t) 2
+ + 2 x = (t) + A sin ( t) (5.7)
dt dt

The quadratic noise 2 (t) can be written as

2 (t) = 2 + (5.8)

with 2 = AB and = A B. Indeed, for = A, one obtains 2 = AB + (A B) A =


A2 , and for = B, 2 = B2 , i.e., the noise 2 (t) takes the positive values A2 and B2 .
Equation (5.7) then takes the following form,

  d2x dx
1 + 2 + (t) + + 2 x = (t) + A sin ( t) (5.9)
dt 2 dt
There are many situations in which chemical and biological solutions contain small
particles which not only collide with a large particle, but they may also adhere to it.
The diffusion of clusters with randomly growing masses has also been considered
[3]. There are also some applications of a variable-mass oscillator [4]. Modern
applications of such a model include a nano-mechanical resonator which randomly
absorbs and desorbs molecules [5]. The aim of this note is to describe a general and
simplified form of the theory of an oscillator with a random mass, which is a useful
model for describing different phenomena in Nature.

5.2 Basic Equations

For generality we consider trichotomous noise, when for the stationary states, the
probabilities P of values a and 0 are

P (a) = P (a) = q; P (0) = 1 2q (5.10)

The limit case of symmetric dichotomous noise corresponds to q = 1/2.


The supplementary conditions to the OrnsteinUhlenbeck correlations (5.5) are

3 = a2 ; < 2 > = 2qa2 (5.11)


82 M. Gitterman

Equation (5.3) can be rewritten as two first-order differential equations

dx
=y
dt
dy dy
= y 2 x + (t) + A sin ( t) (5.12)
dt dt
which after averaging take the following form
 
dx d<y> d
= < y >; = + < y > < y > 2 x + A sin ( t)
dt dt dt
(5.13)
where the ShapiroLoginov formula for splitting the correlation [6] (with n = 1),
which yields for exponentially correlated noise has been used
   n
dng d
(t) n = +  g (5.14)
dt dt

If dg/dt = A + (t) , Eq. (5.14) with n = 1 becomes

d
< g >=< 2 > < g > (5.15)
dt

and for stationary states (d/dt. . . =0) and white noise ( 2 and with
2 / = D) one gets for g = y,

< y >= D (5.16)

Multiplying Eq. (5.12) by (t) and averaging results in


 
d
+ < x > =< y >
dt
 
d
+ < y > = < y > 2 < x > +R (5.17)
dt

Additional relation between averaged values can be obtained by multiplying the first
of Eq. (5.12) by 2x and the second by 2y, which yields

d 2 d 2 dy2
x = 2xy; y + + 2 y2 + 2 2 xy = 2y + 2yA sin ( t) (5.18)
dt dt dt
Averaging Eqs. (5.18) by using (5.14) yields
5 Stochastic Oscillator: Brownian Motion with Adhesion 83

d 2!
x = 2 xy
dt
   
d d
+ 2 <y > +
2
+ < y2 > +2 2 < xy > = 2D + 2 < y > A sin ( t)
dt dt
(5.19)
Analogously, multiplying Eqs. (5.12) by y and x, respectively, and summing leads to
 
d d
xy = y2 (xy) y2 xy 2 x2 + x + xA sin ( t) (5.20)
dt dt
Averaging Eqs. (5.19) and (5.20) leads to

< xv > = 0
 
d d
< xy > =< y >
2
+ < xy > + < y2 > 2 < x2 > +xA sin ( t)
dt dt
(5.21)
Additional equations for the correlators can be obtained by multiplying Eqs. (5.12)
and (5.20) by 2 x, 2 y, and , respectively, and averaging,
 
d !
+ x2 = 2  xy
dt
 
d !
+ + 2 y2 + 2 2  xy = 2 < y > A sin ( t)
dt
 
d ! 2qa2 d 2qa2
+ +  xy = y2 (xy) + < y2 > +Rx
dt dt
2R
2 < x2 > + A sin ( t)
( + ) + 2
(5.22)
The splitting of correlation formula

< y >=< >< y > (5.23)

which is exact for the OrnsteinUhlenbeck noise, has been used in last equation
in (5.22).
For the stationary states (d/dt . . . = 0), Eqs. (5.13), (5.17), (5.19), (5.21),
and (5.22) take the following form,

< y > = 0; < y > + 2 x = A sin ( t) ; < x >=< y >


( + ) < y > + 2 < x >= R
84 M. Gitterman

xy = 0; 2 < y2 > + < y2 >= 2D1 + 2 < y > A sin ( t)


!
x2 = 2  xy
!
( + 2 ) y2 + 2 2  xy = 2 < y > A sin ( t)
!
( + )  xy = y2 + 2qa2 < y2 > +Rx
2R
2 < x2 > + A sin ( t)
( + ) + 2
< y2 > < xy > + < y2 > < xy > 2 < x2 > +2xA sin ( t)
(5.24)

By this means we obtained eight equations (5.24) for eight correlators x, x2 ,
y2 , x,  y,  xy,  x2 , and  y2 .

5.3 First Two Moments

From equations, obtained in the previous section one finds the first

2R A
x = + sin ( t) (5.25)
2 [ ( + ) + 2 ] 2

and the second moment,


 
! D U 2Dqa2 1
x2 = + + Rx 2 { 2 < x >
2 V 2

A sin ( t)
2 2 < y >
2
    
1
2

2 1 + (2+22 ) ( + )< y>
2
+ 2< y>
2 < x >
}A sin ( t)
V
(5.26)

where

2 2 2 ( + 2 )
U= (5.27)
2 2
 
2 2 qa2 2 2 + ( + 2 ) ( + ) + 2 2
V= (5.28)
2 2
5 Stochastic Oscillator: Brownian Motion with Adhesion 85

and x, is given in (5.25), and < x > and < y > are equal to

R R
< x >= ; < y >= (5.29)
( + ) + 2 ( + ) + 2

The last terms in Eqs. (5.25) and (5.26) are related to an oscillator response to the
external periodic force, while the other terms describe the common action on an
oscillator of the additive and multiplicative forces.
In the limit case of Eq. (5.26) in the absence of both an external field (A = 0) and
correlation between additive and multiplicative noise (R = 0), Eq. (5.26) reduces to
the following form,

D 2Dqa2U
< x2 > = (5.30)
2 V 2

where U and V were defined in (5.27)(5.28).


In order to compare the result obtained with those of random frequency and
damping, we consider the limit form of Eq. (5.30) for white noise, which gives

! D
x2 = (5.31)
2

This result coincides with the well-known result for free Brownian!motion with
2 = 0. For free Brownian particle, 2 0 and one obtains x2 , as it
should be for Brownian motion. The independence of the stationary results on the
mass fluctuation is due to the fact that the multiplicative random force appears in
Eq. (5.1) in front of the higher derivative. It is remarkable that these results are
significantly different from
! the stationary second moments! for the cases when the
random frequency x2 and the random damping x2 , are the white noises of
strength D1 ,

! D ! D
x2
= ; x2
= (5.32)
2 2 ( D1 2 ) 2 (1 2 D1 )

showing the energetic instability [1]. It! turns out that for symmetric dichotomous
noise, the stationary second moment x2 for the mass ! fluctuations, in contrast to its
white noise form (5.31), may lead to instability, x < 0.
2

It is interesting to compare Eq. (5.26), obtained for trichotomous noise with that
for dichotomous noise, when the random variable (t) jumps between two values !
x = a, and not between three values, a and zero. The second moment x2 for
dichotomous noise is ! obtained from (5.26), when 2q = 1. The variable q appears
in Eq. (5.26) for x2 through expressions of V in the form q/ (a bq), which is a
monotonically increasing function for 0 < q < 1/2. Therefore, for trichotomous
noise the second moment < x2 > is always smaller than that for the dichotomous
noise.
86 M. Gitterman

5.4 Linear Versus Quadratic Noise

5.4.1 Brownian Motion

We start from the traditional model of Brownian motion, where the Brownian
particle is subject to the systematic damping force 2 v and the linear random force
(t) or the quadratic random force 2 (t) ,

dv
+ 2 v = (t) (5.33)
dt
dv
+ 2 v = 2 (t) (5.34)
dt
Multiplying Eq. (5.33) by 2v and averaging, one obtains for stationary states
(d/dt . . . = 0)
1
< v2 >= < v > (5.35)
2
Multiplying Eq. (5.33) by (t) and using the ShapiroLoginov procedure for
splitting the correlations [6]
 
dg d
< >= + < g > (5.36)
dt dt
one gets for stationary state with g = v,

< v >= (5.37)
(2 + )
Combining Eqs. (5.35) and (5.37), one gets

< v2 >= (5.38)
2 (2 + )
Let us turn now to the analysis of Eq. (5.34), which can be rewritten, using (5.8) as
dv
+ 2 v = 2 + (5.39)
dt
For stationary state, the averaging of Eq. (5.39) leads to

< v >= (5.40)
2
Multiplying Eq. (5.39) by 2v and averaging gives for stationary state

< v2 >= < v > + < v > (5.41)
2 2
5 Stochastic Oscillator: Brownian Motion with Adhesion 87

Multiplying Eq. (5.39) by (t) and averaging results in


< v >= (5.42)
+ 2

Inserting (5.40) and (5.42) into (5.41) gives


 
2
< v >=
2
+ (5.43)
2 2 + 2

As one can see from Eqs. (5.38) and (5.43), the stationary second moment < v2 >
is positive for both linear and quadratic noise, i.e., a system remains stable.
Till now we analyzed the classical Brownian motion. In order to put it into
considered here stochastic oscillator framework, let us consider Brownian motion
in the parabolic potential V (x) = 02 x2 /2, which is described by Eq. (5.1) for linear
noise, and by the equation

d2x dx
+ 2 + 02 x = 2 (t) (5.44)
dt 2 dt
for the quadratic noise.
The stationary second moment for Eq. (5.1) with dichotomous internal noise (5.5)
has the following form [7]

2 ( + 2 )
< x2 > = (5.45)
202 2 + 2 + 02

5.4.2 Harmonic Oscillator with Random Frequency

After cumbersome calculations, one finds for quadratic noise


& '1
2 4 2 2 (4 + )2
< x2 > = 4D 4 2 1 + 2
( + 2 ) [4 2 (1 + 2 ) + (4 + ) ]}

(5.46)

which for linear noise ( = 1 and 1 + 2 1) reduces to Eq. (10.25) in [1],


& '1
2 4 2 (4 + )2
< x >= 4D 4
2 2
(5.47)
( + 2 ) [4 2 + (4 + ) ]

According to Eqs. (5.46) and (5.47) the stability conditions (positivity of < x2 >)
for quadratic noise has the form
88 M. Gitterman

2 2 2 2 (4 + )2
<1 (5.48)
2 (1 + 2 ) (2 + ) [4 2 (1 + 2 ) + (4 + )]

and for linear noise

2 2 2 2 (4 + )2
<1 (5.49)
2 (2 + ) [4 2 + (4 + )]

Comparison of (5.48) and (5.49) shows that for small strength of quadratic
noise an oscillator, like in the case of linear noise, becomes unstable when the
inequality (5.49) does not obey. However, by increasing the strength of quadratic
noise one can attain the fulfilment of (5.48), i.e. to stabilize an oscillator with the
help of noise (noise-induced stability).

5.4.3 Harmonic Oscillator with Random Damping

Analogous calculations leads for quadratic noise to


2  
2 + B2 B4 + 2 2 + 8 2 2 2
< x >= D
2
(5.50)
4 2 (1 + 2 ) {2 2 + B2 B4 } 16 2 2 2 2 {2 2 + B2 }

where
 
B2 = + 2 1 + 2 ; B4 = + 4 1 + 2

In the limit case of linear noise, 1 + 2 1 and = 1 the latter equation reduces to
 2 
2 + [ + 2 ] ( + 4 ) + 2 2 + 8 2 2 2
< x >= D
2
4 2 [2 2 + ( + 2 ) ( + 4 )] 16 2 2 2 [2 2 + ( + 2 )]
(5.51)
Stability condition for linear noise is
 
2 2 + ( + 2 ) ( + 4 ) > 4 2 2 2 + ( + 2 ) (5.52)

and for quadratic noise


 
    2 2 + 2
2 + + 2 1 +
2 2
+ 4 1 + 2
> 4 2
+ 2 (5.53)
1+2

Therefore, as in the previously considered case of random frequency, the increase


of the strength of quadratic noise makes a system more stable compared to linear
noise.
5 Stochastic Oscillator: Brownian Motion with Adhesion 89

5.4.4 Harmonic Oscillator with Random Mass

As in the previous cases, replace Eq. (5.9) with A = 0 by two first-order differential
equations
dx dy dy dy
= y; = 1+2 2 y 2 x + (t) (5.54)
dt dt dt dt
Multiplying the first equation in (5.54) by 2x and the second by 2y gives, after
averaging and using Eq. (5.36),
d
< x2 > =< xy > (5.55)
dt
 
d d d
< y2 > = 1 + 2 < y2 > + < y2 >
dt dt dt
4 < y2 > 2 2 < xy > + (t) + 4D (5.56)

Multiplying Eqs. (5.54) by y and x, respectively, summing and averaging these


equations,
 
d d
< xy >= 2 + 2 < y2 > + < xy > + < y2 > 2 < x2 >
dt dt
(5.57)

Multiplying Eqs. (5.54) and (5.57) by 2x , 2y , and , respectively, and averaging,


one obtains for the steady-state,

< x2 > = 2 < xy >


 
4 + 2 + 2 < y2 > = 2 2 < xy >

( + 2 ) < xy > = 2 + 2 < y2 > + 2 < y2 > 2 < x2 >
(5.58)
From the six equations (5.55)(5.58), one obtains

4D 2 + 2 A 2 2 ( + B)
< x >= 2
2
(5.59)
4 A 2 B
where
2 2 2 2
A = + 2 + 2 + 2 B + ; B= (5.60)
4 + (2 + 2 )
90 M. Gitterman

For linear noise 2 + 2 2 and = 1,

4D 2A1 2 ( + B1 )
< x2 >= (5.61)
2 4 A1 2 B1

with

2 2 2
A1 = + 2 + 2B1 + ; B1 = (5.62)
2 +

Comparing Eqs. (5.59)(5.62) shows that A > A1 and B < B1 , i.e., as in the previous
cases, the replacement of linear noise by quadratic noise makes the system more
stable.
For white noise, Eq. (5.59) reduces to

! D
x2 = (5.63)
2

This result agrees with the well-known result for Brownian motion, free Brownian motion meaning ω = 0. For a free Brownian particle one obtains ⟨x²⟩ → ∞, as expected. The fact that the stability is unaffected by the mass fluctuation is due to the fact that the multiplicative random force appears in Eq. (5.3) in front of the highest derivative.
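The contrast between the free and the bound particle is easy to check by direct simulation. The sketch below is an illustration, not the author's calculation: the Langevin model dv = (−γv − ω²x) dt + √(2D) dW and all parameter values are my own choices.

```python
import numpy as np

def second_moment(omega2, gamma=1.0, D=1.0, dt=0.01, T=50.0, paths=2000, seed=0):
    """Euler-Maruyama for dx = v dt, dv = (-gamma v - omega2 x) dt + sqrt(2D) dW;
    returns <x^2> at t = T/2 and t = T, averaged over `paths` trajectories."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.zeros(paths)
    v = np.zeros(paths)
    half = 0.0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt), paths)
        x, v = x + v * dt, v + (-gamma * v - omega2 * x) * dt + np.sqrt(2 * D) * dw
        if k == n // 2:
            half = np.mean(x ** 2)
    return half, np.mean(x ** 2)

free_half, free_full = second_moment(omega2=0.0)    # free particle: diffusive growth
bound_half, bound_full = second_moment(omega2=1.0)  # bound particle: saturation
print(free_full / free_half, bound_full / bound_half)
```

For the free particle the ratio of ⟨x²⟩ at t = T and t = T/2 approaches 2, the signature of unbounded linear growth, while for the bound particle it stays near 1.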

5.5 Stability Conditions

Here we consider the more complicated problem of the stability of the solutions. For a deterministic equation, the stability of the fixed points is defined by the sign of λ, found from solutions of the form exp(λt) of the equation linearized near the fixed points. The situation is quite different for a stochastic equation: the first moment ⟨x(t)⟩ and the higher moments become unstable for some values of the parameters. However, the usual linear stability analysis, which leads to instability thresholds, turns out to give different thresholds for different moments, making the moments unsuitable for a stability analysis. A rigorous mathematical analysis of random dynamical systems shows [8] that, similarly to the order–chaos transition in nonlinear deterministic equations, the stability of a stochastic differential equation is defined by the sign of the Lyapunov exponent λ. This means that for a stability analysis one has to go from the Langevin-type equations to the associated Fokker–Planck equations, which describe the properties of statistical ensembles, and to calculate the Lyapunov index λ, defined by [8]

λ = (1/2) ⟨d(ln x²)/dt⟩ = ⟨(dx/dt)/x⟩   (5.64)

One can see from Eq. (5.64) that it is convenient to replace the variable x in the Langevin equations with the variable z = (dx/dτ)/x,

dz/dτ = (d²x/dτ²)/x − (dx/dτ)²/x² = (d²x/dτ²)/x − z²   (5.65)

The Lyapunov index now takes the following form [9]:

λ = ∫_{−∞}^{∞} z P_st(z) dz   (5.66)

where P_st(z) is the stationary solution of the Fokker–Planck equation corresponding to the Langevin equations expressed in the variable z.
Replacing the variable x in Eq. (5.65) by the variable z leads to

dz
= A (z) + 1 B (z) (5.67)
d

where
1
A (z) = z2 B (z) ; B (z) = 1 + 2 z + 2 1 (t) = (t)
R 1+2
(5.68)
According to [10], the stationary solution of the Fokker–Planck equation, corresponding to the Langevin equation (5.67), has the following form
   
B 1 z 1 1
Pst (z) = N 2 2 exp dx +
B A2 2 A (x) B (x) A (x) + B (x)
(5.69)
Equation (5.69) has been analyzed for different forms of the functions A(x) and B(x): A = −x, B = 1 [11]; A = −x, B = x [12]; A = −x − x^m, B = x [13]; A = −x − x³, B = 1 [14]; A = −x³, B = x [15, 16]; A = −x − x², B = x [10].
For

A = x2 + x + ; B = x+
2
= 1; = 1+2 ; = 1+2 . (5.70)
R R
Inserting (5.70) into (5.69) gives
1 1
Pst (z) = N (z x1 )1[2 (x1 x2 )] (z x2 )1+[2 (x1 x2 )]
1 1
(z x3 )1+[2 (x3 x4 )] (z x4 )1[2 (x3 x4 )] (5.71)

Equation (5.71) defines the boundary of stability of the fixed point x = 0, which depends on the characteristics of the oscillator and on the parameters of the noise.
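The Lyapunov exponent defined by Eq. (5.64) can also be estimated without solving the Fokker–Planck equation, by time-averaging the growth rate of ln ‖(x, dx/dt)‖ along one long simulated trajectory. The sketch below does this for an oscillator with dichotomous-noise damping, used here only as a convenient stand-in model; the model and all parameter values are illustrative, not those of Eq. (5.71).

```python
import numpy as np

def lyapunov_estimate(gamma=0.3, omega=1.0, sigma=1.0, rate=1.0,
                      dt=1e-3, T=300.0, seed=1):
    """Top Lyapunov exponent of x'' + 2 gamma (1 + xi) x' + omega^2 x = 0,
    with xi a symmetric dichotomous noise (states +-sigma, switching rate `rate`),
    estimated as the long-time growth rate of ln ||(x, x')||."""
    rng = np.random.default_rng(seed)
    x, v, xi = 1.0, 0.0, sigma
    log_growth = 0.0
    for _ in range(int(T / dt)):
        if rng.random() < rate * dt:          # Poisson switching of the noise
            xi = -xi
        x, v = x + v * dt, v + (-2 * gamma * (1 + xi) * v - omega**2 * x) * dt
        norm = np.hypot(x, v)
        log_growth += np.log(norm)            # accumulate growth, then renormalize
        x, v = x / norm, v / norm
    return log_growth / T

print(lyapunov_estimate())
```

For σ = 0 the estimate reproduces the deterministic decay rate −γ, which is a quick sanity check of the method.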

5.6 Resonance Phenomena

The well-known phenomena of deterministic chaos and of stochastic and vibrational resonance occur for an oscillator with random mass if one adds one or two periodic forces to the oscillator equation. Stochastic resonance manifests itself in the fact that noise, which always plays a destructive role, appears as a constructive force, increasing the output signal as a function of the noise intensity. Like stochastic resonance, vibrational resonance manifests itself in the enhancement of a weak periodic signal through a high-frequency periodic field, instead of through noise as in the case of stochastic resonance.
One of the greatest achievements of twentieth-century physics was establishing a deep relationship between deterministic and random phenomena. The widely studied phenomena of "deterministic chaos" and "stochastic resonance" might sound contradictory, each consisting of a half-deterministic and a half-random term. In addition to stochastic resonance, another exciting phenomenon is deterministic chaos, which appears in equations without any random force. Deterministic chaos means an exponential divergence in time of the solutions for even the smallest change in the initial conditions. Therefore, there exists a close connection between determinism and randomness, even though they are apparently different forms of behavior [17].
The dynamic equation of motion of a bistable underdamped one-dimensional oscillator driven by a multiplicative random force ξ(t), an additive random force η(t), and two periodic forces, A sin(ωt) and C sin(Ωt), has the following form:

d²x/dt² + γ dx/dt + ω₀² x + ξ(t) x + b x³ = η(t) + A sin(ωt) + C sin(Ωt)   (5.72)

The dynamic resonance mentioned above corresponds to γ = b = ξ = η = C = 0 and ω → ω₀. Let us consider some other limiting cases of Eq. (5.72).
1. Brownian motion (ω₀ = b = A = C = 0) has been studied most widely, with many applications. The equilibrium distribution comes from the balance of two contrary processes: the random force, which tends to increase the velocity of the Brownian particle, and the damping force, which tries to stop the particle [1].
2. The double-well oscillator with additive noise (ξ = A = C = 0) and small damping shows two or three peaks in the power spectrum (the Fourier component of the correlation function), describing fluctuational transitions between the two stable points of the potential, small intra-well vibrations and over-the-barrier vibrations [18].
3. Stochastic resonance (SR) in overdamped (d²x/dt² = ξ = C = 0) and underdamped (ξ = C = 0) oscillators is a very interesting and counterintuitive phenomenon, in which the noise increases a weak input signal. SR occurs when a deterministic time scale of the external periodic field is synchronized with a stochastic time scale determined by the Kramers transition rate over the barrier.
4. Stochastic resonance in a linear overdamped oscillator (d²x/dt² = η = b = C = 0), as distinct from the nonlinear case, allows an exact solution [19, 20]. However, this effect occurs only when the multiplicative noise ξ(t) is colored and not white.
5. Vibrational resonance (ξ = η = 0), which occurs in a deterministic system, manifests itself in the enhancement of a weak periodic signal through a high-frequency periodic field, instead of through noise, as in the case of stochastic resonance.

5.6.1 Stochastic Resonance

Noise, which always plays a destructive role, appears here as a constructive force, increasing the output signal as a function of the noise intensity. This phenomenon was proposed as the explanation of the periodicity of the ice ages [21, 22] and has found many applications [23].
The standard definition of stochastic resonance (SR) is the non-monotonic
dependence of an output signal, or some function of it, as a function of some
characteristic of the noise or of the periodic signal [23]. At first glance, it appears
that all three ingredients, nonlinearity, periodic forcing and random forcing, are
necessary for the appearance of SR. However, it has become clear that SR is
generated not only in a typical two-well system but also in a periodic structure
[24]. Moreover, SR occurs even when each of these ingredients is absent. Indeed,
SR exists in linear systems when the additive noise is replaced by nonwhite
multiplicative noise [20]. Deterministic chaos may induce the onset of SR instead
of a random force [23]. Finally, the periodic signal may be replaced by a constant
force in underdamped systems [25].
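The symmetric dichotomous noise used throughout this section is easy to generate on a time grid. A sketch follows; the flip-per-step construction and the correlation check against the standard result ⟨ξ(t)ξ(t+s)⟩ = σ² e^{−2νs} for switching rate ν are illustrative assumptions of this example, not formulas from the text.

```python
import numpy as np

def telegraph(sigma=1.0, nu=1.0, dt=0.01, n=200_000, seed=2):
    """Grid sample of symmetric dichotomous noise: states +-sigma,
    independent switching with probability nu*dt per step."""
    rng = np.random.default_rng(seed)
    flips = rng.random(n) < nu * dt
    path = sigma * np.cumprod(np.where(flips, -1.0, 1.0))
    return path * rng.choice([-1.0, 1.0])     # random initial state

xi = telegraph()
lag = 50                                      # time lag s = 0.5
emp = np.mean(xi[:-lag] * xi[lag:])           # empirical <xi(t) xi(t+s)>
theory = np.exp(-2 * 1.0 * 0.5)               # sigma^2 exp(-2 nu s)
print(emp, theory)
```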
Consider the linearized equation (5.9) with = 0 of an oscillator with random
mass subject to an external periodic field,
  d2x dx
1 + 2 + (t) 2
+ 2 + 2 x = A sin ( t) (5.73)
dt dt
Repeating the procedure leading to Eq. (5.25), one obtains a fourth-order differential
equation for x
 2  d4 d3 d2 d
1+2 +2 4
x + A3 3 x + A2 2 x + A1 x + A0 x =
dt dt dt dt
 2 2   
+ 2 1 + 2 + 2 A sin ( t) + + 2 1 + 2 + 2 A cos ( t)
(5.74)

where

A3 = 2 ( + 2 ) + 2 1 + 2 + 2 1 + 2

A2 = 2 ( + ) + 1 + 2 2 2 + + 3 + 2 2 + 2
 
A1 = 1 + 2 + 2 + 2 2 +
 
A0 = 2 2 + + 2 1 + 2 + 2 (5.75)

In a similar way, one can obtain the equation for the second moment ⟨x²⟩ associated with Eq. (5.73), which is transformed into six equations for the six variables ⟨x²⟩, ⟨y²⟩, ⟨xy⟩, ⟨ξx²⟩, ⟨ξy²⟩, and ⟨ξxy⟩, but we shall not write down these cumbersome equations.
Analogously to the cases of random frequency and random damping [26], we seek the solution of Eq. (5.74) in the form

x = a sin(ωt + φ)   (5.76)

One easily finds

a = [(f₅² + f₆²)/(f₇² + f₈²)]^{1/2};   φ = tan⁻¹[(f₅f₇ + f₆f₈)/(f₅f₈ − f₆f₇)]   (5.77)

with

f5 = f4 f2 2 A; f6 = f3 A;

f7 = 3 2 f2 + f1 f3 2 2 2 2 f4 + 2 f3

f8 = 2 f4 2 2 f2 + f1 f4 + 2 f3 2 2 2 + 4 f1 f2 2 2
f1 = 1 + 2 ; f2 = 1 + 2 + 2 ; f3 = + 2 f2 ; f 4 = 2 + ( + f 2 )
(5.78)
One can compare Eqs. (5.76)(5.78) with the equations for the first moment x,
obtained [26] for the cases of random frequency and random damping, respectively,
subject to symmetric dichotomous noise, and extended afterwards [27, 28] to the
case of asymmetric noise. All these equations are of fourth order with the same
dependence on the frequency of the external field but with a slightly different
dependence on the parameters of the noise.
The amplitude a of the output signal depends on the characteristics of the asymmetric dichotomous noise and on the amplitude A and the frequency ω of the input signal. The signal-to-noise ratio, which involves the second moments, is frequently used in the analysis of stochastic resonance. For simplicity, we call stochastic resonance the non-monotonic behavior of the ratio a/A of the

Fig. 5.1 Output–input ratio a/A as a function of the external frequency Ω for different eigenfrequencies, with the remaining oscillator and noise parameters fixed. Curves 1, 2, and 3 correspond to the eigenfrequency values 1.5, 1.3, and 1.0, respectively

amplitude of the output signal a to the amplitude A of the input signal (the output–input ratio, OIR).
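The OIR can also be estimated by direct simulation, extracting the output amplitude a by Fourier projection of a long steady-state trajectory onto sin(ωt) and cos(ωt). The sketch below uses an oscillator with dichotomous damping as a stand-in for the random-mass equation, whose coefficients are not reproduced here; the model and parameter values are illustrative.

```python
import numpy as np

def oir(A=0.1, w=1.0, gamma=0.2, omega=1.0, sigma=0.5, rate=1.0,
        dt=1e-3, T=200.0, seed=3):
    """a/A for x'' + 2 gamma (1 + xi) x' + omega^2 x = A sin(w t), xi dichotomous;
    the output amplitude a is extracted by Fourier projection at frequency w."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.arange(n) * dt
    x, v, xi = 0.0, 0.0, sigma
    xs = np.empty(n)
    for k in range(n):
        if rng.random() < rate * dt:
            xi = -xi
        acc = A * np.sin(w * t[k]) - 2 * gamma * (1 + xi) * v - omega**2 * x
        x, v = x + v * dt, v + acc * dt
        xs[k] = x
    tail = slice(n // 2, n)                   # discard the transient
    c = 2 * np.mean(xs[tail] * np.cos(w * t[tail]))
    s = 2 * np.mean(xs[tail] * np.sin(w * t[tail]))
    return np.hypot(c, s) / A

print(oir())
```

Without noise (σ = 0) and at resonance (w = ω) the projection recovers the textbook amplitude A/(2γω), i.e. OIR = 1/(2γω), which serves as a check of the extraction.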
Figures 5.1 and 5.2 show the dependence of the OIR on the external frequency Ω and confirm the existence of the phenomenon of stochastic resonance. Moreover, the presence of noise, which usually plays a destructive role, here results in an increase of the output signal, thereby improving the efficiency of the system in the amplification of a weak signal. In the absence of noise, the usual dynamic resonance occurs when the frequency of the external force approaches the eigenfrequency of the oscillator. Figures 5.1 and 5.2 show the Ω-dependence of the OIR for fixed values of the other parameters and different eigenfrequencies below 1 (Fig. 5.1) and above 1 (Fig. 5.2). The values of the maxima increase with a decrease of the eigenfrequency on both plots, although the positions of the maxima are shifted to the right with a decrease of the eigenfrequency for eigenfrequencies below 1 and to the left for those above 1.

5.6.2 Vibrational Resonance

Like stochastic resonance, vibrational resonance manifests itself in the enhancement of a weak periodic signal through a high-frequency periodic field, instead of through noise as in the case of stochastic resonance. The deterministic equation of motion then has the following form:

d²x/dt² + γ dx/dt − ω₀² x + b x³ = A sin(ωt) + C sin(Ωt).   (5.79)

Fig. 5.2 Output–input ratio a/A as a function of the external frequency Ω for different eigenfrequencies, with the remaining oscillator and noise parameters fixed. Curves 1, 2, and 3 correspond to the eigenfrequency values 0.1, 0.5, and 0.9, respectively

Equation (5.79) describes an oscillator moving in a symmetric double-well potential V(x) = −ω₀²x²/2 + bx⁴/4 with a maximum at x = 0 and two minima ±x_m, the depth d of the wells being

x_m = ±(ω₀²/b)^{1/2},   d = ω₀⁴/(4b)   (5.80)
4

The amplitude of the output signal as a function of the amplitude C of the high-frequency field has a bell shape, showing the phenomenon of vibrational resonance. For ω close to the frequency ω₀ of the free oscillations, there are two resonance peaks, whereas for smaller ω, there is only one resonance peak. These different results correspond to two different oscillatory processes: jumps between the two wells, and oscillations inside one well.
Assuming that Ω >> ω, resonance-like behavior (vibrational resonance [29]) manifests itself in the response of the system at the low frequency ω, which depends on the amplitude C and the frequency Ω of the high-frequency signal. The latter plays a role similar to that of noise in SR. If the amplitude C is larger than the barrier height d, the field during each half-period π/Ω transfers the system from one potential well to the other. Moreover, the two frequencies ω and Ω are similar to the frequencies of the periodic signal and of the Kramers rate of jumps between the two minima of the underdamped oscillator. Therefore, by choosing an appropriate relation between the input signal A sin(ωt) and the amplitude C of the large signal (or the strength of the noise) one can obtain a non-monotonic dependence of the output signal on the amplitude C (vibrational resonance) or on the noise strength (stochastic resonance). To put this another way [30], both the noise in SR and the high-frequency signal in vibrational resonance change the parameters of the system response to a low-frequency signal.
Let us now pass to an approximate analytical solution of Eq. (5.79). In accordance with the two time scales in this equation, we seek a solution of Eq. (5.79) in the form

x(t) = y(t) − (C/Ω²) sin(Ωt)   (5.81)

where the first term varies significantly only over long times, while the second term varies much more rapidly. On substituting Eq. (5.81) into (5.79), one can average over a single cycle of sin(Ωt). Then odd powers of sin(Ωt) vanish upon averaging, while the sin²(Ωt) term gives 1/2. In this way, one obtains the following equation for y(t):

d²y/dt² + γ dy/dt − [ω₀² − 3bC²/(2Ω⁴)] y + b y³ = A sin(ωt)   (5.82)

with

y₀ = 0;   y_m = ±{[ω₀² − 3bC²/(2Ω⁴)]/b}^{1/2};   d = [ω₀² − 3bC²/(2Ω⁴)]²/(4b)   (5.83)

One can say that Eq. (5.82) is the coarse-grained version (with respect to time) of Eq. (5.79). For 3bC²/(2Ω⁴) > ω₀², the phenomenon of dynamic stabilization [31] occurs: the high-frequency external field transforms the previously unstable position x = 0 into a stable position.
A resonance in the linearized equation (5.82) occurs when [32]

ω² = ω₀² − 3bC²/(2Ω⁴) + 3bA²/(4Ω²ω²)   (5.84)

For an oscillator with random mass one has to repeat the preceding analysis of Eq. (5.79), based on dividing its solution into the two time scales (Eq. (5.81)), followed by the linearization of Eq. (5.82) for the slowly changing solution. The subsequent analysis of an oscillator equation with one periodic force is quite analogous to the analysis of Eq. (5.73), which describes the stochastic resonance phenomenon.
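The dynamic-stabilization statement above is easy to verify numerically: integrate the double-well oscillator with only the high-frequency drive (A = 0) and monitor the slow variable y = x + (C/Ω²) sin(Ωt). A sketch, taking ω₀ = b = 1 and illustrative values of my own choosing for the remaining parameters:

```python
import numpy as np

def slow_variable(C, Omega=20.0, gamma=0.5, dt=1e-3, T=40.0, x0=0.1):
    """RK4 for x'' + gamma x' - x + x^3 = C sin(Omega t) (double well, A = 0);
    returns the slow variable y(t) = x(t) + (C/Omega^2) sin(Omega t)."""
    def f(t, s):
        x, v = s
        return np.array([v, C * np.sin(Omega * t) - gamma * v + x - x**3])
    s, t, n = np.array([x0, 0.0]), 0.0, int(T / dt)
    ys = np.empty(n)
    for k in range(n):
        k1 = f(t, s); k2 = f(t + dt/2, s + dt/2 * k1)
        k3 = f(t + dt/2, s + dt/2 * k2); k4 = f(t + dt, s + dt * k3)
        s = s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += dt
        ys[k] = s[0] + (C / Omega**2) * np.sin(Omega * t)
    return ys

y_driven = slow_variable(C=450.0)   # 3 C^2 / (2 Omega^4) ~ 1.9 > 1: stabilized
y_free = slow_variable(C=0.0)       # no drive: x falls into one of the wells
print(np.mean(np.abs(y_driven[-10000:])), abs(y_free[-1]))
```

With C large enough that 3C²/(2Ω⁴) > 1, the slow variable stays near the formerly unstable point x = 0, while without the drive the trajectory relaxes into one of the wells near ±1.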

5.7 Conclusions

We considered a new type of stochastic oscillator, one which has a random mass. An example is Brownian motion with adhesion, where the surrounding molecules not only collide with the Brownian particle, inducing a zigzag motion, but also adhere to it for a random period of time, thereby increasing the mass of the Brownian particle. The first two moments were found for dichotomous random noise. An analysis was performed of the stochastic and vibrational resonances, which shows that deterministic and random phenomena are complementary and not contradictory. Due to its relevance to physics, chemistry, biology, and engineering, the model of an oscillator with random mass will find many applications in the future.

References

1. Gitterman, M.: The Noisy Oscillator: The First Hundred Years, from Einstein Until Now.
World Scientific, Singapore (2005)
2. Gitterman, M.: J. Phys. Conf. Ser. 012049 (2010)
3. Luczka, J., Hanggi, P., Gadomski, A.: Phys. Rev. E 51, 5762 (1995)
4. Sewbawe Abdalla, M.: Phys. Rev. A 34, 4598 (1986)
5. Portman, J., Khasin, M., Shaw, S.W., Dykman, M.I.: Bull. APS, March Meeting 2010
6. Shapiro, V.E., Loginov, V.M.: Phys. A 91, 563 (1978)
7. Hwalisz, L., Jung, P., Hanggi, P., Talkner, P., Schimansky-Geier, L.: Z. Phys. B 77, 471 (1989)
8. Arnold, L.: Random Dynamic Systems. Springer, Berlin (1998)
9. Leprovost, N., Aumaitre, S., Mallick, K.: Eur. Phys. J. B 49, 453 (2006)
10. Kitahara, K., Horsthemke, W., Lefever, R.: Phys. Lett. A 70, 377 (1979); Progr. Theor. Phys.
64, 1233 (1980)
11. Klyatskin, V.I.: Radiophys. Quant. Electron. 20, 381 (1977)
12. Berdichevsky, V., Gitterman, M.: Phys. Rev. E 60, 1494 (1999)
13. Sasagawa, F.: Progr. Theor. Phys. 69, 790 (1983)
14. Ouchi, K., Horita, T., Fujisaka, H.: Phys. Rev. E 74, 031106 (2006)
15. Jia, Y., Zheng, X.-P., Hu, X.-M., Li, J.-R.: Phys. Rev. E 63, 031107 (2001)
16. Ke, S.Z., Wu, D.J., Cao, L.: Eur. Phys. J. B 12, 119 (1999)
17. Gitterman, M.: J. Phys. A 23, 119 (2002)
18. Dykman, M.I., Mannella, R., McClintock, P.V.E., Moss, F., Soskin, M.: Phys. Rev. A 37, 1303 (1988)
19. Fulinski, A.: Phys. Rev. E 52, 4523 (1995)
20. Berdichevsky, V., Gitterman, M.: Europhys. Lett. 36, 161 (1996)
21. Benzi, R., Sutera, A., Vulpiani, A.: J. Phys. A 14, L453 (1981)
22. Nicolis, G.: Tellus 34, 1 (1982)
23. Gammaitoni, L., Hanggi, P., Jung, P., Marchesoni, F.: Rev. Mod. Phys. 70, 223 (1998)
24. Stocks, N.G., Stein, N.D., McClintock, P.V.E.: J. Phys. A 26, L385 (1993)
25. Marchesoni, F.: Phys. Lett. A 231, 61 (1997)
26. Gitterman, M.: Phys. A 352, 309 (2005)
27. Jiang, S.-Q., Wu, B., Gu, T.-X.: J. Electr. Sci. China 5(4), 344 (2007)
28. Jiang, S., Guo, F., Zhou, Y., Gu, T.: In: Communications, Circuits and Systems, 2007. ICCCAS 2007, pp. 1044–1047.
29. Landa, P.S., McClintock, P.V.E.: J. Phys. A 33, L433 (2000)
30. Braiman, Y., Goldhirsch, I.: Phys. Rev. Lett. 66, 2545 (1991)
31. Kim, Y., Lee, S.Y., Kim, S.Y.: Phys. Lett. A 275, 254 (2000)
32. Gitterman, M.: J. Phys. A 34, L355 (2001)
Chapter 6
Numerical Study of Energetic Stability
for Harmonic Oscillator with Fluctuating
Damping Parameter

Roman V. Bobryk

Abstract A harmonic oscillator with a fluctuating damping parameter is considered. The fluctuation is modelled by three types of zero-mean random processes with the same correlation function: the Ornstein–Uhlenbeck process, the telegraphic process and the sine-Wiener process. Efficient numerical procedures are introduced for obtaining energetic stability diagrams in these cases of random parameter.

Keywords Bounded noises · Sine-Wiener process · Ornstein–Uhlenbeck process · Telegraphic noise · Linear oscillators

6.1 Introduction

Gaussian processes play a very important role in the theory of stochastic processes, and without them the theory would be incomplete. However, they have two features that may not fit real modelling. The first is the lack of a boundedness property: a Gaussian process may take arbitrarily large values with positive probability. The second is the unimodality of the Gaussian distribution. There are, of course, random processes free of these features, but analytical studies of them are much more complicated than in the Gaussian case. In many cases the above features do not have a significant impact, but there are important situations where the assumption of Gaussianity is not appropriate. Interesting and important examples of such cases are noise-induced transitions [1–3], stochastic resonance [4] and Brownian motors [5].
Noise-induced transitions are very interesting phenomena in nonlinear stochastic systems [6]. Most often in such systems Gaussian excitations have been considered,

R.V. Bobryk ()


Institute of Mathematics, Jan Kochanowski University, Swietokrzyska 15, 25-406 Kielce, Poland
e-mail: roman.bobryk@ujk.edu.pl

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 99


Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5 6, Springer Science+Business Media New York 2013

particularly the white noise. It is pointed out in [1] that non-Gaussian excitations may lead to new important effects. A simple example of a non-Gaussian process is the sine-Wiener (SW) process:

ξ(t) = √2 σ sin[√(2/τ) w(t)],   (6.1)

where w(t) is the standard Wiener process, and σ and τ are the intensity and the correlation time, respectively. Using the well-known properties of the Wiener process and the Euler representation of the sine function, one can easily show that

E[ξ(t)] = 0,   E[ξ(t)ξ(s)] = σ² exp(−(t−s)/τ) [1 − exp(−4s/τ)],   t ≥ s.   (6.2)

Therefore the SW process has the following correlation function in the stationary regime:

K(t) = E[ξ(t+u)ξ(u)] = σ² exp(−|t|/τ).   (6.3)

One can make a modification of the SW process by introducing a random phase η:

ξ(t) = √2 σ sin[η + √(2/τ) w(t)],   (6.4)

where η is a random variable uniformly distributed in [0, 2π] and independent of w(t). This process has the correlation function (6.3) for all t, so the process (6.4) can be considered a stationary version of the SW process. It is important to note that the process (6.4) is non-Gaussian, but it has the same mean and correlation function as the well-known Ornstein–Uhlenbeck (OU) process, which is a very important example of a Gaussian process. Therefore these processes are useful models for comparing the effects of Gaussianity and non-Gaussianity in noise-induced transitions.
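Both the boundedness |ξ| ≤ √2 σ and the stationary correlation (6.3) of the process (6.4) can be verified by a short Monte-Carlo check (a sketch; σ = τ = 1 and the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, tau, t, s = 1.0, 1.0, 2.0, 1.0            # measure E[xi(t) xi(t+s)]
paths = 40_000

eta = rng.uniform(0.0, 2 * np.pi, paths)          # random phase of Eq. (6.4)
w_t = rng.normal(0.0, np.sqrt(t), paths)          # w(t)
w_ts = w_t + rng.normal(0.0, np.sqrt(s), paths)   # w(t+s): independent increment

def sw(phase, w):
    return np.sqrt(2.0) * sigma * np.sin(phase + np.sqrt(2.0 / tau) * w)

xi1, xi2 = sw(eta, w_t), sw(eta, w_ts)
print(np.max(np.abs(xi1)))                        # never exceeds sqrt(2)*sigma
print(np.mean(xi1 * xi2), sigma**2 * np.exp(-s / tau))  # compare with K(s) of (6.3)
```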
Such investigations have been conducted for some nonlinear systems, namely for the so-called genetic model [2] and for the well-known Duffing oscillator [7]. In the case of the genetic model it has been shown that the SW noise enhances the transitions: the stationary probability density function is always bimodal if it is bimodal in the OU noise case, but the converse is not true. A reentrance transition phenomenon can be observed for the Duffing oscillator with the SW noise, i.e. for the same noise intensity the probability density function has an identical modality for both small and large correlation times but a different modality for moderate correlation times. This phenomenon is not observed in the OU noise case. It is worth noting that the SW noise has recently proved useful in the modelling of tumour growth [3].
In this paper we consider the influence of the SW and OU noises on a harmonic oscillator. The harmonic oscillator is a very important model system which is frequently discussed in undergraduate courses of mathematics, physics, chemistry and engineering. The study of a noisy harmonic oscillator started more than a hundred years ago with the well-known paper of A. Einstein on Brownian motion [8], and since then it has attracted the attention of many researchers (see, e.g., [9] and references therein).
We consider a harmonic oscillator described by the following equation:

d²x/dt² + 2γ[1 + ξ(t)] dx/dt + ω² x = 0,   t > 0,   (6.5)

where γ > 0 is the damping parameter, ω is the natural frequency and ξ(t) is a zero-mean stationary stochastic process with the correlation function (6.3). Three types of random noise ξ(t) with the correlation (6.3) are considered, namely the SW and OU processes and the well-known telegraphic (TG) process with the two states {−σ, σ}.
We are interested in the mean-square asymptotic stability (energetic stability) of (6.5) under such random excitations. If the excitation is a Gaussian white noise of the corresponding intensity, then the inequality

4 2 < 1 (6.6)

gives a necessary and sufficient condition for energetic stability if Eq. (6.5) is interpreted in the Stratonovich sense [10–12]. It is difficult to obtain analytically the necessary and sufficient conditions for stability if the excitation is not a white noise; in the case of the OU and SW noises it is practically impossible. Note that if τ tends to zero, then the considered excitations tend to a white noise. Therefore it is interesting to compare the limiting condition (6.6) with the stability conditions in the case of real fluctuations. The TG and SW processes are special cases of bounded processes, but the OU process is not in this class. It is important to investigate the implication of the boundedness of the excitation for the stability conditions. In this paper we present efficient numerical methods for investigating energetic stability. Stability diagrams for these three cases of excitation are presented.

6.2 Numerical Algorithms

Equation (6.5) implies that the vector

y(t) := [ (dx/dt)²,  x (dx/dt),  x² ]ᵀ

satisfies the following equation in R³:

dy/dt = A y + ξ(t) B y,   t > 0,   (6.7)

where

A = ( −4γ   −2ω²    0
        1    −2γ   −ω²
        0     2     0 ),      B = ( −4γ    0    0
                                      0   −2γ   0
                                      0     0    0 ).

It is known [13] that in the case of the TG excitation the mean E[y] of the solution of Eq. (6.7) satisfies the following system:

dE[y]/dt = A E[y] + B y₁,
dy₁/dt = −y₁/τ + A y₁ + σ² B E[y].   (6.8)

Therefore the problem of energetic stability for Eq. (6.5) with the TG noise is reduced to the eigenvalue problem for the coefficient matrix of the system (6.8). From a numerical point of view this is quite a simple task. Unfortunately, in the case of the OU and SW excitations the problem is more complicated, and we cannot obtain a closed system of equations for E[y].
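Stacking (E[y], y₁) turns the system (6.8) into a single linear equation ż = Mz with a 6 × 6 block matrix, so the stability test is one eigenvalue computation. A sketch, with A and B the moment matrices of Eq. (6.7); the parameter values in the example call are illustrative:

```python
import numpy as np

def tg_stable(gamma, omega, sigma, tau):
    """Energetic stability of Eq. (6.5) under telegraph noise: all eigenvalues
    of the coefficient matrix of system (6.8) must have negative real parts."""
    A = np.array([[-4*gamma, -2*omega**2, 0.0],
                  [1.0, -2*gamma, -omega**2],
                  [0.0, 2.0, 0.0]])
    B = np.diag([-4*gamma, -2*gamma, 0.0])
    M = np.block([[A, B],
                  [sigma**2 * B, A - np.eye(3) / tau]])
    return bool(np.max(np.linalg.eigvals(M).real) < 0)

print(tg_stable(1.0, 1.0, 0.2, 0.5), tg_stable(1.0, 1.0, 50.0, 0.5))
```

tg_stable returns True exactly when the second moments decay, i.e. when the spectral abscissa of M is negative.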
Let us first consider the SW noise case. Because we are interested in the behaviour of the system (6.5) for large t, we can deal with the stationary version (6.4). Using the Cameron–Martin formula for the density of the Wiener measure under translation [14], we can obtain for the mean E[y(t)] the following infinite hierarchy of linear differential equations (see [15] for details):

dE[y]/dt = A E[y] + B u₁,
du₁/dt = −u₁/(2τ) + A u₁ + B u₂ + (σ²/2) B E[y],
du_k/dt = −(k²/(2τ)) u_k + A u_k + B u_{k+1} + (σ²/4) B u_{k−1},   k = 2, 3, . . . ,
E[y(0)] = y(0),   u_k(0) = 0,   k = 1, 2, 3, . . . .   (6.9)

Here

u_k(t) := (σ^k/(√2 i)^k) exp{−k² t/(2τ)} ( E[e^{ikη} y(t; w(s) + iks/τ)] + (−1)^k E[e^{−ikη} y(t; w(s) − iks/τ)] ),   k ∈ N,

where y(t; w(s) ± iks/τ) is the solution of Eqs. (6.4), (6.7) with w(t) replaced by w(t) ± ikt/τ. In this way we reduce the problem of energetic stability for Eq. (6.5) to the asymptotic stability of this hierarchy. Note that in the case of the nonstationary sine-Wiener process (6.1) we again obtain the chain (6.9), but with different initial conditions. Therefore the conditions of stability in the nonstationary SW and stationary SW cases have to be the same.
For computational purposes we have to close the hierarchy, and a natural way to do that consists in neglecting the term u_{n+1} in the equation for u_n. The index n is then called the truncation index. After applying this procedure we obtain a closed system of first-order linear differential equations with constant coefficients. Note that this procedure converges quickly [16]. It is well known that such a system is asymptotically stable if and only if the matrix of its coefficients has all eigenvalues with negative real parts. For a sufficiently large truncation index, the asymptotic stability or instability of the system determines the mean-square stability or instability of Eq. (6.5).
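Concretely, the truncated SW hierarchy is a block-tridiagonal matrix whose spectral abscissa (largest eigenvalue real part) decides stability. A sketch that assembles it from the terms of the hierarchy (6.9) as displayed above (truncation index n; parameter values illustrative):

```python
import numpy as np

def sw_spectral_abscissa(gamma, omega, sigma, tau, n=20):
    """Largest eigenvalue real part of the SW hierarchy (6.9) truncated at
    index n (u_{n+1} neglected); negative means mean-square stability."""
    A = np.array([[-4*gamma, -2*omega**2, 0.0],
                  [1.0, -2*gamma, -omega**2],
                  [0.0, 2.0, 0.0]])
    B = np.diag([-4*gamma, -2*gamma, 0.0])
    m = 3 * (n + 1)                    # blocks: E[y], u_1, ..., u_n
    M = np.zeros((m, m))
    for k in range(n + 1):
        i = 3 * k
        M[i:i+3, i:i+3] = A - (k**2 / (2 * tau)) * np.eye(3)
        if k < n:                      # coupling to u_{k+1}
            M[i:i+3, i+3:i+6] = B
        if k >= 1:                     # coupling to u_{k-1} (sigma^2/2 for k = 1)
            M[i:i+3, i-3:i] = (sigma**2 / 2 if k == 1 else sigma**2 / 4) * B
    return float(np.max(np.linalg.eigvals(M).real))

print(sw_spectral_abscissa(1.0, 1.0, 0.3, 0.5))
```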
Now consider the OU noise case. Let us rewrite Eq. (6.7) in the integral form:

y(t) = A ∫₀ᵗ y(t₁) dt₁ + B ∫₀ᵗ ξ(t₁) y(t₁) dt₁ + y(0).

The solution of this equation is a functional of ξ(t), and there exist functional derivatives of all orders

δ^k y(t) / (δξ(s₁) ··· δξ(s_k)),

which satisfy the equation

δ^k y(t)/(δξ(s₁)···δξ(s_k)) = A ∫₀ᵗ [δ^k y(t₁)/(δξ(s₁)···δξ(s_k))] dt₁
  + B ∫₀ᵗ ξ(t₁) [δ^k y(t₁)/(δξ(s₁)···δξ(s_k))] dt₁
  + B Σ_{i=1}^{k} θ(t − s_i) δ^{k−1} y(s_i)/(δξ(s₁)···δξ(s_{i−1}) δξ(s_{i+1})···δξ(s_k)),   k = 1, 2, 3, . . . ,   (6.10)

with θ(t) standing for the Heaviside step function.


Now we use the Donsker–Furutsu–Novikov formula [17]:

E[ξ(t) R[ξ]] = ∫₀ᵗ K(t − s) E[δR[ξ]/δξ(s)] ds,

where R[ξ] is a functional of ξ(t). Applying this formula to Eqs. (6.7), (6.10), we obtain for E[y(t)] the following infinite hierarchy of integro-differential equations:

dE[y]/dt = A E[y] + B ∫₀ᵗ K(t − s) E[δy(t)/δξ(s)] ds,

E[δ^k y(t)/(δξ(s₁)···δξ(s_k))] = A ∫₀ᵗ E[δ^k y(t₁)/(δξ(s₁)···δξ(s_k))] dt₁
  + B ∫₀ᵗ ∫₀^{t₁} K(t₁ − s_{k+1}) E[δ^{k+1} y(t₁)/(δξ(s₁)···δξ(s_{k+1}))] ds_{k+1} dt₁
  + B Σ_{i=1}^{k} θ(t − s_i) E[δ^{k−1} y(s_i)/(δξ(s₁)···δξ(s_{i−1}) δξ(s_{i+1})···δξ(s_k))],   k = 1, 2, 3, . . . .   (6.11)

Let us introduce the substitution:

v_k(t) := ((√2 σ)^k / τ^k) ∫₀ᵗ ··· ∫₀ᵗ exp{ −Σ_{j=1}^{k} (t − s_j)/τ } E[δ^k y(t)/(δξ(s₁)···δξ(s_k))] ds₁ ··· ds_k,   k = 1, 2, 3, . . . .

Using this substitution, we rewrite the hierarchy (6.11) as the following infinite hierarchy of coupled linear differential equations [18]:

dE[y]/dt = A E[y] + B v₁,
dv₁/dt = −v₁/τ + A v₁ + B v₂ + σ² B E[y]/τ,
dv_k/dt = −k v_k/τ + A v_k + B v_{k+1} + k σ² B v_{k−1}/τ,   k = 2, 3, . . . ,
E[y(0)] = y(0),   v_k(0) = 0,   k = 1, 2, 3, . . . .

It is important that the coefficients in this hierarchy are constant. In the equation for v_n we neglect v_{n+1}; we then obtain a closed set of first-order linear differential equations with constant coefficients. It is proved [19] that the solution of this closed set converges to E[y(t)] as n → ∞. Stability of the closed set is determined by the signs of the real parts of the eigenvalues of its coefficient matrix.
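The OU hierarchy truncates in the same way; the sketch below assembles the truncated matrix from the terms of the hierarchy above and lets one check directly that the spectral abscissa has converged in the truncation index n (parameter values illustrative):

```python
import numpy as np

def ou_spectral_abscissa(gamma, omega, sigma, tau, n=25):
    """Largest eigenvalue real part of the truncated OU hierarchy
    (v_{n+1} neglected in the equation for v_n)."""
    A = np.array([[-4*gamma, -2*omega**2, 0.0],
                  [1.0, -2*gamma, -omega**2],
                  [0.0, 2.0, 0.0]])
    B = np.diag([-4*gamma, -2*gamma, 0.0])
    m = 3 * (n + 1)
    M = np.zeros((m, m))
    for k in range(n + 1):
        i = 3 * k
        M[i:i+3, i:i+3] = A - (k / tau) * np.eye(3)
        if k < n:
            M[i:i+3, i+3:i+6] = B
        if k >= 1:                     # k sigma^2 B / tau couples v_k to v_{k-1}
            M[i:i+3, i-3:i] = (k * sigma**2 / tau) * B
    return float(np.max(np.linalg.eigvals(M).real))

for n in (10, 20, 30):                 # the truncation converges quickly
    print(n, ou_spectral_abscissa(1.0, 1.0, 0.3, 0.5, n=n))
```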

6.3 Stability Diagrams on the Plane

Here we present stability diagrams for the three considered cases of random excitation. They are obtained numerically by applying the methods of the previous section. In the case of the SW and OU noises the truncation index n is chosen in such a way that a further increase does not change the diagrams. In Fig. 6.1


Fig. 6.1 Energetic stability diagrams for Eq. (6.5) for the parameter values 0.5 and 1 with the third parameter in [0.01, 3] (left), and for the values 1 and 1 with the third parameter in [0.01, 3] (right). Dotted, solid and dashed curves separate the stability and instability regions for the TG, SW and OU noise cases, respectively


Fig. 6.2 As in Fig. 6.1 but for the values = 0.5, = 3 and [0.001, 3] (left) and for the values
= = 1, [0.1, 3] (right)

they are presented in two parameter spaces (left and right panels). Dotted, solid and dashed curves separate the stability and instability regions for the TG, SW and OU noise cases, respectively. One can observe significant differences in the stability regions as the correlation time grows.
In Fig. 6.2 the stability diagrams are shown in two further parameter spaces (left and right panels). We also present here the stability curve obtained from condition (6.6) (dotted-dashed curve). It is interesting to note that the stability curve in this limiting case lies below the other curves. In the OU noise case this fact was rigorously proven in [20]; in a similar way, the property can also be proven in the TG and SW noise cases. The numerical method is quite efficient: the stability diagrams for the truncation indices n = 30 and n = 90 are the same.

6.4 Three-Dimensional Stability Diagrams

In this section we present stability diagrams in three-dimensional parameter spaces. They are based on the numerical methods of Sect. 6.2.


Fig. 6.3 Energetic stability diagram for Eq. (6.5) with the three types of noise for the fixed parameter value 1. The surface separates the stability (below) and instability (above) regions. Upper-left panel: TG noise; upper-right panel: SW noise; lower panel: OU noise

6.4.1 Stability Diagrams in ( , , ) Space

In Fig. 6.3 the stability diagrams to Eq. (6.5) are shown in the space ( , , ) for the
value = 1 and for the TG, SW and OU excitations.

6.4.2 Stability Diagrams in ( , , ) Space

In Fig. 6.4 the stability diagrams to Eq. (6.5) are shown in the space ( , , ) for the
value = 0.5 and for the TG, SW and OU noise cases.

6.4.3 Stability Diagrams in ( , , ) Space

In Fig. 6.5 the stability diagrams to Eq. (6.5) are shown in the space ( , , ) for the
value = 1 and for the TG, SW and OU noise cases.


Fig. 6.4 Energetic stability diagrams for Eq. (6.5) with the three types of noise for = 0.5.
Each surface separates the stability (below) and instability (above) regions. Upper-left panel:
TG noise; upper-right panel: SW noise; lower panel: OU noise


Fig. 6.5 Energetic stability diagrams for Eq. (6.5) with the three types of noise for = 1.
Each surface separates the stability (below) and instability (above) regions. Upper-left panel:
TG noise; upper-right panel: SW noise; lower panel: OU noise

6.5 Conclusion

In this paper we have presented the energetic stability diagrams for the harmonic
oscillator with a random damping parameter. Three cases of zero-mean random
excitation with the same correlation are considered. It is shown that the random
excitations can have an important influence on the stability regions, especially if the
correlation time is not small. It follows from the numerical computations that the
stability regions in the case of the TG excitation are larger than in the case of the SW
one. On the other hand, the stability regions in the case of the SW excitation are larger
than in the OU one. It is interesting to note the similarity of the surfaces which separate
the stability and instability regions in all cases of the excitation. The proposed numerical
methods are quite efficient and can be applied to other stability problems.

References

1. Wio, H.S., Toral, R.: Phys. D 193, 161 (2004)


2. Bobryk, R.V., Chrzeszczyk, A.: Phys. A 358, 263 (2005)
3. d'Onofrio, A.: Phys. Rev. E 81, 021923 (2010); d'Onofrio, A., Gandolfi, A.: Phys. Rev. E 82,
061901 (2010)
4. Fuentes, M., Toral, R., Wio, H.: Phys. A 295, 114 (2001); Fuentes, M.A., Tessone, C., Wio,
H.S., Toral, R.: Fluct. Noise Lett. 3, L365 (2003)
5. Bouzat, S., Wio, H.S.: Eur. Phys. J. B 41, 97 (2004)
6. Horsthemke, W., Lefever, R.: Noise-Induced Transitions: Theory and Applications in Physics,
Chemistry and Biology. Springer, Berlin (2006)
7. Bobryk, R.V., Chrzeszczyk, A.: Nonlinear Dyn. 51, 541 (2008)
8. Einstein, A.: Ann. Phys. 14, 549 (1905)
9. Gitterman, M.: The Noisy Oscillator: The First Hundred Years, From Einstein Until Now.
World Scientific, Singapore (2005)
10. Gihman, I.I., Skorokhod, A.V.: Stochastic Differential Equations. Springer, Berlin (1972)
11. Arnold, L.: Stochastic Differential Equations: Theory and Applications. Wiley, New York
(1974)
12. Mendez, V., Horsthemke, W., Mestres, P., Campos, D.: Phys. Rev. E 84, 041137 (2011)
13. Morrisson, J.A., McKenna, J.: SIAM-AMS Proc. 6, 97 (1973)
14. Cameron, R.H., Martin, W.T.: Ann. Math. 45, 386 (1944)
15. Bobryk, R.V., Stettner, L.: Syst. Contr. Lett. 54, 781 (2005); Bobryk, R.V.: J. Sound Vib. 305,
317 (2007); Bobryk, R.V.: Appl. Math. Comput. 198, 544 (2008); Bobryk, R.V., Chrzeszczyk,
A.: Phys. Lett. A 373, 3532 (2009)
16. Bobryk, R.V.: J. Math. Anal. Appl. 329, 703 (2007)
17. Donsker, M.D.: In: Proc. Conf. Theory Appl. Anal. Funct. Space, pp. 24. MIT Press,
Cambridge MA (1964); Furutsu, K.: J. Res. NBS D 67, 303 (1963); Novikov, E.A.: Sov. Phys.
JETP 20, 1290 (1965)
18. Bobryk, R.V.: Phys. A 184, 493 (1992); Bobryk, R.V., Stettner, L.: Stochast. Stochast. Rep. 67,
169 (1999)
19. Bobryk, R.V.: Ukrainian Math. J. 37, 443 (1985)
20. Bobryk, R.V.: Syst. Contr. Lett. 20, 227 (1993)
Chapter 7
A Moment-Based Approach to Bounded
Non-Gaussian Colored Noise

Hideo Hasegawa

Abstract A moment method (MM) is applied to the Langevin model for a
Brownian particle subjected to bounded non-Gaussian colored noise (NGCN).
Eliminating the components relevant to the NGCN, we derive the effective Langevin
equation, from which the stationary distribution function is obtained.
A comparison is made among the results of the MM, the universal colored noise
approximation, functional-integral methods, and direct simulations (DSs). Numerical
calculations show that the results of the MM are in fairly good agreement with those
derived by DSs.

Keywords Bounded noises · Non-Gaussian colored noise · Moment method ·
Universal colored noise approximation · Functional-integral methods

7.1 Introduction

Stochastic systems have been extensively studied with the use of the
Langevin model, where Gaussian white (or colored) noise is usually adopted because
of its computational simplicity. In recent years, however, there has been growing interest in
studying dynamical systems driven by non-Gaussian colored noise (NGCN). This
is motivated by the fact that NGCN is quite ubiquitous in natural phenomena. For
example, experimental results for crayfish and rat skin offer a strong indication that
there could be NGCN in these sensory systems [16, 20]. It has been theoretically
shown that the peak of the signal-to-noise ratio (SNR) in the stochastic resonance
for NGCN becomes broader than that for Gaussian noise [7]. This result has been
confirmed by an analog experiment [6]. Effects of NGCN on the mean first-passage

H. Hasegawa ()
Tokyo Gakugei University, 4-1-1, Nukui-kita machi, Koganei, Tokyo 184-8501, Japan
e-mail: hideohasegawa@goo.jp

A. dOnofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 109


Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5 7, Springer Science+Business Media New York 2013
110 H. Hasegawa

time [3], Brownian motors with ratchet potentials [4], a supercritical Hopf bifurcation
[23], and spike coherence in a Hodgkin–Huxley neuron [22] have been studied.
Stochastic systems with Gaussian colored noise are originally expressed by a
non-Markovian process, which is transformed into a Markovian one by extending
the number of variables and equations. The Fokker–Planck equation (FPE) for
colored noise includes the probability distribution function (PDF) expressed in
terms of multiple variables. We may transform this multivariate FPE to a univariate
FPE, or obtain the effective Langevin equation, with the use of approximation
methods such as the universal colored noise approximation (UCNA) [9, 15] and the
functional-integral methods (FIMs) [7, 8, 14, 21]. The purpose of this paper is to
study an application of a moment method (MM) to the Langevin model with NGCN
[13], which is simpler and more transparent than the UCNA and FIMs.
The paper is organized as follows. The MM is explained in Sect. 7.2, where
we derive the effective Langevin equation by eliminating the variables relevant to the
NGCN [13]. A comparison is made among the MM, UCNA, and FIMs in Sect. 7.3, where
results of direct simulations (DSs) are also presented. Section 7.4 is devoted to
the conclusion.

7.2 Moment Method

We consider a Brownian particle subjected to NGCN ε(t) and white noise ξ(t),
whose equations of motion are expressed by [5]

    dx/dt = F(x) + ε(t) + β ξ(t) + I(t),   (7.1)

    dε/dt = K(ε) + φ(t),   (7.2)

with

    K(ε) = −(1/τ) ε/[1 + (q − 1)(τ/α²) ε²].   (7.3)

Here F(x) = −U′(x), where U(x) expresses a potential; I(t) is an external input; τ signifies
the relaxation time; α and β denote the magnitudes of the noises; φ and ξ are zero-mean
white noises with correlations ⟨φ(t)φ(t′)⟩ = (α²/τ²) δ(t − t′), ⟨ξ(t)ξ(t′)⟩ = δ(t − t′), and
⟨φ(t)ξ(t′)⟩ = 0. The stationary PDF of the Langevin equation given by Eq. (7.2) has
been extensively discussed [1, 2, 11, 17] in the context of the nonextensive statistics
[18, 19]. The stationary PDF for ε is given by [1, 2, 11, 17]

    p(ε) ∝ [1 + (q − 1)(τ/α²) ε²]₊^(1/(1−q)),   (7.4)

where [x]₊ = x for x ≥ 0 and zero otherwise. Eq. (7.4) for q = 1.0 yields the Gaussian
PDF,

    p(ε) ∝ e^(−(τ/α²) ε²).   (7.5)
7 A Moment-Based Approach to Bounded Non-Gaussian Colored Noise 111

The PDF given by Eq. (7.4) for q > 1.0 is a non-Gaussian distribution with a long tail,
while that for q < 1.0 is a non-Gaussian distribution bounded in [−ε_c, ε_c] with
ε_c = α/√(τ(1 − q)). The expectation values of ε and ε² are given by

    ⟨ε⟩ = 0,  ⟨ε²⟩ = α²/[τ(5 − 3q)].   (7.6)
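As an illustration (ours, not part of the original chapter), the bounded character of the noise for q < 1 and the second moment (7.6) can be checked with a minimal Euler–Maruyama integration of Eqs. (7.2)–(7.3); the parameter values below are arbitrary, and the clipping of the state just inside ±ε_c is our own numerical guard, not part of the model:

```python
import numpy as np

# Euler-Maruyama sketch of Eqs. (7.2)-(7.3) for q < 1 (bounded noise).
# Parameter values are arbitrary; the clipping just inside +/- eps_c is
# our own guard against discrete-time overshoot of the singular
# restoring drift near the boundary.
rng = np.random.default_rng(0)
tau, alpha, q = 1.0, 0.5, 0.5
dt, n_burn, n_keep = 1e-3, 100_000, 600_000
eps_c = alpha / np.sqrt(tau * (1.0 - q))        # bound of the support

def K(e):
    # Eq. (7.3): K(eps) = -(1/tau) eps / [1 + (q - 1)(tau/alpha^2) eps^2]
    return -(e / tau) / (1.0 + (q - 1.0) * (tau / alpha**2) * e**2)

kicks = (alpha / tau) * np.sqrt(dt) * rng.standard_normal(n_burn + n_keep)
eps, samples = 0.0, np.empty(n_keep)
for i, dw in enumerate(kicks):
    eps = float(np.clip(eps + K(eps) * dt + dw, -0.999 * eps_c, 0.999 * eps_c))
    if i >= n_burn:
        samples[i - n_burn] = eps

var_theory = alpha**2 / (tau * (5.0 - 3.0 * q))  # Eq. (7.6)
```

For these parameters the sample variance of ε should approach α²/(3.5τ), and all samples remain inside (−ε_c, ε_c).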

The FPE for the distribution p(x, ε, t) in the Stratonovich representation is
expressed by

    ∂p/∂t = −(∂/∂x){[F(x) + ε + I(t)] p} + (β²/2) ∂²p/∂x²
            − (∂/∂ε)[K(ε) p] + (α²/2τ²) ∂²p/∂ε².   (7.7)

By using the MM for the Langevin model given by Eqs. (7.1) and (7.2), we obtain
the equations of motion [11, 12]

    d⟨x⟩/dt = ⟨F(x)⟩ + ⟨ε⟩ + I(t),   (7.8)

    d⟨ε⟩/dt = ⟨K(ε)⟩,   (7.9)

    d⟨x²⟩/dt = 2⟨x F(x)⟩ + 2⟨x ε⟩ + 2⟨x⟩ I(t) + β²,   (7.10)

    d⟨ε²⟩/dt = 2⟨ε K(ε)⟩ + α²/τ²,   (7.11)

    d⟨xε⟩/dt = ⟨ε F(x)⟩ + ⟨ε²⟩ + ⟨ε⟩ I(t) + ⟨x K(ε)⟩.   (7.12)
dt
In order to close the equations of motion within the second moment, we approximate
K(ε) by [21]

    K(ε) ≈ −ε/(τ r_q),  with  r_q = 2(2 − q)/(5 − 3q),   (7.13)

which is derived by replacing the ε² term in the denominator of Eq. (7.3) by
its expectation value, ε² → ⟨ε²⟩. Equation (7.2) then reduces to

    dε/dt = −ε/(τ r_q) + φ(t),   (7.14)

which generates Gaussian noise with a variance depending on q and τ, but not
non-Gaussian noise in the strict sense.

We consider means, variances, and covariances defined by

    μ_x = ⟨x⟩,  μ_ε = ⟨ε⟩,  γ_x = ⟨x²⟩ − ⟨x⟩²,  γ_ε = ⟨ε²⟩ − ⟨ε⟩²,  γ_xε = ⟨xε⟩ − ⟨x⟩⟨ε⟩.   (7.15)

When we expand Eqs. (7.8)–(7.12) as x = μ_x + δx and ε = μ_ε + δε around the mean
values μ_x and μ_ε, retaining contributions up to second order such as ⟨(δx)²⟩,
the equations of motion become [11, 12]

    dμ_x/dt = f₀ + f₂ γ_x + μ_ε + I(t),   (7.16)

    dμ_ε/dt = −μ_ε/(τ r_q),   (7.17)

    dγ_x/dt = 2(f₁ γ_x + γ_xε) + β²,   (7.18)

    dγ_ε/dt = −(2/(τ r_q)) γ_ε + α²/τ²,   (7.19)

    dγ_xε/dt = (f₁ − 1/(τ r_q)) γ_xε + γ_ε,   (7.20)

where f_ℓ = (1/ℓ!) ∂^ℓ F(μ_x)/∂x^ℓ.
When we adopt the stationary values for μ_ε, γ_ε, and γ_xε,

    μ_ε,s = 0,  γ_ε,s = r_q α²/(2τ),  γ_xε,s = r_q² α²/[2(1 − τ r_q f₁)],   (7.21)
the equations of motion for μ_x and γ_x become

    dμ_x/dt = f₀ + f₂ γ_x + I(t),   (7.22)

    dγ_x/dt = 2 f₁ γ_x + r_q² α²/(1 − τ r_q f₁) + β²,   (7.23)

where r_q is given by Eq. (7.13).
We may obtain the effective Langevin equation

    dx/dt = F(x) + α_eff η(t) + β ξ(t) + I(t),   (7.24)

with

    α_eff = α r_q/√(1 − τ r_q f₁),   (7.25)

where η(t) is a zero-mean Gaussian white noise with ⟨η(t)η(t′)⟩ = δ(t − t′), from which
Eqs. (7.22) and (7.23) are derived [11, 12]. Equations (7.24) and (7.25), which are the
main results of this study, clearly express the effect of the non-Gaussian colored noise:
the effective magnitude of noise α_eff depends on q and τ.
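A quick numerical sanity check of the effective equation is possible for the linear force F(x) = −x (so f₁ = −1 and I = 0): integrating Eq. (7.24) with an explicit Euler scheme, the stationary variance of x should approach (α_eff² + β²)/2. This sketch (ours) uses arbitrary parameter values:

```python
import numpy as np

# Euler integration of the effective Langevin equation (7.24) for the
# linear force F(x) = -x (f_1 = -1, I = 0); parameter values arbitrary.
rng = np.random.default_rng(1)
tau, alpha, beta, q = 0.5, 1.0, 0.5, 0.8
r_q = 2.0 * (2.0 - q) / (5.0 - 3.0 * q)               # Eq. (7.13)
alpha_eff = alpha * r_q / np.sqrt(1.0 + tau * r_q)    # Eq. (7.25), f_1 = -1
sigma = np.sqrt(alpha_eff**2 + beta**2)               # combined white-noise strength

dt, n = 5e-3, 400_000
kicks = sigma * np.sqrt(dt) * rng.standard_normal(n)
x, xs = 0.0, np.empty(n)
for i in range(n):
    x += -x * dt + kicks[i]                            # dx = -x dt + sigma dW
    xs[i] = x

var_pred = 0.5 * (alpha_eff**2 + beta**2)              # MM stationary variance
```

Note that α_eff < α here: the boundedness (q < 1) and the finite correlation time both reduce the effective noise strength.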

7.3 Discussion

We will compare the result of the MM with those of several analytical methods, such
as the universal colored noise approximation (UCNA) [9, 15] and the functional-integral
methods (FIM1 [21] and FIM2 [7, 8]).
(a) UCNA. Jung and Hänggi [9, 15] proposed the UCNA, interpolating between
the two limits τ = 0 and τ = ∞ of colored noise; it has been widely adopted
for studying the effects of Gaussian and non-Gaussian colored noises. Employing the
UCNA, we may derive the effective Langevin equation: taking the time derivative
of Eq. (7.1) with β = 0 and using Eq. (7.14) for ε, we obtain [13]

    dx/dt = F_eff(x) + α_eff η(t) + I_eff(t),   (7.26)

with

    F_eff^U(x) = F(x)/(1 − τ r_q F′),  α_eff^U = α r_q/(1 − τ r_q F′),
    I_eff^U(t) = [I(t) + τ r_q İ(t)]/(1 − τ r_q F′),   (7.27)

where F = F(x), F′ = F′(x), and r_q is given by Eq. (7.13). It is noted that α_eff^U given
by Eq. (7.27) generally depends on x, yielding multiplicative noise in Eq. (7.26).
(b) Functional-integral method (FIM1). Wu, Luo, and Zhu [21] started from the
formally exact expression for P(x,t) of Eqs. (7.1) and (7.14) with I(t) = 0,

    ∂P(x,t)/∂t = −(∂/∂x)[F(x)P(x,t)] − (∂/∂x)⟨ε(t) δ(x(t) − x)⟩
                 − β (∂/∂x)⟨ξ(t) δ(x(t) − x)⟩,   (7.28)

where ⟨·⟩ denotes the average over the probability P(x,t) to be determined. They
obtained an effective Langevin equation which yields Eq. (7.26), but with [13]

    F_eff^W(x) = F(x),  α_eff^W = α r_q/√(1 − τ r_q F′_s),  I_eff^W(t) = 0,   (7.29)

where F′ = dF/dx and F′_s etc. denote steady-state values at x = x_s.


(c) Functional-integral method (FIM2). Applying an alternative functional-integral
method to the FPE for p(x, ε, t) given by Eqs. (7.1) and (7.14) with β = I(t) = 0,
Fuentes, Toral, and Wio [7, 8] derived the FPE for P(x,t), which leads to the effective
Langevin equation given by Eq. (7.26), but with [13]
114 H. Hasegawa

Table 7.1 A comparison among various approaches to the Langevin model given
by Eqs. (7.1) and (7.2) [or (7.14)], which yield the effective Langevin equation
dx/dt = F_eff + α_eff η(t) + I_eff(t), where r_q = 2(2 − q)/(5 − 3q) and
s_q = 1 + (q − 1)(τ/2α²)F²: (1) MM [13], (2) FIM1 [21], (3) UCNA [9, 15], and (4)
FIM2 [7, 8] (see text)

    Method     F_eff               α_eff                      I_eff
    MM (1)     F                   α r_q/√(1 − τ r_q f₁)      I(t)
    FIM1 (2)   F                   α r_q/√(1 − τ r_q F′_s)    0
    UCNA (3)   F/(1 − τ r_q F′)    α r_q/(1 − τ r_q F′)       [I(t) + τ r_q İ(t)]/(1 − τ r_q F′)
    FIM2 (4)   F/(1 − τ s_q F′)    α s_q/(1 − τ s_q F′)       0

    F_eff^F(x) = F/(1 − τ s_q F′),  α_eff^F = α s_q/(1 − τ s_q F′),  I_eff^F(t) = 0,   (7.30)

with

    s_q = 1 + (q − 1)(τ/2α²) F².   (7.31)

α_eff^F in Eq. (7.30) depends on x in general and yields multiplicative noise in
Eq. (7.26).
The results of the various methods are summarized in Table 7.1. We note that the result
of the MM agrees with that of FIM1, but it differs from those of the UCNA and
FIM2. Even for q = 1.0 (Gaussian noise), the UCNA does not agree with the MM
within O(τ).
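The α_eff entries of Table 7.1 are easy to compare numerically. A small sketch (ours, for the linear force F(x) = −x so that F′ = f₁ = −1 everywhere) illustrates the difference between the MM/FIM1 and UCNA expressions:

```python
import numpy as np

# Side-by-side evaluation of the alpha_eff entries of Table 7.1 for the
# linear force F(x) = -x, so that F' = f_1 = -1 everywhere.
def r_q(q):
    return 2.0 * (2.0 - q) / (5.0 - 3.0 * q)          # Eq. (7.13)

def alpha_eff_mm(q, tau, alpha, f1=-1.0):
    # MM / FIM1 row: alpha r_q / sqrt(1 - tau r_q f_1)
    return alpha * r_q(q) / np.sqrt(1.0 - tau * r_q(q) * f1)

def alpha_eff_ucna(q, tau, alpha, fp=-1.0):
    # UCNA row: alpha r_q / (1 - tau r_q F')
    return alpha * r_q(q) / (1.0 - tau * r_q(q) * fp)
```

Since 1 − τ r_q f₁ > 1 when f₁ < 0, the square root in the MM expression makes its α_eff larger than the UCNA one for any τ > 0, while the two coincide in the white-noise limit τ → 0.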
Figure 7.1 shows stationary PDFs calculated with F(x) = −x, α = 1.0, β = 0.5, and
I = 0.0 for several sets of (q, τ) by using the various methods, as well as by DS
performed for Eqs. (7.1) and (7.2). We note that the widths of the PDFs for
τ = 1.0 are narrower than those for τ = 0.5, because the effective noise strength
α_eff [= α r_q/√(1 + τ r_q)] decreases with increasing τ. The widths of the PDFs for q = 0.8
are slightly narrower than those for q = 1.0. This is due to the reduced r_q = 0.92 for
q = 0.8, which is smaller than r_q = 1.0 for q = 1.0.

7.4 Conclusion

We have applied the MM to the Langevin model subjected to bounded NGCN,
obtaining the following results: (a) the width of the stationary PDF decreases when
q decreases from unity and/or when τ increases; (b) the result of the MM agrees
with that of FIM1, but disagrees with those of the UCNA and FIM2 (Table 7.1);
and (c) numerical results of the MM are in fairly good agreement with DS. The
present MM may be extended in various directions: dynamical properties, other
types of potential V(x) such as a bistable potential [13], and an ensemble of Brownian
particles (the augmented moment method) [10, 11].


Fig. 7.1 Stationary PDFs P(x) for F(x) = −x calculated by the MM (solid curves), UCNA (chain
curves), FIM2 (dotted curves), and DS (dashed curves); the results of FIM1 are the same as those of
the MM: (a) (q, τ) = (0.8, 0.5), (b) (0.8, 1.0), (c) (1.0, 0.5), and (d) (1.0, 1.0) (α = 1.0, β = 0.5,
and I = 0.0)

Acknowledgments This work is partly supported by a Grant-in-Aid for Scientific Research from
the Japanese Ministry of Education, Culture, Sports, Science, and Technology.

References

1. Anteneodo, C., Tsallis, C.: J. Math. Phys. 44, 5194 (2003)


2. Anteneodo, C., Riera, R.: Phys. Rev. E 72, 026106 (2005)
3. Bag, B.C.: Eur. Phys. J. B 34, 115 (2003)
4. Bag, B.C., Hu, C.-K.: J. Stat. Mech. Theor. Exp. P02003 (2009)
5. Borland, L.: Phys. Lett. A 245, 67 (1998)
6. Castro, F.J., Kuperman, M.N., Fuentes, M., Wio, H.S.: Phys. Rev. E 64, 051105 (2001)
7. Fuentes, M.A., Toral, R., Wio, H.S.: Phys. A 295, 114 (2001)
8. Fuentes, M.A., Toral, R., Wio, H.S.: Phys. A 303, 91 (2002)
9. Hänggi, P., Jung, P.: Adv. Chem. Phys. 89, 239 (1995)
10. Hasegawa, H.: Phys. Rev. E 67, 041903 (2003)
11. Hasegawa, H.: J. Phys. Soc. Jpn. 75, 033001 (2006)
12. Hasegawa, H.: Phys. A 374, 585 (2007)
13. Hasegawa, H.: Phys. A 384, 241 (2007); In this reference Eq. (A.7) is missing the term of
( 2 /2 2 )x  and the last term of Eq. (A.12) should be ( 2 /2 2 )
14. Hasegawa, H.: Phys. A 387, 2697 (2008)
15. Jung, P., Hänggi, P.: Phys. Rev. A 35, 4464 (1987)
16. Nozaki, D., Mar, D.J., Grigg, P., Collins, J.J.: Phys. Rev. Lett. 82, 2402 (1999)

17. Sakaguchi, H.: J. Phys. Soc. Jpn. 70, 3247 (2001)


18. Tsallis, C.: J. Stat. Phys. 52, 479 (1988)
19. Tsallis, C., Mendes, R.S., Plastino, A.R.: Phys. A 261, 534 (1998)
20. Wiesenfeld, K., Pierson, D., Pantazelou, E., Dames, Ch., Moss, F.: Phys. Rev. Lett. 72, 2125
(1994)
21. Wu, D., Luo, X., Zhu, S.: Phys. A 373, 203 (2007)
22. YanHang, X., YuBing, G., YingHang, H.: Sci. China B Chem. 52, 1186 (2009)
23. Zhang, R., Hou, Z., Xin, H.: Phys. A 390, 147 (2011)
Chapter 8
Spatiotemporal Bounded Noises and Their
Application to the Ginzburg–Landau Equation

Sebastiano de Franciscis and Alberto d'Onofrio

Abstract In this work, we introduce three spatiotemporal colored bounded noises,
based on the zero-dimensional Cai–Lin, Tsallis–Borland, and sine-Wiener noises.
Then we study and characterize the dependence of the defined stochastic processes
on both a temporal correlation parameter and a spatial coupling parameter.
In particular, we find that varying the spatial coupling may induce a transition of the
distribution of the noise from bimodality to unimodality. With the aim of investigating
the role played by bounded noises in spatially extended nonlinear dynamical systems,
we analyze the behavior of the real Ginzburg–Landau time-varying model additively
perturbed by such noises. The observed phase-transition phenomenology is quite
different from the one observed when the perturbations are unbounded. In particular,
we observe inverse order-to-disorder transitions, and reentrant transitions, with
dependence on the specific type of bounded noise.

Keywords Bounded noises · Spatially extended systems · Spatiotemporal
stochastic processes · Phase transitions · Ginzburg–Landau equation · Cai–Lin
noise · Tsallis–Borland noise · sine-Wiener noise

8.1 Introduction

In zero-dimensional nonlinear systems, noise may induce a wide spectrum of
important phenomena such as stochastic resonance [1], coherence resonance [2],
and noise-induced transitions [2–4]. Note that noise-induced transitions are well

S. de Franciscis · A. d'Onofrio
Department of Experimental Oncology, European Institute of Oncology, Milan, Italy
e-mail: alberto.donofrio@ieo.eu

118 S. de Franciscis and A. dOnofrio

distinct from phase transitions, which need spatially extended systems [4]. Genuine
noise-induced phase transitions have, instead and not surprisingly, been found in
many spatiotemporal dynamical systems [5–7].
Many studies in the field of noise-induced phenomena, both in zero-dimensional
and in spatially extended systems, were based on temporal [3] or spatiotemporal
white noises [7–10], respectively. This important model of noise is, however,
mainly appropriate when modeling internal hidden degrees of freedom of microscopic
nature. On the contrary, extrinsic fluctuations (i.e., those originating externally
to the system under study) may exhibit both temporal and spatial structures [6, 11],
which may induce new effects. For example, it was shown that zero-dimensional
systems perturbed by colored noises exhibit correlation-dependent properties that
are missing in the case of null autocorrelation time, such as the emergence of stochastic
resonance also for linear systems, and reentrance phenomena, i.e., transitions from
monostability to bistability and back to monostability [2, 4, 12]. Even more striking
effects are observed in spatially extended systems that are perturbed by spatially
white but temporally colored noises. These phenomena are induced by a complex
interplay between noise intensity, spatial coupling, and autocorrelation time [4].
García-Ojalvo, Sancho, and Ramírez-Piscina introduced in [13] the spatial version
of the Ornstein–Uhlenbeck noise, which we shall call GSR noise, characterized
by both a temporal scale and a spatial scale [14]. The Ginzburg–Landau
field model (one of the best-studied amplitude equations representing universal
nonlinear mechanisms), additively perturbed by the GSR noise, was investigated
in [6, 15], where the existence of a non-equilibrium phase transition
controlled by both the correlation time and the correlation length was shown [6, 15].
In order to generate a temporal bounded noise, two basic recipes have been
adopted so far. The first consists in generating the noise by means of an appropriate
stochastic differential equation [16, 17], whereas the second consists in applying
a bounded function to a standard Wiener process. In the purely temporal setting,
two relevant examples of noises obtained by implementing the first recipe are the
Tsallis–Borland [16] and the Cai–Lin [17] noises, whereas an example generated
by following the second recipe is the zero-dimensional sine-Wiener noise [18].
Our aim here is twofold. First, we want to review the definitions and properties
of three simple spatiotemporal bounded noises we recently introduced [19, 20]. The
first two noises extend the above-mentioned Tsallis–Borland and Cai–Lin noises
[19]; the third extends the sine-Wiener bounded noise [20].
Second, we want to assess the effects of such bounded stochastic forces (i.e., of
additive bounded noises) on the statistical properties of the spatiotemporal dynamics
of the Ginzburg–Landau (GL) equation.
Phase transitions induced in the GL model by additive and multiplicative unbounded
noises have been extensively studied in the last 20 years [2, 6, 13, 21–27]. Thus, our aim
here is uniquely to focus on the effects related to the boundedness of the noises.
8 Spatiotemporal Bounded Noises 119

8.2 Background on a Spatiotemporal Colored Unbounded Noise

Let us consider the well-known zero-dimensional Ornstein–Uhlenbeck stochastic
differential equation:

    dη/dt = −(1/τ) η(t) + (√(2D)/τ) ξ(t),   (8.1)

where 2D sets the noise strength and ξ(t) is a Gaussian white noise of unitary
intensity, ⟨ξ(t)ξ(t₁)⟩ = δ(t − t₁); τ > 0 is the autocorrelation time, since
⟨η(t)η(t₁)⟩ ∝ exp(−|t − t₁|/τ). In [13], Eq. (8.1) was generalized to a spatially
extended setting by including the simplest spatial coupling, the Laplace
operator, yielding the following partial differential Langevin equation:

    ∂_t η(x,t) = (λ²/2τ) ∇²η(x,t) − (1/τ) η(x,t) + (√(2D)/τ) ξ(x,t),   (8.2)

where λ > 0 is the spatial correlation strength [13] of η(x,t).
As usual in non-equilibrium statistical physics, we shall investigate the lattice
version of (8.2):

    dη_p(t)/dt = (λ²/2τ) ∇²_L η_p(t) − (1/τ) η_p(t) + (√(2D)/τ) ξ_p(t),   (8.3)

where p = h(i, j) is a point of an N × N lattice with step equal to h (in [13] and here
it is assumed h = 1). The symbol ∇²_L denotes the discrete version of the Laplace
operator:

    ∇²_L η_p(t) = (1/h²) Σ_{i∈ne(p)} (η_i − η_p),   (8.4)

where ne(p) is the set of the neighbors of the lattice point p.
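A minimal sketch of Eqs. (8.3)–(8.4) may look as follows; the periodic boundary conditions (via np.roll) and the explicit Euler–Maruyama time step are our own choices, not prescribed by the text:

```python
import numpy as np

# Lattice GSR noise, Eqs. (8.3)-(8.4).  Periodic boundary conditions
# (np.roll) and the explicit Euler-Maruyama step are our own choices.
def lattice_laplacian(eta, h=1.0):
    # Eq. (8.4): sum over the four nearest neighbours minus 4 eta_p
    return (np.roll(eta, 1, 0) + np.roll(eta, -1, 0)
            + np.roll(eta, 1, 1) + np.roll(eta, -1, 1) - 4.0 * eta) / h**2

def gsr_step(eta, dt, tau, lam, D, rng):
    # One Euler-Maruyama step of Eq. (8.3)
    drift = (lam**2 / (2.0 * tau)) * lattice_laplacian(eta) - eta / tau
    kick = (np.sqrt(2.0 * D) / tau) * np.sqrt(dt) * rng.standard_normal(eta.shape)
    return eta + drift * dt + kick

rng = np.random.default_rng(0)
eta = np.zeros((40, 40))
for _ in range(2000):              # relax towards the stationary regime
    eta = gsr_step(eta, dt=1e-2, tau=1.0, lam=1.0, D=0.5, rng=rng)
```

On the periodic lattice the discrete Laplacian sums to zero, so the coupling redistributes fluctuations without creating them; the single-site variance therefore stays below the uncoupled value D/τ.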

8.3 Generalizations of the Tsallis–Borland Bounded Noise

A family of Langevin equations generating bounded noises that extend the Tsallis–Borland
noise [16, 28] is the following:

    dε/dt = f(ε) + (√(2D)/τ) ξ(t),   (8.5)

where ξ(t) is a Gaussian white noise with ⟨ξ(t)ξ(t₁)⟩ = δ(t − t₁), and f(ε) is
a continuous decreasing function such that: (i) f(0) = 0; (ii) f(−ε) = −f(ε); (iii)
f(+B) = −∞ and f(−B) = +∞; (iv) the potential U(ε) associated with f(ε) is such

that U(±B) = +∞. Our extension of the Tsallis–Borland noise allows generating a
noise with a pre-assigned stationary density P_ε(ε). Indeed, if P_ε(ε) is given, then
the Fokker–Planck equation associated with Eq. (8.5) implies:

    f(ε) = (D/τ²) P_ε′(ε)/P_ε(ε).   (8.6)

For example, if P_ε(ε) = (1/2)[cos(ε)]₊, then f(ε) = −(D/τ²) tan(ε).
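The cosine example can be verified numerically: applying the rule (8.6) to P_ε(ε) = (1/2)cos(ε) via finite differences must reproduce the drift −(D/τ²)tan(ε). The values of D and τ below are arbitrary:

```python
import numpy as np

# Finite-difference check of Eq. (8.6) for the cosine example:
# with P(eps) = (1/2) cos(eps) on (-pi/2, pi/2), the drift must be
# f(eps) = -(D/tau^2) tan(eps).  D and tau values are arbitrary.
D, tau = 0.3, 1.5
eps = np.linspace(-1.2, 1.2, 2001)
P = 0.5 * np.cos(eps)
f_from_rule = (D / tau**2) * np.gradient(P, eps) / P   # Eq. (8.6)
f_expected = -(D / tau**2) * np.tan(eps)
```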


Following the approach of [13, 21] used to extend the OU noise, it is straightforward
to define spatiotemporal bounded noises based on the generalized Tsallis–Borland
family of noises of Eq. (8.5):

    ∂_t ε(x,t) = f(ε) + (λ²/2τ_c) ∇²ε(x,t) + (√(2D)/τ) ξ(x,t).   (8.7)

In line with [16, 28], here we shall consider:

    f(ε) = −(1/τ) ε(t)/[1 − (ε(t)/B)²].   (8.8)

We recall that in the temporal case the Tsallis–Borland noise is such that: (i) the
stationary distribution of ε(t) is a Tsallis q-statistics [16, 28],

    P_TB(ε) = A (B² − ε²)₊^(1/(1−q)),  q ∈ (−∞, 1);   (8.9)

(ii) the true autocorrelation time τ_c of ε(t) is given by [16] τ_c ≈ 2τ/(5 − 3q); and
(iii) D = τ(1 − q)B²/2.

8.4 The Temporal and Spatiotemporal Cai–Lin Bounded Noise

In [17, 29, 30], the following family of bounded noises was introduced:

    dε/dt = −(1/τ_c) ε(t) + g(ε) ξ(t),   (8.10)

where g(±B) = 0 and ξ(t) is a Gaussian white noise with ⟨ξ(t)ξ(t₁)⟩ = δ(t − t₁).
If g(ε) is symmetric, then the process ε(t) has zero mean and the same
autocorrelation as the OU process [17, 29], i.e., τ_c denotes the actual autocorrelation
time of the process ε(t).
As shown in [17, 29], a pre-assigned stationary distribution P(ε) can be obtained
from Eq. (8.10) by setting

    g(ε) = √( −(2/(τ_c P(ε))) ∫_{−B}^{ε} u P(u) du ).   (8.11)

In line with our spatiotemporal version of the extended Tsallis–Borland noise, we
may define a bounded spatiotemporal noise given by:

    ∂_t ε(x,t) = −(1/τ_c) ε(x,t) + (λ²/2τ_c) ∇²ε(x,t) + g(ε) ξ(x,t).   (8.12)

In line with [17, 29], here we shall set:

    g(ε) = √( (B² − ε²)/(τ_c (1 + δ)) ),   (8.13)

implying, in the purely temporal case, the following stationary distribution for ε:

    P_CL(ε) = A (B² − ε²)₊^δ,  δ > −1.   (8.14)

For δ > 0, P_CL(ε) is unimodal; for δ ∈ (−1, 0) it is bimodal with P_CL(±B) = +∞.
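As a consistency check (ours, not in the original text), applying the recipe (8.11) to the density (8.14) should reproduce the closed form (8.13); note that the normalization constant A cancels in (8.11), so it is omitted below:

```python
import numpy as np

# Consistency check: the recipe (8.11) applied to the Cai-Lin density
# (8.14), P(eps) ~ (B^2 - eps^2)^delta, should reproduce the closed
# form (8.13).  The normalisation constant A cancels, so it is omitted.
B, delta, tau_c = 1.0, 0.5, 1.0

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

u = np.linspace(-B, 0.4, 400_001)       # grid from -B up to eps = 0.4
P = (B**2 - u**2) ** delta              # unnormalised density
eps = u[-1]
integral = trapezoid(u * P, u)          # int_{-B}^{eps} u P(u) du
g2_rule = -(2.0 / (tau_c * P[-1])) * integral            # Eq. (8.11), squared
g2_closed = (B**2 - eps**2) / (tau_c * (1.0 + delta))    # Eq. (8.13), squared
```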

8.5 Boundedness of the Spatiotemporal Noises

The boundedness of the above-defined spatiotemporal noises follows from the
boundedness of the corresponding purely temporal noises. For example, let us
consider a point p of the lattice, and suppose that at time t₁ it is ε_p(t₁) = B(1 − Δ)
with 0 ≤ Δ ≪ 1. Let us suppose also that ε_j(t₁) ∈ [−B, B] for j ≠ p, and that
ε_p(t₁) ≥ Max_{j≠p} ε_j(t₁). Then the Laplacian is such that:

    ∇²_L ε_p(t₁) ≤ 0.   (8.15)

If λ = 0, the noise is composed of independent purely temporal bounded noises, so
that ε_p(t₁ + dt) = a ∈ [−B, B]. If λ > 0, given ε_p(t₁), it immediately follows that:

    ε_p(t₁ + dt) = a + dt (λ²/2τ_c) ∇²_L ε_p(t₁) ≤ a ≤ B,   (8.16)

and, dt being infinitesimal, ε_p(t₁ + dt) ≤ B. A similar reasoning holds if ε_p(t₁) is very
close to −B.

8.6 The Sine-Wiener Spatiotemporal Bounded Noise: Definition and Properties

The sine-Wiener noise is obtained by applying the bounded function
h(u) = B sin(√(2/τ) u) to a standard Wiener process W(t), yielding:

    ε(t) = B sin(√(2/τ) W(t)).   (8.17)

The stationary probability density of ε(t) is given by

    P_eq(ε) = 1/(π √(B² − ε²)),   (8.18)

thus P_eq(±B) = +∞.


Here, as a natural spatial extension of the sine-Wiener noise, we define the
following spatiotemporal noise:

    ε(x,t) = B sin(2π η(x,t)),   (8.19)

where η(x,t) is the spatiotemporally correlated noise defined by (8.2).
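The purely temporal definition (8.17) is easy to probe by Monte Carlo (our sketch; parameter values arbitrary). Sampling W(t) at a time t ≫ τ, the variance of ε must approach B²/2, consistently with the arcsine density (8.18); one can verify that the transient is ⟨ε²(t)⟩ = (B²/2)(1 − e^(−4t/τ)):

```python
import numpy as np

# Monte Carlo probe of the temporal sine-Wiener noise, Eq. (8.17):
# eps(t) = B sin(sqrt(2/tau) W(t)).  For t >> tau the variance tends
# to B^2/2, consistently with the arcsine density (8.18).
rng = np.random.default_rng(1)
B, tau, t = 2.0, 1.0, 10.0
W = np.sqrt(t) * rng.standard_normal(200_000)   # W(t) ~ N(0, t)
eps = B * np.sin(np.sqrt(2.0 / tau) * W)        # bounded in [-B, B] by construction
```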

8.7 Statistical Features of Spatiotemporal Bounded Noises

In order to characterize the properties of the bounded noises defined in the previous
sections, we may study the global behavior of the noise by means of the heuristic
equilibrium probability density of the noise lattice variables ε_p, P_eq(ε_p) (Fig. 8.1).


Fig. 8.1 Equilibrium distribution P_eq(ε_p) for some spatiotemporal bounded noises, on a 40 × 40
lattice system with B = 1. Panel (a): Cai–Lin noise with τ_c = 1, δ = −0.5, and 2D = 1, for
λ = 0, 0.5, 0.75, 1, 2; panel (b): sine-Wiener noise with τ_c = 2 and λ = 0, for
2D = 0.1, 0.15, 0.2, 0.25, 0.5

In both the Cai–Lin and Tsallis–Borland bounded noises, the coupling parameter λ
affects the distribution of ε_p deeply and in a noise-type-dependent manner. Indeed:
(a) for the Tsallis–Borland noise, and for the Cai–Lin noise with δ > 0, the variance
of the noise is a decreasing function of λ; (b) for the Cai–Lin noise with δ < 0, the increase
of λ induces a transition from a bimodal to a unimodal density (Fig. 8.1a). In all
cases, the distribution is independent of τ_c.
Concerning the sine-Wiener noise, we found that when varying τ, D, or λ of the
underlying GSR noise, the distribution of ε_p undergoes a transition from bimodality
to trimodality (Fig. 8.1b), since an additional mode at ε = 0 appears.

8.8 The Ginzburg–Landau Equation Perturbed by Spatiotemporal Bounded Noise

Let us consider the following lattice-based Ginzburg–Landau equation:

    ∂_t φ_p = (1/2)(φ_p − φ_p³ + ∇²_L φ_p) + A_p(t),   (8.20)

where A_p(t) is a generic bounded or unbounded additive noise. If A_p(t) is the GSR
noise, it was shown [6] that both the spatial and the temporal correlation parameters
shift the transition point towards larger values.
In the following we illustrate some analytical and numerical results for the
case where A_p(t) is one of the three bounded noises described above. Our aim is to
provide a testbed for our novel spatiotemporal bounded noises, not to evidence
unknown aspects of the much-studied GL model.
In line with [6], phase transitions in the GL equation will be characterized by means
of the order parameter, the global magnetization M, and of its relative fluctuation σ_M:

    M ≡ ⟨|Σ_p φ_p|⟩/N²,  σ_M ≡ (⟨|Σ_p φ_p|²⟩ − ⟨|Σ_p φ_p|⟩²)/N².   (8.21)

Again in line with [6], we define a transition from large to small values of the order
parameter as an order-to-disorder transition.
All simulations have been performed on a 40 × 40 lattice over the time interval
[0, 250], and the temporal averages were computed in the interval [125, 250]. In all
cases, the initial condition of the noise was set to 0. Moreover, the initial condition of
the GL system was the ordered phase, i.e., φ(x, 0) = 1 for all x; thus we measured the
robustness of order against the presence of the bounded noise.
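For concreteness, the order parameter of Eq. (8.21) may be sketched as follows (time and ensemble averaging are omitted in this sketch of ours); the two test configurations illustrate the ordered and fully disordered extremes:

```python
import numpy as np

# The order parameter of Eq. (8.21): global magnetisation of an N x N
# configuration (the time/ensemble averages are omitted here).
def magnetization(phi):
    return float(np.abs(phi.sum())) / phi.size

phi_ordered = np.ones((40, 40))                    # ordered phase -> M = 1
ix = np.indices((40, 40)).sum(axis=0)
phi_staggered = np.where(ix % 2 == 0, 1.0, -1.0)   # checkerboard -> M = 0
```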

8.8.1 Some Analytical Considerations on the Role of B

System (8.20) is a cooperative system [31] since:

    ∂(∂_t φ_p)/∂φ_k ≥ 0  for k ≠ p.   (8.22)

This property, together with the fact that A_p(t) ≥ −B, implies that φ_p(t) ≥ ψ_p(t),
where

    ∂_t ψ_p = (1/2)(ψ_p − ψ_p³ + ∇²_L ψ_p) − B,  ψ_p(0) = φ_p(0).   (8.23)

Now, if 0 < B < B* = 1/(3√3), then the equation

    s − s³ = 2B   (8.24)

has three solutions s_a(B) < 0, s_b(B) ∈ (0, 1), and s_c(B) ∈ (0, 1), with s_b(B) <
s_c(B). For example, for B = 0.19 < B* it is: s_a(0.19) ≈ −1.15306, s_b(0.19) ≈
0.52331, and s_c(0.19) ≈ 0.62975. In particular, if B ≪ 1, then s_c(B) ≈ 1 − B
and s_a(B) ≈ −1 − B. It is an easy matter to show that if ψ_p(0) > s_b(B) then
ψ_p(t) > s_b(B), also implying φ_p(t) > s_b(B), and of course that M(t) > s_b(B) and
M_s(t) > s_b(B). Indeed, suppose that at a given time instant t₁ all ψ_p(t₁) ≥ s_b(B), but that
there is a point q where ψ_q(t₁) = s_b(B). Then it is

    ∂_t ψ_q(t₁) = (1/2)(ψ_q − ψ_q³) − B + (1/2)∇²_L ψ_q = 0 + (1/2)∇²_L ψ_q ≥ 0.   (8.25)

The vector c(B) = s_c(B)(1, . . . , 1) is a locally stable equilibrium point for the
differential system ruling the dynamics of ψ_p(t). Indeed, c is a minimum of the
associated energy. However, the system might be multistable, similarly to the GL
model with total coupling on the lattice [32]. By adopting a Weiss mean-field
approximation, one can proceed as in [32] and infer that the equilibrium is unique
for N ≫ 1. Namely, defining the auxiliary variable m_p = (1/4) Σ_{j∈ne(p)} ψ_j, the
equilibrium equations read

    ψ_p³ + 3ψ_p = 4m_p − 2B.   (8.26)

We are only interested in the subset ψ_p ≥ s_b(B), which also implies m_p ≥ s_b(B). Note
now that the equation s³ + 3s = x, for x > 0, has a unique positive solution s = k(x).
Thus

    ψ_p = k(4m_p − 2B).   (8.27)

Now, by means of the approximation m_p ≈ (1/N²) Σ_j ψ_j = m, one gets the equation

    m = k(4m − 2B),   (8.28)

which has to be solved under the constraint m > s_b(B). As it is easy to verify, the
above equation has only one such solution, m = s_c(B).
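The mean-field fixed point can be checked numerically; in the sketch below (ours), k(x) is computed as the unique real root of s³ + 3s = x, and the bisection bracket is chosen by hand so as to exclude the fixed point at s_b(B):

```python
import numpy as np

# Mean-field check of Eq. (8.28): solve m = k(4m - 2B), where k(x) is
# the unique real root of s^3 + 3s = x.  The bisection bracket is
# hand-picked to isolate the solution with m > s_b(B).
B = 0.19

def k(x):
    r = np.roots([1.0, 0.0, 3.0, -x])
    return float(r[np.abs(r.imag) < 1e-9].real[0])   # single real root

def h(m):
    return k(4.0 * m - 2.0 * B) - m

lo, hi = 0.56, 1.0                                   # h(lo) > 0 > h(hi)
for _ in range(60):                                  # plain bisection
    mid = 0.5 * (lo + hi)
    if h(lo) * h(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
m_star = 0.5 * (lo + hi)
```

The result should coincide with s_c(0.19) ≈ 0.62975 quoted above.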

In any case, for B ≪ 1 the initial point φ_p(0) = 1 should be such that ψ_p(t) remains
in the basin of attraction of c(B), so that for large times ψ_p(t) ≈ s_c(B), implying that

    liminf_{t→+∞} φ_p(t) ≥ s_c(B).   (8.29)

From the inequality A_p(t) ≤ B, by using similar methods one may infer that for
small B it is

    limsup_{t→+∞} φ_p(t) ≤ u_c(B),   (8.30)

where u_c(B) > 1 is the unique positive solution (for B < B*) of the equation

    u − u³ = −2B.   (8.31)

Note that it is u_c(B) = −s_a(B), due to the anti-symmetry of the function s − s³. Summing
up, we may say that for small B, and probably for all B ∈ (0, B*), it is asymptotically

    s_c(B) < φ_p(t) < u_c(B).   (8.32)

Finally, we numerically solved the system

    (1/2)(ψ_p − ψ_p³ + ∇²_L ψ_p) − B = 0   (8.33)

for various values of B in the interval (0.01, B*), and in all cases we found only
one equilibrium with components greater than s_b(B): ψ = c(B) = s_c(B)(1, . . . , 1).
Similarly, when setting A_p(t) = +B in Eq. (8.20), we found only one equilibrium
value: u_c(B)(1, . . . , 1).

8.8.2 Phase Transitions

Figure 8.2a shows the effect of the noise amplitude B on the curve M vs. τ_c. For small
B, in line with our analytical calculations, no phase transition occurs. For larger B,
a phase transition is observed, whose transition point decreases with increasing B.
Based on the analytical study of the previous subsection, it is excluded that for small
values of B a phase transition could be observed for any value of τ_c.
In the absence of spatial coupling (λ = 0), the magnetization M is a decreasing
function of the autocorrelation time τ_c (see Fig. 8.2b). This finding suggests that
bounded noises promote the disordered phase of the GL system. If the perturbation
is the GSR noise, then the phenomenology is the opposite: increasing τ_c enhances the
ordered phase [21].
The differences in P_eq(ε_p) between the bounded and the unbounded noises may
roughly explain these behaviors. Indeed, in the GSR noise the standard deviation
126 S. de Franciscis and A. d'Onofrio

[Figure 8.2 shows two panels, (a) and (b), each plotting M against $\tau_c \in (0, 3)$; the curves in panel (a) correspond to B = 0.19, 1.6, 2.4, 3.2, 20, and those in panel (b) to TB ($q = -1$), CL ($\delta = +0.5$), CL ($\delta = -0.5$), and SW noises.]

Fig. 8.2 Panel (a): effect of the noise amplitude B on the curve M vs. $\tau_c$ for the GL model perturbed
by additive spatiotemporal sine-Wiener noise. Here the initial condition is $\phi(x, 0) = 1$. Other
parameters: $\lambda = 1$ and 2D = 1. Panel (b): effects of the autocorrelation parameter $\tau_c$ on the GL
model perturbed by additive spatiotemporal bounded noise. Other parameters: B = 2.4, $\lambda = 0$
and 2D = 1

[Figure 8.3 shows two panels, (a) and (b), each plotting M against $\lambda \in (0, 3)$; the curves in panel (a) correspond to $\delta$ = -0.75, -0.5, -0.25, 0.0, +0.25, and those in panel (b) to q = -0.3333, -1.0, -3.0.]
Fig. 8.3 Effects of the Cai–Lin parameter $\delta$ (panel (a)) and of the Tsallis–Borland parameter q
(panel (b)) on the reentrant transition. Other parameters: 40 × 40 lattice, B = 2.6 and $\tau_c = 0.3$.
Taken from Ref. [19]. © American Physical Society (2011)

$\xi_p$ scales as an inverse power of $\tau_c$. Thus, the related disorder-to-order transition with $\tau_c$ in the GL field
could be caused by the reduction of the noise amplitude. On the contrary, in both the Cai–
Lin and Tsallis–Borland noises the equilibrium standard deviation is independent of
$\tau_c$, while in the sine-Wiener noise it depends on it only weakly. As a consequence, the field
is driven by an increasingly quenched, $\tau_c$-dependent noise with a constant broad
distribution, enhancing the disordered phase.
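For concreteness, the zero-dimensional sine-Wiener process, whose bound B is independent of $\tau_c$, can be sampled as follows. This is our own sketch, assuming the standard definition $\xi(t) = B \sin(\sqrt{2/\tau_c}\, W(t))$ with W a Wiener process:

```python
import math, random

def sine_wiener_path(B=1.0, tau_c=1.0, dt=0.01, n=10000, seed=42):
    """Sample path of the sine-Wiener bounded noise
    xi(t) = B * sin(sqrt(2/tau_c) * W(t)); |xi(t)| <= B by construction,
    while tau_c only sets the autocorrelation time, not the bound."""
    rng = random.Random(seed)
    w, path = 0.0, []
    for _ in range(n):
        w += math.sqrt(dt) * rng.gauss(0.0, 1.0)  # Wiener increment
        path.append(B * math.sin(math.sqrt(2.0 / tau_c) * w))
    return path
```

The bound $|\xi(t)| \leq B$ holds for every realization, whatever $\tau_c$; only the correlation time changes.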
The behavior of the system is deeply affected by the spatial coupling $\lambda$. In fact,
for Cai–Lin and Tsallis–Borland noises one observes in some cases a reentrant
transition order/disorder/order in $\lambda$ (Fig. 8.3). It is possible to explain the emergence
of the reentrant transition by the double role that $\lambda$ has on the noise equilibrium
distribution: on one side $\lambda$ enhances the spatial quenching of the noise, while
on the other it reduces the noise amplitude in terms of the standard deviation of
$P(\xi_p)$. The intrinsic dynamical process that generates bounded noise, and not

[Figure 8.4 shows two panels plotting M (panel (a)) and $\sigma_M$ (panel (b)) against 2D $\in (0, 2.5)$, for parameter values 0, 0.5, 1, 3, and 6.]

Fig. 8.4 Reentrant phase transition in the GL model perturbed by additive spatiotemporal sine-Wiener
noise for varying white noise strength 2D. Initial condition is $\phi(x, 0) = 1$. Panel (a): global
magnetization M. Panel (b): relative fluctuation $\sigma_M$. Other parameters: B = 2.6 and $\tau_c = 2$

[Figure 8.5 shows two panels plotting $P_{eq}(\phi_p)$ against $\phi_p \in (-2, 2)$; curves correspond to 2D = 0.05, 0.75, 2 and to B = 0.19, 1, 2.6.]

Fig. 8.5 Stationary distribution of the field for the GL model perturbed by additive spatiotemporal
sine-Wiener noise, in response to changes in the noise parameters B (left panel) and D (right panel).
Other parameters are, respectively, ($\tau_c = 2$, $\lambda = 1$, 2D = 0.75) and ($\tau_c = 2$, $\lambda = 1$, B = 2.6)

simply the resulting noise equilibrium distribution, is determinant for the resulting
transition phenomenology. Indeed, using Cai–Lin and Tsallis–Borland noises with
the same equilibrium distribution, e.g. CL with $\delta = 0.5$ and TB with $q = -1$,
the corresponding GL phase transitions are different, although the equilibrium
distribution of the noises is the same.
By varying the white noise strength 2D we observed a reentrant transition also for the sine-Wiener
noise (Fig. 8.4). Note that $\lambda$ increases the lower value of M and
shifts the first transition point, whereas its effect on the second transition point,
where it exists, is modest.
Figure 8.5 shows the effects of B (left panel) and of D (right panel) on the
stationary distribution of the field $\phi$. Varying B causes transitions from bimodality
with modes located close to $\phi = \pm 1$ to bimodality with modes roughly at $\phi = \pm 1.25$. Varying D
causes a reentrant transition from unimodality to bimodality and back to unimodality, in line
with the reentrant transition shown in Fig. 8.4.

8.9 Concluding Remarks

In the first part of this work, we defined three classes of spatiotemporal colored
bounded noises, which extend the zero-dimensional Tsallis–Borland noise, the
Cai–Lin noise, and the sine-Wiener noise. We analyzed the role of the spatial cou-
pling parameter $\lambda$ and of the temporal correlation parameter $\tau_c$ on the distribution
of the noise by studying the noise equilibrium distribution. Unlike the case of GSR
noise, the equilibrium distributions of the noises introduced here do not depend
on $\tau_c$ (or depend on it only weakly), while in some cases an increase of $\lambda$ induces
transitions from bimodality to unimodality or trimodality in the distributions. These
features could be important in the study of bounded noise-induced transitions of
stochastically perturbed nonlinear systems.
In the second part we employed the above-mentioned bounded noises to investi-
gate the phase transitions of the Ginzburg–Landau model under additive stochastic
perturbations. Our simulations showed a phenomenology quite different from the
one induced by colored unbounded noises. To start, in the presence of spatially
uncoupled bounded noises, the increase of the temporal correlations enhances the
quenching of the noise, eventually producing an order-to-disorder transition in the
GL model.
If the perturbation is unbounded, an opposite transition is observed. Furthermore,
spatial coupling induces contrasting effects on the spatiotemporal fluctuations of the
noise, resulting for some kinds of noises in a reentrant transition (order/disorder/
order) in the GL field. This specific kind of dependence of the transition on the
type of noise has not been observed previously, to the best of our knowledge, in
spatiotemporal dynamical systems, and it is in line with previous observations in
zero-dimensional systems.
We studied the effect of bounded perturbations on GL transitions, and stressed,
with both numerical simulations and analytical considerations, that the bound-
edness of the noise is crucial for the stability of the ordered state.
In general, the phenomenologies observed in GL systems turned out to depend strongly
on the specific model of noise that has been adopted. Thus, in the absence of
experimental data on the distribution of the stochastic fluctuations for the problem
under study, it could be necessary to compare multiple kinds of possible stochastic
perturbation models. This is in line with similar observations concerning bounded
noise-induced transitions in zero-dimensional systems [33].

Acknowledgements This research was performed under the partial support of the Integrated EU
project "p-medicine: From data sharing and integration via VPH models to personalized medicine"
(Project No. 270089), which is partially funded by the European Commission under the Seventh
Framework Programme.

References

1. Gammaitoni, L., Hänggi, P., Jung, P., Marchesoni, F.: Rev. Mod. Phys. 70, 223 (1998)
2. Ridolfi, L., D'Odorico, P., Laio, F.: Noise-Induced Phenomena in the Environmental Sciences.
Cambridge University Press, Cambridge (2011)
3. Horsthemke, W., Lefever, R.: Noise-Induced Transitions: Theory and Applications in Physics,
Chemistry, and Biology. Springer Series in Synergetics. Springer, New York (1984)
4. Wio, H.S., Lindenberg, K.: Modern challenges in statistical mechanics. AIP Conf. Proc. 658,
1 (2003)
5. Ibañes, R.T.M., García-Ojalvo, J., Sancho, J.M.: Lect. Notes Phys. 557, 247 (2000)
6. García-Ojalvo, J., Sancho, J.M.: Noise in Spatially Extended Systems. Springer, New York
(1996)
7. Sagués, F., Sancho, J., García-Ojalvo, J.: Rev. Mod. Phys. 79(3), 829 (2007)
8. Wang, Q.Y., Lu, Q.S., Chen, G.R.: Phys. A Stat. Mech. Appl. 374(2), 869 (2007)
9. Wang, Q.Y., Lu, Q.S., Chen, G.R.: Eur. Phys. J. B Condens. Matter Complex Syst. 54(2),
255 (2006)
10. Wang, Q.Y., Perc, M., Lu, Q.S., Duan, S., Chen, G.R.: Int. J. Mod. Phys. B 24, 1201 (2010)
11. Sancho, J., García-Ojalvo, J., Guo, H.: Phys. D Nonlinear Phenom. 113(2-4), 331 (1998)
12. Jung, P., Hänggi, P.: Phys. Rev. A 35, 4464 (1987)
13. García-Ojalvo, J., Sancho, J.M., Ramírez-Piscina, L.: Phys. Rev. A 46, 4670 (1992)
14. Lam, P.M., Bagayoko, D.: Phys. Rev. E 48, 3267 (1993)
15. García-Ojalvo, J., Sancho, J.M.: Phys. Rev. E 49, 2769 (1994)
16. Wio, H.S., Toral, R.: Phys. D Nonlinear Phenom. 193(1-4), 161 (2004)
17. Cai, G.Q., Lin, Y.K.: Phys. Rev. E 54, 299 (1996)
18. Bobryk, R.V., Chrzeszczyk, A.: Phys. A Stat. Mech. Appl. 358(2-4), 263 (2005)
19. de Franciscis, S., d'Onofrio, A.: Phys. Rev. E 86, 021118 (2012)
20. de Franciscis, S., d'Onofrio, A.: arXiv:1203.5270v2 [cond-mat.stat-mech] (2012)
21. García-Ojalvo, J., Sancho, J., Ramírez-Piscina, L.: Phys. Lett. A 168(1), 35 (1992)
22. García-Ojalvo, J., Parrondo, J.M.R., Sancho, J.M., Van den Broeck, C.: Phys. Rev. E 54,
6918 (1996)
23. Carrillo, O., Ibañes, M., García-Ojalvo, J., Casademunt, J., Sancho, J.M.: Phys. Rev. E 67,
046110 (2003)
24. Maier, R.S., Stein, D.L.: Proc. SPIE Int. Soc. Opt. Eng. 5114, 67 (2003)
25. Komin, N., Lacasa, L., Toral, R.: J. Stat. Mech. Theory Exp. P12008 (2010). doi:10.1088/1742-
5468/2010/12/P12008
26. Scarsoglio, S., Laio, F., D'Odorico, P., Ridolfi, L.: Math. Biosci. 229(2), 174 (2011)
27. Ouchi, K., Tsukamoto, N., Horita, T., Fujisaka, H.: Phys. Rev. E 76, 041129 (2007)
28. Borland, L.: Phys. Lett. A 245(1-2), 67 (1998)
29. Cai, G., Suzuki, Y.: Nonlinear Dyn. 45, 95 (2006)
30. Cai, G., Wu, C.: Probabilist. Eng. Mech. 19(3), 197 (2004)
31. Coppel, W.: Asymptotic Behavior of Differential Equations. Heath, Boston (1965)
32. Komin, N., Lacasa, L., Toral, R.: J. Stat. Mech. Theory Exp. 10, P12008 (2010)
33. d'Onofrio, A.: Phys. Rev. E 81, 021923 (2010)
Part II
Bounded Noises in the Framework
of Discrete and Continuous Random
Dynamical Systems
Chapter 9
Bifurcations of Random Differential Equations
with Bounded Noise

Ale Jan Homburg, Todd R. Young, and Masoumeh Gharaei

Abstract We review recent results from the theory of random differential equations
with bounded noise. Assuming the noise to be sufficiently robust in its effects,
we discuss the fact that any stationary measure of the system is supported on a
Minimal Forward Invariant (MFI) set. We review basic properties of the MFI sets,
including their relationship to attractors in systems where the noise is small. In the
main part of the paper we discuss how MFI sets can undergo discontinuous changes,
which we have called hard bifurcations. We characterize such bifurcations for systems
in one and two dimensions, and we give an example of the effects of bounded noise
in the context of a Hopf–Andronov bifurcation.

Keywords Bounded noises • Random differential equations • Stationary
measures • Stochastic bifurcations • Hopf–Andronov bifurcation • Hard
bifurcations

A.J. Homburg
KdV Institute for Mathematics, University of Amsterdam, Science park 904,
1098 XH Amsterdam, The Netherlands
Department of Mathematics, VU University Amsterdam, De Boelelaan 1081,
1081 HV Amsterdam, The Netherlands
e-mail: a.j.homburg@uva.nl
T.R. Young ()
Department of Mathematics, Ohio University, Morton Hall, Athens, OH 45701, USA
e-mail: youngt@ohio.edu
M. Gharaei
KdV Institute for Mathematics, University of Amsterdam, Science park 904, 1098 XH
Amsterdam, The Netherlands
e-mail: m.gharaei@uva.nl

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 133
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_9, © Springer Science+Business Media New York 2013
134 A.J. Homburg et al.

9.1 Introduction

A large proportion of work on the topic of stochastic or random dynamics has
focused on noise that is unbounded and, in particular, normally distributed. With
such noise, the entire phase space is accessible (i.e., from any initial point any
neighborhood may be reached with nonzero probability) and it follows that if the
system has a stationary measure, then it is unique and its support is the whole phase
space. Typically, the density function of the stationary measure varies continuously
with any parameter of the system. In light of these facts, Zeeman proposed that
a bifurcation in a stochastic system be defined as a change in character of the
density function as a parameter is varied [42, 43]. Such bifurcations have come to
be known as phenomenological, or P-bifurcations. Arnold, in his extensive work
on Random Dynamical Systems (RDS), proposed two more definitions, namely
abstract bifurcation, when the (local) topological conjugacy class changes, and dynamical
bifurcation, which is typically evidenced by a change of sign in one of the Lyapunov
exponents of the dynamical system (see, for example, [2, 3, 20]). Many studies,
starting with the work of I. Prigogine and his followers (see [28]), have addressed
issues of bifurcations in stochastic systems from these perspectives, referring to one
or the other of these bifurcations, nearly all in systems with unbounded noise (e.g.,
[34, 40]).
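A P-bifurcation can be illustrated with a minimal sketch of ours (not taken from this chapter): in the family $\dot{x} = \mu x - x^3 + \varepsilon \xi_t$ with bounded uniform noise, the long-run distribution changes character as $\mu$ crosses zero, with mass concentrated near the origin giving way to mass concentrated near the nonzero wells.

```python
import random

def mean_abs_state(mu, x0, eps=0.3, dt=0.01, n=200000, burn=50000, seed=7):
    """Time average of |x| along an Euler path of
    dx/dt = mu*x - x**3 + eps*xi(t), with xi(t) uniform in [-1, 1]
    and held constant over each time step."""
    rng = random.Random(seed)
    x, acc, cnt = x0, 0.0, 0
    for i in range(n):
        x += dt * (mu * x - x**3 + eps * rng.uniform(-1.0, 1.0))
        if i >= burn:
            acc += abs(x)
            cnt += 1
    return acc / cnt

# For mu < 0 the stationary density is unimodal around 0 (small |x|);
# for mu > 0 it concentrates near a well (a change of character in
# Zeeman's sense, i.e. a P-bifurcation).
```

Here the time average of $|x|$ acts as a crude summary of where the stationary density puts its mass.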
Being dominated by the study of dynamical systems perturbed by Gaussian
or other unbounded noises, much of the applied and mathematical literature on
stochastic bifurcations has focused on the study of Langevin systems:

$\dot{x} = A(x) + B(x)\,\xi_t,$

where the dependence on the noise is linear. Bounded noise, in contrast, may be much
more general but is less understood. In recent years the effects of bounded noise
have received increasing attention for dynamical systems generated by both maps
and differential equations. One type of bounded noise that has been of interest is
Dichotomous Markov Noise (see the review article [11]). This type of noise is often
accessible to analysis and arises naturally in various applications (e.g., [21, 37]).
In these pages we review aspects of dynamics and bifurcations in another type of
bounded noise system, namely, random differential equations (RDEs) with bounded
noise. We will consider random differential equations of the form

$\dot{x} = f_\mu(x, \xi_t), \qquad (9.1)$

depending on both a deterministic parameter $\mu$ and noise with realizations $\xi_t$ that
take values from some bounded ball in $\mathbb{R}^n$. The state x will belong to a compact,
connected, smooth d-dimensional manifold M.
A class of examples fitting into our context is constituted by certain degenerate
Markov diffusion systems [5, 30] of the form
9 Bifurcations of Random Differential Equations with Bounded Noise 135

$dx = X_0(x)\, dt + \sum_{i=1}^{m} f_i(\xi) X_i(x)\, dt,$

$d\xi = Y_0(\xi)\, dt + \sum_{j=1}^{l} Y_j(\xi) \circ dW_j,$

given by differential equations for the state space variable x, driven by a stochastic
process $\xi$ defined by a Stratonovich stochastic differential equation on a bounded
manifold; see, e.g., [31]. Another example is given by random switching between solution
curves of a finite number of ordinary differential equations [9], a generalization
of dichotomous Markov noise. Under some conditions such noise is sufficiently
rich to fit into the framework of this paper. Reference [15] also discusses some
constructions of stochastic processes with bounded noise.
We will discuss the fact that, under mild conditions on the noise, the RDE
admits a finite number of stationary measures with absolutely continuous densities.
The stationary measures provide the eventual distributions of typical trajectories.
Their supports are the regions accessible to typical trajectories in the long run. It is
important to note that in the case of bounded noise, there may exist more than one
stationary measure.
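The coexistence of stationary measures under bounded noise can be seen in a minimal example of our own: for the bistable RDE $\dot{x} = x - x^3 + \varepsilon \xi_t$ with $|\xi_t| \leq 1$ and $\varepsilon = 0.2$, trajectories started in different wells remain confined to disjoint regions, the supports of two distinct stationary measures.

```python
import random

def endpoint(x0, eps=0.2, dt=0.01, n=50000, seed=0):
    """Euler endpoint of dx/dt = x - x**3 + eps*xi(t), with xi(t) drawn
    uniformly from [-1, 1] and held constant over each time step."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n):
        x += dt * (x - x**3 + eps * rng.uniform(-1.0, 1.0))
    return x

# Started in different wells, trajectories never cross between them:
# the bounded perturbation cannot push x over the barrier.
```

With unbounded (e.g. Gaussian) noise, by contrast, well-to-well transitions occur with positive probability and the stationary measure is unique.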
It was observed that under parameter variation, stationary measures of RDEs can
experience dramatic changes, such as a change in the number of stationary measures
or a discontinuous change in one of their supports. The RDEs we consider possess a
finite number of absolutely continuous stationary measures. The stationary measures
therefore have probability density functions. We distinguish the following changes
in the density functions:
1. The density function of a stationary measure might change discontinuously
(including the possibility that a stationary measure ceases to exist), or
2. The support of the density function of a stationary measure might change
discontinuously.
A discontinuous change in the density function is understood with respect to the $L^1$-norm
topology. A discontinuous change of the support of a stationary measure is understood with
respect to the Hausdorff metric topology. It is appropriate to call such changes
"hard", in reference to hard loss of stability in ordinary differential equations. In [7]
a loss of stability of an invariant set is called hard if it involves a discontinuous
change, in the Hausdorff topology, of the attractor. There is an obvious analogy
with discontinuous changes in (supports of) density functions. The examples studied
later show how adding a small amount of noise to a family of ordinary differential
equations unfolding a bifurcation can lead to a hard bifurcation of density functions.
We note that these hard bifurcations may not be captured by Arnold's notion of
dynamical bifurcation.
Hard bifurcations are related to almost or near invariance in random dynamical
systems, and the resulting effect of metastability. This phenomenon found renewed
interest in [13, 14, 39]. In the context of control theory near invariance was studied

in [16, 24] for RDEs and [17] for random diffeomorphisms. One approach, taken in
[17, 44], to study near invariance is through bifurcation theory. It is then important
to describe mechanisms that result in hard bifurcations.
The following sections will contain an overview of the theory of RDEs, in
particular their bifurcations, along the lines of [12, 26, 27]. We do not touch on
the similar theory for iterated random maps. Here are some pointers to the literature
developing the parallel theory for randomly perturbed iterated maps. A description
in terms of finitely many stationary measures can be found in [1, 44]. Aspects of
bifurcation theory are considered in [33, 44, 45]; see [25] for an application in climate
dynamics. References [17, 23, 38, 44] consider quantitative aspects of bifurcations
related to metastability and escape; we do not address such issues here.

9.2 Random Differential Equations

In this section we describe the precise setup of the random differential equations
discussed in this chapter. Let M be a compact, connected, smooth d dimensional
manifold and consider a smooth RDE

$\dot{x} = f(x, \xi_t) \qquad (9.2)$

on M. The time-dependent perturbation that will represent the noise may be con-
structed in a number of ways. We consider $\xi$ belonging to the space $U =
L^\infty(\mathbb{R}, \bar{B}^n(\varepsilon))$ of bounded functions with values in the closure $\bar{B}^n(\varepsilon)$ of the $\varepsilon$-ball in
$\mathbb{R}^n$. Give U the weak* topology, which makes it compact and metrizable (see [19,
Lemma 4.2.1]). The flow defined by the shift,

$\theta: \mathbb{R} \times U \to U, \qquad \theta_t(\xi(\cdot)) = \xi(\cdot + t),$

is then a continuous dynamical system (see [19, Lemma 4.2.4]). Further, $\theta_t$ is a
homeomorphism of U and $\theta_t$ is topologically mixing [19]. We refer to any random
perturbation of this form as noise of level $\varepsilon$.
Since $\xi \in U$ is measurable, and f is smooth and bounded, the differential equa-
tion (9.2) has unique, global solutions $\Phi^t(x, \xi)$ in the sense of Carathéodory, i.e.:

$\Phi^t(x, \xi) = x + \int_0^t f(\Phi^s(x, \xi), \xi_s)\, ds,$

for any $\xi \in U$ and all initial conditions x in M, and the solutions are absolutely
continuous in t. Furthermore, solutions depend continuously on $\xi$ in the space U.
By the assumptions, $\Phi^t(\cdot, \xi): M \to M$ is a diffeomorphism for any $\xi$ and $t \geq 0$.
Further, if $\xi$ is continuous, then $\Phi^t$ is a classical solution. We also consider the
skew-product flow on $U \times M$ given by $S_t = (\theta_t, \Phi^t)$.

We will suppose the following condition on the noise:

(H1) There exist $\varepsilon_1 > 0$ and $t_1 > 0$ such that

$\Phi^t(x, U) \supseteq B(\Phi^t(x, 0), \varepsilon_1) \qquad \forall t > t_1, \; \forall x \in M.$

The assumption (H1) can be interpreted as guaranteeing that the perturbations are
sufficiently robust.
We call a set $C \subset M$ a forward invariant set if

$\Phi^t(C, U) \subset C \qquad (9.3)$

for all $t \in \mathbb{R}^+$. There is a partial ordering $\preceq$ on the collection of forward invariant
sets by inclusion, i.e. $C' \preceq C$ if $C' \subset C$. We call C a minimal forward invariant set,
abbreviated MFI set, if it is minimal with respect to the partial ordering $\preceq$.
Theorem 1 ([26]). Let (9.2) be a random differential equation with $\varepsilon$-level noise
whose flow satisfies (H1) on a compact manifold M. Then there are a finite number
of MFI sets $E_1, \ldots, E_k$ in M. Each MFI set is open and connected. The closures of
different MFI sets are disjoint.
We note that the concept of MFI set is the same as the concept of invariant control
sets used in control theory [18, 19]. Up to this point the discussion is topological:
MFI sets can be studied without assumptions about the noise realizations and can
in particular be studied for differential inclusions [8, 33]. In deterministic systems,
forward invariant sets are commonly called trapping regions and attractors are
analogous to MFI sets. We will see later in the case of small noise, this relationship
is more than analogy.
We continue with a discussion of stationary measures. For this we assume
conditions on the distribution of transition probabilities. We suppose that U has
a $\theta_t$-invariant probability measure P. Consider the evaluation operator $\pi_t: U \to
\bar{B}^n(\varepsilon)$ given by $\pi_t(\xi) = \xi_t$. Also consider the measure

$\nu = (\pi_t)_* P$

on $\bar{B}^n(\varepsilon)$. Since P is $\theta_t$-invariant, it follows easily that $\nu$ is independent of t. We call $\nu$
the distribution of the noise. Let x be a point in M. We define the push-forward
of P from U to M via $\Phi^t$ as the probability measure $\Phi^t(x)_* P$, which acts on continuous functions
$\psi: M \to \mathbb{R}$ by integration as:

$(\Phi^t(x)_* P)(\psi) = \int_U \psi(\Phi^t(x, \xi))\, dP(\xi).$

The topological support of P may, for instance, be the continuous functions
$C(\mathbb{R}, \bar{B}^n(\varepsilon))$, the càdlàg functions (see [2]), or even, as in [29], the closure of the
set of shifts of a specific function $\bar{\xi}$. We will assume that $\theta_t$ is ergodic w.r.t. P.
Rather than U, one may consider instead the topological support of P in U.

(H2) There exists $t_2 > 0$ such that $\Phi^t(x)_* P$ is absolutely continuous w.r.t. a
Riemannian measure m on M for all $t > t_2$ and all $x \in M$.
Assumption (H2) requires that the noise does not have spikes. We remark
that (H1) and (H2) may be replaced by conditions on the vector field and the noise.
A probability measure $\mu$ on M is said to be stationary if $P \times \mu$ is $S_t$-invariant, i.e. for any
Borel set $A \subset U \times M$:

$P \times \mu(S_t(A)) = P \times \mu(A) \qquad (9.4)$

for all $t \in \mathbb{R}^+$. We say that a stationary measure $\mu$ is ergodic if $P \times \mu$ is ergodic for
the skew product flow $S_t$. Birkhoff's ergodic theorem then ensures that

$\lim_{T \to \infty} \frac{1}{T} \int_0^T \psi(S_t(x, \xi))\, dt = \int_{M \times U} \psi\, d(P \times \mu)$

for $P \times \mu$-almost every $(\xi, x)$ and for every $\psi \in C^0(U \times M, \mathbb{R})$. In particular, if $\mu$
is ergodic, setting $\psi = \varphi \circ \pi_M$ for $\varphi \in C^0(M, \mathbb{R})$ and the coordinate projection
$\pi_M: U \times M \to M$, we obtain

$\lim_{T \to \infty} \frac{1}{T} \int_0^T \varphi(\Phi^t(x, \xi))\, dt = \int_M \varphi\, d\mu \qquad (9.5)$

for $P \times \mu$-a.e. $(\xi, x) \in U \times M$.
We say that a point $x \in M$ is $\mu$-generic if (9.5) holds for every $\varphi \in C^0(M, \mathbb{R})$
and for P-a.e. $\xi \in U$. The set of generic points of a stationary ergodic measure $\mu$
is called the ergodic basin of $\mu$ and will be denoted $E(\mu)$. An ergodic stationary
probability measure $\mu$ whose basin has positive volume, $m(E(\mu)) > 0$, will be called
a physical measure.
Theorem 2 ([22, 26]). Let (9.2) be a random differential equation with $\varepsilon$-level noise
whose flow satisfies (H1) and (H2) on a compact manifold M. Then there are a finite
number of physical, absolutely continuous stationary probability measures $\mu_1, \ldots, \mu_k$
on M. Each $\mu_i$ is supported on the closure of a minimal forward invariant set $E_i$.
Further, given any $x \in M$ and almost any $\xi \in U$, there exists $t^* = t^*(x, \xi)$ such that
$\Phi^t(x, \xi) \in E_i$ for some i and all $t > t^*$.
We end this general introduction with a simple but important example of how
MFI sets may occur. Suppose that the random differential equation (9.2) is a small
perturbation of a deterministic system. In this case, attractors generally become
minimal forward invariant sets. Consider a random differential equation:

$\dot{x} = f(x, \varepsilon \xi_t) \qquad (9.6)$

where $\varepsilon$ is a small parameter. For $\varepsilon = 0$ the system is deterministic.



Definition 1. A set A is called an attractor for (9.6) with $\varepsilon = 0$ if it satisfies:

A1. A is invariant and compact.
A2. There is a neighborhood U of A such that for all $x \in U$, $\Phi^t(x, 0) \in U$ for all
$t \geq 0$ and $\Phi^t(x, 0) \to A$ as $t \to \infty$.
A3. (a) There is some $x \in U$ such that A is the $\omega$-limit set of x, or (b) A contains a
point with a dense orbit, or (c) A is chain transitive.

If A satisfies A1 and A2 only, it is said to be asymptotically attracting or an
attracting set. We call U a trapping region.
Theorem 3. Suppose that (9.6) satisfies (H1) and that for $\varepsilon = 0$ it has an attractor
A. Then for $\varepsilon$ sufficiently small, (9.6) has an MFI set that is a small neighborhood
of A. Suppose that U is a trapping region for A; then, given $\varepsilon$ small enough, this MFI
set is unique in U.

Proof. Since A is asymptotically stable for $\varepsilon = 0$, there exists a smooth Lyapunov
function in a neighborhood of A; this Lyapunov function is strictly decreasing along
solutions outside of A and its level sets enclose trapping regions [36]. Thus given
any $\delta > 0$ there is a trapping region, which we denote by $U_\delta$, whose boundary is a
level set of the Lyapunov function. For a fixed $\delta$ it follows that for $\varepsilon$ small the
Lyapunov function is decreasing along solutions at the boundary of $U_\delta$. Thus $U_\delta$ is
a forward invariant set for (9.2) for any $\varepsilon$ sufficiently small. Thus $U_\delta$ must contain at
least one minimal forward invariant set. Now consider any point $x \in U_\delta$. It follows
easily that the set of all possible orbits of x, i.e.,

$O^+(x) = \bigcup_{t \geq 0} \Phi^t(x, U) \subset U_\delta, \qquad (9.7)$

is forward invariant [26]. Since A is asymptotically stable and x is inside its basin
(for $\varepsilon = 0$), it follows from (H1) that $O^+(x)$ intersects A. Since A is an attractor,
any of the conditions A3 (a), (b), or (c) together with (H1) implies that $A \subset O^+(x)$. Since
a forward invariant set must contain the forward orbits of all of its points, every
forward invariant set in $U_\delta$ contains A. Therefore, there is only one MFI set in $U_\delta$
and it contains A.

Now consider a trapping region $U \supset A$. Suppose that $\delta$ is small enough that
$U_\delta \subset U$ and $\varepsilon_1$ is small enough that the previous conclusion holds for $U_\delta$. Note that
$K = U \setminus U_\delta$ is compact and that the Lyapunov function is strictly decreasing on it.
Thus there exists $\varepsilon_2$ such that the Lyapunov function is also decreasing for the perturbed
flow on K for $\varepsilon \leq \varepsilon_2$. Thus there can be no forward invariant set in K for any $\varepsilon$ less
than the minimum of $\varepsilon_1$ and $\varepsilon_2$, and the conclusion holds. $\square$

Corollary 1. If $x_0$ is an asymptotically stable equilibrium for $\varepsilon = 0$, then for all
sufficiently small $\varepsilon > 0$ the system has a small MFI set that contains $x_0$. If $\Gamma$ is an
asymptotically stable limit cycle for $\varepsilon = 0$, then for small $\varepsilon > 0$ the system has an
MFI set that is a small neighborhood of $\Gamma$.

9.3 Random Differential Equations in One Dimension

We discuss the simplest case of random differential equations on a circle. Consider
an RDE

$\dot{x} = f(x, \xi_t) \qquad (9.8)$

with x from the circle. In the context of bifurcations it is convenient to assume that
the noise takes values from $\Delta = \bar{B}^1(\varepsilon) = [-\varepsilon, \varepsilon]$ and the following:

(H3) For each x the map $\Delta \to T_x M$ given by $v \mapsto f(x, v)$ is a diffeomorphism
with a strictly convex image $D(x) = f(x, \Delta)$.

Definition 2. We say that an MFI set E is isolated or attracting if for any proper
neighborhood U ($E \subset U$) there is an open forward invariant set $F \subset U$ such that $E \subset
F$, F contains no other MFI set, and $\Phi^t(F, U) \subset F$ for all $t > 0$. Such a neighborhood
F is called an isolating neighborhood.
Also note that, under (H3), for each x, $f(x, \Delta)$ is a closed interval with endpoints
$f(x, -\varepsilon)$ and $f(x, \varepsilon)$. Thus there is an envelope of all possible vector fields, which
are bounded below and above by $f(\cdot, -\varepsilon)$ and $f(\cdot, \varepsilon)$. Denote by $f_-(\cdot)$ and $f_+(\cdot)$
the lower and upper vector fields.
Recall that MFI sets are invariant under forward solutions of the RDE for all
noise realizations, and minimal with respect to set inclusion. For RDEs on the circle,
the MFI sets are bounded open intervals or possibly the entire circle.

Proposition 1. If (a, b) is an MFI set, then for any $x \in (a, b)$,

$0 \in \operatorname{int}(f(x, \Delta)).$

Proof. If not, then there is an $x \in (a, b)$ such that either $f(x, \Delta) \leq 0$ or $f(x, \Delta) \geq 0$. In
the first case the forward invariance of (a, b) implies that (a, x) is forward invariant.
In the second case we obtain that (x, b) is forward invariant. Either case contradicts
the minimality of (a, b). $\square$

Proposition 2. If (a, b) is an MFI set, then

$f(a, v) \geq 0 \quad \text{and} \quad f(b, v) \leq 0 \qquad (9.9)$

for all $v \in \Delta = [-\varepsilon, \varepsilon]$, and moreover $f_-(a) = 0$ and $f_+(b) = 0$. Further, $f_-'(a) \leq 0$ and
$f_+'(b) \leq 0$.

Proof. The inequalities (9.9) are necessary for a and b to be boundary points of
an MFI set. The claim that $f_-(a) = f_+(b) = 0$ follows from (H1). The final claim
$f_-'(a) \leq 0$ and $f_+'(b) \leq 0$ then follows from the assumption that f is $C^1$. $\square$
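Proposition 2 makes the endpoints of a one-dimensional MFI set computable: a is a zero of the lower extremal field $f_-$ and b a zero of the upper field $f_+$. A sketch of ours, for the illustrative RDE $\dot{x} = x - x^3 + v$ with $v \in [-\varepsilon, \varepsilon]$:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Root of f on [lo, hi], assuming f changes sign there."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0.0) == (flo > 0.0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = 0.2

def f_minus(x):
    return x - x**3 - eps  # lower extremal vector field f_-

def f_plus(x):
    return x - x**3 + eps  # upper extremal vector field f_+

# Endpoints of the MFI set around the deterministic attractor x = 1:
a = bisect_root(f_minus, 0.7, 1.0)  # f_-(a) = 0, with f_-'(a) < 0
b = bisect_root(f_plus, 1.0, 1.5)   # f_+(b) = 0, with f_+'(b) < 0
```

Here $a \approx 0.879$ and $b \approx 1.088$; both endpoints are hyperbolic, so by the stability result below the MFI set (a, b) persists under small perturbations of f.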

We can distinguish the following types for the endpoints a and b, based on the
properties of f. We say that a is hyperbolic if $f_-'(a) \neq 0$, and similarly for b (with
$f_+'(b) \neq 0$). Otherwise, a or b is said to be non-hyperbolic. For one-dimensional RDEs the
following stability result is straightforward.


Fig. 9.1 (a) A stable one-dimensional MFI set. Both endpoints of E = (a, b) are hyperbolic.
(b) A random saddle-node in one dimension. E = (b, c) is minimal forward invariant. Taken from
Ref. [27]

Proposition 3. Given any f satisfying (H1), (H3), suppose that (a, b) is an MFI
set with both a and b hyperbolic. Then (a, b) is isolated with some isolating
neighborhood W. If $\tilde{f}$ is sufficiently close to f in the $C^1$ topology, then $\tilde{f}$ has a
unique MFI set $(\tilde{a}, \tilde{b})$ inside W. Further, $\tilde{a}$ and $\tilde{b}$ are close to a and b, respectively,
and are each hyperbolic.

Proof. If a is hyperbolic, it follows that $f(x, \xi_t) > 0$ for all x in some neighborhood
(c, a) and all $\xi$. Similarly, there is a neighborhood (b, d) on which $f(x, \xi_t) < 0$.
It follows that W = (c, d) is an isolating neighborhood for (a, b).

Now let $\delta > 0$ be sufficiently small so that $|f_-'(x)| > |f_-'(a)|/2$ for all $x \in [a - \delta,
a + \delta]$. If $\tilde{f}$ is within $|f_-'(a)|/2$ of f in the $C^1$ topology, then the conclusion holds. $\square$

We continue with families of RDEs and consider equations

$\dot{x} = f_\mu(x, \xi_t), \qquad (9.10)$

depending on both a deterministic parameter $\mu \in \mathbb{R}$ and noise of level $\varepsilon$. For
background on bifurcation theory in families of differential equations we recom-
mend [32].

Definition 3. We say that a one-parameter family of vector fields $g_\mu(x)$ generically
unfolds a quadratic saddle-node point at $x^*$ if $g(x^*) = 0$, $g'(x^*) = 0$, $g''(x^*) \neq 0$ and
$\partial g_\mu(x^*)/\partial \mu \neq 0$.

A one-dimensional RDE (9.10) generically unfolds a quadratic saddle-node at
$x^* = a$ or b of an MFI set (a, b) if one of the extremal vector fields $f_\mp(\cdot, \mu)$
generically unfolds a quadratic saddle-node at $x^*$ (Fig. 9.1).

Theorem 4 ([27]). In a generic one-parameter family of one-dimensional bounded
noise random differential equations (9.10) the only codimension one bifurcation of
an MFI set is the generic unfolding of a quadratic saddle-node.

Proof. By Proposition 3 an MFI set (a, b) is stable if a and b are both hyperbolic.
Thus a bifurcation can occur only if hyperbolicity is violated at one of the endpoints.
For codimension one, hyperbolicity cannot be violated at both endpoints
simultaneously.

If the stationary point is of odd order, a standard argument shows that the bifurcation is
not of codimension one. If the stationary point is of even order $\geq 4$, then standard
arguments show that the family is not generic. $\square$
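The saddle-node mechanism, and the hard (discontinuous) change it produces, can be sketched numerically. The example below is our own, not from [27]: in the family $\dot{x} = \mu + x - x^3 + v$, $v \in [-\varepsilon, \varepsilon]$, the lower extremal field $f_-(x) = \mu + x - x^3 - \varepsilon$ undergoes a quadratic saddle-node at $\mu^* = \varepsilon - 2/(3\sqrt{3}) \approx -0.185$ for $\varepsilon = 0.2$; below $\mu^*$ the left endpoint of the MFI set near $x = 1$ disappears, and worst-case trajectories escape to the other well.

```python
def worst_case_endpoint(mu, eps=0.2, dt=0.01, n=40000):
    """Integrate dx/dt = mu + x - x**3 - eps, i.e. the lower extremal
    field (noise held at its worst-case value -eps), starting at x = 1."""
    x = 1.0
    for _ in range(n):
        x += dt * (mu + x - x**3 - eps)
    return x

# mu = -0.15 > mu*: the left MFI endpoint survives, x stays positive.
# mu = -0.22 < mu*: the endpoint has vanished; x crosses to the other well.
```

The jump in the final state as $\mu$ passes $\mu^*$ is the discontinuous change of the support discussed above: a hard bifurcation.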


9.4 Random Differential Equations on Surfaces

We will consider bifurcations in a class of random differential equations

x′ = fλ(x, ξt)    (9.11)

as the parameter λ ∈ R is varied. Here x will belong to a smooth compact two-
dimensional surface M. We treat such random differential equations with bounded
noise, where ξt takes values in a closed disk Δ ⊂ R2. We will assume some regularity
conditions on the way the noise enters the equations. In particular, we assume that
the range of vectors fλ(x, Δ) is a convex set for each x ∈ M.
Let Δ ⊂ R2 be the unit disc. We will assume that fλ(x, v) is a smooth vector field
depending smoothly on the parameters λ ∈ R and v ∈ Δ, i.e. (x, v, λ) ↦ fλ(x, v) ∈ TM
is a C∞ smooth function. When discussing properties of single vector fields, we
suppress the dependence of the RDE on λ from the notation.
Definition 4. We will denote by R the space of bounded noise vector fields f
satisfying (H1), (H3). We will take as a norm on R the C∞ norm on the vector
fields f : M × Δ → TM.
Definition 5. We will say that an MFI set E for f is stable if there is a neighborhood
U ⊃ E such that if f̃ is sufficiently close to f in R then f̃ has exactly one MFI set
Ẽ ⊂ U and Ẽ is close to E in the Hausdorff metric. We will say that f ∈ R is stable
if all of its MFI sets {Ei} are stable.
Definition 6. A one-parameter family of RDEs in R is a mapping from an interval
(0, 1) given by λ ↦ fλ that is smooth in λ in R.
From (H3), the vectors fλ(x, Δ) range over a strictly convex set Dλ(x) ≡ fλ(x, Δ)
that is diffeomorphic to a closed disk and has a smooth boundary, varying smoothly
with x and λ. Define Kλ(x) as the cone of positive multiples of vectors in Dλ(x).
Again, whenever we are concerned with single RDEs, we suppress the dependence on
the parameter λ.
Definition 7. A point x ∈ M will be called stationary if 0 ∈ D(x), i.e. there is a
possible vector field for which x is fixed.
If 0 ∈ int D(x), then K(x) = R2. Outside the closed set R = {x ∈ M | 0 ∈ D(x)},
the cones K(x) depend smoothly on x. By (H3), if 0 ∈ ∂D(x), then K(x) is an open
half-plane. Consider the direction fields Ei, i = 1, 2, defined by the extremal half
lines in the cones K(x) over the open set P = M \ R. By standard results we can
integrate these two direction fields, obtaining two sets of smooth solution curves γi,
i = 1, 2, in P. Note that these two sets of curves each make a smooth foliation of P.
We remark that the direction fields Ei are defined on the closure of P, but may give
rise to nonunique solution curves at points in the boundary of P. Further, by the
assumptions, the angle between the direction fields at any point is bounded below.
However, at points on the boundary, the angle may be π, in which case the solution
curves are tangent or coincide (but flow in opposite directions).
9 Bifurcations of Random Differential Equations with Bounded Noise 143

Fig. 9.2 Extremal flow lines near a stationary point (left picture) or wedge point (right picture) on
the boundary of an MFI set. Taken from Ref. [27]

Definition 8. For each x ∈ P denote by γi(x), i = 1, 2, the two local solution curves
to the extremal direction fields. Denote by γi±(x) the forward and backward portions of
these curves.
We will build up a description of the possible boundary components of an MFI
set. To begin, for a point on the boundary either (1) K(x) is less than a half
plane, or (2) K(x) is an open half plane, in which case x must be a stationary point,
i.e. fλ(x, v) = 0 for some v ∈ Δ. We begin by classifying points of type (1).
Lemma 1. If x ∈ ∂E for an MFI set E and K(x) is less than a half plane, then
either:
• One of the local solution curves γi(x) coincides locally with ∂E, or,
• Both backward solution curves γi−(x) belong to the boundary ∂E.
Definition 9. We call a boundary point, x, of an MFI set, E, regular if one of γi(x)
coincides locally with ∂E. We call a segment of the boundary of E a solution arc if
it consists of regular points. If both γi−(x) belong to ∂E locally, then we call x a wedge
point.
The following theorem describes the geometry of MFI sets for typical RDEs on
compact surfaces. Figure 9.2 depicts parts of the boundary and extremal flow lines
near stationary and wedge points.
Theorem 5 ([27]). There is an open and dense set V ⊂ R so that for any random
differential equation in V, an MFI set E has piecewise smooth boundary consisting
of regular curves, a finite number of wedge points, and a finite number of hyperbolic
points that belong to disks of stationary points inside E. Further, if a component
of ∂E is a periodic cycle, it has Floquet multiplier less than one. Any RDE in V is stable.
Codimension one bifurcations in families of RDEs on compact surfaces are
described in the following result.
Theorem 6 ([27]). There exists an open dense set O of one-parameter families of
RDEs in R such that the only bifurcations that occur are one of the following:
1. Two sets of stationary points collide at a stationary point on the boundary ∂E
which undergoes a saddle-node bifurcation.

2. An MFI set E collides with a set of stationary points outside E at a saddle point p.
3. The Floquet multiplier of a non-isolated periodic cycle becomes one and then
the cycle disappears.

9.5 Random Hopf Bifurcation

We will consider Hopf bifurcations in a class of random differential equations on
the plane, as a case study of bifurcations in two-dimensional RDEs.
Consider a smooth family of planar random differential equations

(ẋ, ẏ) = fλ(x, y) + ε(u, v),    (9.12)

where λ ∈ R is a parameter and u, v are noise terms from Δ = {u² + v² ≤ 1},
representing radially symmetric noise. We consider noise such that hypotheses (H1)
and (H3) are fulfilled. We assume that without the noise terms, i.e. for ε = 0,
the family of differential equations unfolds a supercritical Hopf bifurcation at
λ = 0 [32].
In a supercritical Hopf bifurcation taking place in (9.12) for ε = 0, a stable limit
cycle appears in the bifurcation for λ > 0. For a fixed negative value of λ, the
differential equations without noise possess a stable equilibrium and the RDE with
small noise has an MFI set which is a disk around the equilibrium. Likewise, at a
fixed positive value of λ for which (9.12) without noise possesses a stable limit
cycle, small noise will give an annulus as MFI set. A bifurcation of stationary
measures takes place when varying λ. We will prove the following bifurcation
scenario for small ε > 0: the RDE (9.12) undergoes a hard bifurcation in which
a globally attracting MFI set changes discontinuously, by suddenly developing a
hole. This hard bifurcation takes place at a delayed parameter value λ = O(ε^{2/3})
as described in Theorem 7 below.
For studies of Hopf bifurcations in stochastic differential equations (SDEs)
we refer to [4, 6, 10, 41]. In such systems there is a unique stationary measure,
with support equal to the entire state space. Bifurcations of supports of stationary
measures, as arising in RDEs with bounded noise, do not arise in the context of
SDEs.
Theorem 7 ([12]). Consider a family of RDEs (9.12) depending on one parameter
λ, that unfolds, when ε = 0, a supercritical Hopf bifurcation at λ = 0.
For small ε > 0 and λ near 0, there is a unique MFI set Eλ. There is a single
hard bifurcation at λbif = O(ε^{2/3}) as ε → 0. At λ = λbif the MFI set Eλ changes from
a set diffeomorphic to a disk for λ < λbif to a set diffeomorphic to an annulus for
λ ≥ λbif. At λbif the inner radius of this annulus is r = O(ε^{1/3}).
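These scalings can be sanity-checked with a back-of-the-envelope reduction, which is our own and not a computation from [12]: for radially symmetric noise of amplitude ε, the most inward-pushing radial drift of the supercritical Hopf normal form is ṙ = λr − r³ − ε, so an inner boundary of the MFI set can exist only when λr − r³ = ε has a positive solution, i.e. when the maximum of λr − r³ over r > 0 reaches ε.

```python
# Heuristic check (ours) of the Theorem 7 scalings for the radial drift
# r' = lam*r - r**3 - eps.  The maximum of lam*r - r**3 over r > 0 is
# 2*(lam/3)**1.5, attained at r = sqrt(lam/3); equating it to eps gives
# lam_bif = 3*(eps/2)**(2/3) = O(eps**(2/3)), with inner radius
# r = sqrt(lam_bif/3) = O(eps**(1/3)) at the bifurcation moment.

def lam_bif(eps):
    """Smallest lam for which max_{r>0}(lam*r - r**3) equals eps."""
    return 3.0 * (eps / 2.0) ** (2.0 / 3.0)

def inner_radius_at_bif(eps):
    """Inner radius at the bifurcation: r = sqrt(lam_bif/3)."""
    return (lam_bif(eps) / 3.0) ** 0.5
```

Dividing by ε^{2/3} and ε^{1/3} respectively gives constants, which is exactly the content of the O(·) statements in the theorem.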
Figure 9.3 shows images taken from [12] of numerically computed invariant
densities. For these images the RDEs are taken in normal form as

Fig. 9.3 Images of the invariant densities for system (9.13) with ε = 0.1 and increasing values of
λ. From top left λ = 0.004, 0.020, 0.041, 0.204, 0.407, 0.448. The bottom middle plot (λ = 0.407)
is immediately after the hard bifurcation. In all six plots the circle exterior to the visible density is
the outer boundary of the MFI set. In the last two plots the interior circle is the inner boundary of
the MFI set. Figures taken from [12], © American Institute for Mathematical Sciences 2012

x′ = λx − y − x(x² + y²) + εu,    (9.13)
y′ = x + λy − y(x² + y²) + εv.

The noise terms u and v are generated via the stochastic system:

du = dW1,    (9.14)
dv = dW2,

where dW1 and dW2 are independent (of each other) normalized white noise
processes. Equations (9.14) are interpreted in the usual way as Itô integral equations.
In this setting, in order to assure boundedness, (u, v) are restricted to the unit disk by
imposing reflective boundary conditions.
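A minimal numerical sketch of (9.13)–(9.14) can be assembled with an Euler scheme; the discretisation, the fold-back implementation of the reflective boundary, the step size, and the seed below are all our own choices rather than anything prescribed in [12]:

```python
import math
import random

def noise_step(u, v, dt, rng):
    """One Euler step of du = dW1, dv = dW2 with radial reflection at the
    unit circle.  The fold-back rule is our own simple discretisation of
    the reflective boundary condition."""
    u += math.sqrt(dt) * rng.gauss(0.0, 1.0)
    v += math.sqrt(dt) * rng.gauss(0.0, 1.0)
    r = math.hypot(u, v)
    if r > 1.0:                      # fold the overshoot back into the disk
        u, v = u * (2.0 - r) / r, v * (2.0 - r) / r
    return u, v

def simulate(lam, eps, dt=1e-3, steps=20000, seed=0):
    """Euler integration of the normal form (9.13) driven by (9.14)."""
    rng = random.Random(seed)
    x, y, u, v = 0.1, 0.0, 0.0, 0.0
    for _ in range(steps):
        r2 = x * x + y * y
        x, y = (x + (lam * x - y - x * r2 + eps * u) * dt,
                y + (x + lam * y - y * r2 + eps * v) * dt)
        u, v = noise_step(u, v, dt, rng)
    return x, y, u, v
```

Histogramming many such trajectories is how pictures like Fig. 9.3 can be approximated in practice.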
The deterministic Hopf bifurcation involves the creation of a limit cycle. In the
remaining part of this section we discuss the occurrence of attracting random cycles.
Random cycles are closed curves that are invariant for the skew-product system
and thus have a time-dependent position in state space depending on the noise
realization. The following material fits into the philosophy advocated by Arnold
in [2] of studying random dynamical systems through a skew product dynamical
systems approach, so as to capture dynamics with varying initial conditions.

A comparison of bifurcations in both contexts of stationary measures and of


invariant measures for the skew product system is contained in [45] in the context
of random circle diffeomorphisms.
Random cycles are defined in analogy with random fixed points [2]. They
are most elegantly treated in a framework of invertible flows, where the noise
realizations ω and the flow φt(x, ω) are given for two-sided time t ∈ R. We
henceforth consider the skew product flow

(x, ω) ↦ (φt(x, ω), θtω),

with

θt ∘ θs = θt+s

for t, s ∈ R.
Recall that a random fixed point is a map R : U → R2 that is flow invariant,

φt(R(ω), ω) = R(θtω)

for P almost all ω. A random cycle is defined as a continuous map S : U × S1 → R2
that gives an embedding of a circle for P almost all ω ∈ U and is flow invariant in
the sense

φt(S(ω, S1), ω) = S(θtω, S1).

Different regularities of the embeddings S1 → S(ω, S1), such as Lipschitz continuity
or some degree of differentiability, may be considered.
The random cycle is attracting if there is a neighborhood U of the MFI set Eλ,
so that for all x ∈ U, the distance between φt(x, ω) and S(θtω, S1) goes to zero as
t → ∞.
The following result establishes the occurrence of attracting random cycles
following the hard bifurcation, for small noise amplitudes.
Theorem 8. Consider a family of RDEs (9.12) depending on one parameter λ, that
unfolds, when ε = 0, a supercritical Hopf bifurcation at λ = 0.
For values of (λ, ε) with λ > λbif and ε sufficiently small, the MFI set Eλ
is diffeomorphic to an annulus and the flow φt admits a Lipschitz continuous
attracting random cycle S : U × S1 → R2 inside Eλ.
Proof. The proof is an adaptation of the construction of limit cycles in the
differential equations without noise. The boundedness of the noise allows one to
replace the estimates by estimates that are uniform in the noise for small enough
noise amplitudes. We indicate the steps in a proof, leaving details to the reader.
First note that we may replace the flow φt by its time one map, which is
a diffeomorphism on the plane; this diffeomorphism and its derivatives depend
continuously on the noise ω. So, consider a map z ↦ fλ(z; εω) on the complex
plane C, unfolding a supercritical Neimark–Sacker bifurcation in λ, depending on
bounded noise ω ∈ U and on the parameter ε that multiplies the amplitude of the

noise. Such maps without noise, i.e. with ε = 0, are known to possess invariant
circles for small positive values of λ. We follow their construction as elaborated in
[35]. With a normal form transformation, applied to the map without noise, a map

Fλ(z) = z(1 + λ + f₁(λ)|z|²) e^{i(θ(λ) + f₃(λ)|z|²)} + O(|z|⁵)    (9.15)

on the complex plane C is obtained. The reasoning in [35] continues with the
following steps. Apply a rescaling and change to polar coordinates to write
z = √(−λ/f₁(λ)) e^{iφ}(1 + u). Expressing Fλ in (φ, u) coordinates gives a map of the form

Fλ(φ, u) = (φ + θ₁(λ) + λ^{3/2} Kλ(u, φ), (1 − 2λ)u + λ^{3/2} Hλ(u, φ)).    (9.16)

Next a graph transform ℱλ is defined on a class of Lipschitz continuous graphs
Lip1(S1, [−1, 1]), with Lipschitz constant bounded by 1, equipped with the supnorm.
It is determined by

graph ℱλ(w) = Fλ(graph w).    (9.17)

This is shown to be a contraction, leading to a unique fixed point which is the
attracting invariant circle.
For ε small enough, this reasoning carries through to the random map as follows.
First a graph transform depending on ω ∈ U is defined. That is, Fλ from (9.15)
(and (9.16)) gets replaced by a map Fλ,ω, and the graph transform likewise by ℱλ,ω.
Iterates of ℱλ,ω are obtained as

ℱ^n_{λ,ω} = ℱ_{λ,θ^{n−1}ω} ∘ ··· ∘ ℱ_{λ,θω} ∘ ℱ_{λ,ω}.    (9.18)

The previous contraction argument is replaced by pull-back convergence:

S(ω, S1) = lim_{n→∞} ℱ^n_{λ,θ^{−n}ω}(w),    (9.19)

for any w ∈ Lip1(S1, [−1, 1]). The graph of the limit function is called the pull-back
attractor; its orbit under the flow φt is the random limit cycle. Note that this is the
point where two-sided time is needed.
The computations to check convergence in (9.19) are most easily carried out
by writing ω = re^{iψ} for the noise and expanding Fλ,εω in ε for small ε: writing
Fλ = Ae^{iΘ} and ω = re^{iψ} we get

F_{λ,εω} = (A + O(ε)) e^{i(Θ + A^{−1}O(ε))}.    (9.20)

Then following the computations using the rescaling, assuming ε is sufficiently
small for given λ, makes clear that the graph transform remains well defined, i.e.
maps Lip1(S1, [−1, 1]) into itself, and a contraction (for each fixed ω).
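The pull-back construction (9.19) can be seen in miniature for a random affine contraction of the line; this toy (our own, far simpler than the graph transform) shows why iterating from ever earlier times with one fixed noise realisation yields a limit independent of the initial value:

```python
import random

# Toy pull-back convergence: for the random affine contraction
# x -> a*x + xi_n with |a| < 1, applying the maps driven by one fixed
# noise realisation to ANY starting value w yields (up to a**n) the same
# point -- the random fixed point for that realisation.

def pullback(noise, w, a=0.5):
    """Apply the maps driven by noise[0], ..., noise[-1] to the value w."""
    x = w
    for xi in noise:
        x = a * x + xi
    return x

rng = random.Random(0)
noise = [rng.uniform(-1.0, 1.0) for _ in range(60)]  # one fixed realisation
x1 = pullback(noise, w=10.0)
x2 = pullback(noise, w=-7.0)   # different start, same noise realisation
# x1 and x2 agree to within 0.5**60 * |10 - (-7)|: the pull-back limit
# depends only on the noise, not on the initial value.
```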

Finally, the contraction properties of the graph transform, uniform in the random
parameter, imply that the random cycle is attracting. □

We have confined ourselves to a statement on Lipschitz continuous random
cycles; the graph transform techniques, however, allow establishing more smooth-
ness [35]. The result does not discuss the dynamics on the random cycle; it is still
possible to find an attracting random fixed point on it, compare [4, 6].

References

1. Araújo, V.: Ann. Inst. Henri Poincaré, Analyse non linéaire 17, 307 (2000)
2. Arnold, L.: Random Dynamical Systems. Springer, Berlin (1998)
3. Arnold, L.: IUTAM Symposium on Nonlinearity and Stochastic Structural Dynamics, Madras,
1999. Solid Mech. Appl., vol. 85, pp. 15. Kluwer Academic, Dordrecht (2001)
4. Arnold, L., Bleckert, G., Schenk-Hoppé, K.R.: In: Crauel, H., Gundlach, M. (eds.) Stochastic
Dynamics (Bremen, 1997), pp. 71. Springer, Berlin (1999)
5. Arnold, L., Kliemann, W.: On unique ergodicity for degenerate diffusions. Stochastics 21, 41
(1987)
6. Arnold, L., Sri Namachchivaya, N., Schenk-Hoppé, K.R.: Int. J. Bifur. Chaos Appl. Sci. Eng.
6, 1947 (1996)
7. Arnold, V.I., Afraimovich, V.S., Ilyashenko, Yu.S., Shilnikov, L.P.: Bifurcation Theory and
Catastrophe Theory. Springer, Berlin (1999)
8. Aubin, J.P., Cellina, A.: Differential Inclusions. Springer, Berlin (1984)
9. Bakhtin, Y., Hurth, T.: Nonlinearity 25, 2937 (2012) (unpublished)
10. Bashkirtseva, I., Ryashko, L., Schurz, H.: Chaos Solit. Fract. 39, 72 (2009)
11. Bena, I.: Int. J. Modern Phys. B 20, 2825 (2006)
12. Botts, R.T., Homburg, A.J., Young, T.R.: Discrete Contin. Dyn. Syst. Ser. A 32, 2997 (2012)
13. Bovier, A., Eckhoff, M., Gayrard, V., Klein, M.: J. Eur. Math. Soc. 6, 399 (2004)
14. Bovier, A., Gayrard, V., Klein, M.: J. Eur. Math. Soc. 7, 69 (2005)
15. Colombo, G., Pra, P.D., Krivan, V., Vrkoc, I.: Math. Control Signals Syst. 16, 95 (2003)
16. Colonius, F., Gayer, T., Kliemann, W.: SIAM J. Appl. Dyn. Syst. 7, 79 (2007)
17. Colonius, F., Homburg, A.J., Kliemann, W.: J. Differ. Equat. Appl. 16, 127 (2010)
18. Colonius, F., Kliemann, W.: In: Crauel, H., Gundlach, M. (eds.) Stochastic Dynamics. Springer,
Berlin (1999)
19. Colonius, F., Kliemann, W.: The Dynamics of Control. Birkhäuser Boston, Boston (2000)
20. Crauel, H., Imkeller, P., Steinkamp, M.: In: Crauel, H., Gundlach, M. (eds.) Stochastic
Dynamics. Springer, Berlin (1999)
21. d'Onofrio, A., Gandolfi, A., Gattoni, S.: Phys. A Stat. Mech. Appl. 91, 6484 (2012)
22. Doob, J.L.: Stochastic Processes. Wiley, New York (1953)
23. Froyland, G., Stancevic, O.: ArXiv:1106.1954v2 [math.DS] (2011) (unpublished)
24. Gayer, T.: J. Differ. Equat. 201, 177 (2004)
25. Ghil, M., Chekroun, M.D., Simonnet, E.: Phys. D 237, 2111 (2008)
26. Homburg, A.J., Young, T.R.: Regular Chaotic Dynam. 11, 247 (2006)
27. Homburg, A.J., Young, T.R.: Topol. Methods Nonlin. Anal. 35, 77 (2010)
28. Horsthemke, W., Lefever, R.: Noise-Induced Transitions: Theory and Applications in Physics,
Chemistry, and Biology. Springer Series in Synergetics, vol. 15. Springer, Berlin (1984)
29. Johnson, R.: In: Crauel, H., Gundlach, M. (eds.) Stochastic Dynamics. Springer, Berlin (1999)
30. Kliemann, W.: Ann. Probab. 15, 690 (1987)
31. Kunita, H.: Stochastic Flows and Stochastic Differential Equations. Cambridge University
Press, Cambridge (1990)

32. Kuznetsov, Yu.A.: Elements of Applied Bifurcation Theory. Springer, Berlin (2004)
33. Lamb, J.S.W., Rasmussen, M., Rodrigues, C.S.: ArXiv:1105.5018v1 [math.DS] (2011)
(unpublished)
34. Mallick, K., Marcq, P.: Eur. Phys. J. B 36 119 (2003)
35. Marsden, J.E., McCracken, M.: The Hopf Bifurcation and Its Applications. Springer, Berlin
(1976)
36. Nadzieja, T.: Czechoslovak Math. J. 40, 195 (1990)
37. Ridolfi, L., DOdorico, P., Laio, F.: Noise-Induced Phenomena in the Environmental Sciences.
Cambridge University Press, Cambridge (2011)
38. Rodrigues, C.S., Grebogi, C., de Moura, A.P.S.: Phys. Rev. E 82, 046217 (2010)
39. Schütte, C., Huisinga, W., Meyn, S.: Ann. Appl. Probab. 14, 419 (2004)
40. Tateno, T.: Phys. Rev. E 65, 021901 (2002)
41. Wieczorek, S.: Phys. Rev. E 79, 036209 (2009)
42. Zeeman, E.C.: Nonlinearity 1, 115 (1988)
43. Zeeman, E.C.: Bull. Lond. Math. Soc. 20, 545 (1988)
44. Zmarrou, H., Homburg, A.J.: Ergod. Theor. Dyn. Syst. 27, 1651 (2007)
45. Zmarrou, H., Homburg, A.J.: Discrete Contin. Dyn. Syst. Ser. B 10, 719 (2008)
Chapter 10
Effects of Bounded Random Perturbations
on Discrete Dynamical Systems

Christian S. Rodrigues, Alessandro P.S. de Moura, and Celso Grebogi

Abstract In this chapter we discuss random perturbations and their effect on


dynamical systems. We focus on discrete time dynamics and present different
ways of implementing the random dynamics, namely the dynamics of random
uncorrelated noise and the dynamics of random maps. We discuss some applications
in scattering and in escaping from attracting sets. As we shall see, the perturbations
may dramatically change the asymptotic behaviour of these systems. In particular,
in randomly perturbed non-hyperbolic scattering, trajectories may escape from
regions where they would otherwise be expected to remain trapped forever. The dynamics also
gains hyperbolic-like characteristics. These are observed in the decay of the survival
probability as well as in the fractal dimension of singular sets. In addition, we show
that random perturbations also trigger escape from attracting sets, giving rise to
transport among basins. Throughout the chapter, we motivate the application of such
processes. We finish by suggesting some possible further applications.

Keywords Bounded noises · Discrete-time dynamical systems · Random perturbations · Escape from attracting sets · Fractal dimension

10.1 Introduction

The essence of Dynamics or Dynamical Systems is to mathematically understand


general laws governing processes going through transformations in time. There

C.S. Rodrigues ()


Max Planck Institute for Mathematics in the Sciences, Inselstr., 22, 04103 Leipzig, Germany
e-mail: christian.rodrigues@mis.mpg.de
A.P.S. de Moura C. Grebogi
Department of Physics and Institute for Complex Systems and Mathematical Biology, King's
College, University of Aberdeen, Aberdeen AB24 3UE, UK
e-mail: a.moura@abdn.ac.uk; grebogi@abdn.ac.uk

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 151


Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5 10, Springer Science+Business Media New York 2013
152 C.S. Rodrigues et al.

are many examples of such processes arising from different areas of knowledge.
One could, for instance, study the time-dependent scattering of a plankton population
by the sea streaming around an island, or the behaviour of charged particles
travelling through a magnetic field, the dynamics of intervals between neural spikes,
the oscillation of share prices in the stock market, among many other phenomena.
Their dynamical behaviour is described in terms of observable quantities. We say
that our system evolves in a space of states or phase space M, that is, the collection
of relevant variables describing the dynamics. In our examples, it could be the
concentration of plankton, the position and speed of charged particles, and so on. In
many cases the phase space M is a subset of a Euclidean space Rn.1
The dynamical behaviour of such quantities can be modelled, in general, by
systems of ordinary differential equations. Thus, the model is thought to evolve
in continuous time. Alternatively, we can think of analysing snapshots of the
continuous time dynamics, which gives rise to discrete time models. In this case,
the present state, say given by the possibly multidimensional variable x ∈ M, evolves
every unit of time under a given rule f : M → M to the state f(x). For a given
initial condition x0 , its associated orbit is the sequence of points xn , obtained by the
iteration of our rule, such that, for n = 1, 2, . . ., we have xn+1 = f (xn ).
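As a concrete illustration of this iteration scheme (the logistic map and its parameter are our own choice of example, not one fixed by the chapter):

```python
# Iterate a discrete-time dynamical system x_{n+1} = f(x_n).

def orbit(f, x0, n):
    """Return the finite orbit [x0, f(x0), ..., f^n(x0)]."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

# Illustrative rule: the logistic map f(x) = a*x*(1-x) on M = [0, 1].
f = lambda x: 3.8 * x * (1.0 - x)
xs = orbit(f, 0.2, 5)
```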
Dynamically, we would like to reach robust conclusions from a model by seeking
methods and tools which describe the behaviour of most orbits as time goes
to infinity, rather than focusing on a single trajectory. Another important concern
is the understanding of how stable the asymptotic behaviour is under random
perturbations. This is a natural point of view since observations
of phenomena in nature are always subject to small fluctuations, that is, to some level of
noise. Therefore, the physical perception would intrinsically correspond to some
random contaminated process rather than being a purely deterministic one. The
concept of random dynamical systems is relatively recent, although the interest in
random perturbation of dynamical systems goes back to Kolmogorov. The main
purpose of this chapter is to give an overview on the dynamics of random bounded
perturbed discrete time systems. Obviously the treatment here is not complete, since
the subject is very broad. Instead, we present the general framework and discuss
some examples of applications and the effects of such perturbations. The remaining
part of this chapter is divided as follows. In Sect. 10.2, we discuss generalities
of random bounded perturbed dynamics. Then we present some applications to
scattering dynamics in Sect. 10.3, and to escape from attracting sets in Sect. 10.4.
Each application uses a different approach to randomly perturb the dynamics,
and we comment on the choice of perturbation scheme based on what we are
interested in. We finish this chapter by presenting some further applications and
possible directions.

1 More generally one should consider M to be a manifold.


10 Effects of Bounded Random Perturbations on Discrete Dynamical Systems 153

10.2 Bounded Random Perturbations

There are different ways in which randomness comes into play in dynamical systems.
We shall present two different frameworks to study the random perturbation of
dynamics. In both cases we consider bounded perturbations. The choice between either
mechanism basically depends on the phenomena one is interested in measuring or
modelling, as we shall discuss in the following sections. In general, we also want
some sort of regularity in the class of functions to be considered; for example, it is
common to deal with smooth functions whose inverses are differentiable.

10.2.1 Dynamics Under Random Independent Noise

The first kind of perturbed system to be discussed here is given by additive
uncorrelated uniform noise. Let us first consider a deterministic discrete time
dynamics as in the introduction, xn+1 = f(xn). Imagine we iterate the point xj by
the deterministic system f. However, suppose that at the moment we take f(xj)
to evolve the dynamics again, we make a small error, given our limited precision.
Although the error has no preferred direction, we ensure that it is always less than
ε. In other words, our perturbed system is described by a random dynamics that in
this context can be written as

F(xj) = f(xj) + ξj,

with ||ξj|| < ε, where ξj is the vector of random noise added to the deterministic
dynamics at the iteration j, and ε is its maximum amplitude. Note that the sequences
of perturbations applied to each trajectory are independent. We illustrate this in
Fig. 10.1. The idea of perturbations not having preferential directions is to ensure
that the perturbed trajectory should scatter uniformly around the unperturbed one.

Fig. 10.1 Illustrative picture of the perturbed dynamics with amplitude of noise
||ξj|| < ε. From the paper [19]. Copyright: American Physical Society 2010
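The scheme can be sketched as follows; drawing ξj uniformly from the disk of radius ε is one simple way to realise "no preferred direction and norm below ε", and the toy map, names, and parameters are our own illustrative choices:

```python
import math
import random

def bounded_noise(eps, rng):
    """A 2-D perturbation with no preferred direction and norm < eps
    (uniform on the disk of radius eps; one possible choice)."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = eps * math.sqrt(rng.uniform(0.0, 1.0))
    return (r * math.cos(theta), r * math.sin(theta))

def perturbed_step(f, x, eps, rng):
    """One step of the perturbed dynamics F(x) = f(x) + xi, ||xi|| < eps."""
    fx = f(x)
    xi = bounded_noise(eps, rng)
    return (fx[0] + xi[0], fx[1] + xi[1])

rng = random.Random(0)
f = lambda p: (p[1], -0.5 * p[0])   # a toy planar map (illustration only)
x = (1.0, 0.0)
for _ in range(100):
    x = perturbed_step(f, x, 0.01, rng)
```

Using a fresh noise vector per step and per trajectory reproduces the independence of the perturbation sequences described above.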

10.2.2 Dynamics of Random Maps

The second kind of perturbed dynamics focuses on random perturbations of the
parameters defining the system. Here, we use the convention of calling these systems
random maps. In this case, all orbits are evolved under the same sequence of
randomly chosen maps.2 The dynamics of random maps is given by

xn+1 = fn (xn ),

where we randomly choose slightly different maps fn for each iteration n (see
Eq. (10.4) below). It is known that such dynamics has well-defined (in the ensemble
sense) values of dynamical invariants such as fractal dimensions and Lyapunov
exponents [13]. Note that we associate the choice of the map with the iteration.
Therefore, all initial conditions in a given iteration are mapped by the same sequence
of random maps.
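A minimal sketch of this scheme, in which one shared sequence of parameter values drives every orbit (the jittered logistic family and all numbers are our illustrative choices):

```python
import random

def random_map_orbits(make_f, params, x0s):
    """Evolve several initial conditions under ONE shared sequence of maps:
    params[n] selects the map f_n used at iteration n for every orbit."""
    orbits = [[x] for x in x0s]
    for a in params:
        f = make_f(a)
        for orb in orbits:
            orb.append(f(orb[-1]))
    return orbits

# Slightly different logistic maps chosen randomly once per iteration.
rng = random.Random(0)
params = [3.8 + rng.uniform(-0.01, 0.01) for _ in range(50)]
make_f = lambda a: (lambda x: a * x * (1.0 - x))
orbits = random_map_orbits(make_f, params, [0.2, 0.20001])
```

Because the map sequence is shared, two identical initial conditions produce identical orbits; the randomness lies in the parameters, not in per-trajectory noise.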
In the following sections we shall discuss some effects of random bounded
perturbation to discrete time dynamics.

10.3 Random Perturbation on Scattering

We start by analysing the effect of random perturbations on processes described by


chaotic scattering. Such dynamics are used to describe a great number of physical
processes. Although this has been a very active topic in the past few years, previous
studies have been rather focused on deterministic scattering. For non-hyperbolic
scattering, in particular, previous work has focused on trajectories starting outside
invariant structures, because the ones starting inside one are expected to stay trapped
there forever. This is true though only for the deterministic case. We shall see in what
follows that, under small random fluctuations of the field, trajectories drastically
change their statistical behaviour. The non-hyperbolic dynamics gains hyperbolic
characteristics due to the effect of the random perturbations.

10.3.1 Chaotic Scattering

Scattering dynamics is the general term referring to dynamics taking place in an
unbounded phase space, such that there is a small localised interaction region where

2 Within the mathematical literature the random perturbations are defined in terms of spaces of

maps. In this setting, we have a family of maps and the iteration is obtained by randomly selecting
them. Thus it is said to be a family of random maps even when different sequences are applied to
different orbits.

the dynamics is non-trivial. Trajectories3 can either come from infinity into the
interaction region before being scattered again towards infinity, or be initialised inside
the interaction region and escape towards infinity.
The way trajectories are scattered fundamentally depends on the characteristics
of the scattering region. We call it chaotic scattering whenever the dynamics inside
the scattering region is chaotic, that is, whenever characteristic quantities associated with the
particles, or trajectories, during the scattering are sensitive to their initial conditions.
There are many ways of detecting such sensitivity. For example, one may be
interested in measuring the time that a set of particles takes to leave a given region
after being sprinkled in. When the scattering is chaotic, such time delay, according
to the initial position of the particles, diverges on a fractal set [4, 5]. Another way
of characterising such sensitivity is by measuring the scattering angle as a function
of the initial angle (or position) of the particles. These are two examples of what
is more generally called scattering functions [4]. The chaotic behaviour of such
scattering is due to the presence of a chaotic non-attracting set containing periodic
orbits of arbitrarily large periods as well as aperiodic orbits distributed on a fractal
geometric structure in the phase space: the chaotic saddle [4].

10.3.1.1 Hyperbolic Chaotic Scattering

From the ergodic point of view, the dynamics of hyperbolic chaotic scattering is
explained by the existence of a chaotic saddle in invertible maps, or a chaotic
repeller in non-invertible ones. It is a zero-measure non-attracting fractal set [4].
Therefore, a randomly chosen initial condition has full probability of escaping the
scattering region. Accordingly, the dynamics of an initial ensemble of particles with
smooth distribution is characterised by orbits that leave the region after a very short
transient, followed by a distribution of particles that leave the scattering region after
a long time [4].
Recall that for hyperbolic systems, the dynamics can be decomposed into
complementary stable and unstable subdynamics.4 As a consequence, points which
are infinitesimally displaced from each other approach or diverge exponentially,
if they are in the stable or unstable invariant directions, respectively. When the
scattering dynamics has an associated hyperbolic structure, we can imagine that
trajectories that approach the saddle along its stable direction and leave it along
its unstable direction will be displaced exponentially too. The overall implication
is that hyperbolic scattering is also associated with an exponential escape rate of
particles. Suppose that there are no disjoint saddles; due to the hyperbolic splitting,

3 Because scattering dynamics are so closely related to scattering of particles in physical systems,

we shall refer to the dynamics of initial conditions in a region of the phase space as dynamics of
particles started in such region.
4 The term subdynamics is used here as a simplification of the splitting of the tangent bundle. See,

for example, [4, 6].



any initial non-vanishing distribution around the stable manifold decays with an escape
rate independent of the initial density [4]. Thus, all orbits close to the chaotic
saddle are unstable. There is also a short transient due to some orbits escaping
without approaching the stable manifold. If a given number N0 of initial conditions
is randomly chosen within the scattering region, the decay of the number of particles
still in that region after time t scales as N(t) ∼ e^{−κt}, hence,

P(t) ∼ e^{−κt},    (10.1)

where P(t) is the probability of particles still remaining in the interaction region
after time t, and κ a constant5 [4].
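The exponential decay (10.1) is easy to reproduce in a toy model; the open tent map with slope 3 below is a standard textbook escape model and our own choice of illustration, with escape rate ln(3/2) per iteration (two thirds of the surviving measure remains at each step):

```python
import math
import random

def tent3(x):
    """Open tent map with slope 3: orbits escape [0, 1] through the
    middle-third gap -- a textbook hyperbolic escape model (our choice)."""
    return 3.0 * x if x < 0.5 else 3.0 * (1.0 - x)

def survival(n_particles, n_steps, seed=0):
    """Fraction of random initial conditions still in [0, 1] after each step."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n_particles)]
    surv = []
    for _ in range(n_steps):
        xs = [tent3(x) for x in xs]
        xs = [x for x in xs if 0.0 <= x <= 1.0]
        surv.append(len(xs) / n_particles)
    return surv

# P(n) ~ exp(-kappa*n) with kappa = ln(3/2), close to 0.405 per iteration.
surv = survival(200000, 10)
kappa_est = -math.log(surv[-1]) / 10
```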

10.3.1.2 Non-hyperbolic Chaotic Scattering

Non-hyperbolic chaotic scattering is also defined in an unbounded space. Never-


theless, the phase space in this sort of systems is, in analogy with more general
Hamiltonian dynamics, characterised by the presence of KolmogorovArnold
Moser (KAM) tori, which adds an extra complication to the interaction region. The
scattering region is characterised by the presence of main KAM islands, which are
surrounded by smaller islands, in a hierarchical structure ad infinitum. These islands
surround marginally stable periodic orbits. Orbits from the outside of a scattering
region can spend a long time in the vicinity of the KAM islands in an almost regular
behaviour before escaping [4,5,7,8]. Such stickiness effect causes a slower escape
rate than that found in hyperbolic systems. For large enough t, the non-hyperbolic
escape rate follows a power law [4, 5, 7, 8]

N(t) ∼ t^{−z}.    (10.2)

Recall that non-hyperbolic dynamics, contrary to the hyperbolic one, is not stable


under perturbations. A further important remark is that due to the invariance of the
islands and the area-preserving property, all trajectories started inside an island are
expected to be trapped there forever [4].

10.3.2 Random Perturbations

These general characteristics of scattering just discussed do not take random


perturbations into account. In what follows we want to study their effect on the
statistical behaviour of scattering. As non-hyperbolicity is not a stable characteristic
of dynamics, we expect the perturbations of such systems to result in a hyperbolic-

5 In the case of discrete-time dynamical systems, t → n, the number of iterations.


10 Effects of Bounded Random Perturbations on Discrete Dynamical Systems 157

a b c

Fig. 10.2 Simplified representation of the orbits on the torus. Panel (a) represents a single torus
and its centre. Panel (b) represents the effect of the perturbation: the difference between the
dashed and the continuous line is due to the effect of the noise, which shifts the centre of the torus
from O1 to O2. Panel (c) shows what would be expected for 4 iterations. Each continuous line
represents the orbit of a particle on the torus for one iteration, hence for a different value of the
perturbation at each iteration. The centre is expected to move around O1 at each iteration and,
for long enough times, the distribution of centres would densely fill the area within the dashed
circle. From the paper [9]. Copyright: American Physical Society 2010

like dynamics. In particular, escaping in hyperbolic systems should continue to be


exponential, whereas for non-hyperbolic systems, the power law distribution is not
expected to persist [9].
We shall apply the random maps scheme described in Sect. 10.2.2, since we are
interested in the structural statistical behaviour of the system as a whole. Although
it is possible to define random invariance in a more general context [2], as far as this
section is concerned, it makes no sense to talk about dynamical invariants such
as the Lyapunov exponent and fractal dimensions when uncorrelated perturbations
like in Sect. 10.2.1 are used. In this case, trajectories will be smoothed out
on small scales and all fine dynamical structures will disappear. In the case of
perturbation of parameters, on the other hand, the random maps [3] are known to
have well-defined dynamical invariants in a measure-theoretical sense [1]. The case
dealing with independent random noise added to each trajectory has been previously
considered [10-15].

10.3.2.1 Statistical Model

In order to derive a heuristic model for the escaping, suppose we choose a slightly
different map at each iteration, although they are chosen within the non-hyperbolic
range of parameters. The average effect along the orbits is to shift the invariant tori
in the phase space, such that we end up with a sort of random walk of the KAM
structures around the location for the unperturbed parameters. See the illustration in
Fig. 10.2. This random motion can be thought to cause orbits to acquire motion in
the direction transversal to the tori. The magnitude of this transversal component of
the motion is proportional to the intensity of the perturbation (in the case of small
perturbations). Since only the component of the motion transversal to the tori can
cause an orbit initialised in a KAM island to escape, we focus on this component
of the motion alone, which can be idealised as a one-dimensional random walk.

The size of the step of the random walk is proportional to the amplitude ε of the
perturbation. After n steps, the typical distance D from the starting position reached
by the walker is D ∝ √n [16]. Let D0 be a typical transversal distance a particle
needs to traverse in order to cross the last KAM surface and escape. Thus the average
time (number of steps) τ it takes for particles to escape scales as τ ∝ D0²/ε², so τ
scales as ε^{−2}. The conclusion is that our simple model predicts an exponential
decay of particles, with a decay rate γ ∝ 1/τ scaling as [9]

γ ∝ ε².    (10.3)

Although this is a very simple heuristic model, we do expect it to reflect the
effective behaviour relevant for numerical and/or experimental approaches.
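The quadratic scaling predicted by this argument is easy to probe numerically. The sketch below (an illustration added here, not the chapter's original code; the barrier distance D0, the walker count, and the uniform step law are arbitrary choices) simulates a one-dimensional walker with bounded steps of size proportional to ε and measures the mean number of steps needed to first travel a distance D0:

```python
import random

def mean_first_passage(eps, d0=1.0, n_walkers=400, seed=0):
    """Mean number of bounded steps of amplitude eps needed to first
    reach distance d0 from the origin (one-dimensional walk)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walkers):
        pos, steps = 0.0, 0
        while abs(pos) < d0:
            pos += rng.uniform(-eps, eps)  # bounded, zero-mean step
            steps += 1
        total += steps
    return total / n_walkers

# Halving eps should roughly quadruple the escape time: tau ~ (d0/eps)^2
t1 = mean_first_passage(0.10)
t2 = mean_first_passage(0.05)
print(t1, t2, t2 / t1)  # ratio close to 4
```

The measured ratio approaches 4 as the walker count grows, consistent with τ ∝ ε^{−2}.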

10.3.2.2 Numerical Results

We validated our simple model by numerically obtaining the time decay of


probability for particles (or orbits) to remain within the scattering region under the
dynamics of an area-preserving non-hyperbolic map. We have used the suitable map

x_{n+1} = λ_n [x_n − (x_n + y_n)²/4],
y_{n+1} = (1/λ_n)[y_n + (x_n + y_n)²/4],    (10.4)

which is non-hyperbolic for λ ≲ 6.5 [5]. We chose λ = 6.0. Then, for each iteration
n we randomly chose a perturbation δλ_n in the interval |δλ_n| < ε, where ε is the
amplitude of the perturbation, such that, for any |δλ_n| < ε, the map with
λ_n = λ + δλ_n is non-hyperbolic. The initial conditions were randomly chosen with
uniform probability in the line interval x ∈ [2.05, 2.07], and y = 0.465. For this
interval, the particles start their trajectories inside a KAM structure (see Fig. 10.3).
We computed the probability decay for the scattering region W = {|x| < 5.0, |y| < 5.0}.
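This computation can be sketched in a few lines (an illustration assuming the standard Lau-Finn-Ott form of the map (10.4); the orbit count and time horizon here are reduced, arbitrary choices, not those of the original study):

```python
import random

def step(x, y, lam):
    # One iteration of the area-preserving map of Eq. (10.4)
    # (Lau-Finn-Ott form, as assumed here).
    s = (x + y) ** 2 / 4.0
    return lam * (x - s), (y + s) / lam

def survival_probability(eps, n_orbits=500, t_max=300, lam=6.0, seed=0):
    """P(t): fraction of random orbits still inside the scattering
    region W = {|x| < 5, |y| < 5} after t iterations, under bounded
    parametric noise |delta_lambda| < eps."""
    rng = random.Random(seed)
    escape_times = []
    for _ in range(n_orbits):
        # initial conditions inside a KAM structure (see text)
        x, y = rng.uniform(2.05, 2.07), 0.465
        t = 0
        while t < t_max and abs(x) < 5.0 and abs(y) < 5.0:
            x, y = step(x, y, lam + rng.uniform(-eps, eps))
            t += 1
        escape_times.append(t)
    return [sum(1 for e in escape_times if e >= t) / n_orbits
            for t in range(t_max + 1)]

p = survival_probability(eps=0.05)
```

Fitting the exponential region of log p against t then yields the decay exponent discussed below.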
We show in Fig. 10.4 that under the random perturbations the distribution indeed
becomes hyperbolic-like (exponential). From Eq. (10.3) and Fig. 10.5a, identifying
the fitted exponent γ with the decay rate 1/τ, we see that the dependence of the
exponent γ on the amplitude ε of the perturbation agrees with the simple random
walk model, the quadratic law predicted by Eq. (10.3) [9]. A random walk model has
also been used in [15], where the authors consider the behaviour of trajectories
starting outside the islands under uncorrelated noisy dynamics.

10.3.3 Fractal Dimension of Singular Sets

The fractal dimension of singular sets is another fundamental difference between


non-hyperbolic and hyperbolic chaotic scattering. Choosing initial conditions from
a line within the scattering region, one expects the set of particles that remain in

Fig. 10.3 Phase space for the map (10.4), for λ = 6.0. The inset shows a blow-up of the region
(x, y) ∈ [2.05, 2.20] × [0.44, 0.47]. From the paper [9]. Copyright: American Physical Society 2010

there after a given time T0 should form a Cantor set with fractal dimension d < 1.
On the other hand, given its algebraic decay, non-hyperbolic chaotic scattering
is characterised by a maximal value of the fractal dimension, d = 1, or very close
to it, within limited precision [5, 7, 17, 18]. We estimated the fractal dimension of
the time-delay function T (x), for initial conditions chosen inside a KAM structure.
We chose y0 = 0.465, and different values of x0 were randomly chosen to belong
to the interval [2.05, 2.15]. As is known, the box-counting fractal dimension d is
given by d = 1 − α, where α is the uncertainty exponent [19]. Since the larger the
amplitude of the perturbations, the farther the statistical behaviour of our systems is
supposed to be from the non-hyperbolic behaviour, we expect that the larger the
amplitude of the perturbation, the farther d is from 1, which would correspond
to the non-hyperbolic limit. This is exactly what we have obtained, as shown in
Fig. 10.5b [9].
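The uncertainty-exponent procedure behind d = 1 − α can be illustrated on a toy example where the singular set of the escape-time function is an exact Cantor set. For the tent map of slope 3 on [0, 1], initial conditions that never escape form the middle-thirds Cantor set (dimension ln 2/ln 3 ≈ 0.63), so the fraction f(ε) of ε-uncertain pairs x, x + ε with different escape times should scale as f(ε) ∼ ε^α with α = 1 − ln 2/ln 3 ≈ 0.37. This is a pedagogical stand-in added here, not the chapter's computation for the map (10.4); the sample sizes are arbitrary:

```python
import math, random

def escape_time(x, t_max=200):
    """Iterations of the slope-3 tent map before x leaves [0, 1]."""
    t = 0
    while 0.0 <= x <= 1.0 and t < t_max:
        x = 3.0 * x if x < 0.5 else 3.0 * (1.0 - x)
        t += 1
    return t

def uncertain_fraction(eps, n=20000, seed=0):
    """Fraction of pairs (x, x + eps) with different escape times."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if escape_time(x := rng.random()) != escape_time(x + eps))
    return hits / n

# Fit f(eps) ~ eps^alpha by a log-log slope over a few decades
eps_values = [1e-2, 1e-3, 1e-4, 1e-5]
logs = [(math.log(e), math.log(uncertain_fraction(e))) for e in eps_values]
alpha = (logs[-1][1] - logs[0][1]) / (logs[-1][0] - logs[0][0])
d = 1.0 - alpha
print(alpha, d)  # alpha near 0.37, d near ln 2 / ln 3 ~ 0.63
```

The same sampling procedure, applied to the time-delay function T(x) of the perturbed map, produces the dimension estimates of Fig. 10.5b.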

10.4 Escaping and Trapping in Randomly Perturbed


Dynamics

In this section we shall look at another effect of random perturbations. We study


the escape of orbits from attracting sets, or the loss of stochastic stability of such
sets. The sets we have in mind are invariant subsets A ⊂ M of the deterministic

Fig. 10.4 Probability distribution of the escaping time from the region W = {|x| < 5.0, |y| < 5.0}
for different values of ε (from ε = 0.0004 up to ε = 0.1024). Initial conditions were randomly
chosen in the line x ∈ [2.05, 2.07], y = 0.465, inside the nested KAM structures. For each value
of ε, we present the exponent γ that best fits the exponential region of the probability distribution.
Panel (a) shows all used values of ε; panel (b) shows small values of n. From the paper [9].
Copyright: American Physical Society 2010

dynamics, which attract their neighbouring points, that is, f^n(x) tends to A as
n → ∞; these are the attractors of the system. The points whose orbits eventually
come close and converge to A form its basin of attraction, which we denote by

Fig. 10.5 (a) Different values of the exponent γ as a function of ε from Fig. 10.4, as well as the
power law that best fits the distribution. In the inset, we consider only values of ε < 0.02.
(b) Estimated fractal dimension of T(x), for different values of ε. We notice that, when the
amplitude of the perturbation is decreased, the dynamics approach the non-hyperbolic limit, and
so do the estimated values of the fractal dimension, which approach 1 as ε → 0. The inset shows
the value of α as a function of ε. From the paper [9]. Copyright: American Physical Society 2010

W^s(A). In what follows, we present an overview of a more detailed discussion
in [20]. The framework here is slightly different from that of the previous section. We
apply uncorrelated random perturbations, as described in Sect. 10.2.1, as the use
of random maps would not result in loss of stochastic stability.
The escape of trajectories from attracting sets has been studied for a long time, with
applications to chemical reactions [21], Statistical Mechanics [22] and many other
areas [13, 23-26]. Nevertheless, general treatments typically require the noise to
have an unbounded Gaussian distribution, so that machinery from stochastic
analysis can be applied [22, 24, 27]. When the noise is bounded, very little is known
about escaping, despite the importance of bounded noise to a large number of
applications where models using unbounded Gaussian noise seem unrealistic.
To mention a few examples: in Neuroscience, one could consider the minimum
energy for bursting to take place [28]; in Geophysics, one may consider the
overcoming of some potential barrier just before an earthquake [29]; and, in
epidemiology, the critical outbreak magnitude for the spread of epidemics [30].

10.4.1 Statistical Description

Although the dynamics may seem more complicated under the presence of noise,
it is actually well characterised from a statistical point of view. In other words,
for small enough amplitude of noise, there exists a distribution of probability
for the orbit to stay close to the original attractor of the deterministic dynamics.
Furthermore, the distribution of probability for the perturbed system converges to
the original distribution of the deterministic system as the level of noise decreases
to zero [20].
We can picture the noisy dynamics inside the basin of attraction for small enough
amplitude of noise as that of a closed system. The effect of the noise beyond a
threshold is that these perturbations introduce a hole in the basin of attraction, from
where the orbits can escape [20]. Consider our deterministic dynamics x_{n+1} = f(x_n).
As in Sect. 10.2.1, we shall add some random perturbation of maximum size ε. The
hole in the noisy dynamics will be defined in terms of points which might escape.
To see that, consider the subset I of the phase space neighbouring the boundary of
the basin of attraction. We define it by the set of points x whose image f(x) is within
a distance ε from some point in the boundary ∂ of the basin of attraction,

I = {x ∈ M ; B(f(x), ε) ∩ ∂ ≠ ∅},    (10.5)

where B(f(x), ε) is the ball of radius ε around f(x). See the illustration in Fig. 10.6a.
Finally, imagine that, starting from the deterministic dynamics, we begin to add noise
of a very small amplitude. As the amplitude of noise increases, the density of probability
becomes more spread around the deterministic one. When the amplitude is large
enough, meaning that the density of probability is so spread out that it comes close to
the boundary, it eventually overlaps the set I. This is exactly how the hole I_ε is defined,

Fig. 10.6 (a) A basin of attraction W^s(A) and its basin boundary ∂ (dashed line). We illustrate
that the iteration z ↦ f(z), from the point z initially in W^s(A), brings the orbit within a distance
ε from the boundary. Therefore, the random perturbation ξ applied to f(z), with some ||ξ|| < ε,
could push the random orbit outside the basin. On the other hand, for the point x ∈ W^s(A) the
iteration x ↦ f(x) brings it farther than the maximum perturbation away from the boundary ∂.
Therefore, in our illustration z ∈ I_ε but x ∉ I_ε. (b) The inverse of the mean escape time scaling
with the amplitude of noise for the map (10.9) (black circles) and for the map (10.10) (blue
squares). For each map, the values of the mean escape time were obtained by iterating 10³ random
orbits for each value of ε. The dashed lines show the expected scaling (ε − ε_c)^{3/2} and the thick
continuous lines show the best fitting: for the Rotor map, exponent ≈ 1.7, and for the Hénon map,
≈ 1.6. The insets show the transition: for ε < ε_c, we have ⟨T⟩ = ∞, the random orbits do not
escape, therefore 1/⟨T⟩ = 0; for ε ≳ ε_c, the escape time scales as Eq. (10.8). For the parameters
used, ε_c = 0.086 ± 0.006 for the Rotor map (inset (a)) and ε_c = 0.021 ± 0.002 for the Hénon
map (inset (b)). From the paper [20]. Copyright: American Physical Society 2010

I_ε = I ∩ supp μ_ε,    (10.6)

where supp μ_ε is the support of the stationary measure μ_ε. Given that we use
bounded noise, for very small noise amplitudes I_ε = ∅. As the amplitude of the
noise is increased beyond a critical amplitude ε = ε_c, we have I_ε ≠ ∅, and escape
takes place for any ε > ε_c. We call I_ε the conditional boundary. The importance
of I_ε stems from the fact that it represents the set of points which one iteration of
the map f can potentially send close enough to the boundary ∂, such that a random
perturbation with amplitude ε may send them out of the basin of attraction. The
dynamics of escape is thus governed by this set, and it can be understood as a hole
that sucks orbits from the basin if they land on it.
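The transition can be made concrete in one dimension. Take the logistic map f(x) = a x(1 − x) with a = 2.8 on the basin [0, 1] (a toy example chosen here for illustration; the chapter's systems are two-dimensional, and the parameter values below are arbitrary). Without noise the orbit settles on the stable fixed point; with bounded additive noise of amplitude ε the stationary density widens, and only when it spreads far enough to overlap the set of points mapped within ε of the boundary {0, 1} can an orbit leave [0, 1]:

```python
import random

A = 2.8  # logistic parameter: a single stable fixed point in (0, 1)

def f(x):
    return A * x * (1.0 - x)

def escapes(eps, n_steps=20000, seed=0):
    """True if a noisy orbit x' = f(x) + xi, |xi| < eps, leaves the
    basin [0, 1] within n_steps iterations."""
    rng = random.Random(seed)
    x = 0.5
    for _ in range(n_steps):
        x = f(x) + rng.uniform(-eps, eps)
        if not 0.0 <= x <= 1.0:
            return True
    return False

# Below a critical amplitude the bounded-noise orbit is trapped forever;
# well above it, the hole is reachable and the orbit escapes.
print(escapes(0.02), escapes(0.35))
```

For small ε the noisy orbit stays inside an invariant interval that never overlaps the hole, so escape is strictly impossible, exactly the bounded-noise effect discussed above.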

10.4.2 Escape Scaling

Under some assumptions, it is actually possible to estimate the size of this hole,
or its measure, μ(I_ε) > 0, in terms of ε and ε_c; see the details in [20]. For noise
amplitude greater than the critical value, it scales as [20]

μ(I_ε) ∼ (ε − ε_c)^β,    (10.7)

where β depends on the dimension of the system. For two-dimensional dynamics,
β = 3/2 [20]. Even though for ε > ε_c the measure is not invariant, it can be
described in terms of conditionally invariant measures. Since there is a certain
probability that a particle escapes if it falls into I_ε, we can use Kac's Lemma,
⟨T⟩ ∼ μ(I_ε)^{−1} [31], to compute the average escape time

⟨T⟩ ∼ (ε − ε_c)^{−β}.    (10.8)
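The Kac's-lemma estimate ⟨T⟩ ∼ μ(I_ε)^{−1} can be illustrated with a crude one-dimensional surrogate (an illustration only; the map, noise amplitude, hole placement, and sizes below are arbitrary choices added here): a noisy expanding map with an explicit hole of Lebesgue measure h, for which the mean escape time should scale like 1/h:

```python
import random

def mean_escape_time(h, n_orbits=2000, t_max=100000, seed=0):
    """Mean time for orbits of the noisy expanding map
    x' = 3x + xi (mod 1), |xi| < 0.01, to land in the hole
    [0.5, 0.5 + h), whose invariant (Lebesgue) measure is h."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_orbits):
        x = rng.random()
        for t in range(1, t_max + 1):
            x = (3.0 * x + rng.uniform(-0.01, 0.01)) % 1.0
            if 0.5 <= x < 0.5 + h:
                break  # the orbit fell into the hole at time t
        total += t
    return total / n_orbits

t_small, t_large = mean_escape_time(0.02), mean_escape_time(0.08)
print(t_small, t_large)  # roughly 1/0.02 = 50 and 1/0.08 = 12.5
```

Quadrupling the hole measure cuts the mean escape time by roughly a factor of four, the inverse-measure scaling used in Eq. (10.8).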

We numerically checked the scaling of escaping times with the amplitude of noise
for two distinct two-dimensional systems [20]. The first perturbed system we have
chosen was the randomly perturbed single rotor map [32], defined by

F(x_j, y_j) = (x_j + y_j (mod 2π), (1 − ν)y_j + 4 sin(x_j + y_j)) + (δx_j, δy_j),    (10.9)

where x ∈ [0, 2π], y ∈ R, and ν represents the dissipation parameter. As a second
testing system, we have chosen the perturbed dissipative Hénon map, in the form

G(x_j, y_j) = (1.06 − x_j² − (1 − ν)y_j, x_j) + (δx_j, δy_j),    (10.10)

where x and y are real numbers and, again, ν represents the dissipation parameter.
We used ν = 0.02 for both maps, as for such a value they present very rich dynamics [33].
For each map, we computed the time that random orbits took to escape from their
respective main attractors for a range of noise amplitudes. In each case, the mean
escape time was obtained for 10³ random orbits for each value of ε. The results
are shown in Fig. 10.6b. For the parameters used here, we obtained ε_c = 0.086 ±
0.006 for the perturbed Rotor map and ε_c = 0.021 ± 0.002 for the perturbed Hénon
map, as shown in the insets. In both cases, we obtained a good agreement
between our simulations and the predictions of our theory over several
decades; see the details in [20]. It has been proved that similar power laws in the
unfolding parameters are in fact lower bounds for the average escape time scale in
the one-dimensional case [34]. From the numerical perspective, it is also difficult to
accurately estimate the value of ε_c.
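For reference, the two test maps can be transcribed directly. The sketch below assumes the standard forms of the single rotor map and of the dissipative Hénon map (the forcing amplitude 4 and the sign conventions should be checked against [20]); bounded noise is drawn componentwise from [−ε, ε]:

```python
import math, random

NU = 0.02  # dissipation parameter used for both maps

def rotor(x, y, eps, rng):
    """One iteration of the randomly perturbed single rotor map, Eq. (10.9)."""
    xn = (x + y) % (2.0 * math.pi)
    yn = (1.0 - NU) * y + 4.0 * math.sin(x + y)
    return xn + rng.uniform(-eps, eps), yn + rng.uniform(-eps, eps)

def henon(x, y, eps, rng):
    """One iteration of the randomly perturbed dissipative Henon map, Eq. (10.10)."""
    xn = 1.06 - x * x - (1.0 - NU) * y
    return xn + rng.uniform(-eps, eps), x + rng.uniform(-eps, eps)

rng = random.Random(1)
x, y = 1.0, 1.0
for _ in range(1000):
    x, y = rotor(x, y, 0.01, rng)
# Dissipation keeps the rotor orbit bounded: |y| <= 0.98|y| + 4 + eps
print(abs(y) < 300.0)
```

Wrapping either map in the escape-time loop of the previous sketches reproduces the kind of ⟨T⟩-versus-ε curves shown in Fig. 10.6b.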
As a last remark, we note that the idea of opening a closed system by adding an
artificial hole6 to it was first considered in [38] and has been used by many others;
see other references in [20]. In the case of artificial holes, the authors of [37] show
that the random perturbations may have an interesting decreasing effect on the
escape rate, by making the trajectories miss the hole.

6 We say "artificially placed hole" when the hole is defined as a region [35-37]. Our intention is just
to contrast with the case discussed in [20] and presented in Sect. 10.4, where the hole is given by the
random perturbations.

10.5 Concluding Remarks, Perspectives and Further


Reading

The understanding of relevant concepts in Dynamics, and their use in modelling
dynamical processes, has grown considerably during the past few decades. Although
most of the phenomena modelled from a dynamics point of view are subject to some
sort of randomness, the investigation of the combination of dynamics and random
perturbations is relatively recent. As may have become clear along this chapter,
there are many reasons to consider it. The presence of random perturbations may
drastically change the otherwise expected asymptotic behaviours. It represents a
more realistic approach to modelling. Furthermore, it provides a better understanding
of how stable a given model can be. The main reasons random perturbations have
been neglected are, in our opinion, the lack of tools to treat the problem and the lack
of knowledge of the effects they can cause. This is true, in particular, in the case of
bounded perturbations, where the use of typical approaches from stochastic analysis
may fail. There are many open problems from applied modelling, physical, and
mathematical perspectives. We wish to close this chapter by briefly providing some
additional perspectives of applications and possible directions, and by suggesting
some further reading.

10.5.1 Randomly Perturbed Billiards

Billiards are a very interesting class of dynamical systems used to model many
different physical phenomena. These paradigmatic dynamics connect many areas
of mathematics, with questions ranging over several levels of difficulty. Some of
them, however, are also suitable to be treated numerically. In a recent paper [39],
the authors considered the effects of random perturbations on a billiard with mixed
phase space where a hole is placed. They combined the possible effect of missing
the hole [37] and the random walk model to characterise the possible decay regimes
of the survival probability. They used, however, uncorrelated noise as in Sect. 10.2.1,
and chose initial conditions outside KAM structures. Since the use of random
maps is more likely to distinguish structural characteristics given by the
dynamics itself from those added by uncorrelated noise, it would be interesting
to see the effect of using random maps, as well as of choosing initial conditions
inside invariant islands. Furthermore, it may be possible to capture changes in the
behaviour from hyperbolic to non-hyperbolic by measuring how recurrence properties
are modified under random maps.

10.5.2 Transport in Randomly Perturbed Systems

One of the most important characteristics of dynamics under bounded random
perturbation is that it allows the system to have more than one invariant measure.
This is not true for unbounded perturbations, for example, obtained by adding a
Brownian motion, where the whole phase space is the support of a unique invariant
measure. The fact that bounded noise may yield non-overlapping invariant measures,
together with the idea of escaping depending on the noise amplitude as developed
in [20] and Sect. 10.2.1, gives rise to the possibility of transport among different
basins of attraction. This hopping dynamics has already been pointed out in [10]
and addressed by many others. Notwithstanding, its transport properties had not
been assessed. In a recent paper [40], we have characterised the hopping dynamics
among the basins and we show the existence of a sub-diffusive anomalous transport.
Currently, not much more is known about it. In the context of perturbations of
Hamiltonian dynamics, the effect of symplectic perturbations on the dynamics, and
how one observes it in the decay of correlations, has also been investigated [41].

10.5.3 Random Perturbations in Mathematical Biology

An increasing trend in the applied sciences consists in using methods from the
mathematical and physical sciences to understand biological processes; the so-called
mathematical biology or theoretical biophysics. For example, in the case of
advection in blood vessels, the existence of large vortices dominating the dynamics
has been observed. The dynamics itself is very likely non-hyperbolic, with
extra difficulties to be understood due to the walls corresponding to a set of
degenerate fixed points [42]. As random perturbations are naturally present in such
systems, for example, due to variations in the heart-beating frequency, we expect the
transition from non-hyperbolic to hyperbolic dynamics to play a very important role
in such modelling.

10.5.4 Random Perturbations and Markov Chain Model

A more subtle mathematical topic, but with strong implications for applied
dynamics and modelling, consists in understanding the relation between Markov
chains and random maps. Consider the framework from Sect. 10.2.1. Then, for
perturbations of maximum size ε, a Markov chain is defined by a family {p_ε(·|x)}
of probability distributions, such that every p_ε(·|x) is supported inside the ε-
neighbourhood of f(x). In other words, given a subset U of M, each p_ε(U|x) is a
conditional probability telling us, given x, what the probability of the perturbed
image of x being found in U is. It turns out that it is actually possible to consider an
intrinsic distance and probability in the collection of maps we choose as a perturbation
of our deterministic system. The idea of representing this Markov chain in terms of
randomly perturbed systems consists in finding the equivalence of this probability
in our collection of maps and the Markov chain. For any sequence of randomly
perturbed systems one can prove that it is always possible to find a Markov chain
which is represented by this scheme. The opposite question, however, involves many
subtle mathematical issues. In other words, given a Markov chain model, can one
find a sequence of random maps such that the points evolving under both schemes
coincide? The answer strongly depends, for example, on the shape of the probability
density of the Markov chain. Some of these issues have recently been addressed
from a rigorous point of view in [43].
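The forward direction, from a randomly perturbed system to a Markov chain, is easy to see empirically: sampling the noisy image of a fixed point x yields a transition kernel p_ε(·|x) whose support lies inside the ε-neighbourhood of f(x). A minimal sketch (the map, the uniform noise law, and the sample sizes are illustrative choices made here):

```python
import random

EPS = 0.05

def f(x):
    # deterministic map (illustrative choice)
    return 3.7 * x * (1.0 - x)

def sample_transition(x, n=5000, seed=0):
    """Sample noisy images of x; their empirical distribution is a
    Monte Carlo estimate of the transition kernel p_eps(.|x)."""
    rng = random.Random(seed)
    return [f(x) + rng.uniform(-EPS, EPS) for _ in range(n)]

def p_hat(samples, a, b):
    """Empirical estimate of p_eps([a, b) | x)."""
    return sum(1 for s in samples if a <= s < b) / len(samples)

x0 = 0.3
images = sample_transition(x0)
# support of p_eps(.|x0) lies inside the eps-neighbourhood of f(x0)
print(all(abs(s - f(x0)) <= EPS for s in images))  # True
# probabilities over a partition of the neighbourhood sum to one
c = f(x0)
print(p_hat(images, c - EPS, c) + p_hat(images, c, c + EPS))  # ~1.0
```

The subtle converse question, whether a prescribed family of kernels can be realised by random maps, is precisely what is studied in [43].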

10.5.5 Bifurcation of Invariant Measures

As a last topic we briefly mention the bifurcation of invariant measures. As
discussed in Sect. 10.5.2, the use of bounded noise allows for multiple coexisting
invariant measures. An important issue permeating all the previous sections is to
consider deterministic systems undergoing some bifurcation process. In other words,
suppose we choose a deterministic system to be perturbed by one of the schemes
previously presented. Suppose in addition that this map actually has some bifurcation
parameter. Then we ask what happens to the invariant measures of the perturbed
maps if the bifurcation parameter is changed. Do the invariant measures fuse together
or split, depending on the direction of the changes? Can we identify the bifurcation
in this blurred dynamics? If so, how is it related to the deterministic bifurcation? As
we commented before, notice that this has implications for the distributions of
survival probabilities in billiards, or for the modelling of systems biology, as the
behaviour of a fused, almost invariant density of probability may differ from that of
disjoint invariant measures. It also has a strong influence on the statistical behaviour
of transport in hopping dynamics. Surprisingly, there is currently not a good
understanding of such processes, not even from the numerical point of view. The
main reasons are related to the difficulties in efficiently detecting how the invariant
measures approach, or split from, each other. This problem has been addressed
in [44] by considering only what happens to the support of the distributions.

Acknowledgements C.S.R. is grateful to J. Jost, R. Klages, M. Kell, J. Lamb, M. Rasmussen, and


P. Ruffino for inspiring discussions along these subprojects and acknowledges the financial support
from the University of Aberdeen and from the Max-Planck Society.

References

1. Falconer, K.J.: The Geometry of Fractal Sets. Cambridge University Press, Cambridge (1986)
2. Arnold, L.: Random Dynamical Systems. Springer, New York (1998)
168 C.S. Rodrigues et al.

3. Romeiras, F.J., Grebogi, C., Ott, E.: Phys. Rev. A 41, 784 (1990)
4. Ott, E.: Chaos in Dynamical Systems, 2nd edn. Cambridge University Press, Cambridge (2002)
5. Lau, Y.T., Finn, J.M., Ott, E.: Phys. Rev. Lett. 66, 978 (1991)
6. Robinson, C.: Dynamical Systems: Stability, Symbolic Dynamics, and Chaos, 2nd edn. CRC
Press, FL (1999)
7. Motter, A.E., Lai, Y.-C., Grebogi, C.: Phys. Rev. E 68, 056307 (2003)
8. Motter, A.E., Lai, Y.-C.: Phys. Rev. E 65, 015205 (2002)
9. Rodrigues, C.S., de Moura, A.P.S., Grebogi, C.: Phys. Rev. E 82, 026211 (2010)
10. Poon, L., Grebogi, C.: Phys. Rev. Lett. 75, 4023 (1995)
11. Feudel, U., Grebogi, C.: Chaos 7, 597 (1997)
12. Seoane, J.M., Huang, L., Sanjuán, M.A.F., Lai, Y.-C.: Phys. Rev. E 79, 047202 (2009)
13. Kraut, S., Grebogi, C.: Phys. Rev. Lett. 92, 234101 (2004)
14. Kraut, S., Grebogi, C.: Phys. Rev. Lett. 93, 250603 (2004)
15. Altmann, E.G., Kantz, H.: Europhys. Lett. 78, 10008 (2007)
16. Feller, W.: Introduction to Probability Theory and Applications. Wiley, New York (2001)
17. de Moura, A.P.S., Grebogi, C.: Phys. Rev. E 70, 36216 (2004)
18. Seoane, J.M., Sanjuán, M.A.F.: Int. J. Bifurcat. Chaos 20, 2783 (2008)
19. Grebogi, C., McDonald, S.W., Ott, E., Yorke, J.A.: Phys. Lett. 99A, 415 (1983)
20. Rodrigues, C.S., Grebogi, C., de Moura, A.P.S.: Phys. Rev. E 82, 046217 (2010)
21. Hänggi, P.: J. Stat. Phys. 42, 105 (1986)
22. Demaeyer, J., Gaspard, P.: Phys. Rev. E 80, 031147 (2009)
23. Kramers, H.A.: Physica (Utrecht) 7, 284 (1940)
24. Grassberger, P.: J. Phys. A 22, 3283 (1989)
25. Kraut, S., Feudel, U., Grebogi, C.: Phys. Rev. E 59, 5253 (1999)
26. Kraut, S., Feudel, U.: Phys. Rev. E 66, 015207 (2002)
27. Beale, P.D.: Phys. Rev. A 40, 3998 (1989)
28. Nagao, N., Nishimura, H., Matsui, N.: Neural Process. Lett. 12, 267 (2000); Schiff, S.J., Jerger,
K., Duong, D.H., et al.: Nature 370, 615 (1994)
29. Peters, O., Christensen, K.: Phys. Rev. E 66, 036120 (2002); Bak, P., Christensen, K., Danon,
L., Scanlon, T.: Phys. Rev. Lett. 88, 178501 (2002); Anghel, M.: Chaos Solit. Fract. 19, 399
(2004)
30. Billings, L., Bollt, E.M., Schwartz, I.B.: Phys. Rev. Lett. 88, 234101 (2002); Billings, L.,
Schwartz, I.B.: Chaos 18, 023122 (2008)
31. Kac, M.: Probability and Related Topics in Physical Sciences, Chap. IV. Interscience
Publishers, New York (1959)
32. Zaslavskii, G.M.: Phys. Lett. A 69, 145 (1978); Chirikov, B.: Phys. Rep. A 52, 265 (1979)
33. Rodrigues, C.S., de Moura, A.P.S., Grebogi, C.: Phys. Rev. E 80, 026205 (2009)
34. Zmarrou, H., Homburg, A.J.: Ergod. Theor. Dyn. Sys. 27, 1651 (2007); Discrete Cont. Dyn.
Sys. B10, 719 (2008)
35. Altmann, E.G., Tél, T.: Phys. Rev. Lett. 100, 174101 (2008)
36. Altmann, E.G., Tél, T.: Phys. Rev. E 79, 016204 (2009)
37. Altmann, E.G., Endler, A.: Phys. Rev. Lett. 105, 255102 (2010)
38. Pianigiani, G., Yorke, J.A.: Trans. Am. Math. Soc. 252, 351 (1979)
39. Altmann, E.G., Leitão, J.C., Lopes, J.V.: Pre-print: arXiv:1203.1791v1 (2012). To appear in
Chaos special issue: Statistical Mechanics and Billiard-Type Dynamical Systems
40. Rodrigues, C.S., Grebogi, C., de Moura, A.P.S., Klages, R.: Pre-print (2011)
41. Kruscha, A., Kantz, H., Ketzmerick, R.: Phys. Rev. E 85, 066210 (2012)
42. Schelin, A.B., Károlyi, Gy., de Moura, A.P.S., Booth, N.A., Grebogi, C.: Phys. Rev. E 80,
016213 (2009)
43. Jost, J., Kell, M., Rodrigues, C.S.: Pre-print: arXiv:1207.5003
44. Lamb, J.S.W., Rasmussen, M., Rodrigues, C.S.: Pre-print: arXiv:1105.5018
Part III
Bounded Stochastic Fluctuations in Biology
Chapter 11
Bounded Stochastic Perturbations May Induce
Nongenetic Resistance to Antitumor
Chemotherapy

Alberto d'Onofrio and Alberto Gandolfi

Abstract Recent deterministic models suggest that for solid and nonsolid tumors
the delivery of constant continuous infusion therapy may induce multistability in
the tumor size. In other words, therapy, when not able to produce tumor eradication,
may at least lead to a small equilibrium that coexists with a far larger one. However,
bounded stochastic fluctuations affect the drug concentration profiles, as well as
the actual delivery scheduling, and other factors essential to tumor viability (e.g.,
proangiogenic factors). Through numerical simulations, and under various regimens
of delivery, we show that the tumor volume during therapy can undergo transitions
to the higher equilibrium value induced by a bounded noise perturbing various
biologically well-defined parameters. Finally, we propose to interpret the above
phenomena as a new kind of resistance to chemotherapy.

Keywords Bounded noises · Mathematical oncology · Oncology · Solid
tumors · Angiogenesis · Chemotherapy · Chemoresistance

A. d'Onofrio (✉)
Department of Experimental Oncology, European Institute of Oncology,
Via Ripamonti 435, I-20141 Milan, Italy
e-mail: alberto.donofrio@ieo.eu
A. Gandolfi
Istituto di Analisi dei Sistemi ed Informatica "A. Ruberti", CNR,
Viale Manzoni 30, I-00185 Roma, Italy
e-mail: alberto.gandolfi@iasi.cnr.it

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 171
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_11, © Springer Science+Business Media New York 2013
172 A. d'Onofrio and A. Gandolfi

11.1 Introduction

Clonal resistance (CR) to chemotherapy, i.e. the emergence through fast mutations
of drug-insensitive cells in a tumor under therapy, was up to the recent past, and to
some extent still is at present, the main paradigm used to explain the high rate of
relapses during chemotherapeutic treatments of tumors [1, 2].
However, in the last decade, a number of investigations [35] revealed that a
significant fraction of cases of resistance to therapy is actually linked to phenomena
that may, broadly speaking, be defined as physical resistance (PR) to drugs [6, 7].
This means that resistance cannot only be imputed to a sort of Darwinian evolution
of the cancerous population through the birth of new clones, but also to the
dynamics of the molecules of the drug in the tumor. A non-exhaustive list of such
physical phenomena is the following: (a) limited ability of the drug to penetrate into the tumor tissue because of ineffective vascularization [8] and poor or nonlinear
diffusivity [9]; (b) binding of drug molecules to the surface of tumor cells or to the
extracellular matrix [10]; (c) prevalence of lowly proliferating and quiescent tumor
cells [11]; (d) collapse of blood vessels [12].
We recently proposed [13-15] two deterministic population-based models to describe the chemotherapy of vascularized solid tumors and also of nonsolid tumors, which may exhibit multistability under constant continuous drug infusion, unlike other models of tumor growth, which in such a case predict unimodality.
The multistability is the consequence of the interplay between the nonlinear pharmacodynamics of the drug at the tissue level and the population dynamics of the tumor cells. In particular, we have shown that multistability can also derive from the well-known Norton-Simon hypothesis [16].
In [14, 15] we suggested the possible existence of a third path for the emergence of resistance, different from CR and having some relationships with PR, due
to the interaction between the multistability of the tumor and the unavoidable fluc-
tuations of the blood concentration of the delivered drug, through the well-known
mechanisms of equilibrium metastability [17] and of noise-induced transitions [18].
This novel kind of resistance thus comes from the complex interplay among the pharmacodynamics and pharmacokinetics of the agent and the physiological condition of the patient. In case of vascular solid tumors a major role is played by
the physical barriers caused by the abnormal nature of tumor blood vessels, and by
the interaction between the tumor and the endothelial cell populations.
However, in contrast to the classical non-equilibrium statistical physics, we shall not assume that the noise affecting the drug concentration is gaussian. In [19-21] we stressed that possible biological inconsistencies might derive from the use of gaussian noise, and here we shall then consider only bounded noises, whose theoretical study has recently attracted a number of physicists [21-26].
Concerning the origin of those fluctuations, we shall consider in this chapter three separate and different settings. In the first, we shall consider a therapy periodically delivered by means of boli. Here we may have two different irregularities: the first is inherent to the intra-subject temporal variability of pharmacokinetic parameters, among them the clearance rate constant(s) of the drug. The other source of
11 Bounded Noises and Nongenetic Resistance to Antitumor Chemotherapies 173

fluctuations is linked with irregularities in the delivery times. Note that in case of bolus-based therapy both stochastic fluctuations and periodic deterministic fluctuations are present, the latter due to the periodicity of the administration of the agent. In the second setting, random changes occur in the nonlinear effectiveness of the antitumor drugs. Finally, the third scenario involves oscillations in the proliferation rate of vessels.

11.2 A Norton-Simon-Like Model of Chemotherapy

Let us consider a tumor whose size (biomass, number of viable cells, etc.) at time t is denoted as V, and which is growing according to a saturable growth law [27]:

$$\dot V = f\!\left(\frac{V}{K}\right) V,$$

where K > 0 and f(u) is a decreasing function of u such that f(1) = 0. The constant K is usually called carrying capacity, and it depends on the available nutrients and/or space for which the tumor cells compete. Another important parameter is the value α = f(0), which we shall call the baseline growth rate (BGR). α can be read as a measure of the intrinsic growth rate of the tumor, in absence of any competition. Of course, since f(u) is decreasing, the BGR is also the maximal growth rate. Although very simple, the above class of models proved to be very effective in capturing the main qualitative [27-31] and quantitative [27, 32-34] aspects of tumor growth.
Two well-known growth laws are the Gompertz law, where f(V/K) = -α log(V/K), and the generalized logistic f(V/K) = α(1 - (V/K)^ν) with ν > 0. Note, however, that in the Gompertz case, the BGR is infinite, which is not realistic, as pointed out in [27, 31] (and references therein).
Let the tumor be under the delivery of a cytotoxic therapy with a drug whose blood concentration, denoted by c(t), may be periodic or constant. What is the effect of c(t) on the tumor growth? The log-kill hypothesis [35] prescribes that the killing rate of tumor cells is proportional, through a constant χ > 0, to the product c(t)V(t):

$$\dot V = f\!\left(\frac{V}{K}\right)V - \chi\, c(t)V(t). \qquad (11.1)$$

In the case of a bounded intrinsic growth rate, i.e. f(0) < ∞, the condition c(t) > f(0)/χ implies that V(t) → 0, independently of V(0) > 0.
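As a quick numerical illustration of this eradication condition, the sketch below (ours) integrates Eq. (11.1) with a generalized-logistic f by the Euler method; all parameter values (α = 0.3, ν = 1, K = 10⁴, χ = 1, C = 0.4 > α/χ) are hypothetical choices of ours, not taken from the chapter.

```python
alpha, nu, K = 0.3, 1.0, 1.0e4   # hypothetical BGR, logistic exponent, carrying capacity
chi, C = 1.0, 0.4                # hypothetical kill coefficient and constant infusion level
# eradication condition of Eq. (11.1): C > f(0)/chi = alpha/chi (here 0.4 > 0.3)

def f(u):
    return alpha * (1.0 - u ** nu)   # generalized logistic: f(1) = 0, f(0) = alpha

def simulate(V0, t_end=200.0, dt=0.01):
    V = V0
    for _ in range(int(round(t_end / dt))):
        V += dt * (f(V / K) - chi * C) * V   # Euler step of Eq. (11.1) with c(t) = C
    return V

V_final = simulate(1000.0)   # net rate is at most -(chi*C - alpha) = -0.1 per day
```

Since the net per-cell rate stays below −0.1 per day, the simulated volume decays essentially exponentially, as the condition predicts.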
However, since the seventies Norton and Simon [16] stressed as a potential pitfall of the log-kill hypothesis the fact that the relative killing rate is simply taken proportional to c(t). According to the log-kill hypothesis, the same drug concentration is indeed able to kill the same relative number of cells per unit time independently of the tumor burden. Moreover, the absolute velocity of regression caused by c(t) would be greater in the larger tumors. This is often unrealistic. On the contrary, in clinics it is often observed that the effort needed to make a large tumor regress is

considerably greater, whereas histologically similar tumors of small volumes are curable using the same delivered quantity of the chemotherapeutic agent. A possible cause of this fact is the development of clones of cells that are resistant to the delivered agent. However, since the reduced drug effectiveness may also be present in the very first phases of a therapy, Norton and Simon [16] summarized these observations by assuming that the killing-efficiency parameter χ of the log-kill term is not constant but a decreasing function of V, χ(V). In particular, Norton and Simon proposed that χ(V) be proportional to f(V/K) [16]. We shall not assume this strict hypothesis; we shall consider here a generic ρ, positive and decreasing in V, and depending also on some internal parameters p, which leads to the following non-log-kill model:

$$\dot V = f\!\left(\frac{V}{K}\right)V - \rho(V;p)\,c(t)V, \qquad V(0) = V_0. \qquad (11.2)$$

It is trivial to verify that if c(t) > α/ρ(0;p), where α = f(0), then the tumor-free equilibrium Ve = 0 is locally stable, whereas in case of constant continuous infusion, c(t) = C, if ρ(V;p)C > f(V/K) for all V > 0, then the tumor-free equilibrium Ve = 0 is globally stable. In the general case, since ρ(K;p)C > f(1) = 0, if α > ρ(0;p)C there will be an odd number N ≥ 1 of equilibria: V1(C,K,p), ..., VN(C,K,p), with Vi < Vj if i < j. It is an easy matter to verify that the odd-numbered equilibria are locally stable, whereas the even-numbered ones are unstable. By varying C or K or p one may get one or more hysteresis bifurcations.
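The parity statement can be illustrated numerically by counting the sign changes of g(V) = f(V/K) − ρ(V;p)C along a grid. The specific shapes below (f(u) = 1 − u and a steep Hill-type ρ) are hypothetical choices of ours, built only to satisfy the stated monotonicity assumptions, with α = 1 > ρ(0)C = 0.8, so an odd number of equilibria is expected.

```python
def f(u):
    return 1.0 - u                        # decreasing, f(1) = 0, BGR alpha = f(0) = 1

def rho(V):
    return 1.0 / (1.0 + (V / 0.6) ** 20)  # positive, decreasing in V (steep Hill shape)

C, K = 0.8, 1.0                           # alpha > rho(0)*C, so N must be odd

def g(V):
    return f(V / K) - rho(V) * C          # sign of (dV/dt)/V for V > 0 in Eq. (11.2)

# each sign change of g on (0, K] is an equilibrium of Eq. (11.2)
grid = [i / 10000.0 for i in range(1, 10001)]
signs = [g(V) > 0 for V in grid]
n_equilibria = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

For these shapes the scan finds three equilibria (stable, unstable, stable), the simplest multistable configuration allowed by the parity argument.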

11.3 Growth and Therapy of a Vascularized Solid Tumor

Solid tumors in their first phase of growth are small aggregates of proliferating
cells that receive oxygen and nutrients only through diffusion from external blood
vessels. In order to grow beyond 1-2 mm³, the formation of new blood vessels inside the tumor mass is required. Poorly nourished tumor cells start producing a series of molecular factors that stimulate and also control the formation of an internal vascular network [36-38]. This process, called neo-angiogenesis, is sustained by a variety of mechanisms [36-38], such as the cooption of existing vessels and the formation of new vessels from the pre-existing ones. As far as the tumor-driven control of the vessel growth is concerned, endogenous antiangiogenic factors have been both evidenced experimentally [39-41] and studied theoretically [42-44].
To describe the interplay between the tumor and its vasculature, we further generalize a family of models previously proposed in [13] that includes as particular cases the models in [42, 43, 45-48] (for different modeling approaches, see [49-58]). We assume that (a) the carrying capacity mirrors (through a proportionality coefficient, or in any case through an increasing function) the size of the tumor vasculature, and as such it is a state variable K(t); (b) the specific growth and apoptosis rates of the tumor and the specific proliferation rate of vessels depend on the ratio φ = K/V between the carrying capacity and the tumor size. Following Hahnfeldt et al. [42, 43], the growth of the neo-vasculature is antagonized by endogenous factors that depend on the tumor volume. Since the ratio φ may be

interpreted as proportional to the tumor vessel density, assumption (b) agrees with the model proposed in [59]. As a consequence, we can write in absence of therapy

$$\dot V = P\!\left(\frac{K}{V}\right)V - \mu\!\left(\frac{K}{V}\right)V, \qquad (11.3)$$

$$\dot K = K\left(\pi\!\left(\frac{K}{V}\right) - \omega(V) - \theta\right), \qquad (11.4)$$

where P(u) is the (specific) proliferation rate of the tumor with P(0) = 0, P'(u) > 0, P(+∞) < ∞; μ(u) is the apoptosis rate with μ'(u) < 0, μ(+∞) = 0; π(u) is the proliferation rate of the vessels with π(0) → +∞, π'(u) < 0, π(+∞) = 0; ω(V), with ω'(V) > 0, models the vessel loss induced by endogenous anti-angiogenic factors secreted by the tumor cells, and θ represents the natural loss of vessels.
We prescribe P(1) = μ(1) so that at the equilibrium Ke/Ve = 1.
As an example of possible expressions of the net proliferation rate F(u) = P(u) - μ(u) we may consider the generalized logistic: F(u) = α(1 - u^{-ν}), ν > 0. The function π(u) may include power laws π(u) = b u^{-w}, w > 0, functions such as π(u) = π_M/(1 + k u^n), n ≥ 1, i.e. Hill functions in the variable u^{-1}, and combinations of the above two expressions: π(u) = π_1 u^{-w} + π_2/(1 + k u^n). The power law with w = 1 yields K π(K/V) = bV, as proposed in [42, 43]. The combination function with w = 1 is such that K π(K/V) distinguishes between the endothelial cell proliferation and the input of new endothelial cells from outside the tumor. Concerning the function ω, we recall that ω(V) = dV^{2/3} has been assumed in [42, 43].
As it is easy to show, the model predicts a unique equilibrium point, which is globally attractive.
The antiproliferative or the cytotoxic efficacy of a blood-borne agent on the
tumor cells will depend on its actual concentration at the cell site, and thus it
will be influenced by the geometry of the vascular network and by the extent of
blood flow. The efficacy of a drug will be higher if vessels are close to each other
and sufficiently regular to permit a fast blood flow; it will be lower if vessels
are distanced but even if they are irregular and tortuous so to hamper the flow.
To represent simply these phenomena, we assumed in [13] that the drug action to be
included in Eq. (11.3) is dependent on the vessel density, i.e. on the ratio = K/V .
If c(t) is the concentration of the agent in blood, we assumed that its effectiveness is
modulated by an increasing or an initially increasing and then decreasing function
( ).
In case of delivery of cytotoxic drugs, Eq. (11.3) will then be modified by adding the log-kill term -η(φ)c(t)V(t); but also Eq. (11.4) has to be modified, since often cytotoxic agents may also disrupt the vessels [60]. This leads to the following model [13]:

$$\dot V = V\left(F\!\left(\frac{K}{V}\right) - \eta\!\left(\frac{K}{V}\right)c(t)\right), \qquad (11.5)$$

$$\dot K = K\left(\pi\!\left(\frac{K}{V}\right) - \omega(V) - \theta - \gamma c(t)\right), \qquad (11.6)$$

Fig. 11.1 Chemotherapy with a cytotoxic agent. Hysteresis bifurcation diagram of the equilibrium φ (normalized) versus C. Dashed: unstable equilibria; solid: locally stable equilibria. From the paper [15]. Copyright: American Physical Society 2010

where F(φ) = P(φ) - μ(φ), φ = K/V, and γ ≥ 0. As far as the measure units are concerned, we shall assume that volumes are measured in cubic millimeters, the time is measured in days, and that the concentration of the agent in blood is appropriately nondimensionalized.
In case of constant continuous infusion c(t) = C, we have n ≥ 1 equilibrium vessel densities φ_i(C) [13], whose corresponding equilibrium volumes V_i(C) are given by

$$V_i(C) = \omega^{-1}\big(\pi(\phi_i(C)) - \theta - \gamma C\big),$$

provided, of course, that M(C;γ) = π(φ_i(C)) - θ - γC > 0. Thus, also here there is a threshold drug level C*, defined by M(C*;γ) = 0, and such that C > C* implies tumor eradication. We note that if γ > 0, the eradication is easier to reach, whereas if γ = 0 the eradication is difficult or impossible, since θ appears to be very small. The vessel-disrupting action of a chemotherapeutic agent thus appears very important for the cure.
Also for such a therapy model it is an easy matter to show that under constant
continuous chemotherapy the system exhibits multistability [13, 15], as shown in
Fig. 11.1.
For the tumor dynamics, we assume in Fig. 11.1 and in all the simulations the following kinetic functions (with φ = K/V): F(φ) = (ln(2)/1.5)(1 - φ^{-0.5}), π(φ) = 4.64/φ, θ = 0, η(φ) = 1/(1 + ((φ - 2)/0.35)²), and ω(V) = 0.01 V^{2/3}. With these values, there are two hysteresis bifurcations, at C_a ≈ 0.13376 and at C_b ≈ 0.2866.
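These two thresholds can be recovered numerically: under c(t) = C the equilibrium vessel densities solve F(φ) = η(φ)C, i.e. they are roots of h(φ) = F(φ)/η(φ) = C, and C_a, C_b are the local extrema of h. The Python sketch below (ours; grid resolution and scan range are arbitrary choices) reproduces the quoted values and, for C = 0.15, the three equilibrium volumes quoted later in Sect. 11.7.

```python
import math

ln2 = math.log(2.0)

def F(phi):                     # net proliferation rate of Fig. 11.1
    return (ln2 / 1.5) * (1.0 - phi ** -0.5)

def eta(phi):                   # unimodal effectiveness modulation, peaked at phi = 2
    return 1.0 / (1.0 + ((phi - 2.0) / 0.35) ** 2)

def h(phi):                     # equilibria under c(t) = C solve h(phi) = C
    return F(phi) / eta(phi)

phis = [1.0 + 3.0 * i / 300000 for i in range(1, 300001)]
hs = [h(p) for p in phis]

# interior local extrema of h are the two hysteresis thresholds Ca < Cb
extrema = [hs[i] for i in range(1, len(hs) - 1)
           if (hs[i] - hs[i - 1]) * (hs[i + 1] - hs[i]) < 0]
Ca, Cb = min(extrema), max(extrema)

# for Ca < C < Cb there are three equilibrium densities; the condition
# omega(V) = pi(phi) then gives the volumes V = (4.64 / (0.01 * phi)) ** 1.5
C = 0.15
roots = [0.5 * (phis[i] + phis[i + 1]) for i in range(len(hs) - 1)
         if (hs[i] - C) * (hs[i + 1] - C) < 0]
volumes = sorted((4.64 / (0.01 * p)) ** 1.5 for p in roots)
```

The scan returns C_a ≈ 0.1338, C_b ≈ 0.2866, and equilibrium volumes close to 3323, 4053, and 8794 mm³.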

11.4 Linking the Two Models

It might seem that the two different models we proposed in the previous sections are somewhat unrelated. Our aim here is to show that for solid vascularized tumors, and under a well-defined approximation, the Norton-Simon hypothesis, and a generalization of it, can be derived from the assumptions we stated for the model (11.5)-(11.6).

Indeed, most often in humans the dynamics of the vessels is faster than the tumor dynamics. As a consequence, we may consider K(t) at quasi-equilibrium. Setting $\dot K \approx 0$ in Eq. (11.6) yields:

$$\pi\!\left(\frac{K}{V}\right) - \omega(V) - \theta - \gamma c(t) \approx 0,$$

and in turn:

$$\phi^*(V,\theta,c(t)) = \begin{cases} \pi^{-1}\big(\omega(V) + \theta + \gamma c(t)\big) & \text{if } \pi(0) - \omega(V(t)) - \theta - \gamma c(t) \ge 0, \\ 0 & \text{if } V > \bar V(t), \end{cases}$$

where $\bar V(t)$ is defined by:

$$\pi(0) - \omega(\bar V(t)) - \theta - \gamma c(t) = 0.$$

Substituting φ*(V, θ, c(t)) in Eq. (11.5) yields:

$$\dot V = V P(\phi^*(V,\theta,c(t))) - \mu(\phi^*(V,\theta,c(t)))\,V - \eta(\phi^*(V,\theta,c(t)))\,c(t)V. \qquad (11.7)$$

As a consequence, for targeted drugs such that γ = 0, and supposing that φ*(0) is smaller than the value maximizing η, one gets:

$$\dot V = V f(V) - \rho(V)\,c(t)V,$$

where both the net growth rate f(V) and the effectiveness of the drug ρ(V) are decreasing functions of V.

Quite interestingly, if γ > 0 one gets that the cytotoxic chemotherapeutic drug has, thanks to its side effect of killing vessels, not only its main direct effect but also an indirect antiproliferative and proapoptotic action.

Note that in the case where φ*(0) is larger than the maximizer of η, the approximation here employed suggests that for small tumors the cytotoxic effect is initially an increasing function of V.
However, there is an important difference between the reduced one-dimensional model (11.7), which we recall is valid for solid vascularized tumors, and the Norton-Simon-like (NSL) model of Sect. 11.2. Indeed, in the model of Eq. (11.7) the effects of fluctuations in the parameters affecting the carrying capacity are present both in the net tumor growth rate and in the pharmacodynamic term describing drug effectiveness. On the contrary, in the NSL model the carrying capacity uniquely influences the net growth rate. In other words, growth and drug pharmacodynamics are independent.
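With the kinetic functions used in our simulations (π(φ) = 4.64/φ, ω(V) = 0.01V^{2/3}, θ = 0), the quasi-equilibrium density can be inverted in closed form: φ*(V) = 4.64/(0.01 V^{2/3} + θ + γc). A minimal sketch of ours:

```python
def phi_star(V, theta=0.0, gamma=0.0, c=0.0):
    """Quasi-steady-state vessel density from pi(phi*) = omega(V) + theta + gamma*c,
    with pi(phi) = 4.64 / phi and omega(V) = 0.01 * V**(2/3)."""
    return 4.64 / (0.01 * V ** (2.0 / 3.0) + theta + gamma * c)

# phi* decreases with V: larger tumors sit at lower vessel density, so the
# effective kill rate eta(phi*(V)) of Eq. (11.7) inherits a V-dependence
phi_small, phi_large = phi_star(1000.0), phi_star(5000.0)
```

As a consistency check, at the equilibrium E1 = (3323, 6924) quoted later in Sect. 11.7, φ*(V1) coincides with K1/V1, as it must.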

11.5 Bounded Noises: Why and Which?

The hysteresis bifurcations, as that in Fig. 11.1, are characterized by the existence of two values of the bifurcation parameter such that infinitesimal changes of the parameter around these values imply a sudden change in the behavior of the solution. This means that near those two points "the behaviour of the system is extremely sensitive to any kind of perturbations. [...] As a result the treatment [...] requires that the fluctuations be explicitly incorporated into the model" [18, 61].

These and other observations led Horsthemke and Lefever to define the theory of noise-induced transitions (NITs) [18], which studies the stochastic bifurcations induced by zero-mean noises in nonequilibrium systems. Those transitions depend on characteristics of the noise, such as its variance, and have the effect of changing the nature of the stationary probability density functions of the state variables, for example from unimodal to bimodal, or vice versa.

The NIT theory is of the utmost interest in biomedicine, since in vivo "the environmental situations are [...] extremely complex and thus likely to present important fluctuations" [62]. For applications in the field of oncology see [62, 63].
The properties of our models strongly suggest that such noise-induced transitions may occur also in the therapy of tumors, because of the unavoidable presence of stochastic fluctuations in some parameters. The most remarkable point is that such transitions would correspond to sudden tumor relapses during therapy that are not due to genetic causes or to physical resistance.

These transitions may be caused by any of the parameters appearing in the equation modeling the dynamics of V(t). In particular, fluctuations strongly affect chemotherapy. For example, the case of constant infusion therapy, c(t) = C, is an idealization. Moreover, also nonconstant therapies are affected by various kinds of noise, in particular by perturbations in drug pharmacokinetics. Finally, also other parameters may randomly fluctuate, e.g. parameters in the drug effectiveness function.
Thus, in order to give a more realistic description, given a parameter p we set

$$p(t) = p_m(1 + \xi(t)),$$

where ξ(t) is a noise and p_m is the average value of p(t).


A classical approach consists in assuming that ξ(t) is a gaussian white noise; however, this is, in our case, an inappropriate solution for two reasons. The first is that white noise (and often also gaussian colored noises) cannot be used where the dynamics nonlinearly depends on the perturbed parameter [18]. The second reason is more general, since, as stressed in [19, 20] in analyzing a different kind of gaussian noise-induced transition, the use of gaussian noise leads to biological inconsistencies. Let us consider indeed our model (11.2) of chemotherapy, and let us allow that a constant continuous chemotherapy is perturbed by a white gaussian noise ξ(t) with variance σ². Since the noise is unbounded, in a generic small interval (t, t + Δt) there will be a non-null probability that ρ(V(t))C_m(Δt + σ(W(t + Δt) - W(t))) < 0 for any Δt > 0, W(t) denoting the standard Wiener process. Such an event may also happen if ξ(t) is colored.

In other words, there will be a non-null probability that a cytotoxic chemotherapy may add neoplastic cells to its target tumor, which is a nonsense. As a consequence, gaussian noise should be avoided in investigating the effects of fluctuations of chemotherapy. Note that extremely large killing rates per time unit are not possible either, thus precluding not only gaussian noises but also lognormal noises.

For these reasons, we shall assume that ξ(t) is a bounded noise, i.e. that there exists a B > 0 such that |ξ(t)| ≤ B < +∞, with p_m(1 - B) > 0. In our simulations we shall use the bound B as bifurcation parameter.
Since the noise-induced transitions are dependent on the probability density of the noise adopted [64], we shall assume three kinds of bounded noise. The first is derived by applying a bounded function to a Wiener process, the second and the third through a stochastic differential equation, nonlinear in the drift and in the diffusion term, respectively.

The first noise we shall deal with is the sine-Wiener noise [26, 65, 66], i.e. the process:

$$\xi(t) = B \sin\!\left(\sqrt{\frac{2}{\tau_s}}\, W(t)\right), \qquad (11.8)$$

where W(t) is the Wiener process. The stationary density for this process is [65, 66]:

$$P_{SW}(\xi) = \frac{1}{\pi\sqrt{B^2 - \xi^2}},$$

and the autocorrelation time is τ_corr = τ_s.
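Since W(t) enters Eq. (11.8) only through its value at the observation time, the stationary (arcsine) law can be checked by direct sampling, drawing W(t) as a gaussian at a time much larger than τ_s; the following sketch is ours, with arbitrary sample size and loose statistical tolerances. Under P_SW, Prob(|ξ| ≤ B/2) = (2/π) arcsin(1/2) = 1/3.

```python
import math, random

random.seed(1)
B, tau_s = 1.0, 1.0
t_obs = 10.0 * tau_s                   # phase std = sqrt(2 t / tau_s) ~ 4.5 rad: well mixed
N = 200_000

# W(t_obs) is gaussian with std sqrt(t_obs); no path simulation is needed here
xi = [B * math.sin(math.sqrt(2.0 / tau_s) * random.gauss(0.0, math.sqrt(t_obs)))
      for _ in range(N)]

mean = sum(xi) / N
frac_center = sum(1 for x in xi if abs(x) <= B / 2) / N   # arcsine law predicts 1/3
```

The samples are bounded by construction, have essentially zero mean, and match the arcsine prediction for the central band.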


The second is the Tsallis-Borland noise [21-25], which is defined by the following Langevin equation:

$$\dot\xi(t) = -\frac{1}{\tau}\,\frac{\xi}{1 - \xi^2/B^2} + \sqrt{\frac{2D}{\tau}}\,\zeta(t), \qquad (11.9)$$

where ζ(t) is a gaussian white noise with zero mean and unitary variance. The stationary density of the above noise is a Tsallis q-statistics [22-25]:

$$P_{TS}(\xi) = A(q,B)\left(1 - \frac{\xi^2}{B^2}\right)^{\frac{1}{1-q}},$$

where q < 1, D = 0.5(1 - q)B², and A(q,B) is a normalization constant, whereas the autocorrelation time is such that

$$\tau_{corr} \approx \frac{2\tau}{5 - 3q}.$$
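Equation (11.9) can be integrated by an Euler-Maruyama scheme; the restoring drift diverges at ±B and keeps the exact path bounded, but a discrete step may overshoot, so the sketch below (ours; step size, horizon, and the clipping safeguard are our own choices) clips the state slightly inside the bounds. For q = 0 the stationary density reduces to A(1 − ξ²/B²), whose variance is B²/5.

```python
import math, random

random.seed(2)
B, tau, q = 1.0, 1.0, 0.0
D = 0.5 * (1.0 - q) * B ** 2           # relation between D, q and the bound B quoted above
dt, n_steps = 1e-3, 500_000
clip = 0.999 * B                       # numerical safeguard: keep the state off the poles

xi, samples = 0.0, []
for i in range(n_steps):
    drift = -(xi / (1.0 - (xi / B) ** 2)) / tau
    xi += drift * dt + math.sqrt(2.0 * D / tau) * math.sqrt(dt) * random.gauss(0.0, 1.0)
    xi = max(-clip, min(clip, xi))     # clamp rare overshoots of the discrete step
    if i % 100 == 0:
        samples.append(xi)

var = sum(x * x for x in samples) / len(samples)
```

Up to discretization and clipping bias, the empirical variance is close to the stationary value B²/5 = 0.2.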

The third noise we shall consider is the Cai-Lin noise [65, 66], which is defined by the following Langevin equation:

$$\dot\xi(t) = -\lambda\xi + \sqrt{\frac{\lambda}{\delta + 1}\left(B^2 - \xi^2\right)}\,\zeta(t), \qquad (11.10)$$

with δ > -1. As a consequence, if ξ(0) ∈ [-B, +B], then the noise is non-gaussian with zero mean, and such that -B < ξ(t) < B. Moreover, the process ξ(t) has exactly the same autocorrelation function as the Ornstein-Uhlenbeck process, and thus its autocorrelation time is τ_corr = 1/λ. The stationary density of the Cai-Lin noise is:

$$P_{st}(\xi) = N\left(1 - \frac{\xi^2}{B^2}\right)^{\delta},$$

where N is a normalization constant. Note that the density is unimodal for δ > 0 and bimodal for δ < 0.
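A similar Euler-Maruyama sketch (ours; step size, horizon, and the clamp are our own choices) applies to Eq. (11.10); here the diffusion coefficient vanishes at ±B, and rare discrete overshoots are clamped back to the bounds. For δ = 1 the stationary density is proportional to 1 − ξ²/B², with variance B²/5.

```python
import math, random

random.seed(3)
B, lam, delta = 1.0, 1.0, 1.0          # bound, inverse correlation time, density exponent
dt, n_steps = 1e-3, 500_000

xi, samples = 0.0, []
for i in range(n_steps):
    diff2 = max(0.0, lam * (B ** 2 - xi ** 2) / (delta + 1.0))  # squared diffusion term
    xi += -lam * xi * dt + math.sqrt(diff2 * dt) * random.gauss(0.0, 1.0)
    xi = max(-B, min(B, xi))           # diffusion vanishes at +-B; clamp overshoots
    if i % 50 == 0:
        samples.append(xi)

var = sum(x * x for x in samples) / len(samples)
```

Again, the path stays inside [−B, B] and its variance approaches B²/5 = 0.2.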

11.6 Numerical Simulations of Boli-Based Therapies

In this section we shall study numerically the dynamics of a tumor undergoing a cytotoxic chemotherapy delivered at periodically spaced intervals. Namely, we shall focus on the qualitative changes of the conditional probability density function (PDF) of the tumor volume at a time t_ref, namely the density Q defined by:

$$Q(V; V_0, K_0, t_{ref})\,dV = \mathrm{Prob}\left\{\,V < V(t_{ref}) < V + dV \mid (V,K)(0) = (V_0,K_0)\,\right\}.$$

With a slight abuse of notation,¹ we shall call such qualitative changes noise-induced transitions at time t_ref.
In all simulations (if not explicitly noted) we set (V(0), K(0)) = (3900, 8000), which is a point belonging to the basin of attraction of the smaller equilibrium state of system (11.5)-(11.6) in the case of a continuous constant therapy c(t) = C = 0.36, i.e. Ve ≈ 3315. As reference time we set t_ref = 365 days.
As far as the drug administration is concerned, although continuous infusion
therapies are increasingly important from the biomedical point of view, the majority
of therapies are still scheduled by means of periodic delivery of boli of an
antitumor agent. Thus, if the agent has monoexponential pharmacokinetics, then

¹ Indeed, the noise-induced transitions theory usually refers to transitions to/from multimodality in steady-state probability densities.



its concentration profile is ruled by the following impulsive differential equation:

$$\dot c = -ac, \qquad (11.11)$$

$$c(nT^+) = c(nT^-) + S, \qquad n = 0, 1, 2, \ldots, \qquad (11.12)$$

where S is the ratio between the delivered dose and the distribution volume of the agent, T is the constant interval between two consecutive boli, and a is the clearance rate constant [67].
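Between boli Eq. (11.11) gives exact exponential decay, so the scheme (11.11)-(11.12) can be iterated in closed form. At steady state the post-bolus peak tends to S/(1 − e^{−aT}) and the period-averaged concentration to S/(aT); with the values used in the first simulation below (a = 1/7, T = 6, S = 0.154) the latter is ≈ 0.18 = C_m. A sketch of ours:

```python
import math

a, T, S = 1.0 / 7.0, 6.0, 0.154        # clearance rate (1/day), period (day), bolus height

c, peak = 0.0, 0.0
for _ in range(50):
    c += S                             # Eq. (11.12): bolus jump c(nT+) = c(nT-) + S
    peak = c                           # post-bolus concentration
    c *= math.exp(-a * T)              # Eq. (11.11): exact decay over one period

peak_theory = S / (1.0 - math.exp(-a * T))        # steady-state post-bolus peak
mean_theory = S / (a * T)                         # steady-state period-averaged level
mean = peak * (1.0 - math.exp(-a * T)) / (a * T)  # (1/T) * integral of peak*e^{-at} on [0,T]
```

After a few tens of periods the peak has converged to its steady-state value, and the average concentration over a period equals S/(aT).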
We start our analysis by examining the major stochastic factors that could perturb system (11.11)-(11.12), apart from drug dosing, which is nowadays very accurate.
The first relevant phenomenon we shall consider is the presence of stochastic fluctuations in the clearance rate of the drug [68], which are due to changes that affect the physiologic mechanisms of drug elimination by the body. The reasons underlying this kind of noise lie in manifold factors of disparate endogenous and exogenous nature, including, for example, the meals [69].
As a consequence, we consider a stochastic time-varying clearance rate

$$a(t) = a_m + \xi_a(t),$$

where ξ_a(t) is a bounded noise such that a_m + ξ_a(t) > 0. Moreover, we suppose that a_m, T, S are such that, in absence of noise, the tumor size asymptotically oscillates around a low value, i.e. in the deterministic setting there is a steady control of the tumor.
Note that, given the structure of the pharmacokinetic equations, the noise here is filtered, which might superficially lead one to think that noise-induced phenomena are not possible.
We started by simulating a cytotoxic therapy characterized by a_m = 1/7 day⁻¹, T = 6 days, and C_m = 0.18, so that the delivered bolus is S = a_m T C_m ≈ 0.154. The initial conditions of the tumor were V(0) = 3,900, K(0) = 8,000. In case of Tsallis noise with q = 0 and τ_corr = 0.5 days, we observed the onset of a NIT at B ≈ 0.1 a_m. The bimodal PDF of the r.v. V(365) for B = 0.2 a_m is shown in the central panel of Fig. 11.2, whereas the left panel shows the unimodal PDF for B = 0.08 a_m. In case of sine-Wiener noise, the density is bimodal for B = 0.11 a_m (not shown).
In a second simulation, we changed the scheduling, passing to a more time-dense (metronomic [70, 71]) scheduling, without decreasing the total quantity of delivered drug. Namely, we halved both the period, T = 3, and the dose of the bolus, S = 0.077. The effect obtained is the almost total suppression of the bimodality in the PDF at B = 0.2 a_m, as illustrated in the right panel of Fig. 11.2. Suppression of the bimodality was also observed in case of sine-Wiener noise, where at B = 0.2 a_m the PDF turned out to be unimodal.
This result suggests that metronomic schedulings might have not only the
beneficial effects of reducing the side effects as well as of being more effective


Fig. 11.2 Stochastically varying clearance rate a(t) = a_m + ξ_a(t) of a cytotoxic agent. Parameters: T = 6, a_m = 1/7, S = 0.154. ξ_a(t) is a Tsallis noise with q = 0 and τ_corr = 0.5 days. Left panel: plot of the PDF of the tumor volume at 1 year for B = 0.08 a_m; central panel: plot of the bimodal PDF for B = 0.2 a_m; right panel: suppression of the bimodality for B = 0.2 a_m by metronomic scheduling with T = 3 and S = 0.077. Tumor volumes in mm³. From the paper [15]. Copyright: American Physical Society 2010

in reducing the tumor mass, but they might even reduce the possibility of relapse suggested here, due to the nonlinear interplay between tumor and vessels.
We now pass to consider another major phenomenon, which is more directly related to human behavior: the irregularities of the drug delivery. Indeed, it is well known that the delivery times may be subject to unpredictable delays and anticipations [72]. Here we shall assume that the clearance rate is constant, whereas the delivery times are slightly irregular, which implies that Eq. (11.12) becomes:

$$c(T_n^+) = c(T_n^-) + S, \qquad (11.13)$$

$$T_n = nT_m + \tau_n, \qquad n = 0, 1, 2, \ldots, \qquad (11.14)$$

where τ_n is a random sequence such that ⟨τ_n⟩ = 0 and T_m + τ_n > 0, so that ⟨T_n⟩ = nT_m. In our simulations of a cytotoxic therapy we have supposed that {τ_n} are independent random variables uniformly distributed in the interval [-A, A]. The simulations showed that noise-induced transitions occur for A ≳ 0.33 day.
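The randomly shifted delivery times of Eq. (11.14) are easy to generate directly; with τ_n uniform on [−A, A] and A < T_m/2, consecutive inter-bolus intervals stay positive, lie in [T_m − 2A, T_m + 2A], and average T_m. A sketch of ours, with the values quoted in the text:

```python
import random

random.seed(4)
T_m, A, n_boli = 6.0, 0.33, 500        # mean period and jitter half-width from the text

taus = [random.uniform(-A, A) for _ in range(n_boli)]       # <tau_n> = 0
times = [n * T_m + taus[n] for n in range(n_boli)]          # Eq. (11.14)
intervals = [t1 - t0 for t0, t1 in zip(times, times[1:])]   # inter-bolus gaps

mean_interval = sum(intervals) / len(intervals)
```

These jittered times can then be fed to the bolus update (11.13) in place of the regular schedule nT.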

11.7 Fluctuations in Nonlinear Parameters

In nonlinear systems of the form $\dot X = \Phi(X;p)$, where p is a vector of parameters, in the vast majority of cases the velocity Φ(X;p) of the state variables depends nonlinearly on the parameters, and this fact precludes the possibility of modeling the fluctuations of p by means of gaussian white noise, whereas such fluctuations can perfectly well be modeled by means of bounded noises.

We will give here two examples of transitions induced by perturbations of nonlinear parameters for a vascularized solid tumor under constant chemotherapy. Namely, we here consider fluctuations involving the internal structure of the function η(φ), yielding


Fig. 11.3 Nonlinear fluctuations of η(φ) during a continuous constant therapy: η(φ) = 1/(1 + ξ_u(t) + ((φ - 2)/0.35)²). Effects of Cai-Lin noise are shown. In both cases: τ_corr = 1 day and δ = 1. Left panel: for B = 0.125 the probability density is unimodal, centered at low values of V; right panel: for B = 0.25 the distribution is bimodal. Tumor volumes in mm³


$$\eta(\phi) = \frac{1}{1 + \xi_u(t) + \left((\phi - 2)/0.35\right)^2\,(1 + \xi_w(t))}.$$

In the simulations of this section we assumed a drug profile such that c(t) = 0.15, whose associated equilibrium points are E1 = (3323, 6924), E2 = (4053, 7398), and E3 = (8794, 9577). Also in this case we assumed as initial condition (V_0, K_0) = (3900, 8000), which belongs to the basin of attraction of E1.

Figure 11.3 illustrates the statistical response of the system (tumor size) for the case where ξ_w(t) = 0 and ξ_u(t) is a Cai-Lin noise with τ_corr = 1 day and δ = 1. In the left panel (where B = 0.125) the smaller deterministic equilibrium is simply perturbed, and the density is unimodal; in the right panel (where B = 0.25), one may observe a second mode roughly centered at the second, larger stable deterministic equilibrium size. The transition threshold is at B ≈ 0.155.

No transition is instead observed in the case where ξ_u(t) = 0 and ξ_w(t) is a Cai-Lin noise with δ = 1 and τ_corr = 1 or 5 days, and also when ξ_w(t) is a sine-Wiener noise (with τ_corr = 1 or 5 days).
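As a consistency check of the deterministic skeleton of these simulations, one can integrate system (11.5)-(11.6) with the kinetic functions of Sect. 11.3, c = 0.15 and no noise, starting from (V_0, K_0) = (3900, 8000): the trajectory should settle at E1 ≈ (3323, 6924). A sketch of ours (the Euler step and the time horizon are our own choices):

```python
import math

ln2 = math.log(2.0)
F     = lambda phi: (ln2 / 1.5) * (1.0 - phi ** -0.5)        # net proliferation rate
eta   = lambda phi: 1.0 / (1.0 + ((phi - 2.0) / 0.35) ** 2)  # effectiveness modulation
pi_   = lambda phi: 4.64 / phi                               # vessel proliferation rate
omega = lambda V: 0.01 * V ** (2.0 / 3.0)                    # endogenous vessel loss

def run(c=0.15, V=3900.0, K=8000.0, t_end=2000.0, dt=0.01):
    for _ in range(int(round(t_end / dt))):
        phi = K / V
        dV = V * (F(phi) - eta(phi) * c)      # Eq. (11.5)
        dK = K * (pi_(phi) - omega(V))        # Eq. (11.6) with theta = gamma = 0
        V, K = V + dt * dV, K + dt * dK
    return V, K

V_end, K_end = run()
```

The noisy simulations of this section replace η by its fluctuating version above, with ξ_u(t) generated, e.g., by the Cai-Lin scheme of Sect. 11.5.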

11.8 Stochastic Oscillations in the Proliferation


Rate of Vessels

Up to now we have dealt with the impact on the outcome of antitumor chemotherapies of perturbations concerning the pharmacokinetics, the drug delivery, or the drug effectiveness in killing the neoplastic cells.

Here, instead, we are interested in assessing the consequences of irregular oscillations (around an average value) of the proliferation rate of vessels, due to irregular production of the related proangiogenic factors.


Fig. 11.4 Stochastic oscillations in π(φ) due to the fluctuating production rate of pro-angiogenic factors during a continuous constant therapy: π(φ,t) = (1 + ξ(t))(4.64/φ). Effects of Cai-Lin noise are shown. In both panels: τ_corr = 0.1 days and δ = 1. Left panel: for B = 0.1 the probability density is unimodal, centered at a low value of V; right panel: for B = 0.25 the distribution is bimodal. Tumor volumes in mm³

To this aim, we performed some simulations where we assumed that the tumor is undergoing a constant continuous therapy similar to the one considered in Sect. 11.7, and that, as a consequence of the aforementioned random oscillations, the growth rate of vessels is given by

$$\pi(\phi, t) = (1 + \xi(t))\,\pi_m(\phi),$$

where the noise ξ(t) is bounded and such that 1 + ξ(t) > 0. Namely, we assumed that C_m = 0.15 and that π_m(φ) = 4.64/φ.
Moreover, since the biochemical oscillations are by no means slower than the tumor dynamics, and the vessel growth is also faster than the tumor cell proliferation (note that (4.64)⁻¹ ≈ 0.21 days), we assumed that the autocorrelation time of the noise ξ(t) is small, taking τ_corr = 0.1.
Both in the case of Cai-Lin and of sine-Wiener noise, we obtained that noise-induced transitions occur even for small B (see Fig. 11.4). The transition thresholds are B ≈ 0.15 for the Cai-Lin noise, and B ≈ 0.1 for the sine-Wiener noise. This suggests that not only the average value of the proangiogenic factors' production rate matters, but also its random variability.

11.9 Concluding Remarks

In this work, we have presented an analysis of the possible onset of resistance to


tumor chemotherapy induced by the effects of bounded noises. The noises model
stochastic fluctuations in the time course of the blood concentration of the drug, or
in other parameters such as the production rate of pro-angiogenic factors.

The assumption of boundedness for the noise, in contrast to the use of gaussian noises, allows a more faithful modeling of real biological phenomena and makes it possible to avoid artifactual results deriving from the temporary negativity of parameters.
The interplay of the stochastic fluctuations with the intrinsic multistability
of the system may generate noise-induced transitions at the end of the therapy.
In other words, stochastic perturbations may induce a form of resistance to therapies potentially able to lead to a stable disease, in a variety of biologically meaningful scenarios, which can be divided into some classes: (a) drug delivery-related fluctuations (continuous infusion therapy and bolus-based therapy irregularly delivered);
(b) stochasticity of pharmacokinetics; (c) stochasticity of nonlinear pharmacody-
namics; (d) fluctuations in the production of pro-angiogenic factors.
In all the above cases, multistability in our models originates from the drug
effectiveness that, based on some biological background, is nonlinear and unimodal.
Concerning the control of the effects of fluctuations in the drug clearance rate,
in order to reduce the possibility of relapse (i.e., of noise-induced transitions), our
simulations suggest that a possibly beneficial option is the so-called metronomic
scheduling of the therapeutic agent. Moreover, our simulations of the irregular
intake of the therapy show that a rigorous adherence to the prescribed scheduling
can avoid therapeutic failures. More difficult appears the control of other fluctuation
sources, such as the distribution volume of the drug, which would probably require
a feedback adaptation of the delivered dose.
Summarizing, we may say that the possible multistability of tumors under
constant continuous infusion chemotherapy, suggested by our models, calls for more
efforts in monitoring the drug delivery, also in view of therapy optimization.

Acknowledgments The work of A. d'Onofrio was conducted within the framework of the EU
Integrated Projects Advancing Clinico-Genomic Trials on Cancer (ACGT) and P-Medicine. This
work was also partially supported by MIUR-Italy, PRIN 2008RSZPYY.

References

1. Tuerk, D., Szakacs, G.: Curr. Op. Drug Disc. Devel. 12, 246 (2009)
2. Kimmel, M., Swierniak, A.: Lect. Note Math. 1872, 185 (2006)
3. Tunggal, J.K., Cowan, D.S.M., Shaikh, H., Tannock, I.F.: Clin. Cancer Res. 5, 1583 (1999)
4. Cowan, D.S.M., Tannock, I.F.: Int. J. Cancer 91, 120 (2001)
5. Jain, R.K.: Ann. Rev. Biom. Eng. 1, 241 (2001)
6. Jain, R.K.: J. Contr. Release 74, 7 (2001)
7. Wijeratne, N.S., Hoo, K.A.: Cell Prolif. 40, 283 (2007)
8. Carmeliet, P., Jain, R.K.: Nature 407, 249 (2000)
9. Tzafriri, A.R., Levin, A.D., Edelman, E.R.: Cell Prolif. 42, 348 (2009)
10. Netti, P.A., Berk, D.A., Swartz, M.A., Grodzinsky, A.J., Jain, R.K.: Cancer Res. 60,
2497 (2000)
11. Cosse, J.P., Ronvaux, M., Ninane, N., Raes, M.J., Michiels, C.: Neoplasia 11, 976 (2009)
12. Araujo, R.P., McElwain, D.L.S.: J. Theor. Biol. 228, 335 (2004)
13. d'Onofrio, A., Gandolfi, A.: J. Theor. Biol. 264, 253 (2010)
186 A. d'Onofrio and A. Gandolfi

14. d'Onofrio, A., Gandolfi, A., Gattoni, S.: Phys. A 391, 6484–6496 (2012)
15. d'Onofrio, A., Gandolfi, A.: Phys. Rev. E 82, 061901 (2010)
16. Norton, L., Simon, R.: Cancer Treat. Rep. 61, 1303 (1977)
17. Kramers, H.A.: Physica 7, 284 (1940)
18. Horsthemke, W., Lefever, R.: Noise-Induced Transitions in Physics, Chemistry and Biology.
Springer, Heidelberg (2007)
19. d'Onofrio, A.: Noisy oncology. In: Venturino, E., Hoskins, R.H. (eds.) Aspects of Mathematical Modelling. Birkhäuser, Boston (2006)
20. d'Onofrio, A.: Appl. Math. Lett. 21, 662 (2008)
21. d'Onofrio, A.: Phys. Rev. E 81, 021923 (2010)
22. Fuentes, M.A., Toral, R., Wio, H.S.: Phys. A 295, 114 (2001)
23. Fuentes, M.A., Wio, H.S., Toral, R.: Phys. A 303, 91 (2002)
24. Revelli, J.A., Sanchez, A.D., Wio, H.S.: Phys. D 168–169, 165 (2002)
25. Wio, H.S., Toral, R.: Phys. D 193, 161–168 (2004)
26. Bobryk, R.B., Chrzeszczyk, A.: Phys. A 358, 263 (2005)
27. Wheldon, T.: Mathematical Models in Cancer Research. Hilger Publishing, Boston (1989)
28. Castorina, P., Zappala, D.: Phys. A Stat. Mech. Appl. 365, 14 (2004)
29. Molski, M., Konarski, J.: Phys. Rev. E 68, Art. No. 021916 (2003)
30. Waliszewski, P., Konarski, J.: Chaos Solit. Fract. 16, 665–674 (2003)
31. d'Onofrio, A.: Phys. D 208, 220–235 (2005)
32. Kane Laird, A.: Br. J. Cancer 18, 490–502 (1964)
33. Marusic, M., Bajzer, Z., Freyer, J.P., Vuk-Pavlovic, S.: Cell Prolif. 27, 73–94 (1994)
34. Afenya, E.K., Calderon, C.P.: Bull. Math. Biol. 62, 527–542 (2000)
35. Skipper, H.E.: Bull. Math. Biol. 48, 253 (1986)
36. Folkman, J.: Adv. Cancer Res. 43, 175 (1985)
37. Carmeliet, P., Jain, R.K.: Nature 407, 249 (2000)
38. Yancopoulos, G.D., Davis, S., Gale, N.W., Rudge, J.S., Wiegand, S.J., Holash, J.: Nature 407,
242 (2000)
39. O'Reilly, M.S., et al.: Cell 79, 315 (1994)
40. O'Reilly, M.S., et al.: Cell 88, 277 (1997)
41. Folkman, J.: Ann. Rev. Med. 57, 1 (2006)
42. Hahnfeldt, P., Panigrahy, D., Folkman, J., Hlatky, L.: Cancer Res. 59, 4770 (1999)
43. Sachs, R.K., Hlatky, L.R., Hahnfeldt, P.: Math. Comput. Mod. 33, 1297 (2001)
44. Ramanujan, S., et al.: Cancer Res. 60, 1442 (2000)
45. d'Onofrio, A., Gandolfi, A.: Math. Biosci. 191, 159 (2004)
46. d'Onofrio, A., Gandolfi, A.: Appl. Math. Comput. 181, 1155 (2006)
47. d'Onofrio, A., Gandolfi, A., Rocca, A.: Cell Prolif. 43, 317 (2009)
48. d'Onofrio, A., Gandolfi, A.: Math. Med. Bio. 26, 63 (2008)
49. Capogrosso Sansone, B., Scalerandi, M., Condat, C.A.: Phys. Rev. Lett. 87, 128102 (2001)
50. Scalerandi, M., Capogrosso Sansone, B.: Phys. Rev. Lett. 89, 218101 (2002)
51. Arakelyan, L., Vainstein, V., Agur, Z.: Angiogenesis 5, 203 (2003)
52. Stoll, B.R., et al.: Blood 102 2555 (2003); Tee, D., DiStefano III, J.: J. Cancer Res. Clin. Oncol.
130, 15 (2004)
53. Chaplain, M.A.J.: The mathematical modelling of the stages of tumour development. In:
Adam, J.A., Bellomo, N. (eds.) A Survey of Models for Tumor-Immune System Dynamics.
Birkhauser, Boston (1997)
54. Anderson, A.R.A., Chaplain, M.A.J.: Bull. Math. Biol. 60, 857 (1998)
55. De Angelis, E., Preziosi, L.: Math. Mod. Meth. Appl. Sci. 10, 379 (2000)
56. Jackson, T.L.: J. Math. Biol. 44, 201 (2002)
57. Forys, U., Kheifetz, Y., Kogan, Y.: Math. Biosci. Eng. 2, 511 (2005)
58. Kevrekidis, P.G., Whitaker, N., Good, D.J., Herring, G.J.: Phys. Rev. E 73, 061926 (2006)
59. Agur, Z., Arakelyan, L., Daugulis, P., Ginosar, Y.: Discr. Cont. Dyn. Syst. B4, 29 (2004)
60. Kerbel, R.S., Kamen, B.A.: Nat. Rev. Cancer 4, 423 (2004)
61. Horsthemke, W., Lefever, R.: Phys. Lett. 64A, 19 (1977)

62. Lefever, R., Horsthemke, W.: Bull. Math. Biol. 41, 469 (1979)
63. d'Onofrio, A., Tomlinson, I.P.M.: J. Theor. Biol. 244, 367 (2007)
64. Deza, R., Wio, H.S., Fuentes, M.A.: Noise-induced phase transitions: effects of the noises'
statistics and spectrum. In: Nonequilibrium Statistical Mechanics and Nonlinear Physics: XV
Conference on Nonequilibrium Statistical Mechanics and Nonlinear Physics, AIP Conf. Proc.
913, pp. 62–67 (2007)
65. Cai, G.Q., Wu, C.: Probabilist. Eng. Mech. 19, 197–203 (2004)
66. Cai, G.Q., Lin, Y.K.: Phys. Rev. E 54, 299–303 (1996)
67. Rescigno, A.: Pharm. Res. 35, 363 (1997)
68. Lansky, P., Lanska, V., Weiss, M.: J. Contr. Release 100, 267 (2004); Ditlevsen, S., de Gaetano,
A.: Bull. Math. Biol. 67, 547 (2005)
69. Csajka, C., Verotta, D.: J. Pharmacokin. Pharmacodyn. 33, 227 (2006)
70. Browder, T., Butterfield, C.E., Kraling, B.M., Shi, B., Marshall, B., O'Reilly, M.S., Folkman,
J.: Cancer Res. 60, 1878 (2000)
71. Hahnfeldt, P., Folkman, J., Hlatky, L.: J. Theor. Biol. 220, 545 (2003)
72. Li, J., Nekka, F.: J. Pharmacokin. Pharmacodyn. 34, 115 (2007)
Chapter 12
Interplay Between Cross Correlation and Delays
in the Sine-Wiener Noise-Induced Transitions

Wei Guo and Dong-Cheng Mei

Abstract The analyses and a possible definition of cross-correlated sine-Wiener
noises are given first. As an application example, the model of tumor–immune
system interplay with time delays and cross-correlated sine-Wiener noises is
investigated by numerical simulations for the stationary probability distribution and
the stationary mean value of the tumor cell population.

Keywords Bounded noises · Sine-Wiener noise · Cross-correlated noises ·
Tumor–immune system · Delay differential equations

12.1 Introduction

Traditionally, Gaussian noise is adopted to describe fluctuations of dynamical
systems. However, Gaussian noise is unbounded and as such there is a positive
chance of it taking arbitrarily large values [1–3]. Strictly speaking, this fact contradicts the
very nature of a real physical quantity, which is always bounded [3, 4]. Reference
[2] indicates that a suitable bounded noise should be introduced in the modeling
of the tumor–immune system interplay. Until now, a great deal of interesting
research on bounded noise has been published [3, 5–9]. The sine-Wiener (SW)
noise, for example, can induce transitions in different models [2, 3, 10, 11]. If
a dynamical system is affected by multiple noises that have a common origin,
cross-correlated noises should be included [12]. Theoretically, the role of the cross-
correlated bounded noises, e.g., cross-correlated sine-Wiener (CCSW) noises, is the
same as that of cross-correlated Gaussian noises.

W. Guo () D.-C. Mei


Department of Physics, Yunnan University, Kunming, China
e-mail: guowei997@gmail.com; meidch@ynu.edu.cn

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 189
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_12, © Springer Science+Business Media New York 2013
190 W. Guo and D.-C. Mei

In realistic systems, the inclusion of a time delay is natural. From the point of
view of physics, the transport of matter, energy, and information through a system
requires a finite time, which is treated as a time delay. The analytical study of
stochastic systems with time-delayed feedback is rather complicated.
However, several authors investigated the stationary probability distribution (SPD)
of the one-dimensional Langevin equation (LE) with small time delay [13–15]. For
the LE with large time delay one resorts to numerical simulation [16]. Many
interesting and meaningful phenomena induced by delayed feedbacks have been
found, e.g., traveling wave solutions [17], coherence resonance and spike death
[18–20], excitability [21], symmetry breaking [22], suppressed population explosion
of the mutualism system [23], Hopf bifurcation [16], stochastic resonance [24–26],
as well as critical phenomena [27, 28].
Tumors can arise through extremely complex (nonlinear and time-varying)
interactions with the immune system. Three essential phases have been envisaged
for tumor natural history in recent literature [29]: the elimination phase, the equilibrium
phase, and the escape phase. The equilibrium phase between the tumor and the antitumor
immune response may last many years, in which the immune system exerts
a selective pressure by eliminating susceptible tumor clones, and a continuous
sculpting of tumor cell variants occurs [29–31]. It is worth noting that the existence
of tumor-specific antigens has been experimentally confirmed by Willimsky [30].
The development of specific immune competence, namely the antigen recognition
and the antigen-stimulated proliferation of effectors, takes a certain time, which can
be simulated by a time delay [32–34]. Moreover, for adapting to the constraints of their
surrounding environment, tumor cells need a reaction time, which has also been treated as a
time delay [28, 35]. Finally, since both the immune system and tumor cells are influ-
enced by many environmental factors (e.g., the supply of oxygen, chemical agents,
temperature, and radiations), it is unavoidable that the parameters of the tumor–immune
interaction undergo stochastic perturbations [1, 28, 36–39]. In certain situations, a
bounded noise instead of a Gaussian noise can give the parameter a reasonably
realistic stochastic character (see, e.g., [2]). If two or more parameters are affected
by noises that have a common origin, cross-correlated bounded noises (e.g., CCSW
noises) should further be considered in the tumor–immune interactions, in addition
to the time delays.

12.2 Definition of Cross-Correlated Sine-Wiener Noises

The explicit definition of the cross-correlated sine-Wiener (CCSW) noises ζ1(t) and
ζ2(t) will be given here. Consider two sine-Wiener (SW) noises,

    ζ1(t) = A sin( √(2/τ1) W1(t) ),
    ζ2(t) = B sin( √(2/τ2) W2(t) ).                                   (12.1)
12 Interplay Between Cross Correlation and Delays in the Sine-Wiener Noise... 191

Here τ1 and τ2 are the correlation times of ζ1(t) and ζ2(t), respectively; A and
B are their noise intensities; W1(t) and W2(t) are two standard Wiener processes,
dW1 = ξ1(t)dt and dW2 = ξ2(t)dt. The two Gaussian white noises ξ1 and ξ2 satisfy the
fluctuation–dissipation relations:

    ⟨ξ1(t)ξ1(t′)⟩ = ⟨ξ2(t)ξ2(t′)⟩ = δ(t − t′),
    ⟨ξ1(t)ξ2(t′)⟩ = ⟨ξ1(t′)ξ2(t)⟩ = λ δ(t − t′),                      (12.2)

where the cross-correlation intensity λ ∈ [−1, 1], and ⟨·⟩ denotes an ensemble average.
Using the initial values of the standard Wiener processes, W1(0) = W2(0) = 0, one gets
⟨W1(t)W2(t′)⟩ = ⟨W1(t′)W2(t)⟩ = λ min(t, t′), where min(t, t′) denotes the
smaller value between t and t′.
To obtain the statistical properties of the noises, the following formula is
derived first:

    ⟨exp(a W1(t) + b W2(t′))⟩ = exp[ a²t/2 + abλ min(t, t′) + b²t′/2 ].   (12.3)
Here, a and b are two constants. To prove it, the Gaussian noises are transformed [40]:

    ξ1(t) = μ(t),
    ξ2(t) = λ μ(t) + √(1 − λ²) ν(t),                                  (12.4)

where μ(t) and ν(t) are two independent Gaussian white noises with unitary
intensity. Note that Eq. (12.2) is still satisfied.
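The decomposition in Eq. (12.4) can be checked numerically. The following minimal sketch (variable names are ours) draws two independent unit white-noise sequences and verifies that the transformed pair has unit variance and cross-correlation λ:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.6                                    # cross-correlation intensity
n = 200_000

mu = rng.normal(0.0, 1.0, n)                 # two independent unit white noises
nu = rng.normal(0.0, 1.0, n)

xi1 = mu                                     # Eq. (12.4)-style decomposition
xi2 = lam * mu + np.sqrt(1.0 - lam**2) * nu

# The transformation preserves unit variance and yields cross-correlation lam.
assert abs(xi2.std() - 1.0) < 0.01
assert abs(np.corrcoef(xi1, xi2)[0, 1] - lam) < 0.01
```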
By substituting Eq. (12.4) into the left-hand side of Eq. (12.3) and expanding the exponential
function, one can obtain

    ⟨exp(a W1(t) + b W2(t′))⟩
      = 1 + ⟨ a ∫₀ᵗ dt₁ μ(t₁) + b ∫₀^{t′} dt₂ [λ μ(t₂) + √(1 − λ²) ν(t₂)] ⟩
        + (1/2!) ⟨ { a ∫₀ᵗ dt₁ μ(t₁) + b ∫₀^{t′} dt₂ [λ μ(t₂) + √(1 − λ²) ν(t₂)] }² ⟩ + …
        + (1/(2n)!) ⟨ { a ∫₀ᵗ dt₁ μ(t₁) + b ∫₀^{t′} dt₂ [λ μ(t₂) + √(1 − λ²) ν(t₂)] }²ⁿ ⟩ + …   (12.5)

Considering the properties of μ and ν, through a lengthy simplification, the
averages of the terms with odd powers in Eq. (12.5) vanish, and then we have

    ⟨exp(a W1(t) + b W2(t′))⟩ = 1 + (1/2!) f(t) + … + (1/(2n)!) [(2n)!/(2ⁿ n!)] f(t)ⁿ + … ,
                                                    n = 1, 2, …       (12.6)

with

    f(t) = a² ∫₀ᵗ ∫₀ᵗ dt₁ dt₂ ⟨μ(t₁)μ(t₂)⟩
         + b² ∫₀^{t′} ∫₀^{t′} dt₁ dt₂ [ λ² ⟨μ(t₁)μ(t₂)⟩ + (1 − λ²) ⟨ν(t₁)ν(t₂)⟩ ]
         + 2abλ ∫₀ᵗ ∫₀^{t′} dt₁ dt₂ ⟨μ(t₁)μ(t₂)⟩.                     (12.7)

In this derivation, we used

    ⟨ ∫₀ᵗ … ∫₀ᵗ dt₁ … dt₂ₙ₋₁ μ(t₁) … μ(t₂ₙ₋₁) ⟩ = 0,
    ⟨ ∫₀ᵗ … ∫₀ᵗ dt₁ … dt₂ₙ μ(t₁) … μ(t₂ₙ) ⟩ = [(2n)!/(2ⁿ n!)] [ ∫₀ᵗ ∫₀ᵗ dt₁ dt₂ δ(t₁ − t₂) ]ⁿ,

and analogously for ν; the factor (2n)!/(2ⁿ n!) is the number of pairings [41].
The summation in Eq. (12.6) can be performed in the exponential, i.e.,

    ⟨exp(a W1(t) + b W2(t′))⟩ = exp{ f(t)/2 },                        (12.8)

and, using the integral formula

    ∫₀ᵗ ∫₀^{t′} dt₁ dt₂ ⟨μ(t₁)μ(t₂)⟩ = min(t, t′),

Eq. (12.8) can be written as Eq. (12.3).
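Formula (12.3) can also be verified by Monte Carlo. The sketch below (with illustrative values of a, b, t, t′ and of the white-noise cross-correlation intensity λ) samples the jointly Gaussian pair (W1(t), W2(t′)) directly from its law — zero mean, variances t and t′, covariance λ min(t, t′) — and compares the empirical mean of exp(aW1(t) + bW2(t′)) with the closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, lam = 0.5, 0.4, 0.7
t, tp = 1.0, 0.6                      # times t and t' (here t > t')
n = 400_000

# Joint law: W1(t) ~ N(0, t), W2(t') ~ N(0, t'), Cov = lam * min(t, t').
cov = lam * min(t, tp)
g1 = rng.normal(0.0, 1.0, n)
g2 = rng.normal(0.0, 1.0, n)
W1t = np.sqrt(t) * g1
W2tp = (cov / t) * W1t + np.sqrt(tp - cov**2 / t) * g2   # matches the joint law

est = np.mean(np.exp(a * W1t + b * W2tp))
exact = np.exp(a**2 * t / 2 + a * b * cov + b**2 * tp / 2)
assert abs(est / exact - 1.0) < 0.05
```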


According to the Euler representation of the sine function and formula (12.3),
the following properties of the noises are derived [42]:

    ⟨ζ1(t)⟩ = ⟨ζ2(t)⟩ = 0,                                            (12.9)

    ⟨ζ1(t)ζ1(t′)⟩ = (A²/2) exp(−|t − t′|/τ1) [1 − exp(−4 min(t, t′)/τ1)],   (12.10)

    ⟨ζ2(t)ζ2(t′)⟩ = (B²/2) exp(−|t − t′|/τ2) [1 − exp(−4 min(t, t′)/τ2)],   (12.11)

    ⟨ζ1(t)ζ2(t′)⟩ = ⟨ζ1(t′)ζ2(t)⟩ = (AB/2) λ̃ exp(−|t − t′|/τ3) [1 − exp(−4 min(t, t′)/τ3)],   (12.12)
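The autocorrelation structure of a single sine-Wiener noise, Eq. (12.10), is easy to check by simulating many independent paths; since the Wiener process is sampled exactly at the grid points, no discretization bias enters. A sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
A, tau = 1.0, 0.5
dt, t, tp = 0.05, 2.0, 2.2            # grid step and the two sampling times
n_paths = 50_000
n_steps = int(round(tp / dt))

# Exact Wiener samples on the grid: W(k*dt) via cumulative Gaussian increments.
dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)
z = A * np.sin(np.sqrt(2.0 / tau) * W)

i, j = int(round(t / dt)) - 1, int(round(tp / dt)) - 1   # indices of t and t'
est = (z[:, i] * z[:, j]).mean()
exact = (A**2 / 2) * np.exp(-abs(t - tp) / tau) * (1 - np.exp(-4 * min(t, tp) / tau))
assert abs(est - exact) < 0.02
```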
12 Interplay Between Cross Correlation and Delays in the Sine-Wiener Noise... 193


Fig. 12.1 λ̃ as a function of λ and t/τ3 from Eq. (12.13). From paper [42], © Elsevier Science
Ltd (2012)

with

    λ̃ = exp( −2(1 − λ)t/τ3 ) · [1 − exp(−4λt/τ3)] / [1 − exp(−4t/τ3)].   (12.13)

λ̃ as a function of λ and t/τ3 is plotted in Fig. 12.1 from Eq. (12.13). It can be
seen that the values of λ̃ are strongly influenced by λ and t/τ3 in the case of
t/τ3 < 2, and only slightly influenced for t/τ3 > 5, but the curves constantly pass
through the three points (λ, λ̃) = (−1, −1), (0, 0), (1, 1). Generally, a long time is
needed for the system to reach the stationary state, which results in t/τ3 ≫ 5. Namely,
λ̃ may approximately be treated as an independent variable, rather than a function of
λ and t/τ3. Moreover, the cross-correlated statistical properties should include a
cross-correlation time (independent of the self-correlation times τ1 and τ2); τ3 in
Eq. (12.12) may play this role.
Consequently, λ̃ and τ3 are redefined as two new variables, λ̃ ∈ [−1, 1] and τ3 ≥
0, which are the cross-correlation intensity and cross-correlation time, respectively.
In this way Eq. (12.12) can be taken as the definition of the cross-correlated statistical
properties of the noises, following the definition of cross-correlated colored noises. It should
be noted that the cross-correlation time τ3 must be zero when the intensity λ̃ is 0.
We will consider the CCSW noises with the statistical properties Eqs. (12.9)–(12.12)
in the following section.
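The effective intensity of Eq. (12.13) can be coded directly; the assertions below (a small sketch, with our own variable names) confirm that, for any value of t/τ3, the curves pass through the three points (λ, λ̃) = (−1, −1), (0, 0), (1, 1), as stated above:

```python
import numpy as np

def lam_tilde(lam, s):
    """Effective cross-correlation intensity of Eq. (12.13), with s = t / tau3."""
    return np.exp(-2.0 * (1.0 - lam) * s) * (1.0 - np.exp(-4.0 * lam * s)) \
           / (1.0 - np.exp(-4.0 * s))

# For any t/tau3, the curves pass through (lam, lam_tilde) = (-1,-1), (0,0), (1,1).
for s in (0.5, 2.0, 5.0):
    assert abs(lam_tilde(1.0, s) - 1.0) < 1e-9
    assert lam_tilde(0.0, s) == 0.0
    assert abs(lam_tilde(-1.0, s) + 1.0) < 1e-9
```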
194 W. Guo and D.-C. Mei

12.3 Cross Correlation and Delays in the Transitions

12.3.1 Model

Consider a model of the tumor–immune system interplay [42]:

    dx/dt = [r + ζ1(t)] (1 − x_θ/k) x − [β + ζ2(t)] x x_σ / (1 + x²),   (12.14)

where x is the number of tumor cells (with the same meaning as a tumor volume or
mass) at time t; r is the per capita birth rate in the presence of innate immunity, and
r > 0, which means weak innate immunity or a highly aggressive tumor; k (> 0) is
the largest intrinsic carrying capacity allowed by the environment; β (≥ 0) is the
specific immune coefficient; x_θ = x(t − θ) and x_σ = x(t − σ). The two constant delay
times, θ and σ, are used to simulate a reaction time of the tumor cell population to its
surrounding environment constraints, and the time taken by both the tumor antigen
identification and the tumor-stimulated proliferation of effectors (e.g., effector cells and
effector molecules), respectively.
Now, the main reasons for the introduction of CCSW noises in the model are
presented. First, the fluctuation of a Gaussian noise (e.g., the white noise) is large and,
in certain situations, it is questionable to make a positive parameter subject to it.
In Fig. 1 of [2], a positive parameter (equal to 1.8) under a white Gaussian noise of
unitary intensity is negative with a large percentage (≈ 37.9 %). In our model, r > 0
and β ≥ 0; after r and β are affected by external perturbations, 2r ≥ [r + ζ1(t)] > 0
and 2β ≥ [β + ζ2(t)] ≥ 0 are always ensured by taking the values of A and B as
0 ≤ A < r and 0 ≤ B ≤ β in Eq. (12.1). Second, since ζ1(t) and ζ2(t) are
assumed to have a common origin (the external disturbance mentioned above),
the noises may be correlated [12].

12.3.2 Algorithm

The transitions between unimodal and bimodal SPDs are termed nonequi-
librium phase transitions [43, 44]. The study of dynamical systems with
cross-correlated bounded noises is complicated, and research in this field is
rare. Generally, since the cross-correlated noises cannot be treated directly,
it is mandatory to develop a transformation, i.e., a decoupling scheme
[40] or a stochastically equivalent method [45]. In order to investigate the transitions
in the system, the SPD is here simulated from Eq. (12.14).
For simplicity, we impose λ ≥ 0 and let all the correlation times take the same
value (i.e., τ1 = τ2 = τ3 = τ). Here, the CCSW noises are obtained by the following
transformations (similar to Eq. (12.4)):

Fig. 12.2 The SPD as a function of x for A = 0.5, B = 1, r = 1, k = 10, β = 2, and θ = σ = 0,
with (a) λ = 0; τ = 0.03, 0.05, 0.08 and 0.09, and (b) λ = 0.5; τ = 0.07, 0.1, 0.2 and 0.3. From
paper [42], © Elsevier Science Ltd (2012)


Fig. 12.3 The SPD as a function of x in the cases (a) λ = 0.8; τ = 0.2, 0.3, 0.4 and 0.5, and
(b) λ = 0.9; τ = 0.5, 0.65, 0.85 and 0.9. The other parameter values are the same as in Fig. 12.2.
From paper [42], © Elsevier Science Ltd (2012)

    ζ1(t) = A sin( √(2/τ) W(t) ),

    ζ2(t) = λ B sin( √(2/τ) W(t) ) + B √(1 − λ²) sin( √(2/τ) V(t) ),   (12.15)

where W and V are two independent standard Wiener processes. These transfor-
mations do not change the statistical properties of Eqs. (12.9)–(12.12).
By substituting Eq. (12.15) into Eq. (12.14), we integrate Eq. (12.14) with the
Box–Muller algorithm for generating the Gaussian white noise and the Euler
forward procedure [46, 47]. For each value of the delay times and the noise
parameters, the SPD is calculated as an ensemble average of independent realizations.
Every realization spans 2.5 × 10⁶ integration steps to allow the system to reach a
stationary state. We employed initial values x(t ≤ 0) ∈ (0, 0.1), and the integration
step was Δt = 0.001. The results are presented as follows.
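Before turning to the results, the integration scheme just described can be sketched as follows: a minimal, illustrative forward-Euler implementation with constant history and CCSW noises built as in Eq. (12.15). Parameter names and values are ours, and the run is kept deliberately short; it is a sketch of the scheme, not the authors' production code.

```python
import numpy as np

def simulate(theta=0.0, sigma=0.0, lam=0.5, tau=0.5,
             A=0.5, B=1.0, r=1.0, k=10.0, beta=2.0,
             dt=1e-3, n_steps=20_000, seed=4):
    """Forward-Euler integration of the delayed model driven by two
    correlated sine-Wiener noises built as in Eq. (12.15)."""
    rng = np.random.default_rng(seed)
    d_th, d_sg = int(round(theta / dt)), int(round(sigma / dt))
    hist = max(d_th, d_sg)
    x = np.empty(hist + n_steps + 1)
    x[: hist + 1] = 0.05                      # constant history x(t <= 0)
    W = V = 0.0                               # two independent Wiener processes
    c, sdt = np.sqrt(2.0 / tau), np.sqrt(dt)
    for i in range(hist, hist + n_steps):
        W += sdt * rng.standard_normal()
        V += sdt * rng.standard_normal()
        z1 = A * np.sin(c * W)
        z2 = lam * B * np.sin(c * W) + np.sqrt(1.0 - lam**2) * B * np.sin(c * V)
        xt, x_th, x_sg = x[i], x[i - d_th], x[i - d_sg]
        dx = (r + z1) * (1.0 - x_th / k) * xt - (beta + z2) * xt * x_sg / (1.0 + xt**2)
        x[i + 1] = xt + dx * dt
    return x[hist:]

x = simulate()
assert np.isfinite(x).all() and (x >= 0.0).all()
```

In practice, one would repeat such realizations over many independent noise paths and histogram the long-time values of x to estimate the SPD.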

Fig. 12.4 The SPD as a
function of x for σ = 0,
λ = 0.5 and τ = 0.5, with
θ = 0, 1.0, 1.5 and 1.8,
respectively. The other
parameter values are the same
as in Fig. 12.2. From paper
[42], © Elsevier Science Ltd
(2012)

Fig. 12.5 The SPD as a
function of x for θ = 2,
λ = 0.5 and τ = 0.5, with
σ = 0, 0.4, 0.8 and 1.2,
respectively. The other
parameter values are the same
as in Fig. 12.2. From paper
[42], © Elsevier Science Ltd
(2012)

12.3.3 Numerical Simulation Results and Discussions

The SPD as a function of x for different values of the correlation time τ is plotted in
Figs. 12.2 and 12.3. Figure 12.2a shows that the unimodal SPD centered at a low value
of x acquires a bimodal structure, with the second maximum centered at larger values
of the tumor size x, as τ is increased. There is a critical value of τ = 0.05 (denoted by
τ_cr1 = 0.05), near which a transition appears. Figure 12.2b reveals that the unimodal
SPD becomes bimodal with increasing τ, and there is a transition near τ = 0.1
(denoted by τ_cr2 = 0.1). Likewise, in Fig. 12.3a, b the transitions from the unimodal
SPD to the bimodal SPD occur close to the two critical values τ_cr3 = 0.3 and τ_cr4 = 0.65,


Fig. 12.6 The ⟨x⟩st as a function of θ and σ for λ = 0.5 and τ = 0.5. The other parameter
values are the same as in Fig. 12.2. From paper [42], © Elsevier Science Ltd (2012)

respectively. From Figs. 12.2 and 12.3, the critical correlation time τ_cr increases
with rising correlation intensity λ. Namely, an increase in the degree of correlation
between the noises can suppress the transitions caused by τ; that is, the escape of the
tumor is suppressed by λ.
In Figs. 12.4 and 12.5, the SPD as a function of x is plotted for different time delays
θ and σ, respectively. Figure 12.4 depicts that the left peak of the SPD becomes
higher for θ ≥ 1 and the right peak becomes lower, until it disappears at about θ = 1.8
(denoted by θ_cr = 1.8), with increasing θ; i.e., a transition arises near θ_cr. A sim-
ilar transition also appears in [28], where the system is driven by a Gaussian white
noise. In Fig. 12.5, the emergence of a bimodal SPD occurs in the vicinity of σ = 0.8 (de-
noted by σ_cr = 0.8) as σ is increased. Namely, a transition can be induced by σ.
In Fig. 12.6, ⟨x⟩st as a function of the two time delays θ and σ is plotted. It displays
that ⟨x⟩st decreases obviously with increasing θ when σ approaches 0, and ⟨x⟩st
increases obviously with increasing σ as θ comes close to 2. Namely, large θ
promotes the transitions induced by σ, and small σ promotes the transitions
induced by θ. Now, the behavior in Figs. 12.4–12.6 is discussed. The equilibrium
phase is unstable from the viewpoint of mathematical physics [39] and lasts for the
longest time among the three phases in tumorigenesis [29]. For a tumor in the equilibrium
phase, a large θ means a low adaptive capacity to the current surrounding environment,
and in this case the tumor transfers to the escape phase if the immune response is


Fig. 12.7 The ⟨x⟩st as a function of λ and τ. The other parameter values are the same as in
Fig. 12.2. From paper [42], © Elsevier Science Ltd (2012)

blunted enough (see the emergence of the bimodal SPD in Fig. 12.5 and ⟨x⟩st vs σ in
Fig. 12.6 for θ ≥ 1.6). On the contrary, in the case of a rapid immune response
(small σ), the adaptive capacity at a low level leads to the suppression of the escape
(see the emergence of the unimodal SPD in Fig. 12.4 and ⟨x⟩st vs θ in Fig. 12.6 for
σ ≤ 0.5).

In Fig. 12.7, ⟨x⟩st as a function of the two noise parameters λ and τ is plotted.
⟨x⟩st rises pronouncedly at first, and then changes only slightly with respect to increasing
correlation time τ for fixed correlation intensity λ. Meanwhile, the critical value
of the correlation time τ_cr, i.e., the one corresponding to the significantly increased ⟨x⟩st, is
increased as the noise correlation degree λ is increased from 0 (uncorrelated noises)
to 1 (the most strongly correlated noises). This also confirms the results of Figs. 12.2
and 12.3.

12.4 Conclusions

We report a study of the interplay between cross correlation and delays in the sine-
Wiener noise-induced transitions in a model of the tumor–immune system interplay.
The CCSW noises are defined, and it is worth noting that they are useful
for modeling dynamical systems. Moreover, although the corresponding system
exhibits rich dynamical behaviors, to the best of our knowledge the interplay
between cross-correlated bounded noises and delays makes it difficult to
obtain analytical results, due to the complexity of the systems. We expect that
these numerical findings will trigger new investigations on this topic.

Acknowledgments This work was supported by the National Natural Science Foundation of
China (Grant No. 11165016) and the program for Innovative Research Team (in Science and
Technology) in University of Yunnan province.

References

1. d'Onofrio, A.: Appl. Math. Lett. 21, 662 (2008)
2. d'Onofrio, A.: Phys. Rev. E 81, 021923 (2010)
3. Bobryk, R.V., Chrzeszczyk, A.: Phys. A 358, 263 (2005)
4. Cai, G.Q., Wu, C.: Probab. Eng. Mech. 19, 197 (2004)
5. Borland, L.: Phys. Lett. A 245, 67 (1998)
6. Fuentes, M.A., Toral, R., Wio, H.S.: Phys. A 295, 114 (2001)
7. Revelli, J.A., Sanchez, A.D., Wio, H.S.: Phys. D 168–169, 165 (2002)
8. Fuentes, M.A., Wio, H.S., Toral, R.: Phys. A 303, 91 (2002)
9. Wio, H.S., Toral, R.: Phys. D 193, 161 (2004)
10. Bobryk, R.V., Chrzeszczyk, A.: Nonlinear Dyn. 51, 541 (2008)
11. d'Onofrio, A., Gandolfi, A.: Phys. Rev. E 82, 061901 (2010)
12. Fulinski, A., Telejko, T.: Phys. Lett. A 152, 11 (1991)
13. Guillouzic, S., L'Heureux, I., Longtin, A.: Phys. Rev. E 59, 3970 (1999)
14. Frank, T.D.: Phys. Rev. E 71, 031106 (2005)
15. Frank, T.D.: Phys. Rev. E 72, 011112 (2005)
16. Nie, L.R., Mei, D.C.: Phys. Rev. E 77, 031107 (2008)
17. Bressloff, P.C., Coombes, S.: Phys. Rev. Lett. 80, 4815 (1998)
18. Huber, D., Tsimring, L.S.: Phys. Rev. Lett. 91, 260601 (2003)
19. Tsimring, L.S., Pikovsky, A.: Phys. Rev. Lett. 87, 250602 (2001)
20. Guo, W., Du, L.C., Mei, D.C.: Eur. Phys. J. B 85, 182 (2012)
21. Piwonski, T., Houlihan, J., Busch, T., Huyet, G.: Phys. Rev. Lett. 95, 040601 (2005)
22. Wu, D., Zhu, S.Q.: Phys. Rev. E 73, 051107 (2006)
23. Nie, L.R., Mei, D.C.: Europhys. Lett. 79, 20005 (2007)
24. Masoller, C.: Phys. Rev. Lett. 90, 020601 (2003)
25. Borromeo, M., Marchesoni, F.: Phys. Rev. E 75, 041106 (2007)
26. Mei, D.C., Du, L.C., Wang, C.J.: J. Stat. Phys. 137, 625 (2009)
27. Du, L.C., Mei, D.C.: J. Stat. Mech. 11, P11020 (2008)
28. Du, L.C., Mei, D.C.: Phys. Lett. A 374, 3275 (2010)
29. Swann, J.B., Smyth, M.J.: J. Clin. Invest. 117, 1137 (2007)
30. Willimsky, G., Blankenstein, T.: Nature 437, 141 (2005)
31. Paul, W.E.: Fundamental Immunology. Lippincott Williams and Wilkins, Philadelphia (2003)
32. Villasana, M., Radunskaya, A.: J. Math. Biol. 47, 270 (2003)
33. Banerjee, S., Sarkar, R.R.: BioSystems 91, 268 (2008)
34. d'Onofrio, A., Gatti, F., Cerrai, P., Freschi, L.: Math. Comput. Model. 51, 572 (2010)
35. Bodnar, M., Forys, U.: J. Biol. Syst. 15, 453 (2007)
36. Brú, A., Albertos, S., López García-Asenjo, J.A., Brú, I.: Phys. Rev. Lett. 92, 238101 (2004)
37. Fiasconaro, A., Spagnolo, B.: Phys. Rev. E 74, 041904 (2006)
38. Zhong, W.R., Shao, Y.Z., He, Z.H.: Phys. Rev. E 73, 060902(R) (2006)
39. Bose, T., Trimper, S.: Phys. Rev. E 79, 051903 (2009)

40. Zhu, S.Q.: Phys. Rev. A 47, 2405 (1993)


41. Risken, H.: The Fokker–Planck Equation, pp. 45–46. Springer, Berlin (1989)
42. Guo, W., Du, L.C., Mei, D.C.: Phys. A 391, 1270 (2012)
43. Horsthemke, W., Lefever, R.: Noise-Induced Transitions: Theory and Applications in Physics,
Chemistry and Biology. Springer, Berlin (1984)
44. Landa, P.S., McClintock, P.V.E.: Phys. Rep. 323, 1 (2000)
45. Wu, D.J., Cao, L., Ke, S.Z.: Phys. Rev. E 50, 2496 (1994)
46. Sancho, J.M., San Miguel, M., Katz, S.L., Gunton, J.D.: Phys. Rev. A 26, 1589 (1982)
47. Fox, R.F., Gatland, I.R., Roy, R., Vemuri, G.: Phys. Rev. A 38, 5938 (1988)
Chapter 13
Bounded Extrinsic Noises Affecting Biochemical
Networks with Low Molecule Numbers

Giulio Caravagna, Giancarlo Mauri, and Alberto d'Onofrio

Abstract After being considered as a nuisance to be filtered out, it became
increasingly clear that noises play a complex, often fully functional role for
biochemical networks. The influence of intrinsic and extrinsic noises on these
networks has been investigated intensively in the last 10 years, though contributions
on the co-presence of both are sparse. Extrinsic noise is usually modeled as an
unbounded white or colored Gaussian stochastic process, even though realistic
stochastic perturbations should be bounded. In the first part of this work we consider
Gillespie-like stochastic models of nonlinear networks (i.e., networks affected by
intrinsic stochasticity) where the model jump rates are affected by colored bounded
extrinsic noises synthesized by a suitable biochemical state-dependent Langevin
system. These systems are described by a master equation, and a simulation
algorithm to analyze them is derived. This new modeling paradigm should enlarge
the class of systems amenable to modeling. As an application, in the second part of
this contribution we investigate the influence of both the amplitude and the autocorrelation
time of a harmonic noise on a genetic toggle switch. We show that the presence
of a bounded extrinsic noise induces qualitative modifications in the probability
densities of the involved chemicals, where new modes emerge, thus suggesting a
possible functional role of bounded noises.

G. Caravagna (equal contributor) · G. Mauri
Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi
di Milano-Bicocca, Viale Sarca 336, I-20126 Milan, Italy
A. d'Onofrio (equal contributor) ()
Department of Experimental Oncology, European Institute of Oncology,
Via Ripamonti 435, 20141 Milan, Italy
e-mail: alberto.donofrio@ieo.eu

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 201
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_13, © Springer Science+Business Media New York 2013
202 G. Caravagna et al.

13.1 Introduction

In biomolecular networks, multiple locally stable equilibria allow for the presence
of multiple cellular functionalities [1–6]. This key role of multistability was
immediately understood by the first pioneering investigations in what is now known
as Systems Biology [7, 8].
A second key concept is that deterministic modeling of biomolecular networks
is only a quite coarse-grained approximation. Indeed, the real dynamics of biochemical
signals exhibits stochastic fluctuations due to their interplay with many unknown
intracellular and extracellular cues. For a long time, these stochastic effects were
interpreted as disturbances masking the true signals. In other words, external
stochasticity was seen as in communication engineering: a disturbance to be reduced
by modules working as low-pass filters [9–12].
If noises were only pure nuisances, a monostable network in the presence of noise
should exhibit unbiased fluctuations around the unique deterministic equilibrium,
so that the probability distribution of the total signal (noise plus deterministic signal)
should be unimodal. However, at the end of the seventies the Brussels school of
nonlinear statistical physics seriously challenged the above-outlined correspondence
between deterministic monostability and stochastic monomodality in the presence of
external noise [13].
Indeed, they showed that many systems that are monostable in the absence of
external stochastic noises have, in the presence of random Gaussian disturbances,
multimodal equilibrium probability densities. This counter-intuitive phenomenon
was termed noise-induced transition by Horsthemke and Lefever [13], and it has
been shown to be relevant also in biomolecular networks [14].
In the meantime, experimental studies revealed another, equally important
role of stochasticity in these networks by showing that many important transcription
factors, as well as other proteins and mRNAs, are present in cells in small numbers
of molecules [15-17]. Thus, a number of investigations have focused on this internal
stochasticity effect, termed (with a slight abuse of meaning) intrinsic noise
[18, 19]. In particular, it was theoretically shown and experimentally confirmed
that intrinsic noise, too, may induce multimodality in the discrete probability
distribution of proteins [20, 21]. Note, however, that since the early eighties these
effects had been theoretically predicted in statistical and chemical physics by
approximating the exact Chemical Master Equations with an appropriate Fokker-
Planck equation [22-24], and then searching for noise-induced transitions.
More recently it has finally been appreciated that noise-related phenomena
may in many cases have a constructive, functional role [25, 26]. For example,
noise-induced multimodality allows a transcription network to reach states
inaccessible in the absence of noise [20, 25, 26]. Phenotype variability in cellular
populations is probably the most important macroscopic effect of intracellular noise-
induced multimodality [25].
In Systems Biology, Swain and coworkers [16] were among the first to study
the co-presence of both intrinsic and extrinsic randomness, in the context of the
13 Bounded Extrinsic Noises Affecting Biochemical Networks. . . 203

basic linear network for the production and consumption of a single protein, in the
absence of feedbacks. Important effects were shown, although nonlinear phenomena
such as multimodality were absent. That study is also remarkable since it
stressed the role of the autocorrelation time of the external noise and, differently
from other investigations, pointed out that modeling the external noise by means of
a Gaussian noise, either white or colored, may induce artifacts such as the temporary
negativity of a reaction kinetic parameter.
From the data analysis point of view, You and collaborators [27] and Hilfinger
and Paulsson [28] proposed interesting methodologies to infer the contributions
of extrinsic noise also in some nonlinear networks, such as a synthetic toggle
switch [27].
In [29] we investigated the co-presence of both extrinsic and intrinsic randomness
in nonlinear biomolecular networks in the important case where the external
perturbations are not only non-Gaussian but also bounded. Indeed, imposing
the boundedness of the random perturbations increases the degree of realism of a
model, since the external noises must not only preserve the positiveness of
reaction rates but must also be bounded (i.e. they must not be excessively large).
Moreover, it has also been shown in other contexts, such as oncology and statistical
physics, that: (a) bounded noises deeply impact the transitions from unimodal to
multimodal probability distributions of state variables [30-34]; and (b) under bounded
noise the statistical outcome of a nonlinear system may depend on the initial
conditions [31, 33], whereas the response to Gaussian noises is globally attractive,
i.e. the stationary probability density is independent of the initial conditions.
In the paper [29], we first identified a suitable mathematical framework, based on
the differential Chapman-Kolmogorov equation (DCKE) [22, 35], to represent mass-
action biochemical networks perturbed by bounded (or simply left-bounded) noises.
Once the master equation was established, we proposed a combination of Gillespie's
Stochastic Simulation Algorithm (SSA) [18, 36] with a state-dependent Langevin
system, affecting the model jump rates, to simulate these systems. An important
issue was the possibility of extending, in this doubly stochastic context, the
Michaelis-Menten Quasi Steady State Approximation (QSSA) for enzymatic reac-
tions [37, 38]. In line with recent work by Gillespie and colleagues on systems that
are not affected by extrinsic noises [39], we numerically investigated the classical
Enzyme-Substrate-Product network. Our results suggested that it is possible to
apply the QSSA under the same constraints to be fulfilled in the deterministic case.
In the first part of the present work, we review our above-outlined recent
contribution to Systems Biology. In the second part, we focus on the stochastic
dynamics of a genetic toggle switch [1], which is a fundamental motif for cellular
differentiation and for other decision-related functions. In particular, we investigate
the interplay between intrinsic randomness and an extrinsic harmonic noise, i.e.
sinusoidal perturbations that are imperfect due to a noisy phase.
204 G. Caravagna et al.

13.2 Background: Stochastic Chemically Reacting Systems in Absence of Extrinsic Noise
We refer to systems where the jump rates are time-constant as stochastic noise-free
systems. These are modeled here by the Chemical Master Equation (CME) and
the Stochastic Simulation Algorithm (SSA) [18, 36], thus accounting for
the intrinsic stochasticity of such systems.
A well-stirred solution of molecules is considered, where the (discrete) state of
the target system is X(t) = (X_1(t), ..., X_N(t)), with X_i(t) counting the molecules of
the i-th species at time t. A set of M chemical reactions R_1, ..., R_M is represented by an
N × M stoichiometry matrix D = [ν_1 ν_2 ... ν_M], where a stoichiometric
vector ν_j is associated with each reaction R_j. The component ν_{i,j} of ν_j is the change in X_i due
to one R_j reaction; thus, given X(t) = x, the firing of reaction R_j yields the new
state x + ν_j. Besides, a propensity function a_j(x) is associated with each R_j, so that
a_j(x)dt is the probability of R_j firing in state x in the infinitesimal interval [t, t+dt).
The propensity functions relate to the reaction order as follows [40]:

(0-th order)   R_j : ∅ →(k) A         a_j(X(t)) = k,
(1-st order)   R_j : A →(k) B         a_j(X(t)) = k X_A(t),
(2-nd order)   R_j : A + B →(k) C     a_j(X(t)) = k X_A(t) X_B(t),
(2-nd order)   R_j : 2A →(k) B        a_j(X(t)) = k X_A(t) [X_A(t) − 1] / 2,

where k ≥ 0 is the reaction kinetic constant.
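As a concrete illustration, the four propensity laws above can be sketched in Python (a minimal sketch; the species names, counts and rate constants are hypothetical):

```python
# Mass-action propensity builders for the four reaction orders above.
# A state x is a dict of molecule counts; k is the kinetic constant.

def propensity_zeroth(k):
    return lambda x: k                               # 0-th order: a_j = k

def propensity_first(k, A):
    return lambda x: k * x[A]                        # 1st order: a_j = k X_A

def propensity_second(k, A, B):
    return lambda x: k * x[A] * x[B]                 # 2nd order: a_j = k X_A X_B

def propensity_dimerization(k, A):
    return lambda x: k * x[A] * (x[A] - 1) / 2       # 2A -> B: a_j = k X_A (X_A - 1) / 2

x = {"A": 10, "B": 3}
assert propensity_first(2.0, "A")(x) == 20.0         # 2 * 10
assert propensity_dimerization(1.0, "A")(x) == 45.0  # 10 * 9 / 2
```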
Noise-free systems obey the so-called CME [18, 36]

∂_t P[x,t | ·] = Σ_{j=1}^{M} { a_j(x − ν_j) P[x − ν_j, t | ·] − a_j(x) P[x, t | ·] }    (13.1)

which describes the time-evolution of the probability of the system to occupy
each one of a set of states, i.e. P[x,t | x_0, t_0] given the initial condition X(t_0) =
x_0. This is a special case of the differential equations ruling the time-evolution of
Markov jump processes (i.e. the Kolmogorov equations), and its analytical solution
is unlikely to be feasible for most systems. However, sampling the solution of the CME
is possible by using the Doob-Gillespie SSA (Algorithm 1, [18, 36]). The SSA is
an exact dynamic Monte Carlo method describing a statistically correct trajectory
of a discrete nonlinear Markov process, whose probability density function is the
solution of the CME [41]. The SSA computes a single realization of X(t) starting
from x_0 at time t_0 and up to time T; the algorithm performs exponentially distributed
jumps, i.e. the waiting time τ is exponentially distributed with mean 1/a_0(x). The reaction to fire
is chosen with weighted probability, i.e. R_j has probability a_j(x)/a_0(x), and the
system state is updated accordingly.
Algorithm 1 SSA(t_0, x_0, T)

1: set x ← x_0 and t ← t_0;
2: while t < T do
3:   a_0(x) ← Σ_{j=1}^{M} a_j(x);
4:   let r_1, r_2 ∼ U[0, 1];
5:   τ ← a_0(x)^{−1} ln(r_1^{−1});
6:   let j such that Σ_{i=1}^{j−1} a_i(x) < r_2 a_0(x) ≤ Σ_{i=1}^{j} a_i(x);
7:   set x ← x + ν_j and t ← t + τ;
8: end while
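The pseudocode above maps line by line onto a short Python implementation (an illustrative sketch; the birth-death example at the bottom, with its rates, is our own choice):

```python
import math
import random

def ssa(t0, x0, T, stoich, propensities, seed=42):
    """Doob-Gillespie SSA: one statistically exact trajectory of the jump process.
    stoich: list of state-change vectors nu_j; propensities: list of a_j(x)."""
    rng = random.Random(seed)
    x, t = list(x0), t0
    trajectory = [(t, tuple(x))]
    while t < T:
        a = [aj(x) for aj in propensities]
        a0 = sum(a)
        if a0 == 0:                         # absorbing state: nothing can fire
            break
        r1 = rng.random() or 1e-12          # guard against r1 == 0
        tau = math.log(1.0 / r1) / a0       # exponential waiting time, mean 1/a0
        r2, s, j = rng.random(), 0.0, 0
        for j, aj in enumerate(a):          # pick R_j with probability a_j / a0
            s += aj
            if r2 * a0 <= s:
                break
        x = [xi + nu for xi, nu in zip(x, stoich[j])]
        t += tau
        trajectory.append((t, tuple(x)))
    return trajectory

# Birth-death toy model: 0 -> A at rate 5, A -> 0 at rate 0.1 * X_A;
# the long-run mean of X_A fluctuates around 5 / 0.1 = 50.
traj = ssa(0.0, [0], 100.0,
           stoich=[(1,), (-1,)],
           propensities=[lambda x: 5.0, lambda x: 0.1 * x[0]])
```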

13.3 Stochastic Chemical Systems with Extrinsic Bounded Noise
In [29] a theory of stochastic chemically reacting systems with bounded noises in
the jump rates is introduced by combining Stochastic Differential Equations (SDEs)
and the SSA. A propensity affected by an extrinsic noise term reads as

a_j(x,t) = a_j(x) L_j(ξ(t)),    (13.2)

where a_j(x) is a propensity function as those previously described. In this frame-
work multiple noise sources can potentially affect a reaction, so the noisy distur-
bance is a function of a more generic λ-dimensional bounded noise ξ(t).
To resemble biologically meaningful perturbations, some constraints on the
disturbance apply:

∀j :  L_j(ξ(t)) ∈ [L_j^min, L_j^max],   L_j^max ≤ +∞,   L_j^min > 0.

So, both bounded and left-bounded noises are considered. Further, unitary-mean
perturbations are considered, i.e. ⟨L_j(ξ(t))⟩ = 1, yielding ⟨a_j(x,t)⟩ = a_j(x).
In Eq. (13.2) L_j : R^λ → R is a continuous function and ξ(t) ∈ R^λ is a colored
and, in general, state-dependent non-Gaussian noise, whose dynamics is described
by the λ-dimensional Ito-Langevin system

ξ'(t) = f(ξ, X(t)) + g(ξ, X(t)) η(t).    (13.3)

Here, η is a λ-dimensional vector of unitary-intensity uncorrelated white noises,
g is a λ × λ matrix, and f_h, g_{h,k} : R^λ × R^N → R.
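For a one-dimensional noise, a Langevin system of the form (13.3) can be integrated with the Euler-Maruyama scheme. The sketch below uses, purely for illustration, a mean-reverting drift and a diffusion that vanishes at ±B, so that the path stays (numerically) bounded; this is an assumed toy model, not one of the specific noises of [29]:

```python
import math
import random

def euler_maruyama(f, g, xi0, dt, n_steps, seed=1):
    """Integrate d(xi) = f(xi) dt + g(xi) dW with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    xi, path = xi0, [xi0]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment over dt
        xi = xi + f(xi) * dt + g(xi) * dW
        path.append(xi)
    return path

# Toy bounded noise on [-B, B]: mean-reverting drift -xi/tau and a diffusion
# sigma * sqrt(B^2 - xi^2) that shuts off at the boundaries (an assumption of
# this sketch; any tiny Euler overshoot past +-B decays back deterministically,
# since there the diffusion term is clamped to zero).
B, tau, sigma = 1.0, 10.0, 0.2
path = euler_maruyama(lambda xi: -xi / tau,
                      lambda xi: sigma * math.sqrt(max(B * B - xi * xi, 0.0)),
                      0.0, dt=0.01, n_steps=50_000)
```

As remarked below for the simulation algorithm, the integration step dt plays the role of the noise granularity and must stay well below the autocorrelation time of the process.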

The Chapman-Kolmogorov Forward Equation

These doubly-stochastic systems are ruled by a differential Chapman-Kolmogorov
equation (DCKE) [22, 35] governing the dynamics of P[(x,ξ),t | (x_0,ξ_0),t_0], namely
the probability of X(t) = x and ξ(t) = ξ, given X(t_0) = x_0 and ξ(t_0) = ξ_0, i.e. the
probability of being in a certain state of the joint ℕ^N × R^λ state space.
The general DCKE (for a state z and an initial condition ·) reads as

∂_t P[z,t | ·] = − Σ_j ∂_{z_j} { A_j(z,t) P[z,t | ·] } + (1/2) Σ_{i,j} ∂_{z_i} ∂_{z_j} { B_{i,j}(z,t) P[z,t | ·] }
                 + ∫ { W(z | h,t) P[h,t | ·] − W(h | z,t) P[z,t | ·] } dh.    (13.4)

This joint process is a particular case of the general Markov process where diffusion,
drift, and discrete finite jumps are all co-present for all state variables [22, 35].
Specifically, for the systems we consider here it is shown in [29] that the
drift vector for z is given by f(ξ, x) and the diffusion matrix is B(z,t) = g^T g,
where g^T denotes the transpose of g. Also, since only finite jumps in x are
considered, jump and diffusion satisfy B_{i,j}(z,t) = 0 for the components
i, j = 1, ..., N associated with x, and W[(x, ξ) | (x, ξ'), t] = 0 for any noises
ξ, ξ' ∈ R^λ (i.e. there is no diffusion in x and no jumping in ξ). As a consequence,
Eq. (13.4) reads as

∂_t P[(x,ξ),t] = − Σ_{j=1}^{λ} ∂_{ξ_j} { f_j(ξ, x) P[(x,ξ),t] } + (1/2) Σ_{i,j=1}^{λ} ∂_{ξ_i} ∂_{ξ_j} { B_{i,j}(ξ, x) P[(x,ξ),t] }
                 + Σ_{j=1}^{M} P[(x − ν_j, ξ),t] a_j(x − ν_j, t) − P[(x,ξ),t] Σ_{j=1}^{M} a_j(x,t),    (13.5)

where the initial condition is omitted to shorten the notation.

The SSA with Bounded Noise

Solving Eq. (13.5) is even more difficult than solving the CME; however, a
Stochastic Simulation Algorithm with Bounded Noise (SSAN) has been defined
to sample from such a distribution [29]. The SSAN merges ideas from other
SSA variants by generalizing the SSA jump equation to a time-inhomogeneous
distribution [42-46].
The key steps in the mathematical derivation of the SSAN are hereby recalled.
Defining the stochastic process counting the number of firings of R_j in [t_0, t], i.e.
{N_j(t) | t ≥ t_0} with initial condition N_j(t_0) = 0, the evolution equation for X(t) is

dX(t) = Σ_{j=1}^{M} ν_j dN_j(t).    (13.6)
Algorithm 2 SSAN(t_0, x_0, T)

1: set x ← x_0 and t ← t_0;
2: while t < T do
3:   let r_1, r_2 ∼ U[0, 1];
4:   generate ξ(t) in [t, t+τ] and find τ by means of Eq. (13.8), in parallel;
5:   define a_0(x, t+τ) = Σ_{i=1}^{M} a_i(x, t+τ);
6:   let j ← min { n | r_2 a_0(x, t+τ) ≤ Σ_{i=1}^{n} a_i(x, t+τ) };
7:   set x ← x + ν_j and t ← t + τ;
8: end while

For Markov processes N_j(t) is an inhomogeneous Poisson process satisfying

P[N_j(t + dt) − N_j(t) = 1 | x] = a_j(x,t) dt,    (13.7)

which evaluates as a_j(x)dt for the SSA-based systems, yielding a time-homoge-
neous Poisson process. In the case considered here this is instead a Cox process, since the
intensity itself depends on the stochastic noise [47, 48].
In [29] a unitary-mean Poisson transformation is applied to a monotonic
(increasing) function of τ determining the putative time for R_j to fire in (x,t), which
is then generalized to account for the next jump of the overall system as

Σ_{j=1}^{M} ∫_t^{t+τ} a_j(x, w) dw = ln(1/r_1),    (13.8)

with r_1 ∼ U[0, 1] [49, 50]. This equation is the result of defining N_j(t) by a
sequence of unitary-mean independent exponential random variables, and picking
the smallest jump time among all reactions [29]. In evaluating this equation the term
a_j(x) is constant, thus only the integration of the noise is required which, we remark,
is a conventional Lebesgue integral since the perturbation L_j(ξ(t)) is a colored stochastic
process. It is also important to note that a_j(x,t) = a_j(x) for a noise-free reaction.
Given a system jump τ, the next reaction to fire is a random variable following

P[j | τ; x, t] = a_j(x, t+τ) / Σ_{i=1}^{M} a_i(x, t+τ).    (13.9)

The SSAN is Algorithm 2, where step 4 is the parallel (numerical) solution
of both Eq. (13.8) and the Langevin system (13.3), and step 6 samples values for j
according to Eq. (13.9). It is important to remark that, given X(t) = x, for any τ the
Langevin equation (13.3) depends only on ξ(t) and the constant x. Also, although the
numerical discretization of a continuous vectorial noise induces an approximation,
this is in general the only possible approach. Of course, the maximum size of the
jump in the noise realization, i.e. the noise granularity, should be much smaller
than the minimum autocorrelation time of the perturbing stochastic processes, as
discussed in [29].
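The scheme above can be sketched as a compact Python transcription of Algorithm 2, in which Eq. (13.8) is solved by a left-Riemann quadrature on the noise grid (an illustrative sketch, not the NOISYSIM implementation; the noise class and the birth-death example, with their rates, are our own choices):

```python
import math
import random

class HarmonicNoise:
    """Harmonic bounded noise B*sin(Omega*t + sigma*W(t) + U), stepped on a grid.
    (One example of a bounded perturbation; any colored bounded noise would do.)"""
    def __init__(self, B, Omega, sigma, seed=0):
        self.rng = random.Random(seed)
        self.B, self.Omega, self.sigma = B, Omega, sigma
        self.t, self.W = 0.0, 0.0
        self.U = self.rng.uniform(0.0, 2.0 * math.pi)

    def step(self, dt):
        self.W += self.rng.gauss(0.0, math.sqrt(dt))   # Wiener increment
        self.t += dt
        return self.B * math.sin(self.Omega * self.t + self.sigma * self.W + self.U)

def ssan(t0, x0, T, stoich, propensities, noise, dt, seed=7):
    """SSAN sketch: integrate the time-dependent total propensity along one
    realization of the noise until it reaches ln(1/r1), then fire a reaction.
    dt (the noise granularity) must be much smaller than the noise
    autocorrelation time."""
    rng = random.Random(seed)
    x, t = list(x0), t0
    traj = [(t, tuple(x))]
    while t < T:
        goal = -math.log(rng.random() or 1e-12)   # ln(1/r1), guard r1 == 0
        acc, a = 0.0, None
        while acc < goal and t < T:               # left-Riemann quadrature of Eq. (13.8)
            xi = noise.step(dt)
            a = [aj(x, xi) for aj in propensities]
            acc += sum(a) * dt
            t += dt
        if t >= T:
            break
        a0, r2, s = sum(a), rng.random(), 0.0
        for j, aj in enumerate(a):                # Eq. (13.9): weight a_j(x, t + tau)
            s += aj
            if r2 * a0 <= s:
                break
        x = [xc + nu for xc, nu in zip(x, stoich[j])]
        traj.append((t, tuple(x)))
    return traj

# Birth-death process with a noise-perturbed birth rate 5*(1 + 0.5*xi(t)):
# since |xi| <= 1, the perturbed propensity stays strictly positive.
noise = HarmonicNoise(B=1.0, Omega=2.0 * math.pi / 10.0, sigma=0.3)
traj = ssan(0.0, [0], 50.0, [(1,), (-1,)],
            [lambda x, xi: 5.0 * (1.0 + 0.5 * xi),
             lambda x, xi: 0.1 * x[0]],
            noise, dt=0.01)
```

Note that, in this discretized sketch, waiting times are resolved only up to multiples of dt, which is exactly the granularity caveat mentioned above.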
Extension to Non Mass-Action Nonlinear Kinetic Laws

Networks with large chemical concentrations, i.e. characterized by deterministic
behaviors, can often be approximated by invoking the so-called Quasi Steady
State Assumption (QSSA) [37, 38]. Its validity conditions are very well understood
within classical deterministic frameworks, whereas the same is not true for the
corresponding stochastic models.
A noteworthy recent attempt to show that a Stochastic QSSA (SQSSA) can be
applied to a Michaelis-Menten enzymatic reaction network along the same lines of
the analogous deterministic one is due to Gillespie and coworkers [39]. Building on these
premises, in [29] SQSSAs are investigated to identify possible pitfalls arising in our
doubly-stochastic setting. Our numerical experiments extended the results
of [39], under the same restrictions.
We just mention here that the model reduction induced by applying a SQSSA
might yield propensity functions that are nonlinear not only in the state variables but also
in the perturbation. In [29] a framework extension is discussed where generalized
perturbed propensities of the form a_j(x, Λ(t)) are used. Here Λ is a vector with
elements Λ_j = L_j(ξ) for j = 1, ..., M. This allows writing a DCKE accounting for
a SQSSA, and a modified SSAN can be derived.

13.4 The Harmonically Perturbed Genetic Toggle Switch

In [29] we considered, among others, the application of the above-outlined methodolo-
gies to the study of a bistable toggle switch model of gene expression [5, 51]
perturbed by a sine-Wiener noise [30]. Also, we compared this stochastic perturba-
tion with the periodic sinusoidal perturbations studied by Zhdanov [5, 51].
Here our aim is quite different: we start from the stochastic system with
periodically varying parameters by Zhdanov and consider stochastic perturbations
of the external sinusoidal signal. The most natural way to attack this problem is to
represent the extrinsic variable signal as a harmonic noise.
All the simulations we present have been performed by the JAVA implementation
of the SSAN currently available within the NOISYSIM library [52, 53].

13.4.1 The Harmonic Bounded Noise

The important engineering problem of the presence of stochastic phase disturbances
in sinusoidal forces led Dimentberg [54] and Wedig [55] to the definition of the
Harmonic bounded noise (HBN)

ξ(t) = B sin(Ω t + σ W(t) + U),    (13.10)

where U is a random variable uniformly distributed in [0, 2π), Ω > 0 is the pulsation
of the sinusoidal signal in absence of stochastic disturbance, σ > 0 is the strength
of the disturbance and W(t) is a standard Wiener process, W'(t) = η(t), where η(t) is a
white noise with ⟨η(t) η(t+z)⟩ = δ(z). If Ω = 0, the HBN is called the sine-Wiener
noise [30].
It can be shown [56, 57] that ⟨ξ(t)⟩ = 0, ⟨ξ²(t)⟩ = B²/2 and that

⟨ξ(t) ξ(t+z)⟩ = (B²/2) cos(Ω z) exp(−|z| / τ_c),    (13.11)

where the autocorrelation time is τ_c = 2/σ². The stationary probability density of ξ
is given by P(ξ) = 1/(π √(B² − ξ²)).
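These statistics are easy to check by Monte Carlo; the sketch below samples ξ(t) at a fixed time by summing Wiener increments (the parameter values are arbitrary choices of ours):

```python
import math
import random

def hbn_sample(B, Omega, sigma, t, rng, n_steps=50):
    """One sample of xi(t) = B sin(Omega t + sigma W(t) + U) at a fixed time t."""
    U = rng.uniform(0.0, 2.0 * math.pi)
    dt = t / n_steps
    W = sum(rng.gauss(0.0, math.sqrt(dt)) for _ in range(n_steps))  # W(t)
    return B * math.sin(Omega * t + sigma * W + U)

rng = random.Random(0)
B = 2.0
samples = [hbn_sample(B, Omega=1.0, sigma=0.5, t=10.0, rng=rng)
           for _ in range(10_000)]
mean = sum(samples) / len(samples)
second_moment = sum(s * s for s in samples) / len(samples)
# <xi> ~ 0 and <xi^2> ~ B^2/2 = 2.0, up to Monte-Carlo error; |xi| <= B always.
```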

13.4.2 Periodically Perturbed Toggle Switch

We consider a model with two genes G_1 and G_2, two RNAs R_1 and R_2, and two
proteins P_1 and P_2. Synthesis and degradation correspond to

G_1 → G_1 + R_1,   R_1 → R_1 + P_1,   R_1 → ∅,   P_1 → ∅,
G_2 → G_2 + R_2,   R_2 → R_2 + P_2,   R_2 → ∅,   P_2 → ∅.
Such a reaction scheme is a genetic toggle switch if the formation of R_1 and R_2
is suppressed by P_2 and P_1, respectively [5, 6, 51, 58]. This scheme can be further
simplified by considering kinetically equivalent G_1 and G_2, and by assuming that
the mRNA synthesis occurs only if the 2 regulatory sites of either P_1 or P_2 are free [51].
The deterministic model of the simplified switch, when synthesis is perturbed, is

R_1' = φ(t) κ_R [K/(K + P_2)]² − γ_R R_1,    R_2' = φ(t) κ_R [K/(K + P_1)]² − γ_R R_2,
P_1' = κ_P R_1 − γ_P P_1,                    P_2' = κ_P R_2 − γ_P P_2,    (13.12)

where the deterministic perturbation is

φ(t) = 1 + ε sin(2π t / T).

Here κ_R, γ_R, κ_P and γ_P are the rate constants of the reactions involved, the term
[K/(K + P_i)]² is the probability that the 2 regulatory sites are free, and K is the
association constant for the proteins. Before introducing a realistic noise, we perform
some analysis of this model.
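For orientation, (13.12) can be integrated with a plain Euler scheme (a sketch; the symbol names and the parameter values, which anticipate those used below, are our own rendering):

```python
import math

def simulate_toggle(eps, T_per, t_end, dt=0.005,
                    kR=100.0, kP=10.0, gR=1.0, gP=1.0, K=100.0,
                    state=(10.0, 0.0, 0.0, 0.0)):
    """Euler integration of the periodically perturbed toggle switch (13.12).
    state = (R1, P1, R2, P2); phi(t) = 1 + eps * sin(2 pi t / T_per)."""
    R1, P1, R2, P2 = state
    t = 0.0
    while t < t_end:
        phi = 1.0 + eps * math.sin(2.0 * math.pi * t / T_per)
        dR1 = phi * kR * (K / (K + P2)) ** 2 - gR * R1
        dR2 = phi * kR * (K / (K + P1)) ** 2 - gR * R2
        dP1 = kP * R1 - gP * P1
        dP2 = kP * R2 - gP * P2
        R1 += dR1 * dt; R2 += dR2 * dt
        P1 += dP1 * dt; P2 += dP2 * dt
        t += dt
    return R1, P1, R2, P2

# With the head start R1 = 10, the deterministic system settles on the branch
# where (R1, P1) dominate: the noiseless model cannot switch by itself.
R1, P1, R2, P2 = simulate_toggle(eps=0.5, T_per=100.0, t_end=200.0)
```

This makes the point of the section concrete: switching between the two branches requires stochasticity, which is what the SSA runs below exhibit.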
Table 13.1 The bistable model of gene expression in [51]: the stoichiometry
matrix (rows in order R_1, R_2, P_1, P_2) and the propensity functions

R_1:  +1 −1  0  0  0  0  0  0      a_1(t) = φ(t) κ_R [K/(K + P_2)]²,   a_2 = γ_R R_1
R_2:   0  0 +1 −1  0  0  0  0      a_3(t) = φ(t) κ_R [K/(K + P_1)]²,   a_4 = γ_R R_2
P_1:   0  0  0  0 +1 −1  0  0      a_5 = κ_P R_1,                      a_6 = γ_P P_1
P_2:   0  0  0  0  0  0 +1 −1      a_7 = κ_P R_2,                      a_8 = γ_P P_2
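In code, Table 13.1 amounts to eight state-change vectors and eight propensities (a sketch using our own symbol names, with the state ordered as x = (R1, R2, P1, P2)):

```python
import math

# State ordering x = (R1, R2, P1, P2), as in the rows of Table 13.1.
kR, kP, gR, gP, K = 100.0, 10.0, 1.0, 1.0, 100.0
eps, T_per = 0.5, 100.0

phi = lambda t: 1.0 + eps * math.sin(2.0 * math.pi * t / T_per)

stoich = [
    ( 1, 0, 0, 0), (-1, 0, 0, 0),   # R1 synthesis / degradation
    ( 0, 1, 0, 0), ( 0,-1, 0, 0),   # R2 synthesis / degradation
    ( 0, 0, 1, 0), ( 0, 0,-1, 0),   # P1 synthesis / degradation
    ( 0, 0, 0, 1), ( 0, 0, 0,-1),   # P2 synthesis / degradation
]

propensities = [
    lambda x, t: phi(t) * kR * (K / (K + x[3])) ** 2,  # a1(t), repressed by P2
    lambda x, t: gR * x[0],                            # a2
    lambda x, t: phi(t) * kR * (K / (K + x[2])) ** 2,  # a3(t), repressed by P1
    lambda x, t: gR * x[1],                            # a4
    lambda x, t: kP * x[0],                            # a5
    lambda x, t: gP * x[2],                            # a6
    lambda x, t: kP * x[1],                            # a7
    lambda x, t: gP * x[3],                            # a8
]

x0 = (10, 0, 0, 0)   # ten R1 molecules initially, everything else empty
```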

This model can be recast in a stochastic framework by defining the reactions
described in Table 13.1. Notice that a_1(t) and a_3(t), modeling synthesis, have
time-dependent propensity functions. In Fig. 13.1 we show single runs for the Zhdanov
model, where simulations are performed with the exact SSA with time-dependent
propensity functions (a more precise version of the simulation algorithm used in [51],
which can be coded in NOISYSIM [52]). We considered an initial configuration with
only 10 RNAs R_1: (R_1, P_1, R_2, P_2) = (10, 0, 0, 0). As in [51] we set κ_R = 100 min⁻¹,
κ_P = 10 min⁻¹, γ_R = γ_P = 1 min⁻¹, K = 100 and T = 100 min; these parameters
are realistic since protein and mRNA degradation usually occur on the minute time-
scale [59]. We considered two possible noise intensities ε ∈ {0.5, 1}. As expected,
the number of switches increases with ε.
In Fig. 13.2 we plot the empirical evaluation of P[X(t) = x], given the considered
initial configuration, at t ∈ {900, 950, 1000} min, as obtained by 1,000 simulations.
Interestingly, these bi-modal probability distributions immediately evidence the
presence of stochastic bifurcations in the more expressed populations R_2 and P_2.
In addition, the distributions for the proteins seem to oscillate with a period around
100, i.e. for ε = 1 they are unimodal at t ∈ {900, 1000} and bi-modal at t = 950.
Figure 13.3 confirms this hypothesis by showing the probability density function
of R_2 for 900 ≤ t ≤ 1000. In that heatmap the lighter gradient denotes larger
probability values. The oscillatory behavior of the probability distributions for both
values of ε may be noticed, as well as the uni-modality of the distribution at t = 900
and t = 1000 in the case ε = 1, i.e. the higher variance of the rightmost peak at ε = 1
collapses the two modes. It is not shown but, as one should expect, the oscillations
of the distribution, induced by the sinusoidal perturbation, are periodic over the whole
simulated time window 0 ≤ t ≤ 1000.

13.4.3 Toggle Switch Affected by Harmonic Bounded Noise (HBN)

In order to investigate the effect of imperfection in the perturbing sinusoidal signal,
we investigated the effect of a HBN affecting protein synthesis, i.e. we assumed

φ(t) = 1 + ε sin( (2π/T) t + σ W(t) ),
T
Fig. 13.1 Toggle switch with periodic perturbation. Single simulation of model (13.12) with
ε = 0.5 in (a) and ε = 1 in (b). Parameters are κ_R = 100 min⁻¹, κ_P = 10 min⁻¹, γ_R = γ_P =
1 min⁻¹, K = 100 and T = 100 min, and the initial configuration x_0 is (R_1, P_1, R_2, P_2) =
(10, 0, 0, 0). RNAs, proteins and the noise are plotted. Taken from Ref. [29]: G. Caravagna,
G. Mauri, A. d'Onofrio, PLoS ONE 8(2), e51174 (2013)
Fig. 13.2 Toggle switch with periodic perturbation. Empirical evaluation of P[x,t | x_0, 0] re-
stricted to R_2 and P_2 at t ∈ {900, 950, 1000}, out of 1,000 simulations of model (13.12) with the
parameters as in Fig. 13.1. In (a) t = 900 and ε = 0.5, in (b) t = 900 and ε = 1, in (c) t = 950 and
ε = 0.5, in (d) t = 950 and ε = 1, in (e) t = 1000 and ε = 0.5 and in (f) t = 1000 and ε = 1.
Taken from Ref. [29]: G. Caravagna, G. Mauri, A. d'Onofrio, PLoS ONE 8(2), e51174 (2013)

where 0 < ε ≤ 1 and W is a Wiener process. Here simulations are performed by using
the SSAN, where the reactions in Table 13.1 are left unchanged and the propensity
functions a_1(t) and a_3(t) are modified with this new definition of φ(t).
For the sake of comparing the simulations with those in Figs. 13.1-13.3, we used
the same initial conditions and parameters as in the previous examples.
Fig. 13.3 Toggle switch with periodic perturbation. Empirical evaluation of P[x_{R_2}, t | x_0, 0] in
900 ≤ t ≤ 1000. We used data collected with 1,000 simulations of model (13.12) and ε = 0.5
in (a) and ε = 1 in (b); other parameters are as in Fig. 13.1. The x-axis reports the concentration of R_2,
the y-axis minutes; the light gradient denotes high probability values.
Taken from Ref. [29]: G. Caravagna, G. Mauri, A. d'Onofrio, PLoS ONE 8(2), e51174 (2013)

Our simulations suggest that the scenario induced by the idealized sinusoidal
perturbation is deeply affected by the presence of the noisy phase. For example,
for the case ε = 0.5, in the time-series shown in Fig. 13.4 we observe that the pair
(R_2, P_2) undergoes small stochastic fluctuations around small values, whereas the pair
(R_1, P_1) exhibits large oscillations around large values. If one increases ε up to ε = 1,
then the time-series shown in Fig. 13.5 have the following features: (a) (R_2, P_2)
exhibits large oscillations; (b) (R_1, P_1) undergoes large oscillations for τ = 10 and
τ = 100, and small oscillations (around small average values) for τ = 1000.
The change of scenario can be fully appreciated when comparing Fig. 13.6,
where the heatmaps for the HBN case are shown, with the homologous Fig. 13.3. Indeed, for τ = 10 and τ =
100 the characteristic, roughly periodic pattern of Fig. 13.3 has disappeared. Instead,
for (τ, ε) = (1000, 0.5) it is visible but it is coupled with an anti-phase pattern,
which however might only be a transient effect, given that the autocorrelation time
is here very large.

13.5 Concluding Remarks

In this paper we investigated the effects of joint extrinsic and intrinsic randomness in
nonlinear genetic networks, under the assumption of non-Gaussian bounded external
perturbations. Our applications have shown that the combination of both intrinsic
and extrinsic noise-related phenomena may have a constructive functional role also
when the extrinsic noise is bounded. This is in line with other research, only
Fig. 13.4 Toggle switch with Harmonic Bounded Noise. Single simulation of model (13.12) with
HBN, where ε = 0.5. In all cases κ_R = 100 min⁻¹, κ_P = 10 min⁻¹, γ_R = γ_P = 1 min⁻¹, K = 100,
T = 100 min and the initial configuration is (R_1, P_1, R_2, P_2) = (10, 0, 0, 0), as in Fig. 13.1.
In (a) τ = 10, in (b) τ = 100 and in (c) τ = 1000. RNAs and proteins are plotted

focusing on either intrinsic or extrinsic noise, recasting the classical interpretation
of noise as a disturbance more or less obfuscating the real behavior of a network.
This work required the combination of two well-known frameworks, often used
separately to describe biological systems. We combined the theory of stochastic
chemically reacting systems developed by Gillespie with the recent theory of
bounded stochastic processes described by Ito-Langevin equations. The former
allows one to consider the inherent stochastic fluctuations of small numbers of
interacting entities, often called intrinsic noise, as clearly opposed to classical
deterministic models based on differential equations. The latter permits consid-
ering the influence of bounded extrinsic noises, modeled as stochastic differential
equations. For these kinds of systems, although an analytical characterization is
unlikely to be feasible, we were able to derive a differential Chapman-Kolmogorov
equation (DCKE) describing the probability of the system to occupy each one of a
set of states. Then, in order to analyze these models by sampling from this equation,
we defined an extension of Gillespie's Stochastic Simulation Algorithm (SSA) with
a state-dependent Langevin system affecting the model jump rates. This algorithm,
despite being more costly than the classical Gillespie SSA, allows for the exact
simulation of these doubly stochastic systems.
Fig. 13.5 Toggle switch with Harmonic Bounded Noise. Single simulation of model (13.12) with
HBN, where ε = 1. In all cases κ_R = 100 min⁻¹, κ_P = 10 min⁻¹, γ_R = γ_P = 1 min⁻¹, K = 100,
T = 100 min and the initial configuration is (R_1, P_1, R_2, P_2) = (10, 0, 0, 0), as in Fig. 13.1.
In (a) τ = 10, in (b) τ = 100 and in (c) τ = 1000. RNAs and proteins are plotted

Here we applied the proposed algorithm to an important biological example: the
genetic toggle switch. Namely, we considered the changes in the dynamics of this
module caused by the interplay between the intrinsic stochasticity and a periodic
perturbation with a time-varying stochastic phase. In other words, we assumed that
the circuit was perturbed by a bounded harmonic noise.
We observed that the HBN perturbation has an impact that is deeply different
from that induced by a purely sinusoidal perturbation, also in the case of small noise
strength. Roughly speaking, we may say that the addition of the random-walk-based
phase destroys the periodic pattern or, for small noise strengths, deeply alters it, at
least in the transitory regime. In particular, one can no longer observe the alternation
between bimodality and unimodality during the evolution of the probability density.
Of course, the observed phenomena are strongly related both to the amplitude
of the perturbation and to the autocorrelation time of the noise (i.e. to its strength σ,
since τ_c = 2/σ²).
Finally, we mention here two issues that we shall investigate in the future.
The first is the role of general periodic-like perturbations of the form

s(t) = Σ_k B_k sin(Ω_k t + U_k + σ_k W_k(t)),

where each W_k(t) is an independent random walk.


13 Bounded Extrinsic Noises Affecting Biochemical Networks. . . 219

Fig. 13.6 Toggle switch with Harmonic Bounded Noise. Empirical evaluation of P[x_{R_2}, t | x_0, 0]
in 900 ≤ t ≤ 1000. We used data collected with 1,000 simulations of model (13.12). In (a) ε = 0.5
and τ = 10, in (b) ε = 0.5 and τ = 100 and in (c) ε = 0.5 and τ = 1000. In (d) ε = 1 and τ = 10,
in (e) ε = 1 and τ = 100 and in (f) ε = 1 and τ = 1000. All the parameters are as in Fig. 13.1.
The x-axis reports the concentration of R_2, the y-axis minutes

The second is the role of the stationary density of the extrinsic noise. Indeed,
in other systems affected by bounded noises one of us showed that the effects of a
bounded extrinsic noise may depend on its model [31-33, 60], and not only on its
amplitude and autocorrelation time. For example, the response of a system perturbed
by a sine-Wiener noise may be different from that induced by the Cai-Lin noise
[61]. This might imply that the same motif could exhibit many different functions
depending on its location in the host organisms, because the stochastic behavior
of the module depends on fine details of the extrinsic noise.

References

1. Gardner, T.S., Cantor, C.R., Collins, J.J.: Nature 403, 339 (2000)
2. Markevich, N.I., Hoek, J.B., Kholodenko, B.N.: J. Cell Biol. 164, 353 (2004)
3. Wang, K., Walker, B.L., Iannaccone, S., Bhatt, D., Kennedy, P.J., Tse, W.T.: PNAS 106(16), 6638 (2009)
4. Xiong, W., Ferrell Jr., J.E.: Nature 426, 460-465 (2003)
5. Zhdanov, V.P.: Chaos Solit. Fract. 45, 577 (2012)
6. Zhdanov, V.P.: J. Phys. A Math. Theor. 42, 065102 (2009)
7. Griffith, J.S.: J. Theor. Biol. 20, 209 (1968)
8. Simon, Z.: J. Theor. Biol. 8, 258 (1965)
9. Detwiler, P.B., Ramanathan, S., Sengupta, A., Shraiman, B.I.: Biophys. J. 79, 2801 (2000)
10. Rao, C.V., Wolf, D., Arkin, A.P.: Nature 420, 231 (2002)
11. Becskei, A., Serrano, L.: Nature 405, 590-593 (2000)
12. Thattai, M., van Oudenaarden, A.: Biophys. J. 82, 2943-2950 (2001)
13. Horsthemke, W., Lefever, R.: Noise-Induced Transitions: Theory and Applications in Physics, Chemistry, and Biology. Springer, New York (1984)
14. Hasty, J., Pradines, J., Dolnik, M., Collins, J.J.: PNAS 97(5), 2075 (2000)
15. Becskei, A., Kaufmann, B.B., van Oudenaarden, A.E.: Nature Gen. 37, 937 (2000)
16. Elowitz, M.B., Levine, A.J., Siggia, E.D., Swain, P.S.: Science 298, 1183 (2002)
17. Ghaemmaghami, S., Huh, W., Bower, K., Howson, R.W., Belle, A., Dephoure, N., O'Shea, E.K., Weissman, J.S.: Nature 425, 737 (2003)
18. Gillespie, D.T.: J. Phys. Chem. 81, 2340-2361 (1977)
19. Thattai, M., van Oudenaarden, A.: Intrinsic noise in gene regulatory networks. PNAS 98, 8614 (2001)
20. Samoilov, M., Plyasunov, S., Arkin, A.P.: PNAS 102(7), 2310 (2005)
21. Tze-Leung, T., Mahesci, N.: Stochasticity and cell fate. Science 327, 1142 (2010)
22. Gardiner, C.W.: Handbook of Stochastic Methods, 2nd edn. Springer, New York (1985)
23. Gillespie, D.T.: J. Phys. Chem. 72, 5363 (1980)
24. Grabert, H., Hanggi, P., Oppenheim, I.: Phys. A 117, 300 (1983)
25. Eldar, A., Elowitz, M.B.: Nature 467, 167 (2010)
26. Losick, R., Desplan, C.: Science 320, 65 (2008)
27. Hallen, M., Li, B., Tanouchi, Y., Tan, C., West, M., You, L.: PLoS Comp. Biol. 7, e1002209 (2011)
28. Hilfinger, A., Paulsson, J.: PNAS 108, 12167-12172 (2011)
29. Caravagna, G., Mauri, G., d'Onofrio, A.: PLoS ONE 8(2), e51174 (2013)
30. Bobryk, R.V., Chrzeszczyk, A.: Phys. A 358, 263-272 (2005)
31. d'Onofrio, A.: Phys. Rev. E 81, 021923 (2010)
32. d'Onofrio, A., Gandolfi, A.: Phys. Rev. E 82, 061901 (2010)
33. de Franciscis, S., d'Onofrio, A.: Phys. Rev. E 86, 021118 (2012)
34. Wio, H.S., Toral, R.: Phys. D 193, 161 (2004)
35. Ullah, M., Wolkenhauer, O.: Stochastic Approaches for Systems Biology. Springer, New York (2011)
36. Gillespie, D.T.: J. Comp. Phys. 22, 403 (1976)
37. Murray, J.D.: Mathematical Biology. Springer, New York (2002)
38. Segel, L.A., Slemrod, M.: SIAM Rev. 31, 44-67 (1989)
39. Sanft, K.R., Gillespie, D.T., Petzold, L.R.: IET Syst. Biol. 5, 58 (2011)
40. Gillespie, D.T., Petzold, L.R.: Numerical simulation for biochemical kinetics. In: Szallasi, S., Stelling, J., Periwal, V. (eds.) System Modeling in Cell Biology: From Concepts to Nuts and Bolts. MIT Press, Boston (2006)
41. Feller, W.: Trans. Am. Math. Soc. 48, 48-85 (1940)
42. Anderson, D.F.: J. Chem. Phys. 127, 214107 (2007)
43. Alfonsi, A., Cances, E., Turinici, G., Di Ventura, B., Huisinga, W.: ESAIM Proc. 14, 1 (2005)
44. Alfonsi, A., Cances, E., Turinici, G., Di Ventura, B., Huisinga, W.: INRIA Tech. Report 5435 (2004)
45. Caravagna, G., d'Onofrio, A., Milazzo, P., Barbuti, R.: J. Theor. Biol. 265, 336 (2010)
46. Caravagna, G., d'Onofrio, A., Barbuti, R.: BMC Bioinformatics 13(Suppl 4), S8 (2012)
47. Cox, D.R.: J. Roy. Stat. Soc. 17, 129 (1955)
48. Bouzas, P.R., Ruiz-Fuentes, N., Ocana, F.M.: Comput. Stat. 22, 467 (2007)
49. Daley, D.J., Vere-Jones, D.: An Introduction to the Theory of Point Processes, vol. I: Elementary Theory and Methods of Probability and Its Applications, 2nd edn. Springer, New York (2003)
50. Todorovic, P.: An Introduction to Stochastic Processes and Their Applications. Springer, New York (1992)
51. Zhdanov, V.P.: Phys. A 390, 57-64 (2011)
52. NOISYSIM library (2012). Available at http://sites.google.com/site/giuliocaravagna/
53. Caravagna, G., d'Onofrio, A., Mauri, G.: NOISYSIM: exact simulation of stochastic chemically reacting systems with extrinsic bounded noises. In: Wainer, G.A., Zacharewicz, G., Mosterman, P., Barros, F. (eds.) Symposium on Theory of Modeling and Simulation - DEVS Integrative M&S Symposium (DEVS 2013), 2013 Spring Simulation Multi-Conference (SpringSim'13), Simulation Series, Curran Associates, Red Hook, NY, USA, 45(4), 84-89 (2013)
54. Dimentberg, M.F.: Statistical Dynamics of Nonlinear and Time-Varying Systems. Wiley, New York (1988)
55. Wedig, W.V.: Analysis and simulation of nonlinear stochastic systems. In: Schiehlen, W. (ed.) Nonlinear Dynamics in Engineering Systems, pp. 3-37. Springer, New York (1989)
56. Zhu, W.Q., Cai, G.Q.: On bounded stochastic processes. In: d'Onofrio, A. (ed.) Bounded Noises in Physics, Biology, and Engineering. Birkhauser, Boston, MA (2013)
57. Dimentberg, M.F.: Dynamics of systems with randomly disordered periodic excitations. In:
dOnofrio, A. (ed.) Bounded Stochastic Processes in Physics, Biology and Engineering.
Birkhauser, Boston, MA (2013)
58. Chang, H.H., Oh, Y., Ingber, D.E., Huang, S.: BMC Cell Biol. 7, 11 (2006)
59. Kaern, M., Elston, T.C., Blake, W.J., Collins, J.J.: Nat. Rev. Genet. 6, 451 (2005)
60. dOnofrio, A.: Multifaceted aspects of the kinetics of immunoevasion from tumor dormancy.
In: Enderling, H., Almog, N., Hlatky, L. (eds.) Systems Biology of Tumor Dormancy.
Advances in Experimental Medicine and Biology, vol. 734. Springer, New York (2012). ISBN
978-1461414445
61. Cai, C.Q., Lin, Y.K.: Phys. Rev. E 54, 299 (1996)
Part IV
Bounded Noises: Applications
in Engineering
Chapter 14
Almost-Sure Stability of Fractional Viscoelastic
Systems Driven by Bounded Noises

Jian Deng, Wei-Chau Xie, and Mahesh D. Pandey

Abstract The almost-sure stochastic stability of fractional viscoelastic systems subjected to parametric excitation by bounded noises is investigated. The viscoelastic material is modelled using a fractional Kelvin–Voigt constitutive relation, which results in a stochastic fractional equation of motion. The method of stochastic averaging, together with the Fokker–Planck equation of the averaged Itô stochastic differential equation, is used to determine asymptotically the top Lyapunov exponent of the system for small damping and weak excitation. It is found that the parametric noise excitation can have a stabilizing effect in the resonant region. The effects of various parameters on the stochastic stability of the system are discussed. The approximate analytical results are confirmed by numerical simulation.

Keywords Bounded noises · Fractional differential equations · Viscoelastic material · Fractional Kelvin–Voigt model · Stochastic stability · Lyapunov exponents · Parametric excitation · Engineering

14.1 Introduction

Deterministic dynamic stability of viscoelastic systems has been investigated by


many authors [1, 27]. However, loadings from earthquakes, explosions, wind, and
ocean waves can be described satisfactorily only by using probabilistic models,

J. Deng (✉) · W.-C. Xie · M.D. Pandey
Department of Civil and Environmental Engineering, University of Waterloo,
Waterloo, ON, Canada N2L 3G1
e-mail: j76deng@gmail.com; xie@uwaterloo.ca; mdpandey@uwaterloo.ca

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering,
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_14, © Springer Science+Business Media New York 2013

so that the equations of motion of viscoelastic systems under such excitations are usually governed by stochastic integro-differential equations, whose responses and stability properties are difficult to obtain exactly.
Therefore, several numerical and approximate procedures have been proposed.
Potapov described the behavior of stochastic viscoelastic systems by numerical
evaluation of Lyapunov exponents of linear integro-differential equations [20] and
studied the almost-sure stability of a viscoelastic column under the excitation
of a random wide-band stationary process using Lyapunov's direct method [19].
The method of stochastic averaging, originally formulated by Stratonovich [25]
and mathematically proved by Khasminskii [8], has been widely used to obtain
approximate solutions of stochastic differential equations (SDEs) containing a small parameter.
Under certain conditions, stochastic averaging can reduce the dimension of some
problems to one dimension, which greatly simplifies the solution [27]. A physical
interpretation of this method, which is more appealing to engineers, is given in [12].
The popularity of stochastic averaging can be felt from the large number of papers in
the literature, e.g., Roberts and Spanos [21], Ariaratnam [2], and Sri Namachchivaya
and Ariaratnam [24]. Recently, Onu completed his doctoral thesis entitled Stochastic
averaging for mechanical systems [14].
The time-dependent behavior and strain rate effects of viscoelastic materials
have been conventionally described by constitutive equations, which include time
as a variable in addition to the stress and strain variables and their integer-order
differentials or integrals. Several conventional mechanical models for viscoelasticity
were reviewed in [6]. These models involve exponential strain creeps or exponential
stress relaxations; e.g., the Kelvin–Voigt model exhibits an exponential strain creep. These exponential laws follow from the integer-order differential-equation form of the constitutive models for viscoelasticity.
Recently, an increasing interest has been directed to non-integer or fractional
viscoelastic constitutive models [4, 5, 13]. In contrast to the well-established
mechanical models based on Hookean springs and Newtonian dashpots, which
result in exponential decays of the relaxation functions, the fractional models
accommodate non-exponential relaxations, making them capable of modelling
hereditary phenomena with long memory. Fractional constitutive models lead to
power law behavior in linear viscoelasticity asymptotically [3]. There is a theoretical
reason for using fractional calculus in viscoelasticity [3]: the molecular theory of
Rouse [23] gives a stress–strain relationship involving a fractional derivative of the strain.
Experiments also revealed that the viscous damping behavior can be described
satisfactorily by the introduction of fractional derivatives in stress–strain relations
[10, 16, 17].
The objective of the paper is to study the almost-sure stability of fractional
viscoelastic systems under bounded noise excitations using the method of stochastic
averaging.

14.2 Fractional Viscoelastic Constitutive Models

Viscoelastic materials exhibit stress relaxation and creep, with the former characterized by a decrease of the stress with time under a fixed strain and the latter characterized by a growth of the strain under a constant stress. The Kelvin–Voigt model, which simply consists of a spring and a dashpot connected in parallel, is often used to characterize the viscoelasticity. This model can be used to form more complicated models [1, 7].

The constitutive equation of the Kelvin–Voigt model is

\[ \sigma(t) = E\,\varepsilon(t) + \eta\,\frac{d\varepsilon(t)}{dt}, \tag{14.1} \]

where σ(t) and ε(t) are the stress and strain, respectively, E is the spring constant or modulus, and η is the Newtonian viscosity or the coefficient of viscosity. For a constant applied stress σ₀, the creep function is

\[ \varepsilon(t) = \frac{\sigma_0}{E}\bigl(1 - e^{-t/\tau_{\varepsilon}}\bigr), \tag{14.2} \]

where τ_ε = η/E is called the retardation time. When the stress is removed, the recovery function ε_R(t) is

\[ \varepsilon_R(t) = \varepsilon_0\,e^{-t/\tau_{\varepsilon}}, \tag{14.3} \]

where ε₀ is the initial strain at the time of stress removal.


The fractional viscoelastic mechanical models are fractional generalizations of the conventional constitutive models, obtained by replacing the derivative of order 1 with the fractional derivative of order μ (0 < μ ≤ 1) in the Riemann–Liouville sense. The fractional Kelvin–Voigt constitutive equation is defined as

\[ \sigma(t) = E\,\varepsilon(t) + \eta\,\frac{d^{\mu}\varepsilon(t)}{dt^{\mu}} = E\,\varepsilon(t) + \eta\,{}^{RL}_{\;\;0}D_t^{\mu}\bigl[\varepsilon(t)\bigr], \tag{14.4} \]

where ${}^{RL}_{\;\;0}D_t^{\mu}[\,\cdot\,]$ is the operator for the Riemann–Liouville (RL) fractional derivative of order μ, given by

\[ {}^{RL}_{\;\;0}D_t^{\mu}\bigl[f(t)\bigr] = \frac{1}{\Gamma(m-\mu)}\,\frac{d^m}{dt^m}\int_0^t \frac{f(\tau)}{(t-\tau)^{\mu-m+1}}\,d\tau, \qquad m-1 \le \mu < m, \tag{14.5} \]

where m is an integer and Γ(·) is the gamma function

\[ \Gamma(z) = \int_0^{\infty} e^{-t}\,t^{z-1}\,dt. \tag{14.6} \]
It can be seen that ${}^{RL}_{\;\;0}D_t^{0}[f(t)] = f(t)$ and ${}^{RL}_{\;\;0}D_t^{m}[f(t)] = f^{(m)}(t)$ [18]. Specifically, for 0 < μ ≤ 1, assuming f(t) is absolutely continuous for t > 0, one has (see Eq. (2.1.28) of [9])

\[ {}^{RL}_{\;\;0}D_t^{\mu}\bigl[f(t)\bigr] = \frac{1}{\Gamma(1-\mu)}\,\frac{d}{dt}\int_0^t \frac{f(\tau)}{(t-\tau)^{\mu}}\,d\tau = \frac{1}{\Gamma(1-\mu)}\Bigl[\frac{f(0)}{t^{\mu}} + \int_0^t \frac{f'(\tau)}{(t-\tau)^{\mu}}\,d\tau\Bigr]. \tag{14.7} \]

It is interesting to note that this is actually the solution of Abel's integral equation [9]. Further assuming f(0) = 0 gives the fractional derivative in Caputo (C) form

\[ {}^{C}_{0}D_t^{\mu}\bigl[f(t)\bigr] = \frac{1}{\Gamma(1-\mu)}\int_0^t \frac{f'(\tau)}{(t-\tau)^{\mu}}\,d\tau. \tag{14.8} \]

For a constant applied stress σ₀, the creep function is

\[ \varepsilon(t) = \frac{\sigma_0}{E}\Bigl[1 - E_{\mu}\Bigl(-\Bigl(\frac{t}{\tau_{\varepsilon}}\Bigr)^{\mu}\Bigr)\Bigr], \qquad \tau_{\varepsilon}^{\mu} = \frac{\eta}{E}. \tag{14.9} \]

When the stress is removed, the recovery function ε_R(t) is

\[ \varepsilon_R(t) = \varepsilon_0\,E_{\mu}\Bigl(-\Bigl(\frac{t}{\tau_{\varepsilon}}\Bigr)^{\mu}\Bigr), \tag{14.10} \]

where E_μ(z) is the Mittag-Leffler function defined by

\[ E_{\mu}(z) = \sum_{n=0}^{\infty} \frac{z^n}{\Gamma(\mu n + 1)}. \tag{14.11} \]

It is seen that, when μ = 1, the Mittag-Leffler function E_μ(z) reduces to the exponential function

\[ E_1(z) = \sum_{m=0}^{\infty} \frac{z^m}{\Gamma(m+1)} = \sum_{m=0}^{\infty} \frac{z^m}{m!} = e^z. \tag{14.12} \]

The fractional Kelvin–Voigt model in Eq. (14.4) can be separated into two parts: the Hooke element $E\,\varepsilon(t)$ and the fractional Newton (Scott–Blair) element $\eta\,{}^{RL}_{\;\;0}D_t^{\mu}[\varepsilon(t)]$. When μ = 0, the fractional Newton element reduces to a spring of constant η, which is the case of the Hooke model. It is further observed that only when μ = 0 does this model exhibit transient elasticity at t = 0.
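The Mittag-Leffler series (14.11) converges for all z and, for moderate arguments, can be evaluated by direct truncation. The following sketch (our own illustration; the function and parameter names are not from the chapter) checks the μ = 1 reduction (14.12) and evaluates the fractional creep curve (14.9):

```python
import math

def mittag_leffler(mu, z, terms=100):
    """Truncated series (14.11): E_mu(z) = sum_n z^n / Gamma(mu*n + 1)."""
    return sum(z**n / math.gamma(mu * n + 1.0) for n in range(terms))

def creep(t, mu, sigma0=1.0, E=1.0, tau=1.0):
    """Fractional creep (14.9): eps(t) = (sigma0/E) [1 - E_mu(-(t/tau)^mu)]."""
    return (sigma0 / E) * (1.0 - mittag_leffler(mu, -(t / tau) ** mu))

# For mu = 1 the series must reduce to the exponential, Eq. (14.12).
assert abs(mittag_leffler(1.0, -2.0) - math.exp(-2.0)) < 1e-12

# The fractional creep starts at 0 and grows monotonically toward sigma0/E,
# as sketched in Fig. 14.1.
eps = [creep(t, mu=0.5) for t in (0.0, 1.0, 10.0)]
assert eps[0] == 0.0 and eps[0] < eps[1] < eps[2] < 1.0
```

For larger |z| the direct truncation loses accuracy to cancellation, and dedicated algorithms for the Mittag-Leffler function would be needed; the values used here stay well within the safe range.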

Fig. 14.1 Creep and recovery for a fractional viscoelastic material

When μ = 1, the relaxation modulus of the fractional Newton element becomes a Dirac delta function, i.e., G(t) = η δ(t), which is the case of the integer-order Newton model. Hence, the fractional Newton element is a continuum between the spring model and the Newton model. A typical relationship of creep and recovery for a fractional viscoelastic material is displayed in Fig. 14.1, which clearly shows that the extra degree of freedom afforded by the fractional order μ can improve the performance of traditional viscoelastic elements. Specifically, when μ = 1, this fractional model reduces to the ordinary integer-order Kelvin–Voigt model. The fractional-order operator in Eq. (14.5) is a global operator having a memory of all past events, making it adequate for modeling the memory and hereditary effects in most materials. However, this time dependency is a double-edged sword, which directly results in the disadvantage of fractional constitutive models: evaluating the stress at time t requires an integral over all previous time steps, which in turn requires that a history of all previous values of the viscous strain tensor be retained at all points where the constitutive relation is evaluated [15]. Therefore, any normalization during the computation would introduce a serious error in the computation of the fractional derivatives. This means that all values in the history must be stored without distortion, which makes methods with normalization [26] invalid. A new method without normalization is proposed to calculate the Lyapunov exponents in Sect. 14.5.

14.3 Formulation

Consider the stability of a column of uniform cross section under a dynamic axial compressive load F(t). The equation of motion is given by [27]

\[ \frac{\partial^2 M(x,t)}{\partial x^2} = \rho A\,\frac{\partial^2 v(x,t)}{\partial t^2} + \eta_0\,\frac{\partial v(x,t)}{\partial t} + F(t)\,\frac{\partial^2 v(x,t)}{\partial x^2}, \tag{14.13} \]

where ρ is the mass density per unit volume of the column, A is the cross-sectional area, v(x,t) is the transverse displacement of the central axis, and η₀ is the damping constant. The moment M(x,t) at the cross section x and the geometric relation are

\[ M(x,t) = \int_A \sigma(x,t)\,z\,dA, \qquad \varepsilon(x,t) = -z\,\frac{\partial^2 v(x,t)}{\partial x^2}. \tag{14.14} \]

The viscoelastic material is supposed to follow the fractional constitutive model in Eq. (14.4), which can be recast as

\[ \sigma(t) = \bigl(E + \eta\,{}^{RL}_{\;\;0}D_t^{\mu}\bigr)\,\varepsilon(t). \tag{14.15} \]

Substituting Eq. (14.15) into (14.14) and (14.13) yields

\[ \rho A\,\frac{\partial^2 v}{\partial t^2} + \eta_0\,\frac{\partial v}{\partial t} + EI\,\frac{\partial^4 v}{\partial x^4} + \eta I\,{}^{RL}_{\;\;0}D_t^{\mu}\Bigl[\frac{\partial^4 v}{\partial x^4}\Bigr] + F(t)\,\frac{\partial^2 v}{\partial x^2} = 0. \tag{14.16} \]

If the column of length L is simply supported, the transverse deflection can be expressed as

\[ v(x,t) = \sum_{n=1}^{\infty} q_n(t)\,\sin\frac{n\pi x}{L}. \tag{14.17} \]

Substituting Eq. (14.17) into (14.16) leads to the equations of motion

\[ \ddot q_n(t) + 2\beta_0\,\dot q_n(t) + \omega_n^2\Bigl[1 - \frac{F(t)}{P_n} + \frac{\eta}{E}\,{}^{RL}_{\;\;0}D_t^{\mu}\Bigr]q_n(t) = 0, \tag{14.18} \]

where

\[ \beta_0 = \frac{\eta_0}{2\rho A}, \qquad \omega_n^2 = \frac{EI}{\rho A}\Bigl(\frac{n\pi}{L}\Bigr)^4, \qquad P_n = EI\Bigl(\frac{n\pi}{L}\Bigr)^2. \tag{14.19} \]

If only the nth mode is considered, the damping, the viscoelastic effect, and the amplitude of the load are all small, and the function F(t)/P_n is taken to be a stochastic process ξ(t), the equation of motion of a single degree-of-freedom system can be written as, by introducing a small parameter 0 < ε ≪ 1,

\[ \ddot q(t) + 2\varepsilon\beta\,\dot q(t) + \omega^2\bigl[1 + \xi(t) + \varepsilon\tau_{\varepsilon}^{\mu}\,{}^{RL}_{\;\;0}D_t^{\mu}\bigr]q(t) = 0, \qquad \tau_{\varepsilon}^{\mu} = \frac{\eta}{E}. \tag{14.20} \]

The presence of the small parameter ε is reasonable, since damping and noise perturbation are small in many engineering applications. Here the viscoelasticity is also considered to be a kind of weak damping.
Mathematically, random excitations can be described as stochastic processes. In engineering applications, the stochastic loadings have been modeled as Gaussian white noise processes, real noise processes, or bounded noise processes.

A white noise process is a weakly stationary process that is delta-correlated and has zero mean. Its power spectral density is constant over the entire frequency range, which is obviously an idealization.

A real noise ξ(t) is often characterized by an Ornstein–Uhlenbeck process and is given by

\[ d\xi(t) = -\alpha\,\xi(t)\,dt + \sigma\,dW(t), \tag{14.21} \]

where W(t) is a standard Wiener process. It is well known that ξ(t) is a normally distributed random variable, which is not bounded and may take arbitrarily large values with small probabilities; hence it may not be a realistic model of noise in many engineering applications.

A bounded noise ξ(t) is a more realistic and versatile model of stochastic fluctuation in engineering applications and is normally represented as

\[ \xi(t) = \zeta\cos\bigl(\nu t + \sigma\,W(t) + \theta\bigr), \tag{14.22} \]

where ζ is the noise amplitude, σ is the noise intensity, W(t) is the standard Wiener process, and θ is a random variable uniformly distributed in the interval [0, 2π). The inclusion of the phase angle θ makes the bounded noise ξ(t) a stationary process. Equation (14.22) may be written as

\[ \xi(t) = \zeta\cos Z(t), \qquad dZ(t) = \nu\,dt + \sigma\circ dW(t), \tag{14.23} \]

where the initial condition of Z(t) is Z(0) = θ. The small circle denotes the term in the sense of Stratonovich. This process is bounded between −ζ and +ζ for all time t and hence is a bounded stochastic process. The autocorrelation function of ξ(t) is given by

\[ R(\tau) = \mathrm{E}\bigl[\xi(t)\,\xi(t+\tau)\bigr] = \tfrac{1}{2}\zeta^2\cos\nu\tau\;e^{-\frac{1}{2}\sigma^2|\tau|}, \tag{14.24} \]

and the spectral density function of ξ(t) is

\[ S(\omega) = \int_{-\infty}^{+\infty} R(\tau)\,e^{-i\omega\tau}\,d\tau = \frac{\zeta^2\sigma^2\bigl(\omega^2 + \nu^2 + \tfrac{1}{4}\sigma^4\bigr)}{2\bigl[(\omega+\nu)^2 + \tfrac{1}{4}\sigma^4\bigr]\bigl[(\omega-\nu)^2 + \tfrac{1}{4}\sigma^4\bigr]}, \tag{14.25} \]

which is shown in Fig. 14.2.
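As a consistency check on Eqs. (14.24) and (14.25), the spectral density can be recovered by numerically integrating the autocorrelation function. This is a brief sketch with illustrative parameter values, not code from the chapter:

```python
import math

def R(tau, zeta, nu, sigma):
    """Autocorrelation (14.24) of the bounded noise."""
    return 0.5 * zeta**2 * math.cos(nu * tau) * math.exp(-0.5 * sigma**2 * abs(tau))

def S_closed(w, zeta, nu, sigma):
    """Closed-form spectral density (14.25)."""
    s4 = sigma**4 / 4.0
    num = zeta**2 * sigma**2 * (w**2 + nu**2 + s4)
    return num / (2.0 * ((w + nu)**2 + s4) * ((w - nu)**2 + s4))

def S_numeric(w, zeta, nu, sigma, T=60.0, n=120000):
    """S(w) = 2 * int_0^inf R(tau) cos(w tau) dtau, since R is even;
    evaluated with the trapezoidal rule on a truncated interval."""
    h = T / n
    total = 0.5 * (R(0.0, zeta, nu, sigma) + R(T, zeta, nu, sigma) * math.cos(w * T))
    for k in range(1, n):
        tau = k * h
        total += R(tau, zeta, nu, sigma) * math.cos(w * tau)
    return 2.0 * h * total

zeta, nu, sigma = 1.0, 2.0, 0.8
for w in (0.0, 1.0, 2.0, 3.0):
    assert abs(S_numeric(w, zeta, nu, sigma) - S_closed(w, zeta, nu, sigma)) < 1e-3
```

The exponential decay in R(τ) makes the truncation error negligible for the chosen T, so the quadrature reproduces the closed form to well within the tolerance.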



Fig. 14.2 Power spectral density of a bounded noise process

When the noise intensity σ is small, the bounded noise can be used to model a narrow-band process about the frequency ν. In the limit as σ approaches zero, the bounded noise reduces to a deterministic sinusoidal function. On the other hand, in the limit as σ approaches infinity, the bounded noise becomes a white noise of constant spectral density. However, since the mean-square value is fixed at ½ζ², this constant spectral density level reduces to zero in the limit.
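The boundedness of ξ(t) is easy to confirm by simulating the phase equation (14.23) with an Euler scheme; the sketch below (illustrative parameters and our own naming) also checks that the sample mean square approaches R(0) = ζ²/2:

```python
import math, random

def simulate_bounded_noise(zeta=1.0, nu=2.0, sigma=0.5, dt=1e-3,
                           steps=100000, seed=1):
    """Euler discretization of Eq. (14.23): dZ = nu dt + sigma dW, xi = zeta cos Z."""
    rng = random.Random(seed)
    Z = rng.uniform(0.0, 2.0 * math.pi)   # random initial phase theta
    xi = []
    for _ in range(steps):
        Z += nu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xi.append(zeta * math.cos(Z))
    return xi

xi = simulate_bounded_noise()
# The sample path never leaves [-zeta, +zeta] ...
assert max(abs(x) for x in xi) <= 1.0
# ... and its mean-square value is close to zeta^2 / 2 = R(0).
ms = sum(x * x for x in xi) / len(xi)
assert abs(ms - 0.5) < 0.05
```

Since the process enters only through a cosine, the boundedness holds for every realization, whatever the noise intensity; this is exactly what distinguishes the model from Gaussian white and real noises.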
In the investigation of stochastic systems, one is generally most interested in the almost-sure sample behavior of the response process. The largest Lyapunov exponent is one of the most important characteristic numbers in the modern theory of the dynamic stability of a stochastic dynamical system, because it determines not only whether the system is almost-surely stable but also the exponential rate at which the response of the system grows or decays.

The Lyapunov exponent of system (14.20) may be defined as

\[ \lambda = \lim_{t\to\infty}\frac{1}{t}\,\log\Bigl[q^2(t) + \frac{1}{\omega^2}\,\dot q^2(t)\Bigr]^{1/2}. \tag{14.26} \]

In this paper, the method of stochastic averaging is used to obtain the Lyapunov exponents of fractional viscoelastic systems, and the stability property is then studied.

14.4 Stochastic Averaging

The equation of motion in Eq. (14.20) is a stochastic fractional integro-differential equation, which is difficult to solve exactly. In order to apply the averaging method, a transformation to the amplitude and phase variables a and ψ is made by means of the relations

\[ q(t) = a(t)\cos\Phi(t), \qquad \dot q(t) = -\omega\,a(t)\sin\Phi(t), \qquad \Phi(t) = \tfrac{1}{2}\nu t + \psi(t). \tag{14.27} \]

Substituting Eq. (14.27) into (14.20) yields

\[ \dot a\cos\Phi - a\,\dot\psi\sin\Phi = \Delta\,a\sin\Phi, \]
\[ \dot a\sin\Phi + a\,\dot\psi\cos\Phi = -\Delta\,a\cos\Phi - 2\varepsilon\beta\,a\sin\Phi + \omega\,\xi(t)\,a\cos\Phi + \varepsilon\omega\tau_{\varepsilon}^{\mu}\,{}^{RL}_{\;\;0}D_t^{\mu}\bigl[q(s)\bigr], \tag{14.28} \]

where Δ = ½ν − ω. Solving Eq. (14.28) yields

\[ \dot a(t) = -2\varepsilon\beta\,a\sin^2\Phi + \tfrac{1}{2}\omega\,\xi(t)\,a\sin 2\Phi - \varepsilon\omega^2\tau_{\varepsilon}^{\mu}\,U^{ss}, \]
\[ \dot\psi(t) = -\Delta - \varepsilon\beta\sin 2\Phi + \omega\,\xi(t)\cos^2\Phi - \frac{\varepsilon\omega^2\tau_{\varepsilon}^{\mu}}{a(t)}\,U^{cs}, \tag{14.29} \]

where, since $\dot q(s) = -\omega\,a(s)\sin\Phi(s)$,

\[ U^{ss} = \frac{\sin\Phi(t)}{\Gamma(1-\mu)}\int_0^t\frac{a(s)\sin\Phi(s)}{(t-s)^{\mu}}\,ds, \qquad U^{cs} = \frac{\cos\Phi(t)}{\Gamma(1-\mu)}\int_0^t\frac{a(s)\sin\Phi(s)}{(t-s)^{\mu}}\,ds, \]
and, for ease of presentation, the fractional derivative in Eq. (14.8) is rewritten with the notation

\[ {}^{RL}_{\;\;0}D_t^{\mu}\bigl[f(s)\,g(t)\bigr] = \frac{g(t)}{\Gamma(1-\mu)}\int_0^t\frac{f(s)}{(t-s)^{\mu}}\,ds. \tag{14.30} \]

The bounded noise can be written, by assuming that the magnitude is small and then introducing the small parameter ε^{1/2}, as

\[ \xi(t) = \varepsilon\zeta\cos\bigl(\nu t + \theta(t)\bigr), \qquad \theta(t) = \varepsilon^{1/2}\sigma\,W(t) + \theta_0. \tag{14.31} \]

Substituting ξ(t) into Eq. (14.29) leads to

\[ \dot a(t) = \varepsilon\Bigl[-\beta\,a\,(1 - \cos 2\Phi) + \frac{\omega\zeta}{2}\,a\cos\bigl(\nu t + \theta(t)\bigr)\sin 2\Phi - \omega^2\tau_{\varepsilon}^{\mu}\,U^{ss}\Bigr], \]
\[ \dot\psi(t) = -\Delta + \varepsilon\Bigl[-\beta\sin 2\Phi + \omega\zeta\cos\bigl(\nu t + \theta(t)\bigr)\cos^2\Phi - \frac{\omega^2\tau_{\varepsilon}^{\mu}}{a(t)}\,U^{cs}\Bigr], \tag{14.32} \]
\[ \dot\theta(t) = \varepsilon^{1/2}\sigma\,\dot W(t). \]

Equations (14.32) are exactly equivalent to (14.20) and cannot be solved exactly. It is fortunate, however, that the right-hand sides are small because of the presence of the small parameter ε; this means that both a and ψ change slowly. Therefore one can expect to obtain reasonably accurate results by averaging the response over one period. This may be done by applying the averaging operator given by

\[ M_t\{\,\cdot\,\} = \lim_{T\to\infty}\frac{1}{T}\int_t^{t+T}(\,\cdot\,)\,d\tau. \]

When applying the averaging operator, the integration is performed over the explicitly appearing τ only. The averaging method of Larionov [11] can be applied to obtain the averaged equations as follows, without distinction between the averaged and the original non-averaged variables a and ψ:

\[ \dot a(t) = \varepsilon\Bigl[-\beta\,a + \frac{\omega\zeta}{4}\,a\sin\bigl(2\psi - \theta\bigr) - \omega^2\tau_{\varepsilon}^{\mu}\,M_t\{U^{ss}\}\Bigr], \]
\[ \dot\psi(t) = -\Delta + \varepsilon\Bigl[\frac{\omega\zeta}{4}\cos\bigl(2\psi - \theta\bigr) - \frac{\omega^2\tau_{\varepsilon}^{\mu}}{a}\,M_t\{U^{cs}\}\Bigr], \tag{14.33} \]
where the following averaged identities have been used:

\[ M_t\{\cos 2\Phi\} = M_t\{\sin 2\Phi\} = 0, \]
\[ M_t\bigl\{\cos\bigl(\nu t + \theta(t)\bigr)\sin 2\Phi\bigr\} = \tfrac{1}{2}\sin\bigl(2\psi - \theta\bigr), \]
\[ M_t\bigl\{\cos\bigl(\nu t + \theta(t)\bigr)\cos^2\Phi\bigr\} = \tfrac{1}{4}\cos\bigl(2\psi - \theta\bigr). \tag{14.34} \]

Applying the transformation τ = t − s, changing the order of integration, and using Φ(t − τ) = Φ(t) − ½ντ lead to

\[ M_t\{U^{ss}\} = \lim_{T\to\infty}\frac{1}{T}\int_0^T\sin\Phi(t)\,\frac{1}{\Gamma(1-\mu)}\int_0^t\frac{a\,\sin\Phi(t-\tau)}{\tau^{\mu}}\,d\tau\,dt = \frac{a}{2\,\Gamma(1-\mu)}\int_0^{\infty}\frac{\cos\frac{\nu\tau}{2}}{\tau^{\mu}}\,d\tau = \frac{a}{2}\,H^c. \tag{14.35} \]

Similarly, it can be shown that

\[ M_t\{U^{cs}\} = -\frac{a}{2}\,H^s, \tag{14.36} \]

where

\[ H^c = \frac{1}{\Gamma(1-\mu)}\int_0^{\infty}\frac{\cos\frac{\nu\tau}{2}}{\tau^{\mu}}\,d\tau = \Bigl(\frac{\nu}{2}\Bigr)^{\mu-1}\sin\frac{\mu\pi}{2}, \qquad H^s = \frac{1}{\Gamma(1-\mu)}\int_0^{\infty}\frac{\sin\frac{\nu\tau}{2}}{\tau^{\mu}}\,d\tau = \Bigl(\frac{\nu}{2}\Bigr)^{\mu-1}\cos\frac{\mu\pi}{2}. \tag{14.37} \]

Substituting Eqs. (14.35) and (14.36) into (14.33) gives

\[ \dot a(t) = \varepsilon\Bigl[-\bar\beta + \frac{\omega\zeta}{4}\sin\bigl(2\psi - \theta\bigr)\Bigr]a(t), \qquad \dot\psi(t) = -\bar\Delta + \frac{\varepsilon\omega\zeta}{4}\cos\bigl(2\psi - \theta\bigr), \tag{14.38} \]

where

\[ \bar\beta = \beta + \frac{\omega^2\tau_{\varepsilon}^{\mu}}{2}\,H^c, \qquad \bar\Delta = \Delta - \frac{\varepsilon\omega^2\tau_{\varepsilon}^{\mu}}{2}\,H^s. \tag{14.39} \]

Upon the transformation ρ = log a and δ = ψ − ½θ(t), and using θ̇(t) = ε^{1/2}σẆ(t), Eq. (14.38) results in two Itô stochastic differential equations:
\[ d\rho(t) = \varepsilon\Bigl[-\bar\beta + \frac{\omega\zeta}{4}\sin 2\delta(t)\Bigr]dt, \tag{14.40} \]
\[ d\delta(t) = \Bigl[-\bar\Delta + \frac{\varepsilon\omega\zeta}{4}\cos 2\delta(t)\Bigr]dt - \frac{1}{2}\,\varepsilon^{1/2}\sigma\,dW(t). \tag{14.41} \]

Substituting Eq. (14.27) into (14.26) yields the Lyapunov exponent

\[ \lambda = \lim_{t\to\infty}\frac{1}{t}\log\Bigl[q^2(t) + \frac{1}{\omega^2}\,\dot q^2(t)\Bigr]^{1/2} = \lim_{t\to\infty}\frac{1}{t}\,\rho(t). \tag{14.42} \]

Integrating Eq. (14.40),

\[ \rho(t) - \rho(0) = \frac{\varepsilon\omega\zeta}{4}\int_0^t \sin 2\delta(s)\,ds - \varepsilon\bar\beta\,t, \tag{14.43} \]

and substituting into Eq. (14.42) yield

\[ \lambda = \lim_{t\to\infty}\frac{\rho(t)}{t} = \frac{\varepsilon\omega\zeta}{4}\lim_{t\to\infty}\frac{1}{t}\int_0^t\sin 2\delta(s)\,ds - \varepsilon\bar\beta. \tag{14.44} \]

The stochastic process δ(t) defined by Eq. (14.41) can be shown to be ergodic, in which case one can write

\[ \lim_{t\to\infty}\frac{1}{t}\int_0^t \sin 2\delta(s)\,ds = \mathrm{E}\bigl[\sin 2\delta(t)\bigr], \quad \text{w.p. 1}, \tag{14.45} \]

where E[·] denotes the expectation operator. Thus, with probability 1,

\[ \lambda = \varepsilon\Bigl\{\frac{\omega\zeta}{4}\,\mathrm{E}\bigl[\sin 2\delta(t)\bigr] - \bar\beta\Bigr\}. \tag{14.46} \]

The remaining task is to evaluate E[sin 2δ(t)] in order to obtain λ. For this purpose, the Fokker–Planck equation governing the stationary probability density function p(δ) is set up:

\[ \frac{1}{2}\Bigl(\frac{\varepsilon^{1/2}\sigma}{2}\Bigr)^2\frac{d^2 p(\delta)}{d\delta^2} + \frac{d}{d\delta}\Bigl\{\Bigl[\bar\Delta - \frac{\varepsilon\omega\zeta}{4}\cos 2\delta\Bigr]p(\delta)\Bigr\} = 0. \tag{14.47} \]

Because the coefficients of the Fokker–Planck equation are periodic functions of δ with period π, p(δ) satisfies the periodicity condition p(δ) = p(δ + π). The solution of Eq. (14.47) is

\[ p(\delta) = C^{-1}\,e^{f(\delta)}\int_{\delta}^{\delta+\pi} e^{-f(u)}\,du, \qquad 0 \le \delta \le \pi, \tag{14.48} \]
where

\[ f(\delta) = -2\bar\gamma\,\delta + r\sin 2\delta, \qquad \bar\gamma = \frac{4\bar\Delta}{\varepsilon\sigma^2}, \qquad r = \frac{\omega\zeta}{\sigma^2}, \tag{14.49} \]

and the normalization constant C is given by

\[ C = \pi^2\,e^{\bar\gamma\pi}\,I_{i\bar\gamma}(r)\,I_{-i\bar\gamma}(r), \tag{14.50} \]

I_{±iγ̄}(r) being the modified Bessel function of the first kind of imaginary order. Using Eq. (14.48), the expected value E[sin 2δ(t)] is given by

\[ \mathrm{E}\bigl[\sin 2\delta(t)\bigr] = F_I(\bar\gamma, r), \tag{14.51} \]

where

\[ F_I(\bar\gamma, r) = \frac{1}{2}\frac{d}{dr}\bigl[\log I_{i\bar\gamma}(r) + \log I_{-i\bar\gamma}(r)\bigr], \]

which can be written, by making use of the recurrence property of the Bessel functions, as

\[ F_I(\bar\gamma, r) = \frac{1}{2}\biggl[\frac{I_{1+i\bar\gamma}(r)}{I_{i\bar\gamma}(r)} + \frac{I_{1-i\bar\gamma}(r)}{I_{-i\bar\gamma}(r)}\biggr]. \]

Hence the Lyapunov exponent given by Eq. (14.46) becomes

\[ \lambda = \varepsilon\Bigl[\frac{\omega\zeta}{4}\,F_I(\bar\gamma, r) - \bar\beta\Bigr]. \tag{14.52} \]

The stability boundary, which corresponds to λ = 0, is given by

\[ F_I(\bar\gamma, r) = \frac{4\bar\beta}{\omega\zeta}. \tag{14.53} \]

Depending on the relations among the parameters γ̄, r, and unity, various asymptotic expansions of the Bessel functions involved in F_I(γ̄, r) can be employed to simplify Eq. (14.52). For example, when the noise intensity ε^{1/2}σ ≪ 1 is so small that γ̄ ≫ 1 and r > γ̄, one can obtain

\[ \lambda = \varepsilon\Biggl\{\frac{\omega\zeta}{4}\Bigl[1 - \Bigl(\frac{\bar\gamma}{r}\Bigr)^2\Bigr]^{1/2} - \bar\beta - \frac{\varepsilon\sigma^2}{8\bigl[1 - (\bar\gamma/r)^2\bigr]}\Biggr\}. \tag{14.54} \]
When σ = 0, i.e., the excitation is purely harmonic, the Lyapunov exponent is

\[ \lambda = \varepsilon\Biggl\{\frac{\omega\zeta}{4}\Bigl[1 - \Bigl(\frac{4\bar\Delta}{\varepsilon\omega\zeta}\Bigr)^2\Bigr]^{1/2} - \bar\beta\Biggr\}. \tag{14.55} \]

When Δ̄ = 0, i.e., the excitation frequency ν = 2ω + εω²τ_ε^μ H^s, the asymptotic result is

\[ \lambda = \varepsilon\Bigl[\frac{\omega^2\zeta^2}{8\sigma^2} - \bar\beta\Bigr]. \tag{14.56} \]

14.5 Numerical Determination of Lyapunov Exponents

In order to assess the accuracy of the approximate analytical result (14.52) for the Lyapunov exponent, a numerical determination of the Lyapunov exponent of the original fractional viscoelastic system (14.20) is performed. For this purpose, the Riemann–Liouville fractional derivative must be evaluated numerically. The fractional stochastic equation of motion (14.20) is discretized, and then the numerical procedure for the determination of Lyapunov exponents from small data sets, which was proposed by Rosenstein et al. [22], is used to obtain the Lyapunov exponents.

Suppose the time interval concerned is [0, t] and the time step is h; one has t = t_n = nh and t_{n−k} = (n − k)h, k = 0, 1, …, n. Since the integral in Eq. (14.7) is a convolution integral, it can also be written as

\[ {}^{RL}_{\;\;0}D^{\mu}_{t_n}\bigl[q(t)\bigr] = \frac{1}{\Gamma(1-\mu)}\Bigl[\frac{q(0)}{t_n^{\mu}} + \int_0^{t_n}\frac{\dot q(t_n-\tau)}{\tau^{\mu}}\,d\tau\Bigr] = \frac{1}{\Gamma(1-\mu)}\Bigl[\frac{q(0)}{(nh)^{\mu}} + \sum_{j=0}^{n-1}\int_{jh}^{(j+1)h}\frac{\dot q(t_n-\tau)}{\tau^{\mu}}\,d\tau\Bigr]. \tag{14.57} \]

Using a first-order difference to approximate the derivative in the interval jh ≤ τ ≤ (j+1)h [10],

\[ \dot q(t_n - \tau) \approx \frac{q\bigl((n-j)h\bigr) - q\bigl((n-j-1)h\bigr)}{h}, \tag{14.58} \]

and

\[ \int_{jh}^{(j+1)h}\frac{1}{\tau^{\mu}}\,d\tau = \frac{h^{1-\mu}}{1-\mu}\bigl[(j+1)^{1-\mu} - j^{1-\mu}\bigr], \tag{14.59} \]
leads to

\[ {}^{RL}_{\;\;0}D^{\mu}_{t_n}\bigl[q(t)\bigr] = \frac{1}{\Gamma(1-\mu)}\Biggl\{\frac{q(0)}{(nh)^{\mu}} + \frac{h^{-\mu}}{1-\mu}\sum_{j=0}^{n-1}\bigl[q\bigl((n-j)h\bigr) - q\bigl((n-j-1)h\bigr)\bigr]\bigl[(j+1)^{1-\mu} - j^{1-\mu}\bigr]\Biggr\} = \kappa\Biggl[\frac{(1-\mu)\,q(0)}{n^{\mu}} + \sum_{j=0}^{n-1} a_j\Biggr], \tag{14.60} \]

where

\[ \kappa = \frac{1}{h^{\mu}\,\Gamma(2-\mu)}, \qquad a_j = \bigl[q\bigl((n-j)h\bigr) - q\bigl((n-j-1)h\bigr)\bigr]\bigl[(j+1)^{1-\mu} - j^{1-\mu}\bigr]. \tag{14.61} \]

In a quadrature form, Eq. (14.60) can be written as

\[ {}^{RL}_{\;\;0}D^{\mu}_{t_n}\bigl[q(t)\bigr] = \frac{1}{h^{\mu}}\sum_{j=0}^{n}\lambda_j\,q(jh), \tag{14.62} \]

where the quadrature weights are

\[ \lambda_0 = \frac{(n-1)^{1-\mu} - n^{1-\mu} + (1-\mu)\,n^{-\mu}}{\Gamma(2-\mu)}, \qquad \lambda_n = \frac{1}{\Gamma(2-\mu)}, \]
\[ \lambda_j = \frac{(n-j+1)^{1-\mu} - 2\,(n-j)^{1-\mu} + (n-j-1)^{1-\mu}}{\Gamma(2-\mu)}, \qquad 1 \le j \le n-1. \tag{14.63} \]
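A direct implementation of the quadrature (14.60)–(14.61) is straightforward; the sketch below (our own code, not the authors') checks it against the exact Riemann–Liouville derivative of q(t) = t, namely D^μ[t] = t^{1−μ}/Γ(2−μ):

```python
import math

def rl_deriv(q, mu, h):
    """Riemann-Liouville derivative at t_n = n h via Eqs. (14.60)-(14.61):
    kappa * [ (1-mu) q(0)/n^mu
              + sum_j (q_{n-j} - q_{n-j-1}) ((j+1)^{1-mu} - j^{1-mu}) ]."""
    n = len(q) - 1
    s = (1.0 - mu) * q[0] / n**mu
    for j in range(n):
        s += (q[n - j] - q[n - j - 1]) * ((j + 1) ** (1.0 - mu) - j ** (1.0 - mu))
    return s / (h**mu * math.gamma(2.0 - mu))

# Check against the exact result D^mu[t] = t^(1-mu) / Gamma(2-mu) at t = 1.
mu, h = 0.5, 0.001
n = int(round(1.0 / h))
q = [k * h for k in range(n + 1)]       # q(t) = t sampled on the grid
exact = 1.0 / math.gamma(2.0 - mu)      # = 1/Gamma(1.5) at t = 1
assert abs(rl_deriv(q, mu, h) - exact) < 1e-3
```

For this linear test function the first-order difference (14.58) is exact, so the scheme reproduces the analytical value essentially to machine precision; for general q the error is O(h^{2−μ}).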

Letting

\[ q_1(t) = q(t), \qquad q_2(t) = \dot q(t), \qquad q_3(t) = \nu t + \varepsilon^{1/2}\sigma\,W(t) + \theta, \qquad Q(t) = {}^{RL}_{\;\;0}D_t^{\mu}\bigl[q(t)\bigr], \tag{14.64} \]

the equation of motion (14.20) can be written as a three-dimensional system

\[ \dot q_1(t) = q_2, \qquad \dot q_2(t) = -2\varepsilon\beta\,q_2 - \omega^2\bigl[(1 + \varepsilon\zeta\cos q_3)\,q_1 + \varepsilon\tau_{\varepsilon}^{\mu}\,Q\bigr], \qquad \dot q_3(t) = \nu + \varepsilon^{1/2}\sigma\,\dot W(t). \tag{14.65} \]

These equations can be discretized using the Euler scheme

\[ q_1^{k+1} = q_1^k + q_2^k\,\Delta t, \]
\[ q_2^{k+1} = q_2^k - \Bigl\{2\varepsilon\beta\,q_2^k + \omega^2\bigl[\bigl(1 + \varepsilon\zeta\cos q_3^k\bigr)q_1^k + \varepsilon\tau_{\varepsilon}^{\mu}\,Q^k\bigr]\Bigr\}\,\Delta t, \tag{14.66} \]
\[ q_3^{k+1} = q_3^k + \nu\,\Delta t + \varepsilon^{1/2}\sigma\,\Delta W^k, \qquad Q^k = \frac{1}{h^{\mu}}\sum_{j=0}^{k}\lambda_j\,q_1^{\,j}. \]

After the discretization, a time series of the response variable q(t) can be obtained
for given initial conditions. It is clear that the fractional equation of motion (14.20)
depends on all historical data of q(t).
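A minimal sketch of the scheme (14.66) is given below; the parameter values are illustrative and the implementation is our own, not the authors'. Note the O(N²) cost of the history sum, which is why all past values of q must be stored:

```python
import math, random

def frac_deriv(hist, mu, h):
    """History quadrature for the Riemann-Liouville term, Eqs. (14.60)-(14.61)."""
    n = len(hist) - 1
    if n == 0:
        return 0.0
    s = (1.0 - mu) * hist[0] / n**mu
    for j in range(n):
        s += (hist[n - j] - hist[n - j - 1]) * ((j + 1) ** (1.0 - mu) - j ** (1.0 - mu))
    return s / (h**mu * math.gamma(2.0 - mu))

def simulate(mu=0.5, omega=1.0, eps=0.1, beta=0.5, zeta=1.0, tau=1.0,
             nu=3.5, sigma=1.0, dt=0.02, steps=2000, seed=3):
    """Euler scheme (14.66); keeps the whole history of q1 for the fractional term."""
    rng = random.Random(seed)
    q1 = [1.0]                        # q(0) = 1
    q2 = 0.0                          # qdot(0) = 0
    q3 = rng.uniform(0.0, 2.0 * math.pi)
    for _ in range(steps):
        Q = frac_deriv(q1, mu, dt)    # uses all stored values of q1
        x = q1[-1]
        acc = -2.0 * eps * beta * q2 - omega**2 * (
            (1.0 + eps * zeta * math.cos(q3)) * x + eps * tau**mu * Q)
        q1.append(x + q2 * dt)
        q2 += acc * dt
        q3 += nu * dt + math.sqrt(eps) * sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return q1, q2

q1, q2 = simulate()
# Off resonance (nu far from 2*omega) and with positive damping, the response decays.
amp = math.hypot(q1[-1], q2 / 1.0)
assert all(math.isfinite(x) for x in q1) and amp < 1.0
```

With the damping positive and the excitation frequency away from the resonant region ν ≈ 2ω, the simulated amplitude decays from its initial value, consistent with the stability results discussed in Sect. 14.6.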

14.6 Results and Discussions

Consider two special cases first. In the equation of motion (14.20), suppose τ_ε = 0 and σ = 0; the system becomes the damped Mathieu equation. If, further, β = 0 is assumed, the equation of motion reduces to the undamped Mathieu equation.

From Eq. (14.55), the boundary for the case β = 0, σ = 0, τ_ε = 0 is

\[ \frac{\nu}{2\omega} = \Bigl(1 \pm \frac{\varepsilon\zeta}{2}\Bigr)^{1/2}, \tag{14.67} \]

which is the same as the first-order approximation of the boundary for the undamped Mathieu equation obtained in Eq. (2.4.11) of [27]. However, if damping is considered, the equation of motion in Eq. (14.20) becomes the damped Mathieu equation. Substituting Eq. (14.39) and Δ = ½ν − ω into (14.55) leads to the stability boundaries

\[ \frac{\nu}{2\omega} = 1 \pm \frac{\varepsilon}{4}\Bigl(\zeta^2 - \frac{16\beta^2}{\omega^2}\Bigr)^{1/2}, \tag{14.68} \]

which is similar to the first-order approximation of the boundary for the damped Mathieu equation [27] in the vicinity of ν = 2ω. This is because, when the intensity σ approaches 0, the bounded noise reduces to a sinusoidal function.
Consider next the effect of bounded noise on system stability. From Eq. (14.56), it is found that introducing noise (σ ≠ 0) improves the stability of the viscoelastic system in the vicinity of Δ̄ = 0. This result is also confirmed by Fig. 14.3, where, in the resonant region, the unstable area of the system shrinks as the noise intensity ε^{1/2}σ increases, so the system becomes more stable. One probable explanation is that, from the power spectral density of the bounded noise, the larger the value of σ, the wider the frequency band of the power spectrum, as shown in Fig. 14.2. When σ approaches infinity, the bounded noise becomes a white noise.

Fig. 14.3 Stability boundaries of the viscoelastic system

Fig. 14.4 Lyapunov exponents of a viscoelastic system

As a result, the power of the noise is not concentrated in the neighborhood of the central frequency ν, which reduces the effect of the primary parametric resonance.
The effect of the noise amplitude ζ on the Lyapunov exponents is shown in Fig. 14.4.
The results for two noise intensities, ε^{1/2}σ = 0.8 and ε^{1/2}σ = 0.2, are compared for various values of ζ.

Fig. 14.5 Lyapunov exponents of the viscoelastic system

It is seen that, in the resonant region, increasing the noise amplitude ζ destabilizes the system. The maximum resonant point is not exactly at ν = 2ω but in the neighborhood of ν = 2ω; this may be partly due to the viscoelasticity and partly due to the noise.
In the numerical simulation of the Lyapunov exponents, the embedding dimension is m = 50, the reconstruction delay is J = 30, the number of data points is N = 20,000, and the time step is Δt = 0.01, which yields the total time period T = NΔt = 200. Typical results are shown in Fig. 14.5 along with the approximate analytical results. It is found that the approximate analytical result in Eq. (14.52) agrees with the numerical results very well.
Finally, consider the effect of the fractional order μ and of the damping on system stability. The fractional order μ of the system has a stabilizing effect, which is illustrated in Fig. 14.6. This is due to the fact that, as μ changes from 0 to 1, the property of the material changes from elastic to viscous, as shown in Fig. 14.1. The same stabilizing effect of damping on stability is shown in Fig. 14.7.

14.7 Conclusions

The stochastic stability of a viscoelastic column under the excitation of a bounded


noise is investigated by using the method of stochastic averaging. The viscoelastic

Fig. 14.6 Effect of fractional order on system stability

Fig. 14.7 Effect of damping on stability boundaries

material is assumed to follow the fractional Kelvin–Voigt constitutive relation,


which is capable of modelling hereditary phenomena with long memory. Since a
RiemannLiouville fractional derivative is involved in the viscoelastic term, the
method of stochastic averaging due to Larionov is applied to obtain the averaged
equation of motion, which is then used to obtain the approximate Lyapunov

exponents by solving the Fokker–Planck equation. A numerical algorithm is put


forward to determine the largest Lyapunov exponent from the fractional stochastic
equation of motion, which is then used to confirm the approximate analytical result.
The instability region, which corresponds to positive values of the largest Lyapunov
exponent, is obtained for various values of system parameters.
It is found that, under bounded noise excitation, the damping β, the noise intensity σ, and the fractional order μ of the model have stabilizing effects on the almost-sure stability. These results are useful in engineering applications.
Having obtained the averaged Itô stochastic differential equation, the moment
Lyapunov exponent can be determined by solving an eigenvalue problem, from
which the Lyapunov exponents can be obtained. This will be studied in a separate
paper.

Acknowledgments The research for this paper was supported, in part, by the Natural Sciences
and Engineering Research Council of Canada.

References

1. Ahmadi, G., Glocker, P.G.: J. Eng. Mech. 109(4), 990–999 (1983)
2. Ariaratnam, S.T.: Stochastic stability of viscoelastic systems under bounded noise excitation. In: Naess, A., Krenk, S. (eds.) IUTAM Symposium on Advances in Nonlinear Stochastic Mechanics, pp. 11–18. Kluwer Academic, Dordrecht (1996)
3. Bagley, R.L., Torvik, P.J.: J. Rheol. 27(3), 201–210 (1983)
4. Debnath, L.: Int. J. Math. Math. Sci. 54, 3413–3442 (2003)
5. Di Paola, M., Pirrotta, A.: Meccanica dei Materiali e delle Strutture 1(2), 52–62 (2009)
6. Findley, W.N., Lai, J.S., Onaran, K.: Creep and Relaxation of Nonlinear Viscoelastic Materials with an Introduction to Linear Viscoelasticity. North-Holland, New York (1976)
7. Floris, C.: Mech. Res. Comm. 38, 57–61 (2011)
8. Khasminskii, R.Z.: Theor. Probab. Appl. (English translation) 11, 390–406 (1966)
9. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, New York (2006)
10. Koh, C.G., Kelly, J.M.: Earthquake Eng. Struct. Dynam. 19, 229–241 (1990)
11. Larionov, G.S.: Mech. Polymers (English translation) 5, 714–720 (1969)
12. Lin, Y.K., Cai, G.Q.: Probabilistic Structural Dynamics: Advanced Theory and Applications. McGraw-Hill, New York (1995)
13. Mainardi, F.: Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models. Imperial College Press, London (2010)
14. Onu, K.: Stochastic averaging for mechanical systems. Ph.D. thesis, University of Illinois at Urbana-Champaign, Urbana, IL (2010)
15. Papoulia, K.D., Panoskaltsis, V.P., Kurup, N.V., Korovajchuk, I.: Rheol. Acta 49(4), 381–400 (2010)
16. Pfitzenreiter, T.: ZAMM J. Appl. Math. Mech. 84(4), 284–287 (2004)
17. Pfitzenreiter, T.: ZAMM J. Appl. Math. Mech. 88(7), 540–551 (2008)
18. Podlubny, I.: Fractional Differential Equations. Academic Press, San Diego (1999)
19. Potapov, V.D.: J. Sound Vib. 173, 301–308 (1994)
20. Potapov, V.D.: Appl. Numer. Math. 24, 191–201 (1997)
21. Roberts, J.B., Spanos, P.D.: Int. J. Non-Linear Mech. 21, 111–134 (1986)
22. Rosenstein, M.T., Collins, J.J., De Luca, C.J.: Phys. D 65, 117–134 (1993)
23. Rouse, Jr., P.E.: J. Chem. Phys. 21(7), 1272–1280 (1953)
24. Sri Namachchivaya, N., Ariaratnam, S.T.: Mech. Struct. Mach. 15(3), 323–345 (1987)
25. Stratonovich, R.L.: Topics in the Theory of Random Noise. Gordon and Breach Science Publishers, New York (1963)
26. Wolf, A., Swift, J., Swinney, H., Vastano, A.: Phys. D 16, 285–317 (1985)
27. Xie, W.C.: Dynamic Stability of Structures. Cambridge University Press, Cambridge (2006)
Chapter 15
Model Selection for Random Functions
with Bounded Range: Applications
in Science and Engineering

R.V. Field, Jr. and M. Grigoriu

Abstract Differential, integral, algebraic, and other equations can be used to


describe the many types of systems encountered in applied science and engineering.
Because of uncertainty, the specification of these equations often requires proba-
bilistic models to describe the uncertainty in input and/or system properties. Since
the available information on input and system properties is typically limited, there
may be more than one model that is consistent with the available information. The
collection of these models is referred to as the collection of candidate models C .
A main objective in model selection is the identification of the member of C which
is optimal in some sense.
Methods are developed for finding optimal models for random functions under
limited information. The available information consists of: (a) one or more samples
of the function and (b) knowledge that the function takes values in a bounded set, but
whose actual boundary may or may not be known. In the latter case, the boundary
of the set must be estimated from the available samples. The methods are developed
and applied to the special case of non-Gaussian random functions referred to as
translation random functions. Numerical examples are presented to illustrate the
utility of the proposed approach for model selection, including optimal continuous
time stochastic processes for structural reliability, and optimal random fields for
representing material properties for applications in mechanical engineering.

Keywords: Decision theory · Model selection · Random fields · Shock and
vibration · Stochastic processes

R.V. Field, Jr.
Sandia National Laboratories, Albuquerque, NM, USA
e-mail: rvfield@sandia.gov
M. Grigoriu
Cornell University, Ithaca, NY, USA
e-mail: mdg12@cornell.edu

A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 247
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_15, © Springer Science+Business Media New York 2013

15.1 Introduction

Stochastic processes, random fields, and other random functions are often used
to model phenomena that occur randomly in nature. Properties of real, physical
systems, whether deterministic or random, always take values in bounded sets.
For example, material properties and time-varying inputs to, and outputs from, a
physical system cannot be infinitely large. Time series of financial, geological, and
other physical systems do not exhibit arbitrarily large jumps. Other examples from
nature include experimental measurements of wind forces on structures [19, 29],
ocean wave elevation [24], soil particle size [2], highway/railway elevation [20, 25],
and Euler angles of atomic lattice orientation (see [4] and [16], Sect. 8.6.2.2).
While each of these quantities is known to be bounded, the values for the
bounds themselves are often unknown. Gaussian models, which have unbounded
support and therefore cannot accurately represent real, physical phenomena, are
often used in practice. The Gaussian model can be an adequate choice for many
applications, but this is not always the case. For example, engineering systems
designed to conform with the Gaussian assumption may be needlessly
over-conservative. Herein, we limit the discussion to non-Gaussian stochastic models
that take values in a bounded set, where the boundary itself may or may not be
known. In the case of the latter, the boundary must be estimated, together with other
model parameters, from the available information.
Let X denote a random function describing a particular physical quantity, and
suppose that information on X is limited to one or more samples of this function,
as well as some features of it, e.g., its second moment properties and/or support.
A ranking procedure is used for selecting the optimal model for X from a finite
collection of model candidates, i.e., models that are consistent with all available
information and, therefore, cannot be neglected. The model candidates we consider
herein are translation random functions, that is, memoryless mappings of
Gaussian random functions. Our objectives are to: (a) find a probabilistic model
for X that is optimal in some sense and (b) illustrate the proposed method for model
selection by example. Applications include optimal models for stationary stochastic
processes taking values on a bounded interval, where the bounds may or may not
be known, and optimal models for homogeneous random fields used to represent
material property variability within an aerospace system.
It is shown that the solution of the model selection problem for random functions
X with a bounded support differs in a significant way from that of functions with
unbounded support. For example, the performance of the optimal model for X
depends strongly on the accuracy of the estimated range of this function. Predictions
of various properties of X can be inaccurate even if its range is only slightly in error.
Satisfactory estimates for the range require much larger samples of X than those
needed to estimate, for example, the parameters of the correlation function or the
marginal distribution of X.
In Sect. 15.2 we review essentials of translation random functions, including how
to calibrate these models to available data. Two general methods for model selection
are briefly summarized in Sect. 15.3, then applied to a series of example problems
in Sect. 15.4.

15.2 The Translation Model

Let X denote a translation model, a particular type of non-Gaussian random function


defined by a memoryless transformation of a Gaussian random function with
specified second-moment properties. This class of stochastic model has been used,
for example, to assess the dynamic response of a micro-electrical-mechanical
system (MEMS) switch to random excitation [9], for the seismic analysis of civil
engineering structures [23], for representing aggregates in concrete [17], and as a
surrogate model for assessing climate change [8]. Generalities of translation models
are discussed in Sect. 15.2.1; methods for calibrating these models to available data
are discussed in Sect. 15.2.2.

15.2.1 Generalities

Let G(v) ∈ R^d, v ∈ D, be a homogeneous Gaussian random function, that is,
arbitrary linear forms of G(v_k), v_k ∈ D, are R^d-valued Gaussian random variables.
We refer to G as a Gaussian stochastic process or a Gaussian random field for the
case of D = [0, ∞) or D ⊂ R^d′, respectively, where d′ ≥ 1 is an integer. Further,
the argument of G is viewed as time for stochastic processes and space for random
fields. The coordinates of G have zero mean, unit variance, and covariance functions
ρ_ij(τ) = E[G_i(v + τ) G_j(v)], i, j = 1, . . . , d, where E[·] denotes expectation.
Consider a continuous mapping h : R^d → R^d. The non-Gaussian random function
defined by

X(v) = h[G(v)]    (15.1)

is called a translation random function. It is common to define the coordinates of
X by the transformations X_i = h_i[G_i(v)] = F_i⁻¹[Φ(G_i(v))], i = 1, . . . , d, where {F_i}
are some cumulative distribution functions (CDFs), and Φ denotes the CDF of a
standard Gaussian random variable with zero mean and unit variance. Function G is
commonly referred to as the Gaussian image of X. Further, because G is stationary
and h is v-invariant, X is stationary in the strict sense [15].
The second-moment properties and marginal CDFs of X, the translation model
defined by Eq. (15.1), can be expressed in terms of mapping h and the covariance
function of its Gaussian image. For example, the marginal CDFs and correlation
functions of X = (X_1, . . . , X_d)′ are given by

P(X_i(v) ≤ x) = P(F_i⁻¹[Φ(G_i(v))] ≤ x)
             = P(G_i(v) ≤ Φ⁻¹(F_i(x))) = F_i(x),   i = 1, . . . , d,    (15.2)

and

E[X_i(v + τ) X_j(v)] = E[h_i(G_i(v + τ)) h_j(G_j(v))]

= ∫∫_{R²} h_i(x) h_j(y) φ₂(x, y; ρ_ij(τ)) dx dy,   i, j = 1, . . . , d,    (15.3)

respectively, where φ₂(x, y; ρ) denotes the joint density of a standard bivariate
Gaussian vector with correlation coefficient ρ, i.e.,

φ₂(x, y; ρ) = [1 / (2π √(1 − ρ²))] exp{−(x² + y² − 2ρxy) / (2(1 − ρ²))}.    (15.4)
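The translation construction in Eqs. (15.1) and (15.2) can be illustrated numerically. The sketch below (ours, not from the chapter) passes an AR(1) discretization of a stationary Gaussian process through h = F⁻¹∘Φ and checks that the empirical marginal of the resulting sample matches the target beta CDF. The beta(2, 3) marginal, the covariance exp(−|τ|), and all sample sizes and seeds are our choices.

```python
import math
import random
from statistics import NormalDist

Phi = NormalDist().cdf  # standard Gaussian CDF

def F(y):
    # beta(q=2, r=3) CDF on [0, 1]; closed form for this special case
    return 6*y**2 - 8*y**3 + 3*y**4

def F_inv(u, tol=1e-10):
    # invert the strictly increasing beta CDF by bisection
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if F(mid) < u else (lo, mid)
    return 0.5*(lo + hi)

def gaussian_image(m, dv=0.05, seed=1):
    # AR(1) sample of G with E[G(v + tau) G(v)] = exp(-|tau|) at lags k*dv
    rng = random.Random(seed)
    a = math.exp(-dv)
    g = [rng.gauss(0.0, 1.0)]
    for _ in range(m - 1):
        g.append(a*g[-1] + math.sqrt(1.0 - a*a)*rng.gauss(0.0, 1.0))
    return g

g = gaussian_image(20000)
x = [F_inv(Phi(gk)) for gk in g]  # translation sample, Eq. (15.1)

for t in (0.2, 0.4, 0.6):  # empirical vs exact marginal CDF, Eq. (15.2)
    emp = sum(xk <= t for xk in x) / len(x)
    print(f"P(X <= {t}) ~ {emp:.3f}  (exact {F(t):.4f})")
```

Because the sample is correlated, the empirical CDF converges more slowly than it would for independent draws, which is why a fairly long record is used here.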

An alternate and more convenient representation of the second-moment properties
of X is given by a normalized version of the corresponding covariance function,
that is

ξ_ij(τ) = (E[X_i(v + τ) X_j(v)] − μ_i μ_j) / σ_ij²,    (15.5)

where E[X_i(v + τ) X_j(v)] is given by Eq. (15.3), μ_i = E[X_i], and σ_ij² = E[X_i(v) X_j(v)]
− μ_i μ_j. By Eq. (15.5), ξ_ij(τ) takes values in [−1, 1] and, by Eq. (15.3), depends on
ρ_ij(τ). Closed-form expressions for ξ_ij in terms of ρ_ij are difficult to obtain, but
it can be shown that (see [15], Sect. 3.1.1): (a) ρ_ij(τ) = 0 and ρ_ij(τ) = 1 imply
ξ_ij(τ) = 0 and ξ_ij(τ) = 1, respectively; (b) ξ_ij is an increasing function of ρ_ij;
(c) ξ_ij(τ) and ρ_ij(τ) satisfy |ξ_ij(τ)| ≤ |ρ_ij(τ)|, τ ∈ D; and (d) ξ_ij is bounded from
below by ξ*_ij ≥ −1, where

ξ*_ij = (E[h_i(G_i(v)) h_j(−G_i(v))] − μ_i μ_j) / σ_ij².    (15.6)

Random functions with covariance functions smaller than ξ*_ij cannot be represented
by translation models. Further, even if all covariance functions for X are in
the range [ξ*_ij, 1], it does not mean that its image in the Gaussian space is a
covariance function, since ρ_ii may not be positive definite (see [15], Sect. 3.1.1).
Hence arbitrary combinations of marginal CDFs {F_i} and covariance functions
{ξ_ij} are not permissible. If, however, we postulate {ρ_ij}, the resulting {ξ_ij} are
always proper covariance functions.
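Properties (b) and (c) above can be verified numerically for a concrete case. The sketch below (our illustration; the beta(2, 3) marginal, the quadrature grid, and the tested ρ values are assumptions) evaluates the scaled covariance ξ(ρ) of a scalar beta translation model via the double integral of Eq. (15.3) and confirms that ξ is increasing in ρ with |ξ(ρ)| ≤ |ρ|.

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf

def F(y):
    # beta(q=2, r=3) CDF on [0, 1]
    return 6*y**2 - 8*y**3 + 3*y**4

def h(z, tol=1e-10):
    # translation mapping h = F^{-1} o Phi, inverted by bisection
    u = Phi(z)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if F(mid) < u else (lo, mid)
    return 0.5*(lo + hi)

def xi(rho, lim=6.0, n=200):
    # scaled covariance, Eq. (15.5), via midpoint quadrature of Eq. (15.3)
    dx = 2.0*lim/n
    nodes = [-lim + (k + 0.5)*dx for k in range(n)]
    hv = [h(z) for z in nodes]  # precompute h on the grid
    c = 1.0/(2.0*math.pi*math.sqrt(1.0 - rho*rho))
    exy = 0.0
    for i, zx in enumerate(nodes):
        for j, zy in enumerate(nodes):
            phi2 = c*math.exp(-(zx*zx + zy*zy - 2.0*rho*zx*zy)
                              / (2.0*(1.0 - rho*rho)))
            exy += hv[i]*hv[j]*phi2
    exy *= dx*dx
    mu, var = 0.4, 0.04  # exact beta(2, 3) mean and variance
    return (exy - mu*mu)/var

vals = {rho: xi(rho) for rho in (-0.9, -0.5, 0.5, 0.9)}
for rho in sorted(vals):
    print(f"rho = {rho:+.1f}  ->  xi = {vals[rho]:+.4f}")
```

Because the beta(2, 3) marginal is asymmetric, ξ(−0.9) is noticeably farther from −0.9 than ξ(0.9) is from 0.9, illustrating the lower bound ξ* discussed in property (d).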
The definition of the translation random function in Eq. (15.1) holds for
distributions {F_i} with probability mass in bounded intervals, intervals bounded to the
left/right, or the entire real line. If all components of X(v) take values in bounded
intervals, the function is said to have a bounded range. There is conceptually
no difference between the treatment of translation functions with and without
bounded range.

15.2.2 Calibration

As demonstrated in Sect. 15.2.1, the probability law of the translation model defined
by Eq. (15.1) is completely defined by the marginal CDFs {F_i} and the covariance
functions {ρ_ij} of its Gaussian image. Motivated by the discussion above, we do not
specify the covariance functions of X directly, but rather the covariance of G, so as to
guarantee X has a proper covariance function. The calibration of the marginal CDF
and covariance function are discussed in Sects. 15.2.2.1 and 15.2.2.2, respectively.
For clarity, we will assume for the remainder of the discussion that: (a) X is a
scalar-valued random function, i.e., d = 1; (b) D ⊆ R so that X = X(v),
−∞ < v < ∞; and (c) X is an ergodic process, so that model calibration can be
performed using a single sample, denoted by x = (x_1, x_2, . . . , x_m)′, where x_k = x(v_k)
and Δv = v_{k+1} − v_k, k = 1, . . . , m − 1, is assumed constant. The generalization of the
results of this section to the case of R^d-valued random functions where D ⊂ R^d′,
d, d′ > 1, is straightforward.

15.2.2.1 Marginal Probability Law

Let X be a translation random function with marginal CDF F that depends on a set of
parameters θ. We denote this dependence by F(x; θ); the corresponding marginal
PDF of X is f(x; θ) = dF(x; θ)/dx. Calibration of the marginal probability law
for X to the available data x = (x_1, x_2, . . . , x_m)′ requires two steps: (a) choose the
functional form for F and (b) calibrate θ, the associated parameters of F.
The objective of step (a) is to select a marginal distribution function F that
is sufficiently flexible to capture any desired behavior observed in the data x and is
consistent with the known physics. For example, in climate modeling, if X models
precipitation rate, the distribution function must have support on the positive real
line with positive skewness; the lognormal distribution satisfies these constraints
and is often used to model precipitation rate [28]. Herein we consider the class of
beta translation models to represent random phenomena with bounded range. Hence,
mapping h defined by Eq. (15.1) is such that the marginal distributions of X are that
of a beta random variable, meaning that for each fixed v ∈ D, random variable X(v)
is equal in distribution to a beta random variable, i.e.,

X(v) =_d a + (b − a) Y,   v ∈ D,    (15.7)

where Y is a standard beta random variable taking values in [0, 1]. The probability
density function (PDF) of Y is [3]

f(y; q, r) = [1 / B(q, r)] y^(q−1) (1 − y)^(r−1),   0 ≤ y ≤ 1,    (15.8)

where q, r > 0 are deterministic shaping parameters, and B(q, r) = Γ(q) Γ(r) / Γ(q +
r) and Γ(·) denote the beta and gamma functions, respectively (see [1], Sects. 6.1
and 6.2).

The marginal CDF of process X(v) is given by

F(x; θ) = P(X(v) ≤ x)
        = P(Y ≤ (x − a)/(b − a)) = [1 / B(q, r)] ∫₀^y v^(q−1) (1 − v)^(r−1) dv = I_y(q, r),    (15.9)

for x ∈ [a, b], where y = (x − a)/(b − a) and I_y(·, ·) denotes the incomplete beta
function ratio (see [21], Sect. 25.1). By Eq. (15.9), the marginal probability law of
X is completely defined by parameters θ = (a, b, q, r)′. A wide variety of symmetric
(q = r) and asymmetric (q ≠ r) distributions are possible; the flexibility of the
PDF defined by Eq. (15.8) makes the beta distribution a very useful model for
representing random phenomena with bounded range.
Suppose first that parameters a and b defining the range of X are known.
Maximum likelihood estimators q̂ and r̂ for parameters q and r are readily available
(see [21], Sect. 25.4). If the range of X is unknown, the identical estimators can be
used for a collection of trial values for a and b. For example, consider

a_i = a_1 − (i − 1) ε Δ
b_i = b_1 + (i − 1) ε Δ    (15.10)

for i = 2, . . . , n, where a_1 = min_{1≤k≤m}(x_k), b_1 = max_{1≤k≤m}(x_k), Δ = (b_1 − a_1)/2,
and ε > 0 is a deterministic parameter. By Eq. (15.10), {[a_i, b_i], i = 1, . . . , n} is
a monotone increasing sequence of intervals, i.e., [a_1, b_1] ⊂ · · · ⊂ [a_n, b_n], and the
sample minimum and maximum are contained within each interval. As the value
for ε increases, each interval within the collection gets wider; the value for ε
therefore reflects how conservative we are when estimating the range of X. The
corresponding estimates q̂_i and r̂_i for shape parameters q_i and r_i can be obtained by
the standard approach discussed in [22], Sect. 6.1, with (a, b) replaced by (a_i, b_i).
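The trial-interval construction of Eq. (15.10) can be sketched as follows (our illustration; the chapter calibrates (q, r) by maximum likelihood, whereas for a self-contained example we substitute simple moment-matching estimators, and the synthetic beta sample is invented).

```python
import random

def trial_intervals(x, n=6, eps=0.1):
    # Eq. (15.10): [a_1, b_1] = [min x, max x], then widen by eps*Delta per step
    a1, b1 = min(x), max(x)
    delta = 0.5*(b1 - a1)
    return [(a1 - i*eps*delta, b1 + i*eps*delta) for i in range(n)]

def beta_shape_by_moments(x, a, b):
    # rescale the data to [0, 1] and match the first two beta moments
    y = [(xk - a)/(b - a) for xk in x]
    m = sum(y)/len(y)
    s2 = sum((yk - m)**2 for yk in y)/len(y)
    common = m*(1.0 - m)/s2 - 1.0
    return m*common, (1.0 - m)*common  # (q, r)

rng = random.Random(0)
x = [rng.betavariate(2, 3) for _ in range(200)]  # stand-in for the data record

for a, b in trial_intervals(x):
    q, r = beta_shape_by_moments(x, a, b)
    print(f"[{a:+.4f}, {b:+.4f}] -> q = {q:.3f}, r = {r:.3f}")
```

As the intervals widen, the fitted shape parameters grow, mirroring the pattern visible in Table 15.1.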

15.2.2.2 Covariance Function

Suppose that: (a) the marginal distribution of the translation model X defined by
Eq. (15.1) is known so that its Gaussian image is G(v) = Φ⁻¹[F(X(v); θ)]; and
(b) ρ(τ; ψ) = E[G(v) G(v + τ)], the covariance function of the Gaussian image of X,
has known functional form but unknown parameter vector ψ. We next provide two
methods to estimate values for ψ.
For method #1, we choose the vector ψ̂ for ψ that minimizes the following error

e₁(ψ) = ∫₀^τ̄ w(τ) [ρ̂(τ) − ρ(τ; ψ)]² dτ,    (15.11)

where ρ̂(τ) denotes an estimate of the covariance function of G (see [5], Sect. 11.4)
obtained from vector g = (g_1, . . . , g_m)′ with elements

g_k = Φ⁻¹[F(x_k; θ)],    (15.12)

w(τ) ≥ 0 is a deterministic weighting function, and τ̄ < (m − 1)Δv is a constant.
One choice for the weighting function is given by

w(τ) = 1 for 0 ≤ τ ≤ α τ̄, and w(τ) = 0 else,    (15.13)

where α ∈ (0, 1) is a constant. By Eq. (15.13), w(τ) is a rectangular pulse of unit
height and width α τ̄ so that error e₁ depends only on the difference between ρ̂
and ρ at lags τ ≤ α τ̄; alternatives to w(τ) defined by Eq. (15.13) can be used if
appropriate.
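A discrete version of method #1 can be sketched as follows (our illustration, not the chapter's implementation): the lag-domain integral in Eq. (15.11) becomes a finite sum over lags τ_k = kΔv with a rectangular weight, and ψ is found by grid search for the exponential candidate ρ(τ; ψ) = exp(−ψ|τ|). The AR(1) data generator, the lag cutoff, and the search grid are assumptions.

```python
import math
import random

def gaussian_image(m, psi_true=1.0, dv=0.05, seed=3):
    # AR(1) sample with covariance exp(-psi_true*|tau|) at lags k*dv
    rng = random.Random(seed)
    a = math.exp(-psi_true*dv)
    g = [rng.gauss(0.0, 1.0)]
    for _ in range(m - 1):
        g.append(a*g[-1] + math.sqrt(1.0 - a*a)*rng.gauss(0.0, 1.0))
    return g

dv, n_lags = 0.05, 40
g = gaussian_image(5000)

# hat{rho}(k*dv): biased sample covariance estimator of the Gaussian image
rho_hat = [sum(g[i]*g[i + k] for i in range(len(g) - k))/len(g)
           for k in range(n_lags + 1)]

def e1(psi):
    # discretized Eq. (15.11) with a rectangular weight over the kept lags
    return dv*sum((rho_hat[k] - math.exp(-psi*k*dv))**2
                  for k in range(n_lags + 1))

grid = [0.05*j for j in range(1, 101)]  # candidate psi values in (0, 5]
psi_hat = min(grid, key=e1)
print("psi_hat =", psi_hat)
```

With a record of this length the estimate lands near the true value ψ = 1, but for short records the covariance estimates, and hence ψ̂, can be quite noisy, which is the behavior discussed for the examples in Sect. 15.4.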
By Eq. (15.12), g is the Gaussian image of x = (x_1, x_2, . . . , x_m)′, the available
sample of X. Further, g is one sample of a zero-mean Gaussian random vector with
covariance matrix γ(ψ), where the elements of γ(ψ) are given by

γ(ψ)_{i,j} = ρ(|i − j| Δv; ψ),   i, j = 1, . . . , m.    (15.14)

By Eq. (15.14), γ(ψ) is positive definite and invertible. The likelihood that g was
drawn from a zero-mean Gaussian vector with covariance matrix γ(ψ), for a fixed
ψ, is given by (see [30], Chap. 8)

ℓ(g | ρ(·; ψ)) = [(2π)^m det(γ(ψ))]^(−1/2) exp{−(1/2) g′ γ(ψ)⁻¹ g},    (15.15)

where det(γ(ψ)) > 0 denotes the determinant of matrix γ(ψ). For method #2, we
choose ψ̂ for ψ that minimizes

e₂(ψ) = −2 ln ℓ(g | ρ(·; ψ))
      = m ln 2π + Σ_{k=1}^m ln(λ_k) + g′ γ(ψ)⁻¹ g,    (15.16)

where λ_k > 0, k = 1, . . . , m, denote the eigenvalues of matrix γ(ψ). By Eq. (15.16),
parameter vector ψ̂ will maximize the likelihood function defined by Eq. (15.15).
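Method #2 can be sketched with NumPy as follows (our illustration; the synthetic data, the exponential covariance candidate, the grid search, and the availability of NumPy are all assumptions).

```python
import numpy as np

m, dv, psi_true = 400, 0.25, 1.0
idx = np.arange(m)

def gamma(psi):
    # covariance matrix of the Gaussian image, Eq. (15.14), for the
    # exponential candidate rho(tau; psi) = exp(-psi*|tau|)
    return np.exp(-psi*dv*np.abs(idx[:, None] - idx[None, :]))

# draw one exact sample g ~ N(0, gamma(psi_true)) via a Cholesky factor
rng = np.random.default_rng(0)
g = np.linalg.cholesky(gamma(psi_true)) @ rng.standard_normal(m)

def e2(psi):
    # Eq. (15.16): m*ln(2*pi) + ln det gamma + g' gamma^{-1} g
    G = gamma(psi)
    _, logdet = np.linalg.slogdet(G)
    return m*np.log(2.0*np.pi) + logdet + g @ np.linalg.solve(G, g)

grid = np.linspace(0.2, 3.0, 29)  # candidate values of psi
psi_hat = grid[np.argmin([e2(p) for p in grid])]
print("psi_hat =", float(psi_hat))
```

Evaluating the log-determinant through `slogdet` and solving the linear system, rather than forming γ(ψ)⁻¹ or the likelihood of Eq. (15.15) directly, avoids the numerical underflow that makes ℓ itself unusable for large m.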
Throughout this section, we have assumed the functional forms for both the
marginal CDF of X and the covariance function of its Gaussian image are perfectly
known, and we provided methods to estimate values for the unknown parameters
of these functions. In general, the functional forms for F and ρ are also unknown.
In this case, the methods defined by Eqs. (15.11) and (15.16) can be repeated for
a sequence of n ≥ 1 candidate marginal CDFs and p ≥ 1 candidate covariance
functions, resulting in the collection {ρ_1(τ; ψ̂_11), . . . , ρ_1(τ; ψ̂_n1), . . . ,
ρ_p(τ; ψ̂_np)}, where ψ̂_ij denotes an estimate for ψ_ij, the parameter vector for
covariance function ρ_j, j = 1, . . . , p, assuming marginal CDF F_i, i = 1, . . . , n.

15.3 Model Selection

It is rare in applications for there to be sufficient information to uniquely define a


model for a physical quantity, particularly in the presence of model and/or exper-
imental uncertainty. Rather, two or more models can be found that are consistent
with the available information, and the issue becomes how to systematically choose
one model from the collection for studies in prediction, design, and/or optimization.
Model selection (see [12, 13, 18]) is a procedure to achieve this objective, and
involves three ingredients: (a) the available information on X; (b) a collection of
candidate models for X, denoted by C; and (c) a procedure by which we rank the
members of C. The optimal model is then defined by the candidate model with the
highest ranking.
We assume the available information on X is given by the following: (a) one
sample of X, denoted by x = (x_1, . . . , x_m)′; and (b) some prior knowledge about the
physics of X. For example, physics may dictate that X is positive or takes values in
a bounded interval. Motivated by Sect. 15.2, the collection of candidate models for
X we will consider is given by

C = { X_ij(v) = F_i⁻¹[Φ(G_j(v))], i = 1, . . . , n, j = 1, . . . , n′ },    (15.17)

where each F_i = F_i(x; θ_i) is consistent with any prior knowledge on X, and G_j is a
zero-mean, stationary Gaussian process with covariance function ρ_j(τ; ψ̂_ij) = E[G_j
(v) G_j(v + τ)], where ψ̂_ij denotes an estimate for ψ_ij as discussed in Sect. 15.2.2.2.
For the case of bounded range, F_i = F(x; a_i, b_i, q̂_i, r̂_i), where (a_i, b_i) and (q̂_i, r̂_i)
denote trial values and estimators, respectively, for range (a, b) and shape parameters
(q, r) as discussed in Sect. 15.2.2.1, and CDF F is defined by Eq. (15.9).
Let g_i denote the vector g defined by Eq. (15.12) with F replaced by F_i, and define

p_ij = c ℓ(g_i | ρ_j(·; ψ̂_ij)),   i = 1, . . . , n, j = 1, . . . , n′,    (15.18)

where ℓ(g_i | ρ_j(·; ψ̂_ij)), defined by Eq. (15.15), is a measure of the likelihood that
sample g_i was drawn from a zero-mean Gaussian process with covariance function
ρ_j(·; ψ̂_ij), and

c⁻¹ = Σ_{i=1}^n Σ_{j=1}^{n′} ℓ(g_i | ρ_j(·; ψ̂_ij))    (15.19)

is a scaling factor. We can interpret p_ij to be the probability that candidate model
X_ij ∈ C is the best available model for X since, by Eqs. (15.18) and (15.19), each
p_ij ≥ 0 and Σ_{i,j} p_ij = 1.
Our objective is to rank the members of C and select the candidate model for
X with the highest rank; the winning model is referred to as the optimal model and
denoted by X* ∈ C. We present two procedures for ranking the candidate models in
Sects. 15.3.1 and 15.3.2. Both methods make use of the model probabilities {p_ij}
defined by Eq. (15.18) to assign rankings.

15.3.1 Optimal Model by Classical Method

One technique to select X* ∈ C is to consider only the model probabilities defined
by Eq. (15.18); we refer to this approach as the classical method for model selection.
By the classical method, candidate model X_ij ∈ C is optimal and denoted by X* ∈ C
if, and only if,

p_ij ≥ p_ks,   k = 1, . . . , n, s = 1, . . . , n′.    (15.20)

The optimal model under the classical method for model selection depends on the
available information on X, as well as the collection of candidate models considered
in the analysis, C. It has been demonstrated that estimates for p_ij can be unstable
when the available data is limited [12].
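The classical method amounts to normalizing the candidate likelihoods and taking the arg-max. Because the likelihood of Eq. (15.15) underflows for large m, it is safer in practice to work with log-likelihoods and subtract the maximum before exponentiating, as the sketch below shows (the log-likelihood values are invented for illustration).

```python
import math

def model_probabilities(loglik):
    """Map {(i, j): ln l(g_i | rho_j)} to model probabilities p_ij,
    per Eqs. (15.18)-(15.19), computed stably in log space."""
    mx = max(loglik.values())
    w = {k: math.exp(v - mx) for k, v in loglik.items()}  # likelihood ratios
    c = sum(w.values())
    return {k: v / c for k, v in w.items()}

# invented log-likelihoods for a 2 x 2 collection of candidate models
loglik = {(1, 1): -512.4, (1, 2): -530.9,
          (2, 1): -515.0, (2, 2): -541.2}
p = model_probabilities(loglik)
best = max(p, key=p.get)  # classical selection, Eq. (15.20)
print("optimal model:", best, "with probability", round(p[best], 4))
```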

15.3.2 Optimal Model by Decision-Theoretic Method

An alternative technique, consistent with the approach developed in [12, 18], is to


instead assess the utility of each candidate model for its intended purpose, then
select X* ∈ C based on its expected utility. Principles of decision theory [6] are
used; we therefore refer to this approach as the decision-theoretic method for model
selection. This approach has proved successful when applied to high risk systems
with a fair understanding of the consequences of unsatisfactory system behavior.
Example applications include the shock response mitigation of a flexible structure
during penetration of a hard target [7], the detection and monitoring of vehicles in
the vicinity of critical national assets [10], and aerodynamics models for turbulent
pressure fluctuations during atmospheric re-entry of a spacecraft [11].
Let U(X_ij, X_ks) ≥ 0 denote the utility of model X_ij ∈ C assuming model X_ks ∈ C
is the best available. Candidate model X_ij ∈ C is optimal if, and only if,

u_ij ≤ u_ks,   k = 1, . . . , n, s = 1, . . . , n′,    (15.21)

where

u_ij = E[U(X_ij, C)] = Σ_{k=1}^n Σ_{s=1}^{n′} U(X_ij, X_ks) p_ks    (15.22)

denotes the expected utility of candidate model X_ij, and p_ks is defined by
Eq. (15.18). Hence, by Eq. (15.22), the winning model achieves a good fit to the
available data while being most appropriate for the intended purpose. The utility,
U, is sometimes referred to as the opportunity loss (see [27], p. 60), so that the
solution to Eq. (15.21) agrees with intuition, i.e., X* ∈ C minimizes the expected
loss. Utility is the traditional term used in the literature, so we will use it herein.
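A minimal sketch of Eqs. (15.21) and (15.22) follows; the loss table and model probabilities below are invented for illustration.

```python
def expected_utilities(U, p):
    # Eq. (15.22): u_a = sum over b of U(a, b) * p_b
    return {a: sum(U[(a, b)] * p[b] for b in p) for a in p}

# invented model probabilities (Eq. (15.18)) and opportunity-loss table:
# U[(a, b)] = loss incurred by using model a when model b is in fact best
p = {"X1": 0.55, "X2": 0.30, "X3": 0.15}
U = {("X1", "X1"): 0.0, ("X1", "X2"): 2.0, ("X1", "X3"): 8.0,
     ("X2", "X1"): 1.0, ("X2", "X2"): 0.0, ("X2", "X3"): 3.0,
     ("X3", "X1"): 4.0, ("X3", "X2"): 2.0, ("X3", "X3"): 0.0}

u = expected_utilities(U, p)
optimal = min(u, key=u.get)  # Eq. (15.21): smallest expected loss
print("decision-theoretic optimum:", optimal, u)
```

With these numbers the classical method of Sect. 15.3.1 would select X1, the most probable candidate, but the asymmetric loss structure makes X2 the decision-theoretic optimum; the two criteria need not agree.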

The optimal model under the decision-theoretic method depends on the available
information on X and the collection of candidate models, C; unlike the classical
method, X* also depends on U, the utility function. Hence, we expect X* to be
different when different utility functions are used, i.e., when the model purpose
changes. We note that there can be significant uncertainty in the definition of
the utility function when the consequences of unsatisfactory system behavior are
not well understood; the decision-theoretic method for model selection may be
inappropriate in this case [12].

15.4 Applications

Optimal models for a stationary stochastic process taking values on a bounded


interval, where the bounds may or may not be known, are discussed in Sect. 15.4.1.
The purpose of this example is to demonstrate the principles of model selection
in a rigorous and concise manner for the case of bounded range. Abundant data is
assumed so that classical methods for model selection can be used.
We then present a more realistic application of model selection in Sect. 15.4.2,
where the response of an aerospace system to an applied mechanical shock is
considered. Significant spatial variation in system material properties is known to
occur, but measurements of these variations are quite limited. The system fails
if certain measures of response of a critical component exceed known thresholds,
and we assume a fair understanding of the consequences of unsatisfactory system
behavior. We therefore model material property spatial variation with random fields
taking values on known bounded intervals and apply the decision-theoretic method
for model selection outlined by Sect. 15.3.2 to assess system reliability.

15.4.1 Optimal Model for Continuous-Time Stochastic Process

Let X(t), t ≥ 0, be a stationary, real-valued stochastic process taking values
on a bounded interval [a, b], where the bounds may or may not be known. Suppose
that one sample of X(t), denoted by x(t), is available. In Sect. 15.4.1.1 we assume
that, in addition to the available sample, the covariance function of the Gaussian
image of X is provided but the parameters defining the marginal distribution of
X remain unknown. In Sect. 15.4.1.2, the marginal distribution of X is provided
but the covariance function of G remains unknown. The general case where both
the marginal distribution and covariance functions are unknown is considered in
Sect. 15.4.1.3.
For calculations, x(t) is drawn from a beta translation process with the following
two properties. First, the image of X is a stationary Gaussian process G with zero
mean, unit variance, and covariance function ρ(τ) = exp(−|τ|), τ ∈ R. Second, the
marginal distribution of X is that of a beta random variable with parameters a = 0,
b = 1, q = 2, and r = 3. The available data, sampled with time step Δt = 0.05 s, is
illustrated in Fig. 15.1.

Fig. 15.1 Available data, x(t), for stochastic process model selection. Taken from [13]
© Elsevier Science Ltd (2009)

Table 15.1 Calibrated model parameters for stochastic process with known
covariance assuming ε = 1/10 and m = 200

Candidate model, X_i ∈ C    a_i         b_i        q̂_i      r̂_i
X1                          0           1          2.9261    3.8930
X2                          0.11592     0.81877    0.7785    0.9071
X3                          0.08077     0.85391    1.6641    1.9960
X4                          0.04563     0.88906    2.1051    2.4996
X5                          0.01049     0.92420    2.5687    3.0193
X6                          −0.02466    0.95934    3.0627    3.5662

15.4.1.1 Known Covariance

Suppose X is a beta translation process of G, a perfectly known zero-mean,
stationary Gaussian process with covariance function ρ(τ) = exp(−|τ|); the parameters
defining the translation, however, are unknown. In this case, Eq. (15.17) reduces to

C = { X_i(t) = F_i⁻¹[Φ(G(t))], i = 1, . . . , n },    (15.23)

where F_i is the distribution of a beta random variable on interval [a_i, b_i] with shape
parameters q̂_i, r̂_i, for i = 1, . . . , n. Five intervals are considered for the range of X
using the trial values defined by Eq. (15.10) in Sect. 15.2.2.1 assuming ε = 1/10
and m = 200. Added to the beginning of this collection of intervals is the correct range,
[a, b] = [0, 1], so that a total of n = 6 candidate models are considered. The values
for all model parameters are listed in Table 15.1. Candidate model X1 has the correct
range; we therefore refer to X1 as the correct model for X. The ranges of candidate
models X2, . . . , X6 depend on the available data and form a monotone increasing
sequence of intervals.
The likelihoods and model probabilities, defined by Eqs. (15.15) and (15.18),
respectively, are illustrated in Fig. 15.2 for each candidate model in C. Note that

Fig. 15.2 Log-likelihoods (a) and model probabilities (b) for each candidate model in C as a
function of sample length, m. Taken from [13] © Elsevier Science Ltd (2009)

the natural log of the likelihood is shown in Fig. 15.2a for clarity. Both results are
shown as a function of the length m of the available sample. For example, m =
100 and m = 500 correspond to the first 100 Δt = 5 s and the first 500 Δt = 25 s,
respectively, of x(t) illustrated in Fig. 15.1. Three important observations can be
made. First, for short samples (m < 100), the estimates for the range of X are highly
inaccurate and can change dramatically when the sample size increases. As a result,
the optimal model for short samples, i.e., the model with the greatest probability, can
be any X_i ∈ C. Second, the log-likelihood of candidate model X2 is very small when
compared to the log-likelihoods of the other candidate models for X. The values
computed are out of the range of Fig. 15.2a; the corresponding model probability,
p_2, is near zero for all values of m. We therefore conclude that candidate model X2
defined by range [a_2, b_2] = [min x(t), max x(t)] is a poor choice for all values of m
considered. Third, as the sample length increases, the model with the correct range,
X1, becomes optimal.
Results are very sensitive to our estimates for the range of X. Overly large or
small values for ε defined by Eq. (15.10) can deliver unsatisfactory results. Consider
Fig. 15.3, which shows the probability p_1 of model X1 ∈ C as a function of sample
length, m, and interval size ε. Recall that the range of model X1 is correct, i.e.,
[a_1, b_1] = [a, b]. For large values of ε, the correct model for X has a very low
probability of being selected since p_1 is near zero. Small values for ε can also be
problematic since they can result in very inaccurate estimates for the range of X.
In this case, the image of x(t), denoted by g(t) and defined by Eq. (15.12), can
be highly non-Gaussian and a poor model for X will result. To illustrate, consider
Fig. 15.4a, which shows the image of x(t) assuming the range of X is given by
[min x(t) − Δ/1,000, max x(t) + Δ/1,000], where Δ = (max x(t) − min x(t))/2; this
corresponds to ε = 1/1,000 as defined by Eq. (15.10). A normalized histogram
of g(t) is shown in Fig. 15.4b together with the distribution for an N(0, 1) random
variable. The sample coefficient of kurtosis of g(t) is γ̂₄ = 4.8. It is clear from
Fig. 15.4a, b that with ε = 1/1,000, image g(t) is far from Gaussian.

Fig. 15.3 Probability p_1 of model X1 ∈ C as a function of sample length m and interval size
parameter ε. Contours of p_1 are shown in panel (b). Taken from [13] © Elsevier Science Ltd
(2009)

Fig. 15.4 The image of x(t), panel (a), assuming range [min x(t) − Δ/1,000, max x(t) + Δ/1,000],
and the corresponding normalized histogram of g(t), panel (b). Taken from [13] © Elsevier
Science Ltd (2009)

15.4.1.2 Known Marginal Probability Law

Suppose complete knowledge of the marginal CDF of X is available, i.e., F(x; θ) =
F(x; a, b, q, r) is a beta distribution on known interval [a, b] with known shape
parameters q and r. The covariance of G, the Gaussian image of X defined by
Eq. (15.1), is unknown. In this case, Eq. (15.17) reduces to

C = { X_j(t) = F⁻¹[Φ(G_j(t))], j = 1, . . . , n },    (15.24)

where G_j is a zero-mean, stationary Gaussian process with covariance function
ρ_j(τ) = E[G_j(t) G_j(t + τ)]. We consider n = 4 one-parameter covariance functions
as listed in Table 15.2, where ψ_j > 0, j = 1, 2, 3, 4.
We use estimates ψ̂_j for parameters ψ_j, j = 1, 2, 3, 4, based on method #1
discussed in Sect. 15.2.2.2; for calculations, we apply Eq. (15.11) with τ̄ = 1.
Covariance functions with two or more parameters can also be considered but are
omitted from this study to simplify the discussion. Estimates for covariance function
parameters ψ_j, j = 1, 2, 3, 4, are illustrated in Fig. 15.5 as a function of sample
length, m.
260 R.V. Field, Jr. and M. Grigoriu
Table 15.2 Candidate one-parameter covariance functions for stochastic process model selection with known marginal probability law

  Candidate model, Xj ∈ C    Covariance function, ρj(τ; θj)
  X1                         ρ1(τ; θ1) = exp(−θ1 |τ|)
  X2                         ρ2(τ; θ2) = (1 + θ2 |τ|) exp(−θ2 |τ|)
  X3                         ρ3(τ; θ3) = exp(−θ3 τ²)
  X4                         ρ4(τ; θ4) = (1 − |τ|/θ4) 1(|τ| ≤ θ4)

Fig. 15.5 Estimates θ̂j of the covariance function parameters θj, j = 1, 2, 3, 4, as a function of sample length, m. Taken from [13] © Elsevier Science Ltd (2009)
Fig. 15.6 Log-likelihoods (a) and model probabilities (b) for each candidate model in C as a function of sample length m. Taken from [13] © Elsevier Science Ltd (2009)
The log-likelihoods and model probabilities, defined by Eqs. (15.15) and (15.18),
respectively, are illustrated in Fig. 15.6 for each candidate model in C. For short
samples (m < 250), the length of the available sample is of the same order as the
estimated correlation length of X. In this case, highly inaccurate estimates of θj
can occur, and the optimal model, i.e., the model with the greatest probability, can
be any Xj ∈ C. It is therefore critical in this case to consider covariance models that
are sufficiently flexible to describe a broad range of dependence structures. As m
grows large, the length of the sample is much longer than the estimated correlation
length of X, and accurate estimates of θj, j = 1, . . . , 4, are possible. The model
with the correct covariance function, i.e., model X1, becomes optimal in this case.
There is no requirement that the correct covariance function be a member of C ; it is
included in the example to demonstrate that, assuming a large enough sample size,
the correct model will be selected if available.
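The effect described above, where any candidate can win on a short record but the correct covariance family wins as m grows, can be imitated with a toy experiment. The moment-matching score below (fit each family to the lag-1 sample correlation, then score its misfit at longer lags) is a stand-in for the likelihood ranking of Eq. (15.15), not the chapter's actual estimator:

```python
import math, random, statistics

random.seed(2)
theta_true, m = 1.0, 4000
a = math.exp(-theta_true)                      # AR(1) coefficient for rho = exp(-|tau|)
g, s = [], 0.0
for _ in range(m):
    s = a * s + math.sqrt(1 - a * a) * random.gauss(0, 1)
    g.append(s)

def acf(x, lag):
    mu = statistics.fmean(x)
    v = sum((xi - mu) ** 2 for xi in x)
    return sum((x[t] - mu) * (x[t + lag] - mu) for t in range(len(x) - lag)) / v

# Candidate families; the triangle is parametrized by an inverse length so that
# every family is decreasing in th, which keeps the bisection below uniform.
models = {
    "exp":      lambda tau, th: math.exp(-th * abs(tau)),
    "lin-exp":  lambda tau, th: (1 + th * abs(tau)) * math.exp(-th * abs(tau)),
    "sq-exp":   lambda tau, th: math.exp(-th * tau ** 2),
    "triangle": lambda tau, th: max(0.0, 1 - th * abs(tau)),
}

def fit_lag1(rho, target):
    # Bisection on th so that rho(1, th) matches the lag-1 sample correlation
    lo, hi = 1e-6, 50.0
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if rho(1, mid) < target else (mid, hi)
    return (lo + hi) / 2

r = [acf(g, k) for k in range(6)]
scores = {}
for name, rho in models.items():
    th = fit_lag1(rho, r[1])
    scores[name] = sum((rho(k, th) - r[k]) ** 2 for k in range(2, 6))
print(min(scores, key=scores.get))             # the true "exp" family wins on a long record
```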
Fig. 15.7 Model probabilities for each candidate model in C obtained by a sample of length: (a) m = 200, and (b) m = 1,000. The probability of the optimal model is shaded. Taken from [13] © Elsevier Science Ltd (2009)
15.4.1.3 General Case
We next consider the general case where the parameters defining the range and shape
of the marginal distribution of X, as well as the covariance function of its Gaussian
image, are unknown. The candidate models for this case are defined by Eq. (15.17)
with five candidate marginal distributions and four candidate covariance functions;
we therefore consider 5 × 4 = 20 candidate models for X.
Parameters α = 0.1 and ν = 1, defined by Eqs. (15.10) and (15.13), respectively,
are used for calculations. Figure 15.7a, b illustrates the model probabilities defined
by Eq. (15.18) assuming samples of length m = 200 and m = 1,000, respectively.
As indicated by the shaded bars, models X5,1 and X3,1 have the greatest probability
for m = 200 and m = 1,000, respectively, and are therefore optimal.
The model probabilities illustrated in Fig. 15.7 define the optimal model for X
without regard to any specific purpose. Suppose now that we are interested in models
for X that provide accurate estimates of the following properties
W = max_{t≥0} X(t)   and   Z = (1/t) ∫_0^t X(u) du,     (15.25)

which correspond to the extreme and time-average values of X, respectively. If X
denotes the time-varying output of a structural system, the properties W and Z can
be used to quantify, for example, structural reliability and damage accumulation,
respectively. Let fW and fZ denote the PDFs of the random variables W and Z defined
by Eq. (15.25); appropriate measures of how accurately candidate model Xi,j ∈ C
represents the random variables W and Z are given by

εW(Xi,j) = ∫ [fW|Xi,j(w) − fW(w)]² dw,
εZ(Xi,j) = ∫ [fZ|Xi,j(z) − fZ(z)]² dz,     (15.26)

where fW|Xi,j and fZ|Xi,j denote the PDFs of the random variables W and Z, respectively,
with X replaced by Xi,j.
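A minimal sketch of accuracy measures of this integrated-squared-difference type, with histogram density estimates in place of the chapter's PDF estimates and Gaussian toy samples in place of the Monte Carlo samples of W or Z:

```python
import random

random.seed(3)
ref = [random.gauss(0.0, 1.0) for _ in range(5000)]    # plays the role of f_W
cand = [random.gauss(0.3, 1.2) for _ in range(5000)]   # plays the role of f_{W|X_{i,j}}

def hist_density(sample, lo, hi, nbins):
    # Normalized histogram as a crude density estimate on [lo, hi)
    w = (hi - lo) / nbins
    counts = [0] * nbins
    for v in sample:
        if lo <= v < hi:
            counts[int((v - lo) / w)] += 1
    return [c / (len(sample) * w) for c in counts]

def ise(p, q, lo, hi, nbins=60):
    """Integrated squared difference between two histogram density estimates."""
    fp = hist_density(p, lo, hi, nbins)
    fq = hist_density(q, lo, hi, nbins)
    w = (hi - lo) / nbins
    return sum((a - b) ** 2 for a, b in zip(fp, fq)) * w

print(ise(ref, ref, -5, 5))                  # identical samples score 0
print(round(ise(cand, ref, -5, 5), 4))       # mismatched candidate scores > 0
```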
Fig. 15.8 Performance of each candidate model in C for sample size m = 200 (panels (a) and (c)) and m = 1,000 (panels (b) and (d)). Taken from [13] © Elsevier Science Ltd (2009)
Values of εW and εZ are illustrated in Fig. 15.8 for short (m = 200) and long
(m = 1,000) samples. Results from 250 independent Monte Carlo samples of each
Xi,j ∈ C were used to estimate the PDFs fW|Xi,j and fZ|Xi,j. Sampling was also used
to estimate the densities fW and fZ; an approximation for fW can be obtained by the
crossing theory for translation processes [14]. Four important observations can be
made. First, the model with the best performance under either measure defined
by Eq. (15.26) is not necessarily optimal as defined by the model probabilities
illustrated in Fig. 15.7. Hence, the use of the model probabilities defined by
Eq. (15.18) to rank the candidate models in C does not necessarily yield the most
accurate models for the extremes and/or time average of X; this is especially true
when the available data are limited. Second, metric εW is sensitive to estimates of the
range of X for both m = 200 and m = 1,000, meaning that accurate estimates of [a, b]
are essential to achieve accurate estimates of the extreme of X. Third, metric εZ
is sensitive to estimates of the covariance function for short samples. Fourth, the
sensitivity of εW and εZ decreases with increasing sample size, m. For example,
with m = 1,000 any model in C can be used to provide accurate representations
of the time average of X. These observations motivate the decision-theoretic
method for model selection presented in Sect. 15.3.2 and used in the
following example.
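The different sensitivity of the extreme versus the time average to the assumed range can be imitated with a toy translation process. The AR(1) Gaussian image, uniform marginal, and sample sizes below are arbitrary illustrative choices, not the chapter's setup:

```python
import math, random, statistics

random.seed(4)
nd = statistics.NormalDist()

def sample_WZ(a, b, n=500):
    # Translation sample X(t) = a + (b - a) * Phi(G(t)) with AR(1) image G
    g, s = [], 0.0
    for _ in range(n):
        s = 0.8 * s + math.sqrt(1 - 0.64) * random.gauss(0, 1)
        g.append(s)
    x = [a + (b - a) * nd.cdf(v) for v in g]     # uniform marginal on [a, b]
    return max(x), statistics.fmean(x)           # extreme W and time average Z

runs = [sample_WZ(0.0, 1.0) for _ in range(200)]        # nominal range
runs_wide = [sample_WZ(-0.5, 1.5) for _ in range(200)]  # inflated range
W = statistics.fmean([r[0] for r in runs])
Z = statistics.fmean([r[1] for r in runs])
Ww = statistics.fmean([r[0] for r in runs_wide])
Zw = statistics.fmean([r[1] for r in runs_wide])
print(round(Ww - W, 2), round(Zw - Z, 2))   # the extreme shifts; the average barely moves
```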
Fig. 15.9 Finite element model of foam-encased electronics component. Taken from [13] © Elsevier Science Ltd (2009)
15.4.2 Optimal Model for Random Field
In this section, we consider the survivability of a critical electronics component
internal to an aerospace system during a mechanical shock event. The component
is surrounded by foam material designed to mitigate the damaging effects of the
shock. Certain properties of the foam exhibit significant spatial variability and are
therefore represented by random fields. We apply methods from Sect. 15.3.2 to
select an optimal beta translation model for the foam material properties.
15.4.2.1 Problem Definition

A two-dimensional finite element (FE) model of an aerospace system is illustrated
in Fig. 15.9, showing an electronics component encased within an epoxy foam mesh
and surrounding steel frame. The system is square with dimension 2c = 25 cm
and is fixed along its bottom edge, defined by v2 = −c. A perfectly known and
deterministic high-frequency external force, denoted by z(t), is applied to the corner
of the frame; the applied force is a symmetric triangular pulse lasting 0.2 ms with a
peak amplitude of 4.5 N.
Suppose we are interested in the following two output properties:

W = max_{t≥0} |d²Y(t)/dt²|   and   Z = max_{t≥0} |Θ(t)|,     (15.27)

where Y(t) and Θ(t) denote the vertical displacement and in-plane rotation of the
center of the internal electronics component, respectively. By Eq. (15.27), W and
Z are random variables that correspond to the maximum vertical acceleration and
Fig. 15.10 Measurements of foam density at 8 locations taken from five nominally identical specimens. Taken from [13] © Elsevier Science Ltd (2009)
maximum in-plane rotation of the internal component due to the applied load z(t).
Herein, we assume that the survivability of the internal component illustrated in
Fig. 15.9 directly depends on output properties W and/or Z, i.e., system failure
occurs if one or both of these properties exceed known thresholds.
15.4.2.2 Candidate Models
Let D ⊂ R² denote the domain of the epoxy foam illustrated in Fig. 15.9, and let
v = (v1, v2) be a vector in D. We assume the following random field model for the
density of the epoxy foam:

X(v) = F⁻¹[Φ(G(v))],  v ∈ D,     (15.28)

where F = F(x; a, b, q, r) is the CDF of a beta random variable with parameters a, b,
q, and r as defined by Eq. (15.9), and G is a zero-mean, unit-variance, homogeneous
Gaussian random field with the following covariance function:

ρ(ξ) = E[G(v) G(v + ξ)] = exp(−θ ‖ξ‖),  v ∈ D, v + ξ ∈ D,     (15.29)

where ‖ξ‖ denotes the 2-norm of the vector ξ, and θ ≥ 0 is a known deterministic
parameter related to the inverse of the correlation distance of G. Previous studies of
a similar problem [12] have indicated that θ = 1/3 is appropriate.
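A hedged sketch of sampling a field model of this form on a small grid (uniform marginal with q = r = 1, illustrative parameter values and grid size): the exponential-covariance Gaussian field is drawn via a Cholesky factorization and then pushed through the marginal to obtain a bounded density field.

```python
import math, random, statistics

random.seed(5)
nd = statistics.NormalDist()
theta, a, b = 1.0 / 3.0, 10.0, 20.0              # illustrative values only
pts = [(i, j) for i in range(5) for j in range(5)]  # 5 x 5 grid, unit spacing
n = len(pts)

# Covariance matrix C[i][j] = exp(-theta * ||p_i - p_j||)
C = [[math.exp(-theta * math.dist(p, q)) for q in pts] for p in pts]

# Cholesky factorization C = L L^T (this kernel is positive definite)
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1):
        s = C[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
        L[i][j] = math.sqrt(s) if i == j else s / L[j][j]

z = [random.gauss(0, 1) for _ in range(n)]
g = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]  # correlated field
x = [a + (b - a) * nd.cdf(v) for v in g]         # bounded density values in [a, b]
print(all(a <= v <= b for v in x))
```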
Information on random field X defined by Eq. (15.28) is limited to a collection
of experimental measurements of foam density from five nominally identical foam
specimens. Each specimen is divided into m = 8 cells of equal volume, providing a
total of 40 experimental measurements of foam density. The available data, denoted
by x|sl = (x1 , . . . , x8 ) |sl , are illustrated in Fig. 15.10, where symbol sl is used to
denote specimen l, l = 1, . . . , 5. The specimens are assumed to be statistically
independent so that we interpret the data set to be five independent samples of
X, taken at 8 consistent spatial locations. The effects of measurement error and/or
measurement uncertainty are assumed negligible.
Fig. 15.11 Two samples of candidate random field model X1 ∈ C. Taken from [13] © Elsevier Science Ltd (2009)
We consider the following collection of candidate models for foam density X:

C = {Xi(v) = Fi⁻¹[Φ(G(v))], i = 1, . . . , n},     (15.30)

where each Xi(v) is a beta translation process defined by Eq. (15.28) with marginal
CDF Fi equal to the distribution of a beta random variable on interval [ai, bi] with
shape parameters qi, ri. For this study, we consider n = 10 candidate models for
X, let [a1, b1] = [min_{k,l}{xk|sl}, max_{k,l}{xk|sl}], and follow the procedure discussed in
Sect. 15.2.2.1 to provide trial values for the range of X with α = 1/2. However, the
available data are extremely limited, so estimates of the shape parameters
provided by standard maximum likelihood estimators can be extremely unreliable.
We therefore set qi = ri = 1, i = 1, . . . , n, to reflect this. Two independent samples
of candidate random field X1(v) ∈ C are illustrated in Fig. 15.11.
As discussed in Sect. 15.3.1, the model probabilities defined by Eq. (15.18)
provide one means to rank the members of C. Let pi|sl denote the probability
associated with candidate model Xi ∈ C when calibrated to data provided by
specimen sl; pi|sl is the solution to Eq. (15.18) with data gi replaced by gi|sl
and ρj = ρ defined by Eq. (15.29). Values of pi|sl are illustrated in Fig. 15.12,
demonstrating that, because of the limited data set, all candidate models have nearly
identical ranking, regardless of the specimen we choose. Assuming each specimen
to be equally likely, pi = (1/5) Σ_{l=1}^{5} pi|sl ≈ 1/10 is the unconditional probability that
candidate model Xi ∈ C is the best available model for X in the collection. Hence,
the classical method for model selection discussed in Sect. 15.3.1 cannot distinguish
among the candidate models for X and, therefore, cannot provide an optimal model
for foam density in this case.
Fig. 15.12 Conditional model probabilities for each candidate model in C. Taken from [13] © Elsevier Science Ltd (2009)
15.4.2.3 Optimal Model
The output properties of interest, namely the maximum vertical acceleration and
maximum in-plane rotation of the internal component defined by Eq. (15.27), are
sensitive to the model we use for X. This sensitivity can be observed in Fig. 15.13,
where histograms of 250 samples of W defined by Eq. (15.27) are illustrated in
Fig. 15.13a, b assuming the random field X is represented by candidate models X1
∈ C and X10 ∈ C, respectively. Similar histograms of 250 independent samples of Z
are illustrated in Fig. 15.13c, d. In general, as the assumed range for X increases, so
does the range of the outputs W and Z. The finite element code Salinas [26], which
has the capability to accept realizations of the foam density as input, was used for
all calculations.
Recall that internal component survivability depends on its maximum accel-
eration and rotation during the shock event. The results illustrated in Fig. 15.13
demonstrate that it is therefore critical for this application to achieve estimates for
the range of X that are optimal in some sense. Let
γW(Xi) = P(W ≤ w | X = Xi),
γZ(Xi) = P(Z ≤ z | X = Xi),
γW,Z(Xi) = P(W ≤ w, Z ≤ z | X = Xi),     (15.31)

denote three metrics of system performance. Metrics γW(Xi) and γZ(Xi) correspond
to the probabilities that the component responses defined by Eq. (15.27) indepen-
dently do not exceed thresholds w and z, assuming candidate model Xi ∈ C for the
foam density; metric γW,Z(Xi) is the joint probability that both outputs do not exceed
Fig. 15.13 Sensitivity of internal component response to model for foam density: histograms of 250 samples of (a) W|(X = X1), (b) W|(X = X10), (c) Z|(X = X1), and (d) Z|(X = X10). Taken from [13] © Elsevier Science Ltd (2009)
their respective thresholds. Our objective is to select a model for random field X such
that we achieve accurate but conservative estimates for the three metrics defined by
Eq. (15.31).
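Given Monte Carlo samples of (W, Z) under one candidate model, metrics of this non-exceedance type are plain counting estimates; a minimal sketch with synthetic samples:

```python
import random

random.seed(6)
# Synthetic Monte Carlo samples of (W, Z) under one hypothetical candidate model
wz = [(random.uniform(30, 110), random.uniform(0, 2)) for _ in range(1000)]

def metrics(samples, w_thr, z_thr):
    # Empirical counterparts of gamma_W, gamma_Z, gamma_{W,Z}
    n = len(samples)
    gw = sum(w <= w_thr for w, _ in samples) / n
    gz = sum(z <= z_thr for _, z in samples) / n
    gwz = sum(w <= w_thr and z <= z_thr for w, z in samples) / n
    return gw, gz, gwz

gw, gz, gwz = metrics(wz, 70.0, 1.0)
print(gwz <= min(gw, gz))   # the joint probability never exceeds either marginal
```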
In Sect. 15.4.1, we applied the classical method for model selection to choose
optimal models for a stochastic process under limited information. For this applica-
tion, we instead apply the decision-theoretic method for model selection discussed
in Sect. 15.3.2, which is useful when considering high risk systems under limited
information, where a fair understanding of the consequences of unsatisfactory
system behavior is available. These consequences are quantified via the following
utility function:

U(Xi, Xj) = κ1 [γ(Xi) − γ(Xj)]²  if γ(Xi) ≤ γ(Xj),
U(Xi, Xj) = κ2 [γ(Xi) − γ(Xj)]²  if γ(Xi) > γ(Xj),     (15.32)
where γ(Xi) is one of the metrics defined by Eq. (15.31). For example, if we assume
internal component survival depends only on its acceleration response, γ = γW is
appropriate. For the general case where survival depends on both the acceleration and
rotation responses, we use γ = γW,Z. By Eqs. (15.31) and (15.32), non-conservative
predictions of component survival are penalized, and overly conservative predic-
tions are also subject to penalty. For κ2 ≫ κ1, non-conservative predictions of
component survival are heavily penalized with respect to conservative predictions;
Fig. 15.14 Model selection for foam density based on metric γW: (a) log of expected utility, u, as a function of acceleration threshold w, and (b) optimal model, X*, as a function of w. Taken from [13] © Elsevier Science Ltd (2009)
as κ2 → κ1, the penalty for using conservative and non-conservative models
becomes identical. The utility defined by Eq. (15.32) is therefore an appropriate
measure of the performance of a model Xi ∈ C because it directly accounts for
the model purpose, i.e., accurate but conservative estimates of internal component
survivability.
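A small numerical sketch of an asymmetric quadratic utility of this kind and the expected-utility ranking it induces; the survival metrics, model probabilities, and the penalty constants k1, k2 below are illustrative assumptions, not the chapter's values or notation:

```python
k1, k2 = 1.0, 10.0                      # light vs heavy penalty constants
gamma = [0.95, 0.90, 0.80, 0.60]        # survival metric gamma(X_i) per candidate
p = [0.25, 0.25, 0.25, 0.25]            # model probabilities (here uninformative)

def utility(gi, gj):
    # Quadratic penalty, heavier on the non-conservative side gi > gj
    d2 = (gi - gj) ** 2
    return k1 * d2 if gi <= gj else k2 * d2

# Expected utility of each candidate against all candidates, weighted by p;
# the selected model minimizes this expected penalty.
u = [sum(p[k] * utility(g, gamma[k]) for k in range(len(gamma))) for g in gamma]
best = u.index(min(u))
print(best)   # with k2 >> k1 the most conservative candidate (index 3) wins here
```

Note how an uninformative probability vector p, which by itself cannot rank the candidates, still yields a definite selection once the asymmetric utility is brought in.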
The optimal model for foam density can be determined by Eqs. (15.21) and
(15.22) where, for this example, Eq. (15.22) reduces to a single sum over i, and pk,s
is replaced by the unconditional model probabilities pi discussed in Sect. 15.4.2.2.
Three cases are considered, corresponding to the three performance metrics defined
by Eq. (15.31); parameters κ1 = 1 and κ2 = 10 are used for all calculations. Consider
first the case where γ = γW. A surface defining the expected utility of each
candidate model in C for 30 m/s² ≤ w ≤ 110 m/s² is illustrated in Fig. 15.14a; the
natural log of the expected utility is plotted for clarity. The optimal model for X,
denoted by X* ∈ C, is defined by the minimum of this surface for any fixed value
of w. X* is illustrated as a function of w by the dark line on the contour plot shown
in Fig. 15.14b. In general, the results indicate that wider estimates of the range of X
are preferred as the value of threshold w increases. However, the relationship between
X* and w is complex and non-intuitive due to the presence of local maxima in
Fig. 15.14a. For example, u1, u2, u3 exhibit local maxima near w ≈ 70 m/s², meaning
that candidate models X1, X2, X3 ∈ C with small range estimates have large expected
utility and, therefore, very low ranking. In contrast, u9, u10 exhibit local maxima near
w ≈ 50 m/s². These features cannot be observed using results from the classical
method for model selection (see Fig. 15.12).
Similar results for the case γ = γZ are illustrated in Fig. 15.15. Conservative
estimates of the range of X are preferred as the value of threshold z increases, but
there is only one local maximum shown in Fig. 15.15a. Further, accurate estimates
of the range are not necessary for small values of threshold z. The optimal model
for random field X assuming metric γ = γW,Z is illustrated in Fig. 15.16 as a function
of thresholds w and z. These results are for the most general case, where component
survivability depends on both the acceleration and rotation responses during the
shock event.
Fig. 15.15 Model selection for foam density based on metric γZ: (a) log of expected utility, u, as a function of rotation angle threshold z, and (b) optimal model, X*, as a function of z. Taken from [13] © Elsevier Science Ltd (2009)
Fig. 15.16 Optimal model for foam density based on metric γW,Z. Taken from [13] © Elsevier Science Ltd (2009)
15.5 Conclusions
Methods were developed for finding optimal models for random functions under
limited information. The available information consists of: (a) one or more samples
of the function and (b) knowledge that the function takes values in a bounded set, but
whose actual boundary may or may not be known. In the latter case, the boundary
of the set must be estimated from the available samples. Numerical examples were
presented to illustrate the utility of the proposed approach for model selection,
including optimal continuous time stochastic processes for structural reliability,
and optimal random fields for representing material properties for applications in
mechanical engineering.
The class of beta translation processes, a particular type of non-Gaussian stochas-
tic process or random field defined by a memoryless transformation of a Gaussian
process or field with specified second-moment properties, was demonstrated to

be a very useful and flexible model for representing physical quantities that take
values in bounded intervals. In practice, the range of possible values of these
quantities can be unknown and therefore must be estimated, together with other
model parameters, from the available information. This information consisted of
one or more measurements of the quantity under consideration, as well as some
knowledge about its features and/or purpose.
It was shown that the solution of the model selection problem for random
functions with a bounded support differed significantly from that of functions with
unbounded support. These differences depended on the intended purpose for the
model. For example, the performance of the optimal model depended strongly on the
accuracy of the estimated range of this function, particularly when the extremes of
the random function are of interest. The use of the sample minimum and maximum
for the range was clearly inadequate in this case; overly wide estimates for the range
were also problematic. However, when accurate time-averages of the process were
needed, for example, to quantify damage accumulation within a structural system,
the range became less important.
Acknowledgments Sandia National Laboratories is a multi-program laboratory managed and
operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation,
for the U.S. Department of Energy's National Nuclear Security Administration under Contract
DE-AC04-94AL85000. A preliminary version of this work was published in [13].
References
1. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions. Dover Publications, New York, NY (1972)
2. Andrews, D.F., Herzberg, A.M.: Data: A Collection of Problems and Data from Many Fields for the Student and Research Worker. Springer, New York, NY (1985)
3. Ang, A., Tang, W.: Probability Concepts in Engineering Planning and Design: Vol. 1 - Basic Principles. Wiley, New York, NY (1975)
4. Arwade, S.R., Grigoriu, M.: J. Eng. Mech. 130(9), 997–1005 (2004)
5. Bendat, J.S., Piersol, A.G.: Random Data: Analysis and Measurement Procedures, 2nd edn. Wiley, New York, NY (1986)
6. Chernoff, H., Moses, L.E.: Elementary Decision Theory. Dover Publications, New York, NY (1959)
7. Field, Jr., R.V.: J. Sound Vib. 311(3–5), 1371–1390 (2008)
8. Field, Jr., R.V., Constantine, P., Boslough, M.: Statistical surrogate models for prediction of high-consequence climate change. Int. J. Uncertainty Quant. 3(4), 341–355 (2013)
9. Field, Jr., R.V., Epp, D.S.: Sensor Actuator A Phys. 134(1), 109–118 (2007)
10. Field, Jr., R.V., Grigoriu, M.: Probabilist. Eng. Mech. 21(4), 305–316 (2006)
11. Field, Jr., R.V., Grigoriu, M.: J. Sound Vib. 290(3–5), 991–1014 (2006)
12. Field, Jr., R.V., Grigoriu, M.: J. Eng. Mech. 133(7), 780–791 (2007)
13. Field, Jr., R.V., Grigoriu, M.: Probabilist. Eng. Mech. 24(3), 331–342 (2009)
14. Grigoriu, M.: J. Eng. Mech. 110(4), 610–620 (1984)
15. Grigoriu, M.: Applied Non-Gaussian Processes. PTR Prentice-Hall, Englewood Cliffs, NJ (1995)
16. Grigoriu, M.: Stochastic Calculus: Applications in Science and Engineering. Birkhäuser, Boston, MA (2002)
17. Grigoriu, M., Garboczi, E., Kafali, C.: Powder Tech. 166(3), 123–138 (2006)
18. Grigoriu, M., Veneziano, D., Cornell, C.A.: J. Eng. Mech. 105(4), 585–596 (1979)
19. Gurley, K., Kareem, A.: Meccanica 33(3), 309–317 (1998)
20. Iyengar, R.N., Jaiswal, O.R.: Probabilist. Eng. Mech. 8(3–4), 281–287 (1993)
21. Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions, Vol. 2, 2nd edn. Wiley, New York, NY (1995)
22. Keeping, E.S.: Introduction to Statistical Inference. Dover Publications, New York, NY (1995)
23. Nour, A., Slimani, A., Laouami, N., Afra, H.: Soil Dynam. Earthquake Eng. 23(5), 331–348 (2003)
24. Ochi, M.K.: Probabilist. Eng. Mech. 1(1), 28–39 (1986)
25. Perea, R.W., Kohn, S.D.: Road profiler data analysis and correlation. Research Report 92-30, The Pennsylvania Department of Transportation and the Federal Highway Administration (1994)
26. Reese, G., Bhardwaj, M., Segalman, D., Alvin, K., Driessen, B.: Salinas: User's notes. Technical Report SAND99-2801, Sandia National Laboratories (1999)
27. Robert, C.P.: The Bayesian Choice, 2nd edn. Springer Texts in Statistics. Springer, New York (2001)
28. Sauvageot, H.: J. Appl. Meteorol. 33(11), 1255–1262 (1994)
29. Stathopoulos, T.: J. Struct. Div. 106(ST5), 973–990 (1980)
30. Zellner, A.: An Introduction to Bayesian Inference in Econometrics. Wiley, New York, NY (1971)
Chapter 16
From Model-Based to Data-Driven Filter Design
M. Milanese, F. Ruiz, and M. Taragna
Abstract This paper investigates the filter design problem for linear time-invariant
dynamic systems when no mathematical model is available, but a set of initial
experiments can be performed where also the variable to be estimated is measured.
Two-step and direct approaches are considered within both a stochastic and a
deterministic framework and optimal or suboptimal solutions are reviewed.

Keywords Bounded noises · Bounded disturbances · Optimal filtering · Set Membership estimation · Filter design from data
16.1 Introduction
This paper examines different approaches for designing a filter that, operating on the
measured output of a linear time-invariant (LTI) dynamic system, gives a (possibly
optimal in some sense) estimate of some variable of interest. In particular, a discrete-
time, finite-dimensional, LTI dynamic system S is considered, for example described
in state-space form as:

M. Milanese
Modelway srl, Via Livorno 60, Torino, I-10144, Italy
e-mail: mario.milanese@modelway.it; mario.milanese@polito.it
F. Ruiz
Pontificia Universidad Javeriana, Jefe Sección de Control Automático, Departamento de
Electrónica, Carrera 7 No. 40-62, Bogotá D.C., Colombia
e-mail: ruizf@javeriana.edu.co
M. Taragna
Politecnico di Torino, Dipartimento di Automatica e Informatica, Corso Duca degli
Abruzzi 24, I-10129, Torino, Italy
e-mail: michele.taragna@polito.it
A. d'Onofrio (ed.), Bounded Noises in Physics, Biology, and Engineering, 273
Modeling and Simulation in Science, Engineering and Technology,
DOI 10.1007/978-1-4614-7385-5_16, © Springer Science+Business Media New York 2013
274 M. Milanese et al.

xt+1 = A xt + B wt
yt = C1 xt + D wt
zt = C2 xt

where, for a given time instant t ∈ N: xt ∈ X ⊆ Rnx is the unknown system state;
yt ∈ Y ⊆ Rny is the known system output, measured by noisy sensors; zt ∈ Z ⊆ Rnz
is the variable to be estimated; wt ∈ Rnw is an unknown multivariate signal that
collects all the process disturbances and measurement noises affecting the system;
A, B, C1, C2 and D are constant matrices of suitable finite dimensions.
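A minimal scalar simulation of a system of this form (all matrix entries below are illustrative numbers, not from any real plant): the measured output yt is noisy, while zt is the unmeasured variable a filter must reconstruct.

```python
import random

random.seed(7)
# Scalar stand-ins for A, B, C1, C2, D with a single disturbance w_t
A, B, C1, C2, D = 0.9, 1.0, 1.0, 0.5, 0.2

x, ys, zs = 0.0, [], []
for t in range(200):
    w = random.gauss(0, 1)
    ys.append(C1 * x + D * w)   # known output, corrupted by measurement noise
    zs.append(C2 * x)           # variable to be estimated (not measured online)
    x = A * x + B * w           # state update driven by the same disturbance

print(len(ys), len(zs))
```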
Such an estimation problem has been extensively investigated in the literature
over the last five decades, since it plays a crucial role in control systems and signal
processing, and optimal solutions have been derived under different assumptions
on noise and optimality criteria. In the beginning, a probabilistic description of
disturbances and noises has been adopted and a stochastic approach has been
followed, leading to the standard Kalman filtering where the estimation error
variance is minimized, see, e.g., [1, 7, 12, 14]. Later, assuming that the noise and
the variable to be estimated belong to normed spaces, the subject of worst-case or
Set Membership filtering has been treated and three well-established approaches
have been developed, aiming to minimize the worst-case gain from the noise signal
to the estimation error, measured in ℓp- and ℓq-norm, respectively: the H∞ filtering,
in the case p = q = 2, see, e.g., [5, 9–11, 23, 28, 35]; the generalized H2 filtering, in
the case p = 2 and q = ∞, see, e.g., [8, 33]; the ℓ1 filtering, in the case p = q = ∞,
see, e.g., [22, 30–32].
The previously mentioned methodologies relied initially on the exact knowledge
of the process S under consideration and later were extended to uncertain systems,
thus leading to the so-called robust filtering techniques. These works substantially
follow a model-based approach, assuming systems with state-space descriptions,
possibly affected by norm-bounded or polytopic uncertainties in the system matrices
or uncertainties described by integral quadratic constraints, see, e.g., [6, 34] and the
references therein.
However, in most practical situations, the system S is not completely known and
a data-driven approach to the filter design problem is usually obtained by adopting
a two-step procedure:
1. An approximate model of the process S is identified from prior information
(physical laws,. . . ), making use of a sufficiently informative noisy dataset;
2. On the basis of the identified model, a filter is designed whose output is an
estimate of the variable of interest.
Note that, except for peculiar cases (i.e., C2 actually known), the first step typi-
cally requires measurements y and z̃ = z + v collected during an initial experiment
of finite length N, being v an additive noise on z.
This procedure is in general far from being optimal, because only an approximate
model can be identified from measured data and a filter which is optimal for the
identified model may display a very large estimation error when applied to the
16 From Model-Based to Data-Driven Filter Design 275
actual system. Evaluating how this approximation source affects the filter estimation
accuracy is a largely open problem. Note that robust filtering does not provide at
present an efficient solution to the problem. Indeed, the design of a robust filter
is based on the knowledge of an uncertainty model, e.g., a nominal model plus a
description of the parametric uncertainty. However, identifying reliable uncertainty
models from experimental data is still an open problem.
To overcome all these issues for such general situations, an alternative data-
driven approach has been proposed in [15, 16, 19, 20, 24–26], where the initial data
y and z̃ needed in step 1 of the two-step procedure are used for the direct design
of the filter, thus avoiding the model identification. Indeed, the desired solution
of the filtering problem is a causal filter mapping yτ, τ ≤ t, to ẑt, producing as
output an estimate ẑt of zt and enjoying some optimality property of the estimation
error zt − ẑt. Thus, the idea is to directly design a filter from the available data,
via identification of a filter that, using yτ, τ ≤ t, as input, gives an output ẑt which
minimizes the desired criterion for evaluating the estimation error zt − ẑt. Such a
filter is indicated as a Direct Virtual Sensor (DVS) and allows one to overcome critical
problems such as model uncertainty. In [15, 24], the advantages of such a
direct design approach with respect to the two-step procedure have been put in
evidence within a parametric-statistical framework, assuming stochastic noises, a
parametric filter structure and the minimization of the estimation error variance as
optimality criterion, and using the Prediction Error (PE) method [13] to design
the DVS. It has been proven that even in the most favorable situations, e.g., if
the system S is stable and no modeling errors occur, the filter designed through
a two-step procedure performs no better than the DVS. Moreover, in the case of
no modeling errors, the DVS is optimal even if S is unstable, while this is not
guaranteed by the two-step filter. More importantly, in the presence of modeling
errors, the DVS, although not optimal, is the minimum variance estimator among
the selected filter class. A similar result is not assured by the two-step design,
whose performance deterioration caused by modeling errors may be significantly
larger. In [19, 20], the direct design approach has been investigated within a linear
Set Membership framework, assuming norm-bounded disturbances and noises. For
classes of filters with exponentially decaying impulse response, approximating sets
that guarantee to contain all the solutions to the optimal filtering problem are
determined, considering experimental data only. A linear almost-optimal DVS is
designed, i.e., with guaranteed estimation errors not greater than twice the minimum
achievable ones. The previously listed advantages of the direct design approach
over the two-step procedure are still preserved in this case, since the two-step filter
design does not guarantee similar optimality properties, due to the discrepancies
between the actual process and the identified model. A complete design procedure
is developed, allowing the user to tune the filter parameters, in order to achieve
the desired estimation performances in terms of worst-case error. In [25, 26],
the direct approach has been developed within a Set Membership framework for
LPV (linear parameter-varying) systems. In [16], the direct design approach has
been investigated in a nonlinear Set Membership setting, considering as optimality
criterion the minimization of the worst-case estimation error. Under some prior

assumptions, directly designed filters in nonlinear regression form are derived that
not only give bounded estimation errors, but are also almost-optimal. Some practical
DVS applications in the automotive field can be found in [2, 4, 17–19, 27].
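The direct-design idea can be sketched with a toy scalar plant and a short FIR filter class fitted by least squares on an initial experiment where both yt and the target zt were recorded. The plant, filter order, and data lengths below are arbitrary illustrative choices, not any of the cited designs:

```python
import random

random.seed(8)
# Toy scalar plant: y_t is the noisy measurement, z_t the target variable
A, C1, C2, D = 0.9, 1.0, 0.5, 0.3
x, ys, zs = 0.0, [], []
for t in range(600):
    w = random.gauss(0, 1)
    ys.append(C1 * x + D * w)
    zs.append(C2 * x)
    x = A * x + w

# Direct design: fit zhat_t = sum_k h_k * y_{t-k} on the first 400 samples
taps = 3
rows = [[ys[t - k] for k in range(taps)] for t in range(taps, 400)]
targets = zs[taps:400]

# Normal equations R h = r, solved by Gauss-Jordan elimination
R = [[sum(a[i] * a[j] for a in rows) for j in range(taps)] for i in range(taps)]
r = [sum(a[i] * b for a, b in zip(rows, targets)) for i in range(taps)]
M = [Ri + [ri] for Ri, ri in zip(R, r)]
for c in range(taps):
    piv = M[c][c]
    M[c] = [v / piv for v in M[c]]
    for q in range(taps):
        if q != c:
            M[q] = [vq - M[q][c] * vc for vq, vc in zip(M[q], M[c])]
h = [M[i][taps] for i in range(taps)]

# Validate on held-out data: the DVS error is well below the target power
err = [sum(h[k] * ys[t - k] for k in range(taps)) - zs[t] for t in range(400, 600)]
mse = sum(e * e for e in err) / len(err)
var_z = sum(v * v for v in zs[400:]) / 200
print(mse < var_z)
```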

16.2 Data-Driven Filter Design: Stochastic Approaches

In this section, a statistical framework is considered and the two-step and the direct
approaches to the data-driven filtering problem are described and compared.
Basic Assumptions:
– The matrices A, B, C1, C2 and D defining the system S are not known.
– The couple (A, C1) is observable.
– A finite dataset {yt, z̃t = zt + vt, t = 0, 1, . . . , N − 1} is available.
– The noises wt and vt are unmeasured stochastic variables.
– Let Ē rt = lim_{N→∞} (1/N) Σ_{t=0}^{N−1} E rt, where E is the mean value (or expectation)
symbol, and it is assumed that the limit exists whenever the symbol Ē is
used. □
Under these assumptions, the filter design problem can be formulated as follows.
Statistical Filtering Problem: Design a causal filter that, operating on y , t,
gives an estimate zt of zt , having minimum estimation error variance E zt zt 2
for any t. 
The two-step design consists in model identification from data and filter design
from the identified model. In the model identification step, a parametric model
structure

    M(θ_M) : θ_M ∈ Θ_M

is selected, where Θ_M ⊆ ℝ^{n_M} and n_M is the number of parameters of the model
structure. This model structure defines the following model set:

    ℳ ≐ {M(θ_M) : θ_M ∈ Θ_M}

Then, a model M̂ ∈ ℳ of the system S is identified from the dataset

    D_M ≐ {y_t, z̃_t, t = 0, 1, . . . , N − 1}

where (y_t, z̃_t) are considered as the outputs of the autonomous model M(θ_M). The
Prediction Error (PE) method, see, e.g., [13], is used to identify M̂, obtained as

    M̂ = M(θ̂_M)
    θ̂_M = arg min_{θ_M ∈ Θ_M} J_N(θ_M)
    J_N(θ_M) = (1/2N) Σ_{t=0}^{N−1} ‖e_t(θ_M)‖²

where e_t(θ_M) = (y_t, z̃_t) − (y_t^M, z_t^M) is the prediction error of the model M(θ_M),
being (y_t^M, z_t^M) the prediction given by M(θ_M), and ‖·‖ is the ℓ₂ norm.
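To make the PE criterion above concrete, here is a minimal numerical sketch (assuming Python/numpy and a hypothetical scalar ARX(1,1) model structure; not the chapter's own code): for such a structure, J_N(θ_M) is quadratic in θ_M, so its minimizer is an ordinary least-squares solution.

```python
import numpy as np

# Hypothetical scalar ARX(1,1) structure: z_t = a*z_{t-1} + b*y_{t-1} + e_t,
# with theta_M = (a, b). J_N is quadratic in theta_M, so the PE estimate
# is the ordinary least-squares solution of the linear regression below.
rng = np.random.default_rng(0)
N = 2000
a_true, b_true = 0.7, 1.5
y = rng.standard_normal(N)
z = np.zeros(N)
for t in range(1, N):
    z[t] = a_true * z[t - 1] + b_true * y[t - 1] + 0.05 * rng.standard_normal()

Phi = np.column_stack([z[:-1], y[:-1]])        # regressors (z_{t-1}, y_{t-1})
theta_hat, *_ = np.linalg.lstsq(Phi, z[1:], rcond=None)
print(theta_hat)                               # close to (0.7, 1.5)
```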
In the filter design step, a (steady-state) minimum variance filter

    K̂ = K(θ̂_M)

is designed to estimate z_t on the basis of the identified model M̂ = M(θ̂_M). The filter
K̂ gives as output an estimate ẑ_t^K of z_t, using measurements y_τ, τ ≤ t, thus providing a
Model-based Virtual Sensor (MVS). Note that the filter structure cannot be chosen
in the two-step procedure, since it depends on the structure of the identified model.
The alternative approach to the data-driven filtering problem is based on the
direct identification of the filter from data. In such an approach, a linear parametric
structure (e.g., ARX, OE, ARMAX, state-space)

    V(θ_V) : θ_V ∈ Θ_V

is selected for the filter to be designed, where Θ_V ⊆ ℝ^{n_V} and n_V is the number of
parameters of the filter structure. This filter structure defines the following filter set:

    𝒱 ≐ {V(θ_V) : θ_V ∈ Θ_V}

A filter V̂ ∈ 𝒱 is then designed by means of the PE method from the dataset

    D_V ≐ {y_t, z̃_t, t = 0, 1, . . . , N − 1}

where y_t is considered as the input of the filter V(θ_V) and z̃_t as its output. Thus, V̂
is obtained by means of the PE method as

    V̂ = V(θ̂_V)
    θ̂_V = arg min_{θ_V ∈ Θ_V} J_N(θ_V)
    J_N(θ_V) = (1/2N) Σ_{t=0}^{N−1} ‖e_t(θ_V)‖²

where e_t(θ_V) = z̃_t − ẑ_t^V is the estimation error of the filter V(θ_V), which has input y_t
and output ẑ_t^V. The filter V̂ = V(θ̂_V) can be used as a virtual sensor to generate an
estimate ẑ_t^V of z_t from measurements y_τ, τ ≤ t. Thus, V̂ is a Direct Virtual Sensor
(DVS), designed directly from data without identifying a model of the system S, and

    ẑ_t^V = θ̂_V · (ẑ_{t−1}^V, . . . , ẑ_{t−n_V}^V, y_t, . . . , y_{t−n_V})

where θ̂_V ∈ ℝ^{n_z × n_V(n_z + n_y)} and · denotes the dot product.
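The DVS recursion above can be sketched numerically as follows (a hypothetical scalar example in Python/numpy, not the authors' implementation): the parameters θ_V are fitted by least squares on the (y, z̃) data, and the identified filter is then run recursively on the measurements y alone.

```python
import numpy as np

# Hypothetical scalar DVS: fit z~_t ≈ theta_V · (z~_{t-1}, y_t, y_{t-1}) by
# least squares (PE method), then run the identified filter recursively,
# using only the measurements y, to estimate z.
rng = np.random.default_rng(1)
N = 3000
y = rng.standard_normal(N)
z = np.zeros(N)
for t in range(1, N):                          # "unknown" system generating z
    z[t] = 0.8 * z[t - 1] + 0.5 * y[t - 1]
z_tilde = z + 0.01 * rng.standard_normal(N)    # noisy training measurements of z

Phi = np.column_stack([z_tilde[:-1], y[1:], y[:-1]])
theta_V, *_ = np.linalg.lstsq(Phi, z_tilde[1:], rcond=None)

z_hat = np.zeros(N)                            # recursive DVS estimate from y only
for t in range(1, N):
    z_hat[t] = theta_V @ np.array([z_hat[t - 1], y[t], y[t - 1]])
rmse = np.sqrt(np.mean((z_hat - z) ** 2))
print(rmse)                                    # small estimation error
```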


To perform a comparison between the direct and two-step filter design approaches,
the following further assumptions are needed.
Statistical Framework Assumptions:
• The signal y_t is bounded.
• The noises w_t and v_t are i.i.d. stochastic variables with zero mean and bounded
  moments of order 4 + δ, for some δ > 0.
• In the two-step approach, a uniformly stable linear model structure M(θ_M) is
  selected in the identification phase. Assuming that the identified model M̂ is
  observable from the output y_t, the filter K̂ is the (linear steady-state) Kalman
  filter designed to estimate z_t on the basis of the model M̂.
• In the direct approach, a uniformly stable linear filter structure V(θ_V) is selected.

Note that, for a given linear observable model M(θ_M) of order n_M, the corresponding
Kalman filter K(θ_M) is a linear stable filter of order n_M. Thus, if a filter structure
V(θ_V) of order n_M is selected, it results that K(θ_M) ∈ 𝒱.

Result 1 [15, 24]. The following results hold with probability (w.p.) 1 as N → ∞:
i) V̂ = arg min_{V(θ_V) ∈ 𝒱} Ē‖z_t − ẑ_t^V‖².
ii) If K̂ ∈ 𝒱, then Ē‖z_t − ẑ_t^V̂‖² ≤ Ē‖z_t − ẑ_t^K̂‖².
iii) If S = M(θ_M°) ∈ ℳ and K(θ_M°) ∈ 𝒱, then V̂ is a minimum variance filter among
all linear causal filters mapping y_τ, τ ≤ t, into ẑ_t.
iv) If S = M(θ_M°) ∈ ℳ, K(θ_M°) ∈ 𝒱, M(θ_M) is globally identifiable, S is stable and
the dataset is informative enough, then Ē‖z_t − ẑ_t^V̂‖² = Ē‖z_t − ẑ_t^K̂‖².

This result shows that the solution of the data-driven filtering problem provided
by the direct procedure presents better features than the one provided by the
two-step procedure. Indeed, at best (e.g., under the assumption S ∈ ℳ, i.e., no
undermodeling), the filter K̂ is proven to be asymptotically optimal provided that
the system S is stable, while the DVS V̂ gives minimum variance estimation error
even in case the system S is unstable.
Even more favorable features of the direct approach over the two-step procedure
are obtained in the more realistic situation that S ∉ ℳ, since in general only
approximate model structures are used. For example, consider that the system S
is of order n_x (not known) and a model structure of order n_M < n_x is selected. Then,
it is not ensured that the corresponding Kalman filter K̂ gives the minimal variance
estimate of z_t among all causal filters of the same order n_M. On the contrary, such
an optimality feature holds for the DVS V̂ designed by selecting a filter structure of
order n_V = n_M < n_x. Indeed, the accuracy deterioration of the MVS K̂ with respect
to the DVS V̂ of the same order may be significant, see, e.g., [18, 19].

16.3 Data-Driven Filter Design: Deterministic Approaches

In this section, a deterministic description of disturbances and noises is adopted,
considering that the signal w is unknown but bounded in a given ℓ_p-norm, and
the aim is to design a filter that provides an estimate of z that minimizes the
worst-case gain from w to the estimation error, measured in some ℓ_q-norm. To this
purpose, let us recall the definition of ℓ_p-norm for a one-sided discrete-time signal
s = {s₀, s₁, . . .}, s_t ∈ ℝ^{n_s} and p ∈ ℕ:

    ‖s‖_p ≐ ( Σ_{t=0}^{∞} Σ_{i=1}^{n_s} |s_{t,i}|^p )^{1/p}, 1 ≤ p < ∞;    ‖s‖_∞ ≐ max_{t=0,1,...} max_{i=1,...,n_s} |s_{t,i}|

and the (ℓ_q, ℓ_p)-induced norm of a linear operator T:

    ‖T‖_{q,p} = sup_{‖s‖_p = 1} ‖T(s)‖_q,    p, q ∈ ℕ
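For a finite-support signal the sums above are finite, so the ℓ_p-norm is directly computable; a small Python/numpy sketch (the (T, n_s) array layout is an assumption made here for illustration):

```python
import numpy as np

# l_p-norm of a finite-support signal s with samples s_t in R^{n_s},
# stored as the rows of a (T, n_s) array.
def lp_norm(s, p):
    s = np.asarray(s, dtype=float)
    if p == np.inf:
        return float(np.abs(s).max())            # max over both t and i
    return float((np.abs(s) ** p).sum() ** (1.0 / p))

s = np.array([[3.0, 0.0],
              [0.0, -4.0]])
print(lp_norm(s, 2), lp_norm(s, np.inf))         # 5.0 4.0
```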

Without loss of generality, the variable z to be estimated is considered unidimen-
sional in the sequel. In fact, the case n_z > 1 may be dealt with by decoupling the
overall filtering problem into n_z independent univariate subproblems.
Deterministic Framework Assumptions:
• The system S is unknown and initially at rest (i.e., x₀ = 0, w_t = 0 ∀ t < 0).
• The dimensions n_x and n_w are unknown but finite.
• The couple (A, C₁) is observable.
• A finite dataset {y_t, z̃_t = z_t + v_t, t = 0, 1, . . . , N − 1} is available and the
  measurements are collected in the following column vectors:

    Y = [y₀; y₁; . . . ; y_{N−1}] ∈ ℝ^{N n_y},    Z̃ = [z̃₀; z̃₁; . . . ; z̃_{N−1}] ∈ ℝ^N

• The disturbance and the measurement noise column vectors

    W = [w₀; w₁; . . . ; w_{N−1}] ∈ ℝ^{N n_w},    V = [v₀; v₁; . . . ; v_{N−1}] ∈ ℝ^N

  are unknown but with known bounds:

    ‖W‖_p ≤ δ,    ‖V‖_q ≤ ε

It has to be pointed out that, without loss of generality, ‖W‖_p ≤ 1 can be assumed if
the matrices B and D of the dynamic system S are properly scaled. For this reason,
δ = 1 will be considered in the sequel.
In order to allow the user to suitably design the filter, the following H_∞ subsets of
filters with bounded and exponentially decaying impulse response are considered:

    K(L, ρ, τ) = { F ∈ H_∞ : ‖h_t^F‖₂ ≤ L ∀ t ∈ [0, τ], ‖h_t^F‖₂ ≤ L ρ^t ∀ t > τ, t ∈ ℕ }
    K^m(L, ρ, τ) = { F ∈ K(L, ρ, τ) : h_t^F = 0, ∀ t > m } ⊆ K(L, ρ, τ)

where the triplet (L, ρ, τ) is a design parameter, with L > 0, 0 < ρ < 1, τ ∈ ℕ,
the order m ∈ ℕ is such that m ≥ τ, and h^F = {h₀^F, h₁^F, . . .} is the filter impulse
response with h_t^F ∈ ℝ^{n_y}. These sets represent a filter design choice, allowing the
user to require acceptable effects of the fast dynamics of the filter, occurring in the
first instants of the impulse response, and an exponentially decaying bound on the
slow dynamics due to the dominating poles.
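Checking whether a given FIR impulse response satisfies the envelope defining K^m(L, ρ, τ) is straightforward; a Python/numpy sketch (the array layout is an assumption, not taken from the chapter):

```python
import numpy as np

# Membership test for the envelope defining K^m(L, rho, tau): the FIR impulse
# response h = {h_0, ..., h_m}, stored as rows of an (m+1, n_y) array, must
# satisfy ||h_t||_2 <= L for t <= tau and ||h_t||_2 <= L * rho**t for t > tau.
def in_class(h, L, rho, tau):
    h = np.asarray(h, dtype=float)
    t = np.arange(h.shape[0])
    norms = np.linalg.norm(h, axis=1)              # ||h_t||_2 for each t
    bounds = np.where(t <= tau, L, L * rho ** t)   # envelope values
    return bool(np.all(norms <= bounds))

h_ok = np.array([[0.9], [0.5], [0.2], [0.05]])
h_bad = np.array([[0.9], [0.5], [0.3], [0.05]])    # 0.3 > L*rho**2 = 0.25
print(in_class(h_ok, L=1.0, rho=0.5, tau=1))       # True
print(in_class(h_bad, L=1.0, rho=0.5, tau=1))      # False
```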
Within the above context, the following filtering problem can be defined.
Optimal Worst-Case Filtering Problem: Given scalars L > 0, 0 < ρ < 1 and
integers τ, p and q, design an optimal filter F_o ∈ K(L, ρ, τ) such that the estimate
ẑ^{F_o} = F_o(y) achieves a finite gain

    γ_o = inf_{F_o ∈ K(L, ρ, τ)} sup_{‖w‖_p = 1} ‖z − ẑ^{F_o}‖_q

A data-driven approach to solve this problem is proposed in [19, 20], where the
noisy dataset (Y, Z̃) and the noise bound ε are suitably exploited.
Let us first define the Feasible Filter Set FFS that contains all the filters consistent
with the bounds on disturbances and noises, the information coming from the dataset
(Y, Z̃) and the design triplet (L, ρ, τ):

    FFS ≐ { F ∈ K(L, ρ, τ) : ‖Z̃ − Ẑ^F‖_q ≤ γ_o + ε }

where Ẑ^F = [ẑ₀^F; ẑ₁^F; . . . ; ẑ_{N−1}^F] ∈ ℝ^N is the estimate vector provided by F when
applied to the data Y.
The worst-case gain γ_o is unknown, since the system matrices are not known. In
order to choose a suitable value of γ_o, a hypothesis validation problem is initially
solved, where one asks if, for given filter class K(L, ρ, τ) and finite data length N,
the assumption on γ_o leads to a non-empty FFS. However, the only test that can
be actually performed is whether such an assumption is invalidated by the available data,
checking if no filter consistent with the overall information exists. This leads to the
following definition.
Definition 1. Let the dataset (Y, Z̃), the scalars L, ρ, ε and the integers τ, p, q be
given. The prior assumption on γ_o is considered validated if FFS ≠ ∅.
The fact that the prior assumption is validated by the present dataset (Y, Z̃) does not
exclude that it may be invalidated by future data. Indeed, values much lower than
the true γ_o may be validated if the actual disturbance realization occurred during the
initial experiment is far from the worst-case one. The following result is a validation
test that allows one to determine an estimate of γ_o.
Result 2 [20]. Let the dataset (Y, Z̃), the scalars L, ρ, ε and the integers τ, p, q, m
be given, with m ≥ τ. Let γ̂ be the solution to the optimization problem:

    γ̂ = min_{F ∈ K^m(L, ρ, τ)} ‖Z̃ − T_y H_F‖_q        (16.1)

where T_y H_F is the estimate of the column vector [z₀; z₁; . . . ; z_{N−1}] ∈ ℝ^N provided by
the filter F, H_F = [h₀^F; h₁^F; . . . ; h_m^F] ∈ ℝ^{(m+1)n_y} is the column vector of the first m + 1
coefficients of F, and T_y ∈ ℝ^{N×(m+1)n_y} is defined as follows:
• if m < N, then T_y = T_y^m is the block-Toeplitz matrix formed by the samples y_t,
  t = 0, 1, . . . , N − 1, defined below:

            ⎡ y₀        0         0         ···  0          ⎤
            ⎢ y₁        y₀        0         ···  0          ⎥
    T_y^m = ⎢ y₂        y₁        y₀        ···  0          ⎥
            ⎢ ⋮         ⋮         ⋮         ⋱    ⋮          ⎥
            ⎣ y_{N−1}   y_{N−2}   y_{N−3}   ···  y_{N−m−1}  ⎦

• if m ≥ N, then T_y = [T_y^{N−1}  0_{N×(m+1−N)n_y}].
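The matrix T_y^m simply implements the truncated convolution of y with the FIR coefficients; a Python/numpy sketch for the scalar case n_y = 1 (an illustration, not the authors' code):

```python
import numpy as np

# Lower block-Toeplitz matrix T_y^m for the scalar case n_y = 1:
# row t holds y_t, y_{t-1}, ..., y_{t-m} (zeros before time 0).
def toeplitz_ty(y, m):
    N = len(y)
    T = np.zeros((N, m + 1))
    for t in range(N):
        for j in range(m + 1):
            if t - j >= 0:
                T[t, j] = y[t - j]
    return T

# T_y @ H_F is the truncated convolution of y with the FIR coefficients H_F:
y = np.array([1.0, 2.0, 3.0, 4.0])
H = np.array([0.5, 0.25])
z_hat = toeplitz_ty(y, m=1) @ H    # equals filtering y with h = (0.5, 0.25)
print(z_hat)
```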


(i) A sufficient condition for the prior assumption being validated is

    γ̂ ≤ γ_o + ε

(ii) A necessary condition for the prior assumption being validated is

    γ̂ ≤ γ_o + ε + n_y L (ρ^{m+1}/(1 − ρ)) ‖Y‖_q

(iii) If m ≥ N − 1 is chosen, then a necessary and sufficient condition for the prior
assumption being validated is

    γ̂ ≤ γ_o + ε

Note that the gap between the two conditions (i) and (ii) can be made as small as
desired by increasing m, and becomes negligible when the term n_y L (ρ^{m+1}/(1 − ρ)) ‖Y‖_q
is sufficiently small. Indeed, no gap exists just for m = N − 1.
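For q = 2 and an envelope (L, ρ, τ) loose enough that the class constraints are inactive, the validation quantity γ̂ in (16.1) reduces to an ordinary least-squares residual norm; a hypothetical Python/numpy illustration (the data-generating filter is invented for the example):

```python
import numpy as np

# Hypothetical check of (16.1) for q = 2 with inactive envelope constraints:
# gamma_hat is then the least-squares residual norm of Z~ against T_y.
rng = np.random.default_rng(2)
N, m = 200, 3
y = rng.standard_normal(N)
T = np.array([[y[t - j] if t - j >= 0 else 0.0
               for j in range(m + 1)] for t in range(N)])   # block-Toeplitz T_y
h_true = np.array([1.0, 0.5, 0.25, 0.125])                  # an "unknown" FIR filter
z_tilde = T @ h_true + 0.1 * rng.standard_normal(N)         # noisy data Z~

H, *_ = np.linalg.lstsq(T, z_tilde, rcond=None)
gamma_hat = np.linalg.norm(z_tilde - T @ H)
print(gamma_hat)   # no larger than the residual of the true coefficients
```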
Result 2 can be used for choosing the filter class K^m(L, ρ, τ). In fact, if the gap
between the conditions (i) and (ii) is negligible, the function

    γ̂(L, ρ, τ) = min_{F ∈ K^m(L, ρ, τ)} ‖Z̃ − T_y H_F‖_q

individuates, for a given value of N, a surface in the space (L, ρ, γ_o + ε)
separating validated values of (L, ρ, γ_o + ε) from falsified ones. Clearly, the triplet
(L, ρ, γ_o + ε) has to be chosen in the validated region with some caution (i.e., not
too near the separation surface) and exploiting the information on the experimental
setting. Useful information on L, ρ and τ values is provided by the impulse
responses of filters designed by means of untuned algorithms which do not make
use of prior assumptions, such as standard prediction error methods or projection
algorithms (see, e.g., [21]). Moreover, the value of ε can be obtained by evaluating
the instrumentation accuracy.
When a filter F has been obtained by means of a design algorithm, it is obviously
of interest to evaluate, for any measured output y, the difference between the
estimate ẑ^F provided by F and the estimate ẑ^{F_o} provided by an optimal filter F_o.
From the Set Membership theory (see, e.g., [29]), the tightest upper bound on this
difference is given by the worst-case filtering error of the filter F, defined as:

    E(F) = sup_{G ∈ FFS} ‖G − F‖_{q,q}

and a filter F_o is called optimal if

    E(F_o) = inf_{F ∈ K(L, ρ, τ)} E(F) ≐ r(FFS)

where r(FFS) is the so-called radius of information and represents the smallest
worst-case filtering error that can be guaranteed on the basis of the overall
information and the design choice.
It is well known that any central filter F_C, defined as the Chebyshev center of
FFS, i.e.,

    F_C = arg inf_{F ∈ K(L, ρ, τ)} sup_{G ∈ FFS} ‖G − F‖_{q,q}

is an optimal filter for any ℓ_q-norm, see, e.g., [29]. However, methods for finding the
Chebyshev center of FFS either are unknown or, when known, are computationally
hard. This motivated the interest in deriving algorithms having lower complexity,
at the expense of some degradation in the accuracy of the designed filter. A good
compromise is provided by the following family of filters.
Definition 2. A filter F_I is interpolatory if F_I ∈ FFS.
Any interpolatory filter is consistent with the overall information. An important
well-known property of these filters is that E(F_I) ≤ 2 r(FFS) for any ℓ_q-norm,
see, e.g., [29]. Due to such a property, these filters are called 2-optimal or almost-
optimal.
Let us then consider the finite impulse response (FIR) filter F̂ whose coefficients
are given by the following algorithm:

    H_F̂ = arg min_{H_F ∈ ℝ^{(m+1)n_y}} ‖Z̃ − T_y H_F‖_q        (16.2)

such that

    |h_{F,i}^t| ≤ L,      t = 0, . . . , τ;  i = 1, . . . , n_y
    |h_{F,i}^t| ≤ L ρ^t,  t = τ + 1, . . . , m;  i = 1, . . . , n_y

where h_{F,i}^t ∈ ℝ denotes the i-th element of h_F^t. Note that F̂ is the filter
class element that provides γ̂ as solution to the optimization problem (16.1). The
following result shows the properties of F̂ that hold for any ℓ_p- and ℓ_q-norms.
Result 3 [20].
(i) If γ̂ ≤ γ_o + ε, then the filter F̂ is interpolatory and almost-optimal.
(ii) If in addition the system S is asymptotically stable, then the estimate ẑ^F̂ = F̂(y)
guarantees

    sup_{‖w‖_p = 1} ‖z − ẑ^F̂‖_q ≤ γ_o + E(F̂) ‖S_y‖_{q,p} ≤ γ_o + 2 r(FFS) ‖S_y‖_{q,p}

where S_y is the LTI dynamic subsystem of S such that y = S_y(w).


The algorithm (16.2) involves an ℓ_q-norm approximation problem with linear
constraints on the FIR filter coefficients, resulting in a convex problem for any ℓ_q-
norm that can be solved by standard convex programming techniques (see, e.g.,
[3]). In particular, in the case q = 2, the cost function is quadratic and the problem
has a unique solution that can be efficiently found using quadratic programming
techniques. In the case q = ∞, the problem is a minimax that can be efficiently
solved using linear programming techniques.
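For q = 2, problem (16.2) is a box-constrained least-squares program; a minimal projected-gradient sketch in Python/numpy follows (one of many possible solvers, not the implementation used in [20]; the toy data are invented for the example).

```python
import numpy as np

# Projected-gradient sketch for (16.2) with q = 2 (any QP solver would do):
# minimize ||Z - T H||_2^2 subject to the elementwise box |H_j| <= b_j
# induced by the (L, rho, tau) envelope on the FIR coefficients.
def fir_design_q2(T, Z, b, iters=5000):
    step = 1.0 / np.linalg.norm(T, 2) ** 2     # 1 / Lipschitz constant of the gradient
    H = np.zeros(T.shape[1])
    for _ in range(iters):
        H = H - step * T.T @ (T @ H - Z)       # gradient step on (1/2)||Z - T H||^2
        H = np.clip(H, -b, b)                  # projection onto the box
    return H

rng = np.random.default_rng(3)
T = rng.standard_normal((50, 3))
Z = T @ np.array([0.9, 0.4, 0.2])              # consistent, constraint-feasible data
b = np.array([1.0, 0.5, 0.25])                 # bounds L * rho**t with L = 1, rho = 0.5
H = fir_design_q2(T, Z, b)
print(H)                                       # close to (0.9, 0.4, 0.2)
```

The projection step is just an elementwise clip because the constraints form an axis-aligned box, which is what makes this simple iteration applicable.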
Finally, it has to be noted that the computation of the worst-case filtering error
E(F) is in general a challenging task. However, in the H_∞ case (i.e., p = q = 2)
with n_y = 1, Theorem 6 in [20] allows one to efficiently compute convergent
bounds on the worst-case filtering error.

16.4 Conclusions

This paper investigates the problem of filter design for LTI dynamic systems, both
in the stochastic setting, where the aim is the minimization of the estimation error
variance, and in the deterministic setting, where the aim is the minimization of the
worst-case gain from the process and measurement noises to the estimation error,
measured in ℓ_p- and ℓ_q-norms, respectively.
Most of the existing literature focuses on problems that can be denoted as
filter design from known systems, indicating that the filter is designed assuming
the knowledge of the equations describing the system generating the signals to be
filtered. In this paper, a more general filtering problem is considered, denoted as
filter design from data, where the system is not known but a set of measured
data is available. Clearly, a solution to this problem can be obtained by identifying
a model from measurements, whose equations are then used by any of the available
methods for filter design from known systems. However, this two-step procedure
is in general not optimal. Indeed, finding optimal solutions to the filter design from
data problem is not an easy task, but this paper reviews methodologies that
allow one to design, directly from measurements, filters which are shown to be optimal (in
the stochastic framework) or almost-optimal (in the deterministic framework), i.e.,
with a worst-case filtering error not greater than twice the minimal one. Moreover,
results are given for the evaluation of the resulting worst-case filtering error, while
such evaluation appears to be a largely open problem in the case a two-step design
procedure is adopted.

References

1. Anderson, B.D.O., Moore, J.B.: Optimal Filtering. Prentice-Hall, Englewood Cliffs, NJ (1979)
2. Borodani, P.: Virtual sensors: an original approach for dynamic variables estimation in
automotive control systems. In: Proc. of the 9th International Symposium on Advanced Vehicle
Control (AVEC), Kobe, Japan (2008)
3. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge
(2004)
4. Canale, M., Fagiano, L., Ruiz, F., Signorile, M.: A study on the use of virtual sensors in vehicle
control. In: Proc. of the 47th IEEE Conference on Decision and Control and European Control
Conference, Cancun, Mexico (2008)
5. Colaneri, P., Ferrante, A.: IEEE Trans. Automat. Contr. 47(12), 2108 (2002)
6. Duan, Z., Zhang, J., Zhang, C., Mosca, E.: Automatica 42(11), 1919 (2006)
7. Gelb, A.: Applied Optimal Estimation. MIT Press, Cambridge, MA (1974)
8. Grigoriadis, K.M., Watson Jr., J.T.: IEEE Trans. Aero. Electron. Syst. 33(4), 1326 (1997)
9. Grimble, M.J., El Sayed, A.: IEEE Trans. Acoust. Speech Signal Process. 38(7), 1092 (1990)
10. Hassibi, B., Sayed, A.H., Kailath, T.: IEEE Trans. Automat. Contr. 41(1), 18 (1996)
11. Hassibi, B., Sayed, A.H., Kailath, T.: IEEE Trans. Signal Process. 44(2), 267 (1996)
12. Jazwinski, A.H.: Stochastic Processes and Filtering Theory, Mathematics in Science and
Engineering, vol. 64. Academic Press, New York (1970)
13. Ljung, L.: System Identification: Theory for the User, 2nd edn. Prentice Hall PTR, Upper
Saddle River, NJ (1999)
14. Maybeck, P.S.: Stochastic Models, Estimation, and Control, Mathematics in Science and
Engineering, vol. 141. Academic Press, New York (1979)
15. Milanese, M., Novara, C., Hsu, K., Poolla, K.: Filter design from data: direct vs. two-step
approaches. In: Proc. of the American Control Conference, Minneapolis, MN, 4466 (2006)
16. Milanese, M., Novara, C., Hsu, K., Poolla, K.: Automatica 45(10), 2350 (2009)
17. Milanese, M., Regruto, D., Fortina, A.: Direct virtual sensor (DVS) design in vehicle sideslip
angle estimation. In: Proc. of the American Control Conference, New York, 3654 (2007)
18. Milanese, M., Ruiz, F., Taragna, M.: Linear virtual sensors for vertical dynamics of vehicles
with controlled suspensions. In: Proc. of the 9th European Control Conference ECC2007, Kos,
Greece, 1257 (2007)
19. Milanese, M., Ruiz, F., Taragna, M.: Virtual sensors for linear dynamic systems: structure
and identification. In: 3rd International IEEE Scientific Conference on Physics and Control
(PhysCon 2007), Potsdam, Germany (2007)
20. Milanese, M., Ruiz, F., Taragna, M.: Automatica 46(11), 1773 (2010)
21. Milanese, M., Vicino, A.: Automatica 27(6), 997 (1991)
22. Nagpal, K., Abedor, J., Poolla, K.: IEEE Trans. Automat. Contr. 41(1), 43 (1996)
23. Nagpal, K.M., Khargonekar, P.P.: IEEE Trans. Automat. Contr. 36, 152 (1991)
24. Novara, C., Milanese, M., Bitar, E., Poolla, K.: Int. J. Robust Nonlinear Contr. 22(16), 1853
(2012)
25. Novara, C., Ruiz, F., Milanese, M.: Direct design of optimal filters from data. In: Proc. of the
17th IFAC Triennial World Congress, Seoul, Korea, 462 (2008)

26. Ruiz, F., Novara, C., Milanese, M.: Syst. Contr. Lett. 59(1), 1 (2010)
27. Ruiz, F., Taragna, M., Milanese, M.: Direct data-driven filter design for automotive controlled
suspensions. In: Proc. of the 10th European Control Conference ECC2009 Budapest, Hungary,
4416 (2009)
28. Shaked, U., Theodor, Y.: H∞-optimal estimation: A tutorial. In: Proc. of the IEEE Conference
on Decision and Control, vol. 2, 2278 (1992)
29. Traub, J.F., Wasilkowski, G.W., Wozniakowski, H.: Information-Based Complexity. Academic
Press, New York (1988)
30. Vincent, T., Abedor, J., Nagpal, K., Khargonekar, P.P.: Discrete-time estimators with guaran-
teed peak-to-peak performance. In: Proc. of the 13th IFAC Triennial World Congress, vol. J,
San Francisco, CA, 43 (1996)
31. Voulgaris, P.G.: Automatica 31(3), 489 (1995)
32. Voulgaris, P.G.: IEEE Trans. Automat. Contr. 41(9), 1392 (1995)
33. Watson, Jr., J.T., Grigoriadis, K.M.: Optimal unbiased filtering via linear matrix inequalities.
In: Proc. of the American Control Conference, vol. 5, 2825 (1997)
34. Xie, L., Lu, L., Zhang, D., Zhang, H.: Automatica 40(5), 873 (2004)
35. Yaesh, I., Shaked, U.: IEEE Trans. Automat. Contr. 36(11), 1264 (1991)
