
Step function

In mathematics, a function on the real numbers is called a step function (or staircase function) if it can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function having only finitely many pieces.

Example of a step function (the red graph). This particular step function is right-continuous.

Definition and first consequences


A function f : ℝ → ℝ is called a step function if it can be written as

f(x) = Σ_{i=0}^{n} α_i · χ_{A_i}(x) for all real numbers x,

where n ≥ 0, the α_i are real numbers, the A_i are intervals, and χ_A is the indicator function of A:

χ_A(x) = 1 if x ∈ A, and χ_A(x) = 0 if x ∉ A.

In this definition, the intervals A_i can be assumed to have the following two properties:

1. The intervals are disjoint: A_i ∩ A_j = ∅ for i ≠ j.

2. The union of the intervals is the entire real line: A_0 ∪ A_1 ∪ ... ∪ A_n = ℝ.

Indeed, if that is not the case to start with, a different set of intervals can be picked for which these assumptions hold. For example, the step function

f = 4·χ_{[−5, 1)} + 3·χ_{(0, 6)}

can be written as

f = 0·χ_{(−∞, −5)} + 4·χ_{[−5, 0]} + 7·χ_{(0, 1)} + 3·χ_{[1, 6)} + 0·χ_{[6, ∞)}.

Examples

The Heaviside step function, an often-used step function.

A constant function is a trivial example of a step function. Then there is only one interval, A_0 = ℝ.

The Heaviside function H(x) is an important step function. It is the mathematical concept behind some test signals, such as those used to determine the step response of a dynamical system.

The rectangular function, the normalized boxcar function, is the next simplest step function, and is used to model a unit pulse.

Non-examples
The integer part function is not a step function according to the definition of this article, since it has an infinite number of intervals. However, some authors define step functions also with an infinite number of intervals.[1]

Properties
The sum and product of two step functions is again a step function. The product of a step function with a number is also a step function. As such, the step functions form an algebra over the real numbers.

A step function takes only a finite number of values. If the intervals A_i, for i = 0, 1, ..., n, in the above definition of the step function are disjoint and their union is the real line, then f(x) = α_i for all x ∈ A_i.

The Lebesgue integral of a step function f = Σ_{i=0}^{n} α_i χ_{A_i} is

∫ f dx = Σ_{i=0}^{n} α_i ℓ(A_i),

where ℓ(A) is the length of the interval A, and it is assumed here that all intervals A_i have finite length. In fact, this equality (viewed as a definition) can be the first step in constructing the Lebesgue integral.[2]
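To make this concrete, here is a minimal Python sketch (not part of the article; the coefficients and interval endpoints are illustrative, and half-open intervals are used for simplicity) that evaluates a step function written as a finite linear combination of indicator functions and computes its integral as Σ αᵢ·ℓ(Aᵢ).

```python
# A step function f = sum_i alpha_i * 1_{A_i}, with the A_i given as
# half-open intervals [lo, hi).  Values below are purely illustrative.
intervals = [(-5.0, 0.0), (0.0, 1.0), (1.0, 6.0)]   # the A_i
coeffs    = [4.0,          7.0,        3.0]         # the alpha_i

def f(x):
    """Evaluate the step function at x (0 outside all intervals)."""
    return sum(a for a, (lo, hi) in zip(coeffs, intervals) if lo <= x < hi)

def lebesgue_integral(coeffs, intervals):
    """Integral of a step function: sum of alpha_i times the length of A_i."""
    return sum(a * (hi - lo) for a, (lo, hi) in zip(coeffs, intervals))

print(f(-2.0), f(0.5), f(10.0))              # 4.0 7.0 0
print(lebesgue_integral(coeffs, intervals))  # 4*5 + 7*1 + 3*5 = 42.0
```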

Heaviside step function

The Heaviside step function, using the half-maximum convention.

The Heaviside step function, or the unit step function, usually denoted by H (but sometimes u or θ), is a discontinuous function whose value is zero for negative argument and one for positive argument. It seldom matters what value is used for H(0), since H is mostly used as a distribution. Some common choices can be seen below.

The function is used in the mathematics of control theory and signal processing to represent a signal that switches on at a specified time and stays switched on indefinitely. It is also used in structural mechanics together with the Dirac delta function to describe different types of structural loads. It was named after the English polymath Oliver Heaviside.

It is the cumulative distribution function of a random variable which is almost surely 0. (See constant random variable.)

The Heaviside function is the integral of the Dirac delta function: H′ = δ. This is sometimes written as

H(x) = ∫_{−∞}^{x} δ(s) ds,

although this expansion may not hold (or even make sense) for x = 0, depending on which formalism one uses to give meaning to integrals involving δ.

Discrete form


An alternative form of the unit step, as a function of a discrete variable n, is

H[n] = 0 for n < 0, and H[n] = 1 for n ≥ 0,

where n is an integer. Unlike the usual (not discrete) case, the definition of H[0] is significant.

The discrete-time unit impulse is the first difference of the discrete-time step:

δ[n] = H[n] − H[n − 1].

This function is the cumulative summation of the Kronecker delta:

H[n] = Σ_{k=−∞}^{n} δ[k],

where δ[n] is the discrete unit impulse function.
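A short numpy sketch (an illustration, not part of the article; the index range is arbitrary) showing the discrete unit step as the cumulative sum of the Kronecker delta, and the unit impulse recovered as its first difference:

```python
import numpy as np

n = np.arange(-5, 6)                 # discrete time indices
delta = (n == 0).astype(int)         # Kronecker delta: 1 at n = 0, else 0
H = np.cumsum(delta)                 # discrete unit step H[n] (here H[0] = 1)
print(H)                             # [0 0 0 0 0 1 1 1 1 1 1]

# First difference of the step recovers the impulse: delta[n] = H[n] - H[n-1]
recovered = np.diff(np.concatenate(([0], H)))
print(np.array_equal(recovered, delta))   # True
```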

Analytic approximations


For a smooth approximation to the step function, one can use the logistic function

H(x) ≈ 1/2 + (1/2)·tanh(kx) = 1 / (1 + e^{−2kx}),

where a larger k corresponds to a sharper transition at x = 0. If we take H(0) = 1/2, equality holds in the limit:

H(x) = lim_{k→∞} [1/2 + (1/2)·tanh(kx)].

There are many other smooth, analytic approximations to the step function.[1] Among the possibilities are:

H(x) = lim_{k→∞} [1/2 + (1/π)·arctan(kx)],

H(x) = lim_{k→∞} [1/2 + (1/2)·erf(kx)].

These limits hold pointwise and in the sense of distributions. In general, however, pointwise convergence need not imply distributional convergence, nor need distributional convergence imply pointwise convergence. In general, any cumulative distribution function (c.d.f.) of a continuous probability distribution that is peaked around zero and has a parameter that controls its variance can serve as an approximation, in the limit as the variance approaches zero. For example, all three of the above approximations are c.d.f.s of common probability distributions: the logistic, Cauchy and normal distributions, respectively.
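The following sketch (illustrative values of k, not from the article) compares the three approximations against the half-maximum convention H(0) = 1/2 and shows the worst-case error on a fixed grid shrinking as k grows:

```python
import numpy as np
from scipy.special import erf

def H(x):                       # Heaviside with the half-maximum convention
    return np.where(x > 0, 1.0, np.where(x < 0, 0.0, 0.5))

def logistic(x, k):             # c.d.f. of a logistic distribution
    return 1.0 / (1.0 + np.exp(-2.0 * k * x))

def arctan_approx(x, k):        # c.d.f. of a Cauchy distribution
    return 0.5 + np.arctan(k * x) / np.pi

def erf_approx(x, k):           # c.d.f. of a normal distribution
    return 0.5 * (1.0 + erf(k * x))

x = np.linspace(-2, 2, 9)
for k in (1, 10, 100):          # larger k gives a sharper transition at x = 0
    err = max(np.max(np.abs(f(x, k) - H(x)))
              for f in (logistic, arctan_approx, erf_approx))
    print(k, err)               # the worst-case error shrinks as k grows
```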

Integral representations


Often an integral representation of the Heaviside step function is useful:

H(x) = lim_{ε→0⁺} −(1/(2πi)) ∫_{−∞}^{∞} (1/(τ + iε)) e^{−ixτ} dτ = lim_{ε→0⁺} (1/(2πi)) ∫_{−∞}^{∞} (1/(τ − iε)) e^{ixτ} dτ.

Zero argument


Since H is usually used in integration, and the value of a function at a single point does not affect its integral, it rarely matters what particular value is chosen for H(0). Indeed, when H is considered as a distribution or an element of L^∞ (see Lp space) it does not even make sense to talk of a value at zero, since such objects are only defined almost everywhere. If using some analytic approximation (as in the examples above), then often whatever happens to be the relevant limit at zero is used.

There exist, however, reasons for choosing a particular value.

H(0) = 1/2 is often used, since the graph then has rotational symmetry; put another way, H − 1/2 is then an odd function. In this case the following relation with the sign function holds for all x:

H(x) = (1 + sgn(x)) / 2.

H(0) = 1 is used when H needs to be right-continuous. For instance, cumulative distribution functions are usually taken to be right-continuous, as are functions integrated against in Lebesgue–Stieltjes integration. In this case H is the indicator function of a closed semi-infinite interval:

H(x) = 1_{[0, ∞)}(x).

H(0) = 0 is used when H needs to be left-continuous. In this case H is an indicator function of an open semi-infinite interval:

H(x) = 1_{(0, ∞)}(x).

Antiderivative and derivative


The ramp function is the antiderivative of the Heaviside step function:

∫_{−∞}^{x} H(s) ds = x·H(x) = R(x).

The distributional derivative of the Heaviside step function is the Dirac delta function: dH(x)/dx = δ(x).

Fourier transform


The Fourier transform of the Heaviside step function is a distribution. Using one choice of constants for the definition of the Fourier transform we have

Ĥ(s) = lim_{N→∞} ∫_{−N}^{N} e^{−2πixs} H(x) dx = (1/2)·[ δ(s) − (i/π)·p.v.(1/s) ].

Here p.v.(1/s) is the distribution that takes a test function φ to the Cauchy principal value of ∫_{−∞}^{∞} φ(s)/s ds. The limit appearing in the integral is also taken in the sense of (tempered) distributions.

Algebraic representation


If n is a decimal number with no more than d decimal digits, the Heaviside step function can be represented by means of an algebraic expression in p, q and a Kronecker delta δ_{n0}, where p and q are arbitrary integers satisfying the required inequality.

For instance, if n is an integer, the simplest choice is p = 2, q = 1. On the other hand, if n belongs to a set of decimal numbers with d decimal digits, the simplest choice is p = 10^{d+1}, q = 1.[citation needed]

Hyperfunction representation


This can be represented as a hyperfunction as

H(x) = ( 1 − (1/(2πi))·log z,  −(1/(2πi))·log z ).

Unit doublet
In mathematics, the unit doublet is the derivative of the Dirac delta function. It can be used to differentiate signals in electrical engineering:[1] if u₁ is the unit doublet, then

(u₁ * f)(t) = f′(t),

where * refers to the convolution operator. The function is zero for all values except zero, where its behaviour is interesting. Its integral over any interval enclosing zero is zero. However, the integral of its absolute value over any region enclosing zero goes to infinity. The function can be thought of as the limiting case of two rectangles, one in the second quadrant and the other in the fourth. The length of each rectangle is k, whereas their breadth is 1/k², where k tends to zero.

Triangular function

Triangular function

The triangular function (also known as the triangle function, hat function, or tent function) is defined either as

tri(t) = Λ(t) = 1 − |t| for |t| ≤ 1, and 0 otherwise,

or, equivalently, as the convolution of two identical unit rectangular functions:

tri(t) = rect(t) * rect(t) = ∫_{−∞}^{∞} rect(τ)·rect(t − τ) dτ.

The triangular function can also be represented as the product of the rectangular and absolute value functions:

tri(t) = rect(t/2)·(1 − |t|).

The function is useful in signal processing and communication systems engineering as a representation of an idealized signal, and as a prototype or kernel from which more realistic signals can be derived. It also has applications in pulse code modulation as a pulse shape for transmitting digital signals and as a matched filter for receiving the signals. It is also equivalent to the triangular window sometimes called the Bartlett window.

Scaling
For any parameter a ≠ 0:

tri(t/a) = 1 − |t/a| for |t| ≤ |a|, and 0 otherwise.

Fourier transform


The transform is easily determined using the convolution property of Fourier transforms and the Fourier transform of the rectangular function:

ℱ{tri(t)}(f) = ℱ{rect(t) * rect(t)}(f) = sinc²(f),

where sinc is the normalized sinc function.
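A small numerical check (an illustration with an assumed grid spacing, not part of the article) that convolving two unit rectangular functions reproduces the triangular function:

```python
import numpy as np

dt = 0.001
t = np.arange(-2, 2, dt)
rect = np.where(np.abs(t) < 0.5, 1.0, 0.0)        # unit rectangular function
tri_exact = np.maximum(1.0 - np.abs(t), 0.0)      # tri(t) = max(1 - |t|, 0)

# Discrete approximation of (rect * rect)(t); 'same' keeps the original grid.
tri_conv = np.convolve(rect, rect, mode='same') * dt

print(np.max(np.abs(tri_conv - tri_exact)))       # small (of the order of dt)
```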

Rectangular function

Rectangular function

The rectangular function (also known as the rectangle function, rect function, Pi function, gate function, unit pulse, or the normalized boxcar function) is defined as:[1]

rect(t) = Π(t) = 0 for |t| > 1/2, 1/2 for |t| = 1/2, and 1 for |t| < 1/2.

Alternate definitions of the function define rect(±1/2) to be 0, 1, or undefined.

Relation to the Step Function


The rectangular function may be expressed in terms of the Heaviside step function as:[1]

rect(t) = H(t + 1/2) − H(t − 1/2),

or

rect(t) = H(1/2 + t)·H(1/2 − t).

More generally:

rect((t − X)/Y) = H(t − X + Y/2) − H(t − X − Y/2),

or

rect((t − X)/Y) = H(t − X + Y/2)·H(X + Y/2 − t).

Relation to the Boxcar Function


The rectangular function is a special case of the more general boxcar function:

rect((t − X)/Y),

where the function is centred at X and has duration Y.

Boxcar function

A graphical representation of a boxcar function.

In mathematics, a boxcar function is any function which is zero over the entire real line except for a single interval where it is equal to a constant, A; it is a simple step function. The boxcar function can be expressed in terms of the uniform distribution as

boxcar(x) = (b − a)·A·f(a,b;x) = A·(H(x − a) − H(x − b)),

where f(a,b;x) is the uniform distribution of x for the interval [a, b]. As with most such discontinuous functions, there is a question of the value at the transition points. These values are probably best chosen for each individual application. When a boxcar function is selected as the impulse response of a filter, the result is a moving average filter.
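A brief sketch of the moving-average remark (the window length and input data are made up for illustration): choosing a boxcar of area one as a filter's impulse response averages the last M samples.

```python
import numpy as np

M = 5
boxcar = np.ones(M) / M          # impulse response: boxcar of total area 1
x = np.array([0., 0., 1., 4., 9., 16., 25., 36., 49., 64.])

y = np.convolve(x, boxcar, mode='valid')   # each output is a mean of M inputs
print(y)
print(np.allclose(y[0], x[:M].mean()))     # True: first output averages x[0..4]
```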

Fourier transform of the rectangular function


The unitary Fourier transforms of the rectangular function are:[1]

∫_{−∞}^{∞} rect(t)·e^{−i2πft} dt = sinc(f),

and:

(1/√(2π)) ∫_{−∞}^{∞} rect(t)·e^{−iωt} dt = (1/√(2π))·sinc(ω/2π),

where sinc is the normalized form.

Note that as long as the definition of the pulse function is only motivated by the time-domain experience of it, there is no reason to believe that the oscillatory interpretation (i.e. the Fourier transform function) should be intuitive, or directly understood by humans. However, some aspects of the theoretical result may be understood intuitively, such as the infinite bandwidth requirement incurred by the indefinitely-sharp edges in the time-domain definition.
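A quick numerical check (the grid and test frequencies are chosen arbitrarily for illustration) that the ordinary-frequency transform of the rectangular function agrees with the normalized sinc:

```python
import numpy as np

dt = 1e-4
t = np.arange(-0.5 + dt / 2, 0.5, dt)        # midpoints covering [-1/2, 1/2]
f = np.array([0.0, 0.25, 0.5, 1.0, 2.5])     # a few test frequencies

# F(f) = integral over [-1/2, 1/2] of exp(-i 2 pi f t) dt  (rect = 1 there)
F = np.array([np.sum(np.exp(-2j * np.pi * fk * t)) * dt for fk in f])

print(np.max(np.abs(F - np.sinc(f))))        # very close to 0: the transform is sinc(f)
```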

Relation to the Triangular Function


We can define the triangular function as the convolution of two rectangular functions:

tri(t) = rect(t) * rect(t).

Use in probability


Viewing the rectangular function as a probability density function, it is a special case of the continuous uniform distribution with a = −1/2 and b = 1/2. The characteristic function is:

φ(k) = sin(k/2) / (k/2),

and its moment-generating function is:

M(t) = sinh(t/2) / (t/2),

where sinh(t) is the hyperbolic sine function.

Rational approximation


The pulse function may also be expressed as a limit of a rational function:

Π(t) = lim_{n→∞, n∈ℤ} 1 / ((2t)^{2n} + 1).

Demonstration of validity

First, we consider the case where |t| < 1/2. Notice that the term (2t)^{2n} is always positive for integer n. However, |2t| < 1 and hence (2t)^{2n} approaches zero for large n. It follows that:

lim_{n→∞, n∈ℤ} 1 / ((2t)^{2n} + 1) = 1 for |t| < 1/2.

Second, we consider the case where |t| > 1/2. Notice that the term (2t)^{2n} is always positive for integer n. However, |2t| > 1 and hence (2t)^{2n} grows very large for large n. It follows that:

lim_{n→∞, n∈ℤ} 1 / ((2t)^{2n} + 1) = 0 for |t| > 1/2.

Third, we consider the case where |t| = 1/2. We may simply substitute in our equation:

lim_{n→∞, n∈ℤ} 1 / ((2t)^{2n} + 1) = lim_{n→∞, n∈ℤ} 1 / (1 + 1) = 1/2.

We see that it satisfies the definition of the pulse function:

Π(t) = 0 for |t| > 1/2, 1/2 for |t| = 1/2, and 1 for |t| < 1/2.

Sign function

Signum function y = sgn(x).

In mathematics, the sign function is an odd mathematical function that extracts the sign of a real number. To avoid confusion with the sine function, this function is often called the signum function (from signum, Latin for "sign"). In mathematical expressions the sign function is often represented as sgn.

Definition
The signum function of a real number x is defined as follows:

sgn(x) = −1 if x < 0, 0 if x = 0, and 1 if x > 0.

Properties
Any real number can be expressed as the product of its absolute value and its sign function:

x = sgn(x)·|x|.     (1)

From equation (1) it follows that whenever x is not equal to 0 we have

sgn(x) = x / |x|.

The signum function is the derivative of the absolute value function (up to the indeterminacy at zero):

d|x| / dx = x / |x| = sgn(x) for x ≠ 0.

Note that the resultant power of x is 0, similar to the ordinary derivative of x; the numbers cancel and all we are left with is the sign of x.

The signum function is differentiable with derivative 0 everywhere except at 0. It is not differentiable at 0 in the ordinary sense, but under the generalised notion of differentiation in distribution theory, the derivative of the signum function is two times the Dirac delta function:

d sgn(x) / dx = 2δ(x).

The signum function is related to the Heaviside step function H1/2(x) thus:

sgn(x) = 2·H1/2(x) − 1,

where the 1/2 subscript of the step function means that H1/2(0) = 1/2. The signum can also be written using the Iverson bracket notation:

sgn(x) = −[x < 0] + [x > 0].

For large k, a smooth approximation of the sign function is

sgn(x) ≈ tanh(kx).

Another approximation is

sgn(x) ≈ x / √(x² + ε²),

which gets sharper as ε → 0; note that it is the derivative of √(x² + ε²). This is inspired by the fact that the above is exactly equal to sgn(x) for all nonzero x if ε = 0, and has the advantage of simple generalization to higher-dimensional analogues of the sign function (for example, the partial derivatives of √(x² + y²)).

Complex signum


The signum function can be generalized to complex numbers as

sgn(z) = z / |z|

for any z except z = 0. The signum of a given complex number z is the point on the unit circle of the complex plane that is nearest to z. Then, for z ≠ 0,

sgn(z) = e^{i·arg z},

where arg is the complex argument function. For reasons of symmetry, and to keep this a proper generalization of the signum function on the reals, also in the complex domain one usually defines sgn 0 = 0.

Another generalization of the sign function for real and complex expressions is csgn,[1] which is defined as:

csgn(z) = 1 if Re(z) > 0, −1 if Re(z) < 0, and sgn(Im(z)) if Re(z) = 0,

where Re(z) is the real part of z and Im(z) is the imaginary part of z.

We then have (except for z = 0):

csgn(z) = z / √(z²) = √(z²) / z.

Generalized signum function


At real values of x, it is possible to define a generalized-function version of the signum function, ε(x), such that ε(x)² = 1 everywhere, including at the point x = 0 (unlike sgn, for which sgn(0)² = 0). This generalized signum allows construction of the algebra of generalized functions, but the price of such generalization is the loss of commutativity. In particular, the generalized signum anticommutes with the delta function,[2]

ε(x)·δ(x) + δ(x)·ε(x) = 0;

in addition, ε(x) cannot be evaluated at x = 0, and the special name, ε, is necessary to distinguish it from the function sgn. (ε(0) is not defined, but sgn(0) = 0.)

Algebraic representation


If n is a decimal number with no more than d decimal digits, the signum function can be represented by means of an algebraic expression in p, q and a Kronecker delta δ_{n0}, where p and q are arbitrary integers satisfying the required inequality.

For instance, if n is an integer, the simplest choice is p = 2, q = 1. On the other hand, if n belongs to a set of decimal numbers with d decimal digits, the simplest choice is p = 10^{d+1}, q = 1.

Sigmoid function

The logistic curve

Plot of the error function

Many natural processes, including those of complex system learning curves, exhibit a progression from small beginnings that accelerates and approaches a climax over time. When a detailed description is lacking, a sigmoid function is often used. A sigmoid curve is produced by a mathematical function having an "S" shape. Often, sigmoid function refers to the special case of the logistic function shown at right and defined by the formula

P(t) = 1 / (1 + e^{−t}).

Another example is the Gompertz curve. It is used in modeling systems that saturate at large values of t. Another example is the ogee curve as used in the spillway of some dams. A wide variety of sigmoid functions have been used as the activation function of artificial neurons, including the logistic function and tanh(x).


Properties
In general, a sigmoid function is real-valued and differentiable, having either a non-negative or non-positive first derivative which is bell shaped. There is also a pair of horizontal asymptotes as t → ±∞. The logistic functions are sigmoidal and are characterized as the solutions of the differential equation[1]

y′ = y·(1 − y).

Examples

Some sigmoid functions compared. In the drawing all functions are normalized in such a way that their slope at 0 is 1.

Besides the logistic function, sigmoid functions include the ordinary arctangent, the hyperbolic tangent, and the error function, but also the generalised logistic function and algebraic functions like f(x) = x/√(1 + x²).

The integral of any smooth, positive, "bump-shaped" function will be sigmoidal, thus the cumulative distribution functions for many common probability distributions are sigmoidal. The most famous such example is the error function.

Logistic function

For the recurrence relation, see Logistic map.

Standard logistic sigmoid function

A logistic function or logistic curve is a common sigmoid curve, given its name in 1844 or 1845 by Pierre François Verhulst who studied it in relation to population growth. It can model the "S-shaped" curve (abbreviated S-curve) of growth of some population P. The initial stage of growth is approximately exponential; then, as saturation begins, the growth slows, and at maturity, growth stops. A simple logistic function may be defined by the formula

P(t) = 1 / (1 + e^{−t}),

where the variable P might be considered to denote a population and the variable t might be thought of as time.[1] For values of t in the range of real numbers from −∞ to +∞, the S-curve shown is obtained. In practice, due to the nature of the exponential function e^{−t}, it is sufficient to compute t over a small range of real numbers such as [−6, +6].

The logistic function finds applications in a range of fields, including artificial neural networks, biology, biomathematics, demography, economics, chemistry, mathematical psychology, probability, sociology, political science, and statistics. It has an easily calculated derivative:

dP/dt = P(t)·(1 − P(t)).

It also has the property that

P(−t) = 1 − P(t).

In other words, the function P − 1/2 is odd.
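A short numerical sketch (test points chosen arbitrarily; not part of the article) verifying the two properties just stated, dP/dt = P(1 − P) and P(−t) = 1 − P(t):

```python
import numpy as np

P = lambda t: 1.0 / (1.0 + np.exp(-t))        # the simple logistic function

t = np.linspace(-6, 6, 13)
h = 1e-6
dP_numeric = (P(t + h) - P(t - h)) / (2 * h)  # central-difference derivative

print(np.max(np.abs(dP_numeric - P(t) * (1 - P(t)))))   # ~1e-10: P' = P(1 - P)
print(np.max(np.abs(P(-t) - (1 - P(t)))))                # ~0:    P(-t) = 1 - P(t)
```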

Logistic differential equation


The logistic function is the solution of the simple first-order non-linear differential equation

dP/dt = P·(1 − P),

where P is a variable with respect to time t and with boundary condition P(0) = 1/2. This equation is the continuous version of the logistic map.

The qualitative behavior is easily understood in terms of the phase line: the derivative is 0 at P = 0 or 1 and the derivative is positive for P between 0 and 1, and negative for P above 1 or less than 0 (though negative populations do not generally accord with a physical model). This yields an unstable equilibrium at 0, and a stable equilibrium at 1, and thus for any value of P greater than 0 and less than 1, P grows to 1. One may readily find the (symbolic) solution to be

P(t) = e^t / (e^t + e^c).

Choosing the constant of integration e^c = 1 gives the other well-known form of the definition of the logistic curve:

P(t) = e^t / (e^t + 1) = 1 / (1 + e^{−t}).

More quantitatively, as can be seen from the analytical solution, the logistic curve shows early exponential growth for negative t, which slows to linear growth of slope 1/4 near t = 0, then approaches y = 1 with an exponentially decaying gap.

The logistic function is the inverse of the natural logit function and so can be used to convert the logarithm of odds into a probability; the conversion from the log-likelihood ratio of two alternatives also takes the form of a logistic curve. The logistic sigmoid function is related to the hyperbolic tangent by

P(t) = 1/2 + (1/2)·tanh(t/2).

In ecology: modeling population growth

Pierre-François Verhulst (1804–1849)

A typical application of the logistic equation is a common model of population growth, originally due to Pierre-François Verhulst in 1838, where the rate of reproduction is proportional to both the existing population and the amount of available resources, all else being equal. The Verhulst equation was published after Verhulst had read Thomas Malthus' An Essay on the Principle of Population. Verhulst derived his logistic equation to describe the self-limiting growth of a biological population. The equation is also sometimes called the Verhulst–Pearl equation following its rediscovery in 1920. Alfred J. Lotka derived the equation again in 1925, calling it the law of population growth.

Letting P represent population size (N is often used in ecology instead) and t represent time, this model is formalized by the differential equation:

dP/dt = r·P·(1 − P/K),

where the constant r defines the growth rate and K is the carrying capacity. In the equation, the early, unimpeded growth rate is modeled by the first term +rP. The value of the rate r represents the proportional increase of the population P in one unit of time. Later, as the population grows, the second term, which multiplied out is −rP²/K, becomes larger than the first as some members of the population P interfere with each other by competing for some critical resource, such as food or living space. This antagonistic effect is called the bottleneck, and is modeled by the value of the parameter K. The competition diminishes the combined growth rate, until the value of P ceases to grow (this is called maturity of the population). Dividing both sides of the equation by K gives

(d/dt)(P/K) = r·(P/K)·(1 − P/K).

Now setting x = P/K gives the differential equation

dx/dt = r·x·(1 − x).

For r = 1 we have the particular case with which we started.

In ecology, species are sometimes referred to as r-strategist or K-strategist depending upon the selective processes that have shaped their life history strategies. The solution to the equation (with P₀ being the initial population) is

P(t) = K·P₀·e^{rt} / (K + P₀·(e^{rt} − 1)),

where

lim_{t→∞} P(t) = K.

Which is to say that K is the limiting value of P: the highest value that the population can reach given infinite time (or come close to reaching in finite time). It is important to stress that the carrying capacity is asymptotically reached independently of the initial value P(0) > 0, and also in the case that P(0) > K.
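As an illustration (the values of r, K and P₀ below are arbitrary), the Verhulst equation can be integrated numerically and compared with the closed-form solution quoted above:

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K, P0 = 0.8, 100.0, 5.0                    # illustrative parameters

def verhulst(t, P):
    return r * P * (1 - P / K)                # dP/dt = r P (1 - P/K)

t_eval = np.linspace(0, 15, 50)
num = solve_ivp(verhulst, (0, 15), [P0], t_eval=t_eval, rtol=1e-9, atol=1e-9)

# Closed-form solution: P(t) = K P0 e^{rt} / (K + P0 (e^{rt} - 1))
exact = K * P0 * np.exp(r * t_eval) / (K + P0 * (np.exp(r * t_eval) - 1))

print(np.max(np.abs(num.y[0] - exact)))       # tiny: the solver matches the formula
print(exact[-1])                              # approaches the carrying capacity K
```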
Time-varying carrying capacity

Since the environmental conditions influence the carrying capacity, as a consequence it can be time-varying: K(t) > 0, leading to the following mathematical model:

dP/dt = r·P·(1 − P/K(t)).

A particularly important case is that of carrying capacity that varies periodically with period T:

K(t + T) = K(t).

It can be shown that in such a case, independently from the initial value P(0) > 0, P(t) will tend to a unique periodic solution P*(t), whose period is T. A typical value of T is one year: in such a case K(t) reflects periodical variations of weather conditions.

Another interesting generalization is to consider that the carrying capacity K(t) is a function of the population at an earlier time, capturing a delay in the way population modifies its environment. This leads to a logistic delay equation,[2] which has a very rich behavior, with bistability in some parameter range, as well as a monotonic decay to zero, smooth exponential growth, punctuated unlimited growth (i.e., multiple S-shapes), punctuated growth or alternation to a stationary level, oscillatory approach to a stationary level, sustainable oscillations, finite-time singularities as well as finite-time death.

In neural networks


Logistic functions are often used in neural networks to introduce nonlinearity in the model and/or to clamp signals to within a specified range. A popular neural net element computes a linear combination of its input signals, and applies a bounded logistic function to the result; this model can be seen as a "smoothed" variant of the classical threshold neuron. A common choice for the activation or "squashing" functions, used to clip for large magnitudes to keep the response of the neural network bounded,[3] is

g(h) = 1 / (1 + e^{−2βh}),

which we recognize to be of the form of the logistic function. These relationships result in simplified implementations of artificial neural networks with artificial neurons. Practitioners caution that sigmoidal functions which are antisymmetric about the origin (e.g. the hyperbolic tangent) lead to faster convergence when training networks with backpropagation.[4]

In statistics
Logistic functions are used in several roles in statistics. Firstly, they are the cumulative distribution function of the logistic family of distributions. Secondly, they are used in logistic regression to model how the probability p of an event may be affected by one or more explanatory variables: an example would be to have the model

p = 1 / (1 + e^{−(a + bx)}),

where x is the explanatory variable and a and b are model parameters to be fitted.
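A minimal sketch of that model (synthetic data and plain gradient ascent; the names a and b mirror the text, everything else is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a "true" model p = 1 / (1 + exp(-(a + b x)))
a_true, b_true = -1.0, 2.0
x = rng.uniform(-3, 3, size=2000)
p_true = 1.0 / (1.0 + np.exp(-(a_true + b_true * x)))
y = (rng.uniform(size=x.size) < p_true).astype(float)   # 0/1 outcomes

# Fit a and b by maximizing the log-likelihood with gradient ascent.
a, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(a + b * x)))
    a += lr * np.mean(y - p)            # d(log-likelihood)/da, averaged
    b += lr * np.mean((y - p) * x)      # d(log-likelihood)/db, averaged

print(a, b)    # close to the true values -1.0 and 2.0
```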

An important application of the logistic function is in the Rasch model, used in item response theory. In particular, the Rasch model forms a basis for maximum likelihood estimation of the locations of objects or persons on a continuum, based on collections of categorical data, for example the abilities of persons on a continuum based on responses that have been categorized as correct and incorrect.

In medicine: modeling of growth of tumors


Another application of the logistic curve is in medicine, where the logistic differential equation is used to model the growth of tumors. This application can be considered an extension of the above-mentioned use in the framework of ecology. Denoting with X(t) the size of the tumor at time t, its dynamics are governed by:

X′ = r·(1 − X/K)·X,

which is of the type:

X′ = F(X)·X,

where F(X) is the proliferation rate of the tumor. If a chemotherapy is started with a log-kill effect, the equation may be revised to be

X′ = r·(1 − X/K)·X − c(t)·X,

where c(t) is the therapy-induced death rate. In the idealized case of very long therapy, c(t) can be modeled as a periodic function (of period T) or (in case of continuous infusion therapy) as a constant function, and one has that if

(1/T)·∫₀^T c(t) dt > r,

then X(t) tends to zero; i.e. if the average therapy-induced death rate is greater than the baseline proliferation rate then there is the eradication of the disease. Of course, this is an oversimplified model of both the growth and the therapy (e.g. it does not take into account the phenomenon of clonal resistance).

In chemistry: reaction models

The concentration of reactants and products in autocatalytic reactions follow the logistic function.

In physics: Fermi distribution

The logistic function determines the statistical distribution of fermions over the energy states of a system in thermal equilibrium. In particular, it is the distribution of the probabilities that each possible energy level is occupied by a fermion, according to Fermi–Dirac statistics.

In linguistics: language change

In linguistics, the logistic function can be used to model language change:[5] an innovation that is at first marginal begins to spread more quickly with time, and then more slowly as it becomes more universally adopted.

In economics: diffusion of innovations

The logistic function can be used to illustrate the progress of the diffusion of an innovation through its life cycle. This method was used in papers by several researchers at the International Institute of Applied Systems Analysis (IIASA). These papers deal with the diffusion of various innovations, infrastructures and energy source substitutions and the role of work in the economy as well as with the long economic cycle. Long economic cycles were investigated by Robert Ayres (1989).[6] Cesare Marchetti published on long economic cycles and on diffusion of innovations.[7][8] Arnulf Grübler's book (1990) gives a detailed account of the diffusion of infrastructures including canals, railroads, highways and airlines, showing that their diffusion followed logistic-shaped curves.[9]

Carlota Perez used a logistic curve to illustrate the long (Kondratiev) business cycle with the following labels: beginning of a technological era as irruption, the ascent as frenzy, the rapid build-out as synergy and the completion as maturity.[10]

Double logistic function

Double logistic sigmoid curve

The double logistic is a function similar to the logistic function with numerous applications. Its general formula is:

f(x) = sgn(x − d)·(1 − exp(−((x − d)/s)²)),

where d is its centre and s is the steepness factor. Here "sgn" represents the sign function. It is based on the Gaussian curve and graphically it is similar to two identical logistic sigmoids bonded together at the point x = d.

One of its applications is non-linear normalization of a sample, as it has the property of eliminating outliers.

Ramp function

The ramp function is an elementary unary real function, easily computable as the mean of its independent variable and its absolute value. This function is applied in engineering (e.g., in the theory of DSP). The name "ramp function" derives from the appearance of its graph.


Definitions

Graph of the ramp function

The ramp function (R(x) : ℝ → ℝ) may be defined analytically in several ways. Possible definitions are:

The mean of a straight line with unity gradient and its modulus:

R(x) = (x + |x|) / 2;

this can be derived by noting the following definition of max(a, b),

max(a, b) = (a + b + |a − b|) / 2,

for which a = x and b = 0.

The Heaviside step function multiplied by a straight line with unity gradient:

R(x) = x·H(x).

The convolution of the Heaviside step function with itself:

R(x) = (H * H)(x).

The integral of the Heaviside step function:

R(x) = ∫_{−∞}^{x} H(t) dt.
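A short sketch (illustrative test points, not part of the article) checking that several of the definitions above agree numerically:

```python
import numpy as np

x = np.linspace(-3, 3, 13)
H = lambda t: np.where(t >= 0, 1.0, 0.0)     # Heaviside, right-continuous

ramp_mean = (x + np.abs(x)) / 2              # (x + |x|) / 2
ramp_step = x * H(x)                         # x * H(x)
ramp_max  = np.maximum(x, 0)                 # equivalent elementary form

print(np.allclose(ramp_mean, ramp_step) and np.allclose(ramp_mean, ramp_max))  # True
```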

Analytic properties


Non-negativity

In the whole domain the function is non-negative, so its absolute value is itself, i.e.

R(x) ≥ 0 for all x, and |R(x)| = R(x).

Proof: by means of definition [2], R(x) is non-negative in the first quadrant and zero in the second; so everywhere it is non-negative.

Derivative

Its derivative is the Heaviside function:

R′(x) = H(x) for x ≠ 0.

From this property, definition [5] follows.


Fourier transform

ℱ{R(x)}(f) = ∫_{−∞}^{∞} R(x)·e^{−2πixf} dx = (i/(4π))·δ′(f) − 1/(4π²f²),

where δ(x) is the Dirac delta (in this formula, its derivative appears).
Laplace transform

The single-sided Laplace transform of R(x) is given as follows:

ℒ{R(x)}(s) = ∫₀^∞ e^{−sx}·R(x) dx = 1/s².

Algebraic properties


Iteration invariance

Every iterated function of the ramp mapping is itself, as

R(R(x)) = R(x).

Proof:

R(R(x)) = (R(x) + |R(x)|)/2 = (R(x) + R(x))/2 = R(x).

We applied the non-negativity property.

Dirac delta function



Schematic representation of the Dirac delta function by a line surmounted by an arrow. The height of the arrow is usually used to specify the value of any multiplicative constant, which will give the area under the function. The other convention is to write the area next to the arrowhead.

The Dirac delta function as the limit (in the sense of distributions) of the sequence of zero-centered Gaussians

δ_a(x) = (1/(a√π))·e^{−x²/a²}

as a → 0.

The Dirac delta function, or δ function, is (informally) a generalized function depending on a real parameter such that it is zero for all values of the parameter except when the parameter is zero, and its integral over the parameter from −∞ to ∞ is equal to one.[1][2] It was introduced by theoretical physicist Paul Dirac. In the context of signal processing it is often referred to as the unit impulse function. It is a continuous analog of the Kronecker delta function, which is usually defined on a finite domain and takes values 0 and 1.

From a purely mathematical viewpoint, the Dirac delta is not strictly a function, because any extended-real function that is equal to zero everywhere but a single point must have total integral zero.[3] While for many purposes the Dirac delta can be manipulated as a function, formally it can be defined as a distribution that is also a measure. In many applications, the Dirac delta is regarded as a kind of limit (a weak limit) of a sequence of functions having a tall spike at the origin. The approximating functions of the sequence are thus "approximate" or "nascent" delta functions.


Overview

The graph of the delta function is usually thought of as following the whole x-axis and the positive y-axis. (This informal picture can sometimes be misleading, for example in the limiting case of the sinc function.)

Despite its name, the delta function is not truly a function, at least not a usual one with domain in the reals. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0 yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. Rigorous treatment of the Dirac delta requires measure theory, the theory of distributions, or a hyperreal framework.

The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point. For example, to calculate the dynamics of a baseball being hit by a bat, one can approximate the force of the bat hitting the baseball by a delta function. In doing so, one not only simplifies the equations, but one also is able to calculate the motion of the baseball by only considering the total impulse of the bat against the ball rather than requiring knowledge of the details of how the bat transferred energy to the ball.

In applied mathematics, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.

An infinitesimal formula for an infinitely tall, unit impulse delta function (infinitesimal version of the Cauchy distribution) explicitly appears in an 1827 text of Augustin Louis Cauchy.[4] Siméon Denis Poisson considered the issue in connection with the study of wave propagation, as did Gustav Kirchhoff somewhat later. Kirchhoff and Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord Kelvin's notion of a point heat source. At the end of the 19th century, Oliver Heaviside used formal Fourier series to manipulate the unit impulse.[5] The Dirac delta function as such was introduced as a "convenient notation" by Paul Dirac in his influential 1930 book The Principles of Quantum Mechanics.[6] He called it the "delta function" since he used it as a continuous analogue of the discrete Kronecker delta.

Definitions
The Dirac delta can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite,

δ(x) = +∞ for x = 0, and δ(x) = 0 for x ≠ 0,

and which is also constrained to satisfy the identity

∫_{−∞}^{∞} δ(x) dx = 1.[7]

This is merely a heuristic definition. The Dirac delta is not a true function, as no function has the above properties.[6] Moreover there exist descriptions of the delta function which differ from the above conceptualization. For example, sinc(x/a)/a becomes the delta function in the limit as a → 0,[8] yet this function does not approach zero for values of x outside the origin; rather, it oscillates between −1/x and 1/x more and more rapidly as a approaches zero.

The Dirac delta function can be rigorously defined either as a distribution or as a measure.
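A numerical illustration of the nascent-delta idea (the test function and widths below are chosen arbitrarily, not taken from the article): integrating a smooth function against narrower and narrower unit-area Gaussians concentrates the result at the value at the origin.

```python
import numpy as np

f = lambda x: np.cos(x) + x**2               # any smooth test function; f(0) = 1

def against_nascent_delta(a, n=200001, L=10.0):
    """Approximate integral of f(x) * delta_a(x), with delta_a a unit-area Gaussian."""
    x = np.linspace(-L, L, n)
    delta_a = np.exp(-x**2 / a**2) / (a * np.sqrt(np.pi))
    return np.sum(f(x) * delta_a) * (x[1] - x[0])

for a in (1.0, 0.1, 0.01):
    print(a, against_nascent_delta(a))       # tends to f(0) = 1 as a -> 0
```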
As a measure

One way to rigorously define the delta function is as a measure, which accepts as an argument a subset A of the real line R, and returns δ(A) = 1 if 0 ∈ A, and δ(A) = 0 otherwise.[9] If the delta function is conceptualized as modeling an idealized point mass at 0, then δ(A) represents the mass contained in the set A. One may then define the integral against δ as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure δ satisfies

∫_{−∞}^{∞} f(x) δ(dx) = f(0)

for all continuous compactly supported functions f. The measure δ is not absolutely continuous with respect to the Lebesgue measure; in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative: there is no true function for which the property

∫_{−∞}^{∞} f(x) δ(x) dx = f(0)

holds.[10] As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral.

As a probability measure on R, the delta measure is characterized by its cumulative distribution function, which is the unit step function[11]

H(x) = 1 for x ≥ 0, and H(x) = 0 for x < 0.

This means that H(x) is the integral of the cumulative indicator function 1_{(−∞, x]} with respect to the measure δ; to wit,

H(x) = ∫_R 1_{(−∞, x]}(t) δ(dt) = δ((−∞, x]).

Thus in particular the integral of the delta function against a continuous function can be properly understood as a Stieltjes integral:[12]

∫_{−∞}^{∞} f(x) δ(dx) = ∫_{−∞}^{∞} f(x) dH(x).

All higher moments of δ are zero. In particular, the characteristic function and moment generating function are both equal to one.
As a distribution

In the theory of distributions a generalized function is thought of not as a function itself, but only in relation to how it affects other functions when it is "integrated" against them. In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function against a sufficiently "good" test function is. If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral.

A typical space of test functions consists of all smooth functions φ on R with compact support. As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by[13]

δ[φ] = φ(0)     (1)

for every test function φ.

For δ to be properly a distribution, it must be "continuous" in a suitable sense. In general, for a linear functional S on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer N there is an integer M_N and a constant C_N such that for every test function φ with support contained in [−N, N], one has the inequality[14]

|S[φ]| ≤ C_N · Σ_{k=0}^{M_N} sup_x |φ^{(k)}(x)|.

With the δ distribution, one has such an inequality (with C_N = 1) with M_N = 0 for all N. Thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being {0}).

The delta distribution can also be defined in a number of equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that, for every test function φ, one has

δ[φ] = −∫_{−∞}^{∞} φ′(x)·H(x) dx.

Intuitively, if integration by parts were permitted, then the latter integral should simplify to

∫_{−∞}^{∞} φ(x)·H′(x) dx = ∫_{−∞}^{∞} φ(x)·δ(x) dx,

and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case one does have

−∫_{−∞}^{∞} φ′(x)·H(x) dx = ∫_{−∞}^{∞} φ(x) dH(x).

In the context of measure theory, the Dirac measure gives rise to a distribution by integration. Conversely, equation (1) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure.
Generalizations

The delta function can be defined in n-dimensional Euclidean space R^n as the measure such that

∫_{R^n} f(x) δ(dx) = f(0)

for every compactly supported continuous function f. As a measure, the n-dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with x = (x1, x2, ..., xn), one has[15]

δ(x) = δ(x1)·δ(x2)···δ(xn).     (2)

The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case.[16] However, despite widespread use in engineering contexts, (2) should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances.[17]

The notion of a Dirac measure makes sense on any set whatsoever.[9] Thus if X is a set, x0 ∈ X is a marked point, and Σ is any sigma algebra of subsets of X, then the measure defined on sets A ∈ Σ by

δ_{x0}(A) = 1 if x0 ∈ A, and δ_{x0}(A) = 0 otherwise,

is the delta measure or unit mass concentrated at x0.

Another common generalization of the delta function is to a differentiable manifold where most of its properties as a distribution can also be exploited because of the differentiable structure. The delta function on a manifold M centered at the point x0 ∈ M is defined as the following distribution:

δ_{x0}[φ] = φ(x0)     (3)

for all compactly supported smooth real-valued functions φ on M.[18] A common special case of this construction is when M is an open set in the Euclidean space R^n.

On a locally compact Hausdorff space X, the Dirac delta measure concentrated at a point x is the Radon measure associated with the Daniell integral (3) on compactly supported continuous functions φ. At this level of generality, calculus as such is no longer possible; however a variety of techniques from abstract analysis are available. For instance, the mapping x ↦ δ_x is a continuous embedding of X into the space of finite Radon measures on X, equipped with its vague topology. Moreover, the convex hull of the image of X under this embedding is dense in the space of probability measures on X.[19]

Properties
Scaling and symmetry

The delta function satisfies the following scaling property for a non-zero scalar α:[20]

∫_{−∞}^{∞} δ(αx) dx = (1/|α|)·∫_{−∞}^{∞} δ(u) du = 1/|α|,

and so

δ(αx) = δ(x) / |α|.     (4)

In particular, the delta function is an even distribution, in the sense that

δ(−x) = δ(x),

which is homogeneous of degree −1.
Algebraic properties

The distributional product of δ with x is equal to zero:

x·δ(x) = 0.

Conversely, if x·f(x) = x·g(x), where f and g are distributions, then

f(x) = g(x) + c·δ(x)

for some constant c.
Translation

The integral of the time-delayed Dirac delta is given by:

∫_{−∞}^{∞} f(t)·δ(t − T) dt = f(T).

This is sometimes referred to as the sifting property[21] or the sampling property. The delta function is said to "sift out" the value at t = T.

It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the same amount:

(f * δ(· − T))(t) = ∫_{−∞}^{∞} f(τ)·δ(t − T − τ) dτ = f(t − T)     (using (4): δ(−x) = δ(x)).

This holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform below). As a special case, for instance, we have the identity (understood in the distribution sense)

∫_{−∞}^{∞} δ(ξ − x)·δ(x − η) dx = δ(ξ − η).
Composition with a function

More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar change of variables formula holds, that

∫_R δ(g(x))·f(g(x))·|g′(x)| dx = ∫_{g(R)} δ(u)·f(u) du,

provided that g is a continuously differentiable function with g′ nowhere zero.[22] That is, there is a unique way to assign meaning to the distribution δ∘g so that this identity holds for all compactly supported test functions f. This distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise if g has a real root at x0, then

δ(g(x)) = δ(x − x0) / |g′(x0)|.

It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by

δ(g(x)) = Σ_i δ(x − x_i) / |g′(x_i)|,

where the sum extends over all roots x_i of g(x), which are assumed to be simple.[22] Thus, for example

δ(x² − α²) = (1/(2|α|))·[δ(x + α) + δ(x − α)].
Properties in n dimensions

Distributional derivatives

The distributional derivative of the Dirac delta is the distribution δ′ defined on compactly supported smooth test functions φ by

δ′[φ] = −δ[φ′] = −φ′(0),

and, more generally, the k-th derivative satisfies

δ^{(k)}[φ] = (−1)^k·φ^{(k)}(0).

(Here the product of a distribution S with a smooth function h is understood in the usual sense, (hS)[φ] = S[φh].) The first derivative also satisfies

x·δ′(x) = −δ(x),

and convolution with δ′ differentiates:

δ′ * f = δ * f′ = f′.

The delta distribution concentrated at a point a acts on test functions by

δ_a[φ] = φ(a).

Higher dimensions

Approximations to the identity

The delta function can be viewed as the limit of a family of "nascent" delta functions η_ε of unit total integral that concentrate at the origin; a simple example is

η_ε(x) = ε^{−1}·max(1 − |x/ε|, 0).

Probabilistic considerations

Semigroups

Nascent delta functions often arise as convolution semigroups, satisfying

η_ε * η_δ = η_{ε+δ}.

Oscillatory integrals

Plane wave decomposition

Fourier kernels

The delta function also arises as the limit of the summability kernels of Fourier analysis: the n-th partial sum of the Fourier series of a function is given by convolution (on the interval [−π, π]) with the Dirichlet kernel, and summability methods are required in order to produce convergence in general. The method of Cesàro summation leads to the Fejér kernel.

Hilbert space theory

The delta distribution is not an element of the Hilbert space L² of square-integrable functions, so pairing it with an arbitrary square-integrable function need not be well-defined. In many applications, however, it is possible to identify subspaces of L² on which evaluation at a point is continuous, for example spaces of holomorphic functions on a domain D for which the Cauchy integral formula continues to hold. In particular, for z ∈ D, the delta function δ_z is then a continuous linear functional on such a subspace; in most cases of practical interest, the orthonormal basis used to represent it comes from an integral or differential operator.

Infinitesimal delta functions

Cauchy used an infinitesimal to write down a unit impulse, an infinitely tall and narrow Dirac-type delta function, in Cours d'Analyse (1827), in terms of a sequence tending to zero. In non-standard analysis such infinitesimal delta functions can be defined rigorously, for instance in terms of a suitable ultrafilter. The article by Yamashita (2007) contains a bibliography on this topic.

Dirac comb

A uniform "pulse train" of Dirac delta measures, known as a Dirac comb, is used in sampling theory and discrete-time signal analysis. The Dirac comb is given as the infinite sum, whose limit is understood in the sense of distributions,

Ш(x) = Σ_{n=−∞}^{∞} δ(x − n).

Applications to probability theory

The delta function allows the probability density function formalism (which is normally used to represent fully continuous distributions) to be extended to distributions having discrete atoms, for example a partly continuous, partly discrete mixture distribution: the density function of this distribution can be written as an ordinary density plus weighted delta functions located at the atoms. The local time of a Brownian-motion-like process B(t) can likewise be expressed with the delta function.

Application to quantum mechanics

In quantum mechanics, the wave function of a particle gives the probability amplitude of finding the particle within a given region of space. Wave functions are assumed to be square-integrable, and a set {φ_n} of wave functions is orthonormal if they are mutually orthogonal and normalized; any wave function can then be expressed as a combination of the φ_n. The delta function enters when an observable of quantum mechanics has a continuous spectrum of eigenvalues, as for the position observable, Qψ(x) = xψ(x). The spectrum of the position (in one dimension) is the whole real line, and the conventional way to overcome this shortcoming is to widen the class of available functions to include distributions: the position operator has a complete set of eigen-distributions δ_y, labeled by the points y of the real line. An analogous resolution of the identity holds for the momentum operator P in momentum space, provided the spectrum of P is continuous and there are no degenerate eigenvalues. If the spectrum has both continuous and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral over the continuous one, as for the single and double potential well.

Application to structural mechanics

The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures. The governing equation of a simple mass–spring system excited by a sudden force impulse I at time t = 0 can be written with the impulse represented as I·δ(t). If a beam is loaded by a point force F at x = x0, the load distribution is written F·δ(x − x0), and an Euler–Bernoulli beam subject to multiple point loads is described by a set of piecewise polynomials. A point moment can be thought of as two opposite point forces F a distance d apart; they then produce a moment M = Fd acting on the beam. Now, letting d approach zero while M is kept constant, the moment load acting at x = 0 is written with the derivative of the delta function, and integrating the governing equation again results in piecewise polynomial deflection.

Unit function

In number theory, the unit function is a completely multiplicative function on the positive integers defined as:

ε(n) = 1 if n = 1, and ε(n) = 0 if n ≠ 1.

It is called the unit function because it is the identity element for Dirichlet convolution. It may be described as the "indicator function of 1" within the set of positive integers. It is also written as u(n) (not to be confused with μ(n)).

Multiplicative function

Outside number theory, the term multiplicative function is usually used for completely multiplicative functions. This article discusses number theoretic multiplicative functions.

In number theory, a multiplicative function is an arithmetic function f(n) of the positive integer n with the property that f(1) = 1 and whenever a and b are coprime, then
f(ab) = f(a) f(b).

An arithmetic function f(n) is said to be completely multiplicative (or totally multiplicative) if f(1) = 1 and f(ab) = f(a) f(b) holds for all positive integers a and b, even when they are not coprime.


Examples
Examples of multiplicative functions include many functions of importance in number theory, such as:

φ(n): Euler's totient function, counting the positive integers coprime to (but not bigger than) n

μ(n): the Möbius function, related to the number of prime factors of square-free numbers

gcd(n, k): the greatest common divisor of n and k, where k is a fixed integer

d(n): the number of positive divisors of n

σ(n): the sum of all the positive divisors of n

σ_k(n): the divisor function, which is the sum of the k-th powers of all the positive divisors of n (where k may be any complex number). In special cases we have σ_0(n) = d(n) and σ_1(n) = σ(n)

a(n): the number of non-isomorphic abelian groups of order n

1(n): the constant function, defined by 1(n) = 1 (completely multiplicative)

1_C(n): the indicator function of the set C of squares (or cubes, or fourth powers, etc.)

Id(n): the identity function, defined by Id(n) = n (completely multiplicative)

Id_k(n): the power functions, defined by Id_k(n) = n^k for any natural (or even complex) number k (completely multiplicative). As special cases we have Id_0(n) = 1(n) and Id_1(n) = Id(n)

ε(n): the function defined by ε(n) = 1 if n = 1 and ε(n) = 0 otherwise, sometimes called the multiplication unit for Dirichlet convolution or simply the unit function; sometimes written as u(n), not to be confused with μ(n) (completely multiplicative)

(n/p): the Legendre symbol, where p is a fixed prime number (completely multiplicative)

λ(n): the Liouville function, related to the number of prime factors dividing n (completely multiplicative)

γ(n), defined by γ(n) = (−1)^{ω(n)}, where the additive function ω(n) is the number of distinct primes dividing n

All Dirichlet characters are completely multiplicative functions.

An example of a non-multiplicative function is the arithmetic function r2(n), the number of representations of n as a sum of squares of two integers, positive, negative, or zero, where in counting the number of ways, reversal of order is allowed. For example:

1 = 1² + 0² = (−1)² + 0² = 0² + 1² = 0² + (−1)²

and therefore r2(1) = 4 ≠ 1. This shows that the function is not multiplicative. However, r2(n)/4 is multiplicative.

In the On-Line Encyclopedia of Integer Sequences, sequences of values of a multiplicative function have the keyword "mult". See arithmetic function for some other examples of non-multiplicative functions.

Properties
A multiplicative function is completely determined by its values at the powers of prime numbers, a consequence of the fundamental theorem of arithmetic. Thus, if n is a product of powers of distinct primes, say n = p^a·q^b···, then f(n) = f(p^a)·f(q^b)···

This property of multiplicative functions significantly reduces the need for computation, as in the following examples for n = 144 = 2^4 · 3^2:

d(144) = σ_0(144) = σ_0(2^4)·σ_0(3^2) = (1^0 + 2^0 + 4^0 + 8^0 + 16^0)(1^0 + 3^0 + 9^0) = 5 · 3 = 15,

σ(144) = σ_1(144) = σ_1(2^4)·σ_1(3^2) = (1^1 + 2^1 + 4^1 + 8^1 + 16^1)(1^1 + 3^1 + 9^1) = 31 · 13 = 403,

σ*(144) = σ*(2^4)·σ*(3^2) = (1^1 + 16^1)(1^1 + 9^1) = 17 · 10 = 170.

Similarly, we have:

φ(144) = φ(2^4)·φ(3^2) = 8 · 6 = 48.
In general, if f(n) is a multiplicative function and a, b are any two positive integers, then
f(a) f(b) = f(gcd(a,b)) f(lcm(a,b)).

Every completely multiplicative function is a homomorphism of monoids and is completely determined by its restriction to the prime numbers.
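A small sketch (illustrative helper code, not part of the article) of how a multiplicative function is determined by its values on prime powers; here d(n) and σ(n) are evaluated from the factorization of n = 144:

```python
def factorize(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def multiplicative(value_on_prime_power):
    """Build f(n) from its values f(p^a), using f(p^a q^b ...) = f(p^a) f(q^b) ..."""
    def f(n):
        result = 1
        for p, a in factorize(n).items():
            result *= value_on_prime_power(p, a)
        return result
    return f

d     = multiplicative(lambda p, a: a + 1)                        # number of divisors
sigma = multiplicative(lambda p, a: (p**(a + 1) - 1) // (p - 1))  # sum of divisors

print(d(144), sigma(144))   # 15 403, matching the worked example above
```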

Convolution
If f and g are two multiplicative functions, one defines a new multiplicative function f * g, the Dirichlet convolution of f and g, by

(f * g)(n) = Σ_{d | n} f(d)·g(n/d),

where the sum extends over all positive divisors d of n. With this operation, the set of all multiplicative functions turns into an abelian group; the identity element is ε. Relations among the multiplicative functions discussed above include:

μ * 1 = ε (the Möbius inversion formula)

(μ·Id_k) * Id_k = ε (generalized Möbius inversion)

φ * 1 = Id

d = 1 * 1

σ = Id * 1 = φ * d

σ_k = Id_k * 1

Id = σ * μ = φ * 1

Id_k = σ_k * μ

The Dirichlet convolution can be defined for general arithmetic functions, and yields a ring structure, the Dirichlet ring.
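A brief sketch (naive implementations, for illustration only) of the Dirichlet convolution, checking two of the relations above, 1 * 1 = d and μ * 1 = ε:

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def dirichlet(f, g):
    """Dirichlet convolution: (f * g)(n) = sum over d | n of f(d) g(n/d)."""
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

one = lambda n: 1
eps = lambda n: 1 if n == 1 else 0                    # identity element

def mu(n):                                            # Moebius function (naive)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0                              # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

d = dirichlet(one, one)                               # 1 * 1 = d
mu_star_one = dirichlet(mu, one)                      # mu * 1 = eps

print([d(n) for n in range(1, 13)])                   # 1 2 2 3 2 4 2 4 3 4 2 6
print(all(mu_star_one(n) == eps(n) for n in range(1, 200)))   # True
```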

Dirichlet series for some multiplicative functions

Examples include

Σ_{n≥1} μ(n)/n^s = 1/ζ(s),

Σ_{n≥1} d(n)/n^s = ζ(s)²,

Σ_{n≥1} σ(n)/n^s = ζ(s)·ζ(s − 1),

Σ_{n≥1} φ(n)/n^s = ζ(s − 1)/ζ(s).

More examples are shown in the article on Dirichlet series.

Indicator function

The graph of the indicator function of a two-dimensional subset of a square.

In mathematics, an indicator function or a characteristic function is a function defined on a set X that indicates membership of an element in a subset A of X, having the value 1 for all elements of A and the value 0 for all elements of X not in A.


Definition
The indicator function of a subset A of a set X is a function

1_A : X → {0, 1}

defined as

1_A(x) = 1 if x ∈ A, and 1_A(x) = 0 if x ∉ A.

The Iverson bracket allows the equivalent notation [x ∈ A] to be used instead of 1_A(x).

The function 1_A is sometimes denoted χ_A or I_A or even just A. (The Greek letter χ appears because it is the initial letter of the Greek word characteristic.)

Remark on notation and terminology


The notation 1_A may signify the identity function. The notation χ_A may signify the characteristic function in convex analysis.

A related concept in statistics is that of a dummy variable (this must not be confused with "dummy variables" as that term is usually used in mathematics, also called a bound variable). The term "characteristic function" has an unrelated meaning in probability theory. For this reason, probabilists almost exclusively use the term indicator function for the function defined here, while mathematicians in other fields are more likely to use the term characteristic function to describe the function which indicates membership in a set.

Basic properties


The indicator or characteristic function of a subset A of some set X maps elements of X to the range {0, 1}.

This mapping is surjective only when A is a non-empty proper subset of X. If A ≡ X, then 1_A = 1. By a similar argument, if A ≡ ∅ then 1_A = 0.

In the following, the dot represents multiplication, 1·1 = 1, 1·0 = 0 etc. "+" and "−" represent addition and subtraction. "∩" and "∪" are intersection and union, respectively.

If A and B are two subsets of X, then

1_{A∩B} = min{1_A, 1_B} = 1_A·1_B,

1_{A∪B} = max{1_A, 1_B} = 1_A + 1_B − 1_A·1_B,

and the indicator function of the complement of A, i.e. A^C, is:

1_{A^C} = 1 − 1_A.

More generally, suppose A_1, ..., A_n is a collection of subsets of X. For any x ∈ X,

Π_k (1 − 1_{A_k}(x))

is clearly a product of 0s and 1s. This product has the value 1 at precisely those x ∈ X which belong to none of the sets A_k and is 0 otherwise. That is

Π_k (1 − 1_{A_k}) = 1_{X − ∪_k A_k} = 1 − 1_{∪_k A_k}.

Expanding the product on the left hand side,

1_{∪_k A_k} = 1 − Σ_{F ⊆ {1, 2, ..., n}} (−1)^{|F|}·1_{∩_F A_k},

where |F| is the cardinality of F. This is one form of the principle of inclusion-exclusion.

As suggested by the previous example, the indicator function is a useful notational device in combinatorics. The notation is used in other places as well, for instance in probability theory: if X is a probability space with probability measure P and A is a measurable set, then 1_A becomes a random variable whose expected value is equal to the probability of A:

E(1_A) = ∫_X 1_A(x) dP = P(A).

This identity is used in a simple proof of Markov's inequality.

In many cases, such as order theory, the inverse of the indicator function may be defined. This is commonly called the generalized Möbius function, as a generalization of the inverse of the indicator function in elementary number theory, the Möbius function. (See paragraph below about the use of the inverse in classical recursion theory.)
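A quick Monte Carlo sketch (the event, distribution and sample size are made up for illustration) of the identity E[1_A] = P(A), together with the Markov-inequality bound it supports:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=100_000)   # X ~ Exp(1), so E[X] = 1

a = 3.0
indicator = (x >= a).astype(float)             # 1_A with A = {X >= a}

print(indicator.mean())                        # ~ P(X >= a) = exp(-3) ~ 0.0498
print(np.exp(-a))

# Markov's inequality: P(X >= a) <= E[X] / a, proved via  a * 1_{X>=a} <= X
print(indicator.mean() <= x.mean() / a)        # True
```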

Mean, variance and covariance


Given a probability space (Ω, F, P) with A ∈ F, the indicator random variable 1_A : Ω → ℝ is defined by 1_A(ω) = 1 if ω ∈ A, and 1_A(ω) = 0 otherwise.

Mean: E(1_A) = P(A).

Variance: Var(1_A) = P(A)·(1 − P(A)).

Covariance: Cov(1_A, 1_B) = P(A ∩ B) − P(A)·P(B).

Characteristic function in recursion theory, Gödel's and Kleene's representing function

Kurt Gödel described the representing function in his 1934 paper "On Undecidable Propositions of Formal Mathematical Systems". (The paper appears on pp. 41–74 in Martin Davis ed. The Undecidable): "There shall correspond to each class or relation R a representing function φ(x1, ..., xn) = 0 if R(x1, ..., xn) and φ(x1, ..., xn) = 1 if ~R(x1, ..., xn)." (p. 42; the "~" indicates logical inversion, i.e. "NOT").

Stephen Kleene (1952) (p. 227) offers up the same definition in the context of the primitive recursive functions, as a function φ of a predicate P that takes on values 0 if the predicate is true and 1 if the predicate is false.

For example, because the product of characteristic functions φ1·φ2·...·φn = 0 whenever any one of the functions equals 0, it plays the role of logical OR: IF φ1 = 0 OR φ2 = 0 OR ... OR φn = 0 THEN their product is 0. What appears to the modern reader as the representing function's logical inversion, i.e. the representing function is 0 when the function R is "true" or satisfied, plays a useful role in Kleene's definition of the logical functions OR, AND, and IMPLY (p. 228), the bounded (p. 228) and unbounded (p. 279ff) mu-operators (Kleene (1952)) and the CASE function (p. 229).

[edit] Characteristic function in fuzzy set theory


In classical mathematics, characteristic functions of sets only take values 1 (members) or 0 (non-members). In fuzzy set theory, characteristic functions are generalized to take value in the real unit interval [0, 1], or more generally, in some algebra or structure (usually required to be at least a poset or lattice). Such generalized characteristic functions are more usually called membership functions, and the corresponding "sets" are called fuzzy sets. Fuzzy sets model the gradual change in the membership degree seen in many real-world predicates like "tall", "warm", etc.

Simple function
From Wikipedia, the free encyclopedia

In the mathematical field of real analysis, a simple function is a (sufficiently 'nice' - see below for the formal definition) real-valued function over a subset of the real line which attains only a finite number of values. Some authors also require simple functions to be measurable; as used in practice, they invariably are. A basic example of a simple function is the floor function over the half-open interval [1,9), whose only values are {1,2,3,4,5,6,7,8}. A more advanced example is the Dirichlet function over the real line, which takes the value 1 if x is rational and 0 otherwise. (Thus the "simple" of "simple function" has a technical meaning somewhat at odds with common language.) Note also that all step functions are simple.

Simple functions are used as a first stage in the development of theories of integration, such as the Lebesgue integral, because it is very easy to create a definition of an integral for a simple function, and also, it is straightforward to approximate more general functions by sequences of simple functions.


[edit] Definition
Formally, a simple function is a finite linear combination of indicator functions of measurable sets. More precisely, let (X, Σ) be a measurable space. Let A1, ..., An be a sequence of measurable sets, and let a1, ..., an be a sequence of real or complex numbers. A simple function is a function of the form

f(x) = Σ_{k=1}^{n} a_k 1_{A_k}(x),

where 1_A is the indicator function of the set A.

[edit] Properties of simple functions


By definition, the sum, difference, and product of two simple functions are again simple functions, and multiplication by a constant keeps a simple function simple; hence it follows that the collection of all simple functions on a given measurable space forms a commutative algebra over the field of scalars (ℝ or ℂ).

[edit] Integration of simple functions

If a measure μ is defined on the space (X, Σ), the integral of f with respect to μ is

∫_X f dμ = Σ_{k=1}^{n} a_k μ(A_k),

if all summands are finite.
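For concreteness, here is a minimal sketch of this integral for a simple function built from intervals with Lebesgue measure; the intervals and coefficients are arbitrary illustrative choices.

```python
# Integral of a simple function f = Σ a_k · 1_{A_k} with respect to Lebesgue measure,
# taking each A_k to be a half-open interval [l, r) so that μ(A_k) is just its length.
intervals = [(0.0, 1.0), (1.0, 3.0), (3.0, 3.5)]   # the sets A_k (disjoint here)
coeffs    = [2.0, -1.0, 4.0]                       # the values a_k

def mu(interval):
    """Lebesgue measure (length) of a bounded interval."""
    left, right = interval
    return right - left

integral = sum(a * mu(A) for a, A in zip(coeffs, intervals))
print(integral)   # 2*1 + (-1)*2 + 4*0.5 = 2.0
```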

[edit] Relation to Lebesgue integration


Any non-negative measurable function is the pointwise limit of a monotonic increasing sequence of non-negative simple functions. Indeed, let f be a non-negative measurable function defined over the measure space (X, Σ, μ) as before. For each n ∈ ℕ, subdivide the range of f into 2^{2n} + 1 intervals, 2^{2n} of which have length 2^{−n}. For each n, set

I_{n,k} = [ k 2^{−n}, (k+1) 2^{−n} )  for k = 0, 1, …, 2^{2n} − 1,  and  I_{n,2^{2n}} = [ 2^n, ∞ ).

(Note that, for fixed n, the sets I_{n,k} are disjoint and cover the non-negative real line.)

Now define the measurable sets

A_{n,k} = f^{−1}( I_{n,k} )  for k = 0, 1, …, 2^{2n}.

Then the increasing sequence of simple functions

f_n = Σ_{k=0}^{2^{2n}} k 2^{−n} 1_{A_{n,k}}

converges pointwise to f as n → ∞. Note that, when f is bounded, the convergence is uniform. This approximation of f by simple functions (which are easily integrable) allows us to define an integral of f itself; see the article on Lebesgue integration for more details.
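The construction above is easy to carry out numerically. The following sketch applies it to the (arbitrarily chosen) function f(x) = e^x on [0, 2] and reports how quickly f_n approaches f.

```python
# Monotone approximation of a non-negative function by simple functions, as constructed above:
# f_n(x) = k·2^(-n) on the set where f(x) lies in I_{n,k}, i.e. f_n(x) = min(floor(2^n f(x))/2^n, 2^n).
import math

def f(x):
    return math.exp(x)          # example function (arbitrary choice), bounded by e^2 on [0, 2]

def f_n(x, n):
    return min(math.floor(2**n * f(x)) / 2**n, 2**n)

xs = [i / 100 for i in range(201)]          # grid on [0, 2]
for n in (1, 2, 3, 6):
    err = max(f(x) - f_n(x, n) for x in xs)
    print(f"n = {n}:  max of f - f_n on the grid ~ {err:.5f}")
# Once 2^n exceeds the bound e^2 ~ 7.39 (n >= 3 here), the error is at most 2^(-n),
# illustrating the uniform convergence for bounded f noted above.
```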

Dirac comb
From Wikipedia, the free encyclopedia

(Redirected from Sampling function)

A Dirac comb is an infinite series of Dirac delta functions spaced at intervals of T.

In mathematics, a Dirac comb (also known as an impulse train and sampling function in electrical engineering) is a periodic Schwartz distribution constructed from Dirac delta functions,

Ш_T(t) = Σ_{k=−∞}^{∞} δ(t − kT),

for some given period T. Some authors, notably Bracewell as well as some textbook authors in electrical engineering and circuit theory, refer to it as the Shah function (possibly because its graph resembles the shape of the Cyrillic letter sha, Ш). Because the Dirac comb function is periodic, it can be represented as a Fourier series:

Ш_T(t) = (1/T) Σ_{n=−∞}^{∞} e^{i 2π n t / T}.


[edit] Scaling property


The scaling property follows directly from the properties of the Dirac delta function

[edit] Fourier series


It is clear that Ш_T(t) is periodic with period T; that is, Ш_T(t + T) = Ш_T(t) for all t. The complex Fourier series for such a periodic function is

Ш_T(t) = Σ_{n=−∞}^{∞} c_n e^{i 2π n t / T},

where the Fourier coefficients c_n are

c_n = (1/T) ∫_{−T/2}^{T/2} Ш_T(t) e^{−i 2π n t / T} dt = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−i 2π n t / T} dt = 1/T.

All Fourier coefficients are 1/T, resulting in

Ш_T(t) = (1/T) Σ_{n=−∞}^{∞} e^{i 2π n t / T}.
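A quick numerical sketch (illustrative only, not part of the article) shows how truncations of this Fourier series behave: the partial sums form Dirichlet-kernel-like spikes at the comb's teeth that grow with the number of retained terms.

```python
# Partial sums of the Dirac comb's Fourier series: with every coefficient equal to 1/T,
# the truncation (1/T) * sum over n = -N..N of exp(i 2π n t / T) peaks at multiples of T.
import numpy as np

T, N = 1.0, 20
t = np.linspace(-1.5, 1.5, 3001)
partial = sum(np.exp(2j * np.pi * n * t / T) for n in range(-N, N + 1)).real / T

print("value near t = 0   :", partial[np.argmin(np.abs(t))])          # ~ (2N + 1)/T = 41
print("value near t = T/2 :", partial[np.argmin(np.abs(t - 0.5))])    # stays of order 1
print("value near t = T   :", partial[np.argmin(np.abs(t - 1.0))])    # ~ 41 again (periodic)
```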

[edit] Fourier transform


The Fourier transform of a Dirac comb is also a Dirac comb. Unitary transform to ordinary frequency domain (Hz):

Unitary transform to angular frequency domain (radian/s):

[edit] Sampling and aliasing


Main article: Nyquist–Shannon sampling theorem
Reconstruction of a continuous signal from samples taken at sampling interval T is done by some sort of interpolation, such as the WhittakerShannon interpolation formula. Mathematically, that process is often modelled as the output of a lowpass filter whose input is a Dirac comb whose teeth have been weighted by the sample values. Such a comb is equivalent to the product of a comb and the original continuous signal. That mathematical abstraction is often described as "sampling" for purposes of introducing the subjects of aliasing and the Nyquist-Shannon sampling theorem.

[edit] Use in directional statistics

In directional statistics, the Dirac comb of period 2π is equivalent to a wrapped Dirac delta function, and is the analog of the Dirac delta function in linear statistics. In linear statistics, the random variable (x) is usually distributed over the real number line, or some subset thereof, and the probability density of x is a function whose domain is the set of real numbers, and whose integral from −∞ to ∞ is unity. In directional statistics, the random variable (θ) is distributed over the unit circle, and the probability density of θ is a function whose domain is some interval of the real numbers of length 2π and whose integral over that interval is unity. Just as the integral of the product of a Dirac delta function with an arbitrary function over the real number line yields the value of that function at zero, so the integral of the product of a Dirac comb of period 2π with an arbitrary function of period 2π over the unit circle yields the value of that function at zero.

Frequency comb
From Wikipedia, the free encyclopedia

A frequency comb is the graphic representation of the spectrum of a mode-locked laser. An octave-spanning comb can be used for mapping radio frequencies into the optical frequency range, or it can be used to steer a piezoelectric mirror within a carrier-envelope phase correcting feedback loop. (It should not be confused with mono-mode laser frequency stabilization, as mode-locking requires multi-mode lasers.)

An ultrashort pulse of light in the time domain. In this figure, the amplitude and intensity are Gaussian functions. Note how the author chooses to set the maximum of the function into the maximum of the envelope.

A Dirac comb is an infinite series of Dirac delta functions spaced at intervals of T.


[edit] Frequency comb generation


Modelocked lasers produce a series of optical pulses separated in time by the round-trip time of the laser cavity. The spectrum of such a pulse train is a series of Dirac delta functions separated by the repetition rate (the inverse of the round trip time) of the laser. This series of sharp spectral lines is called a frequency comb or a frequency Dirac comb. A purely electronic device, which generates a series of pulses, also generates a frequency comb. These are produced for electronic sampling oscilloscopes, but also used for frequency comparison of microwaves, because they reach up to 1 THz. Since they include 0 Hz they do not need the tricks which make up the rest of this article.

[edit] Frequency comb widening to one octave


This requires broadening of the laser spectrum so that it spans an octave. This is typically achieved using supercontinuum generation by strong self-phase modulation in nonlinear photonic crystal fiber. However, it has been shown that an octave-spanning spectrum can be generated directly from a Ti:sapphire laser using intracavity self-phase modulation. Or the second harmonic can be generated in a long crystal so that by consecutive sum frequency generation and difference frequency generation the spectrum of first and second harmonic widens until they overlap.

[edit] Carrier-envelope offset measurement


Each line is displaced from a harmonic of the repetition rate by the carrierenvelope offset frequency. The carrier-envelope offset frequency is the rate at which the peak of the carrier frequency slips from the peak of the pulse envelope on a pulse-to-pulse basis. Measurement of the carrier-envelope offset frequency is usually done with a self-referencing technique, in which the phase of one part of the spectrum is compared to its harmonic. In the 'frequency 2 frequency' technique, light at the lower energy side of the broadened spectrum is doubled using second harmonic generation in a

nonlinear crystal and a heterodyne beat is generated between that and light at the same wavelength on the upper energy side of the spectrum. This beat frequency, detectable with a photodiode, is the carrier-envelope offset frequency. Alternatively, from light at the higher energy side of the broadened spectrum the frequency at the peak of the spectrum is subtracted in a nonlinear crystal and a heterodyne beat is generated between that and light at the same wavelength on the lower energy side of the spectrum. This beat frequency, detectable with a photodiode, is the carrier-envelope offset frequency. Because the phase is measured directly and not the frequency, it is possible to set the frequency to zero and additionally lock the phase, but because the intensity of the laser and this detector is not very stable, and because the whole spectrum beats in phase source, one has to lock the phase on a fraction of the repetition rate.

[edit] Carrier-envelope offset control


In the absence of active stabilization, the repetition rate and carrier-envelope offset frequency would be free to drift. They vary with changes in the cavity length, refractive index of laser optics, and nonlinear effects such as the Kerr effect. The repetition rate can be stabilized using a piezoelectric transducer, which moves a mirror to change the cavity length. In Ti:sapphire lasers using prisms for dispersion control, the carrier-envelope offset frequency can be controlled by tilting the high reflector mirror at the end of the prism pair. This can be done using piezoelectric transducers. In high repetition rate Ti:sapphire ring lasers, which often use double-chirped mirrors to control dispersion, modulation of the pump power using an acoustooptic modulator is often used to control the offset frequency. The phase slip depends strongly on the Kerr effect, and by changing the pump power one changes the peak intensity of the laser pulse and thus the size of the Kerr phase shift. This shift is far smaller than 6 rad, so an additional device for coarse adjustment is needed. See also: phase-locked loop The breakthrough which led to a practical frequency comb was the development of technology for stabilizing the carrier-envelope offset frequency.

[edit] Applications

A frequency comb allows a direct link from radio frequency standards to optical frequencies. Current frequency standards such as atomic clocks operate in the microwave region of the spectrum, and the frequency comb brings the accuracy of such clocks into the optical part of the electromagnetic spectrum. A simple electronic feedback loop can lock the repetition rate to a frequency standard. There are two distinct applications of this technique. One is the optical clock where an optical frequency is overlapped with a single tooth of the comb on a photodiode and a radio frequency is compared to the beat signal, the repetition rate, and the CEO-frequency. Applications for the frequency comb technique include optical metrology, frequency chain generation, optical atomic clocks, high precision spectroscopy, and more precise GPS technology.[1]. The other is doing experiments with few cycle pulses, like above threshold ionization, attosecond pulses, highly efficient nonlinear optics or high harmonics generation. This can be single pulses so that no comb exists and therefore it is not possible to define a carrier envelope offset frequency, rather the carrier envelope offset phase is important. A second photodiode can be added to the setup to gather phase and amplitude in a single shot, or difference frequency generation can be used to even lock the offset on a single shot basis albeit with low power efficiency. Without an actual comb one can look at the phase vs frequency. Without a carrier envelope offset all frequencies are cosines. That means all frequencies have the phase zero. The time origin is arbitrary. If a pulse comes at later times, the phase increases linearly with frequency, but still the zero frequency phase is zero. This phase at zero frequency is the carrier envelope offset. The second harmonic not only has twice the frequency but also twice the phase. That means for a pulse with zero offset the second harmonic of the low frequency tail is in phase with the fundamental of the high frequency tail and otherwise it is not. Spectral phase interferometry for direct electric-field reconstruction (SPIDER) measures how the phase increases with frequency, but it cannot determine the offset, so the name electric field reconstruction is a bit misleading.

[edit] History
Theodor W. Hänsch and John L. Hall shared half of the 2005 Nobel Prize in Physics for contributions to the development of laser-based precision spectroscopy, including the optical frequency comb technique. The other half of the prize was awarded to Roy Glauber.

The femtosecond comb technique has, in 2006, been extended to the extreme ultraviolet range, which enables frequency metrology to that region of the spectrum.[2]

Green's function
From Wikipedia, the free encyclopedia

This article is about the classical approach to Green's functions. For a modern discussion, see fundamental solution.

In mathematics, a Green's function is a type of function used to solve inhomogeneous differential equations subject to specific initial conditions or boundary conditions. Under many-body theory, the term is also used in physics, specifically in quantum field theory, electrodynamics and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition. Green's functions are named after the British mathematician George Green, who first developed the concept in the 1830s. In the modern study of linear partial differential equations, Green's functions are studied largely from the point of view of fundamental solutions instead.


[edit] Definition and uses


A Green's function, G(x, s), of a linear differential operator L = L(x) acting on distributions over a subset of the Euclidean space Rn, at a point s, is any solution of

LG(x, s) = δ(x − s)    (1)

where δ is the Dirac delta function. This property of a Green's function can be exploited to solve differential equations of the form

Lu(x) = f(x)    (2)

If the kernel of L is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green's function. Also, Green's functions in general are distributions, not necessarily proper functions. Green's functions are also a useful tool in solving wave equations, diffusion equations, and in quantum mechanics, where the Green's function of the Hamiltonian is a key concept, with important links to the concept of density of states. As a side note, the Green's function as used in physics is usually defined with the opposite sign; that is,

LG(x, s) = −δ(x − s).
This definition does not significantly change any of the properties of the Green's function.

If the operator is translation invariant, that is when L has constant coefficients with respect to x, then the Green's function can be taken to be a convolution operator, that is,

G(x, s) = G(x − s).


In this case, the Green's function is the same as the impulse response of linear time-invariant system theory.

[edit] Motivation
See also: Spectral theory

Loosely speaking, if such a function G can be found for the operator L, then if we multiply the equation (1) for the Green's function by f(s), and then perform an integration in the s variable, we obtain;

The right hand side is now given by the equation (2) to be equal to L u(x), thus:

Because the operator L = L(x) is linear and acts on the variable x alone (not on the variable of integration s), we can take the operator L outside of the integration on the right hand side, obtaining;

And this suggests:

u(x) = ∫ G(x, s) f(s) ds.    (3)

Thus, we can obtain the function u(x) through knowledge of the Green's function in equation (1), and the source term on the right hand side in

equation (2). This process relies upon the linearity of the operator L. In other words, the solution of equation (2), u(x), can be determined by the integration given in equation (3). Although f(x) is known, this integration cannot be performed unless G is also known. The problem now lies in finding the Green's function G that satisfies equation (1). For this reason, the Green's function is also sometimes called the fundamental solution associated to the operator L. Not every operator L admits a Green's function. A Green's function can also be thought of as a right inverse of L. Aside from the difficulties of finding a Green's function for a particular operator, the integral in equation (3), may be quite difficult to evaluate. However the method gives a theoretically exact result. This can be thought of as an expansion of f according to a Dirac delta function basis (projecting f over (x s)) and a superposition of the solution on each projection. Such an integral equation is known as a Fredholm integral equation, the study of which constitutes Fredholm theory.
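As a concrete numerical sketch of equation (3) (this example operator is a standard one chosen for illustration, not the article's own example): for L = −d²/dx² on [0, 1] with u(0) = u(1) = 0, the Green's function is G(x, s) = x(1 − s) for x ≤ s and s(1 − x) for x ≥ s, and integrating G against a source term reproduces the exact solution.

```python
# Solving Lu = f via u(x) = ∫ G(x, s) f(s) ds, for L = -d²/dx² on [0, 1] with u(0) = u(1) = 0.
import numpy as np
from scipy.integrate import trapezoid

def G(x, s):
    """Green's function of -d²/dx² with Dirichlet conditions on [0, 1]."""
    return np.where(x <= s, x * (1.0 - s), s * (1.0 - x))

f = lambda s: np.sin(np.pi * s)          # source term (example choice)
s = np.linspace(0.0, 1.0, 2001)          # quadrature grid for the s variable

def u(x):
    return trapezoid(G(x, s) * f(s), s)  # equation (3), evaluated by the trapezoidal rule

for x in (0.25, 0.5, 0.75):
    exact = np.sin(np.pi * x) / np.pi**2     # -u'' = sin(πx) with u(0) = u(1) = 0
    print(f"x = {x}:  Green's-function value ~ {u(x):.6f},  exact = {exact:.6f}")
```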

[edit] Green's functions for solving inhomogeneous boundary value problems


The primary use of Green's functions in mathematics is to solve non-homogeneous boundary value problems. In modern theoretical physics, Green's functions are also usually used as propagators in Feynman diagrams (and the phrase Green's function is often used for any correlation function).

[edit] Framework

Let L be the Sturm–Liouville operator, a linear differential operator of the form

and let D be the boundary conditions operator

Let f(x) be a continuous function in [0,l]. We shall also suppose that the problem

is regular (i.e., only the trivial solution exists for the homogeneous problem).
[edit] Theorem

There is one and only one solution u(x) which satisfies

and it is given by

where G(x,s) is a Green's function satisfying the following conditions:


1. G(x, s) is continuous in x and s;
2. for x ≠ s, L G(x, s) = 0;
3. for x ≠ s, D G(x, s) = 0;
4. derivative "jump": G′(s + 0, s) − G′(s − 0, s) = 1 / p(s);
5. symmetry: G(x, s) = G(s, x).

[edit] Finding Green's functions


[edit] Eigenvalue expansions

If a differential operator L admits a set of eigenvectors Ψn(x) (i.e., a set of functions Ψn and scalars λn such that LΨn = λnΨn) that is complete, then it is possible to construct a Green's function from these eigenvectors and eigenvalues. Complete means that the set of functions satisfies the following completeness relation:

δ(x − x′) = Σ_{n=0}^{∞} Ψn*(x) Ψn(x′).

Then the following holds:

G(x, x′) = Σ_{n=0}^{∞} Ψn*(x) Ψn(x′) / λn,

where * represents complex conjugation. Applying the operator L to each side of this equation results in the completeness relation, which was assumed true. The general study of the Green's function written in the above form, and its

relationship to the function spaces formed by the eigenvectors, is known as Fredholm theory.
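As a sketch of this expansion (again using the standard operator −d²/dx² on [0, 1] with Dirichlet conditions, an illustrative choice rather than anything from the article): its eigenfunctions are Ψn(x) = √2 sin(nπx) with eigenvalues λn = (nπ)², and summing Ψn(x)Ψn(x′)/λn reproduces the closed-form Green's function.

```python
# Eigenfunction expansion of a Green's function for L = -d²/dx² on [0, 1], Dirichlet conditions.
import numpy as np

def G_series(x, s, N=20_000):
    n = np.arange(1, N + 1)
    return np.sum(2.0 * np.sin(n * np.pi * x) * np.sin(n * np.pi * s) / (n * np.pi) ** 2)

def G_closed(x, s):
    return x * (1.0 - s) if x <= s else s * (1.0 - x)

for x, s in [(0.2, 0.7), (0.5, 0.5), (0.9, 0.3)]:
    print(f"G({x}, {s}):  series ~ {G_series(x, s):.5f},  closed form = {G_closed(x, s):.5f}")
```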

[edit] Green's functions for the Laplacian


Green's functions for linear differential operators involving the Laplacian may be readily put to use using the second of Green's identities. To derive Green's theorem, begin with the divergence theorem (otherwise known as Gauss's theorem):

Let u = φ∇ψ − ψ∇φ and substitute this into the divergence theorem. Compute ∇·u and apply the product rule for the ∇ operator:

∇·u = ∇φ·∇ψ + φ∇²ψ − ∇ψ·∇φ − ψ∇²φ = φ∇²ψ − ψ∇²φ.

Plugging this into the divergence theorem produces Green's theorem:

Suppose that the linear differential operator L is the Laplacian, ∇², and that there is a Green's function G for the Laplacian. The defining property of the Green's function still holds:

L G(x, x′) = ∇² G(x, x′) = δ(x − x′).

Let = G in Green's theorem. Then:

Using this expression, it is possible to solve Laplace's equation ∇²φ(x) = 0 or Poisson's equation ∇²φ(x) = −4πρ(x), subject to either Neumann or Dirichlet boundary conditions. In other words, we can solve for φ(x) everywhere inside a volume where either (1) the value of φ(x) is specified on the bounding surface of the volume (Dirichlet boundary conditions), or (2) the normal derivative of φ(x) is specified on the bounding surface (Neumann boundary conditions).

Suppose the problem is to solve for φ(x) inside the region. Then the integral

∫_V φ(x′) δ(x − x′) d³x′

reduces to simply φ(x) due to the defining property of the Dirac delta function, and we have:

This form expresses the well-known property of harmonic functions that if the value or normal derivative is known on a bounding surface, then the value of the function inside the volume is known everywhere.

In electrostatics, φ(x) is interpreted as the electric potential, ρ(x) as electric charge density, and the normal derivative as the normal component of the electric field.

If the problem is to solve a Dirichlet boundary value problem, the Green's function should be chosen such that G(x, x′) vanishes when either x or x′ is on the bounding surface; thus only one of the two terms in the surface integral remains. If the problem is to solve a Neumann boundary value problem, it would seem most logical to choose the Green's function so that its normal derivative vanishes on the bounding surface (see Jackson, J.D., Classical Electrodynamics, page 39). However, application of Gauss's theorem to the differential equation defining the Green's function shows that the normal derivative of G(x, x′) cannot vanish on the surface, because it must integrate to 1 on the surface (again, see Jackson, page 39, for this and the following argument). The simplest form the normal derivative can take is that of a constant, namely 1/S, where S is the surface area of the surface. The surface term in the solution then involves only the average value of the potential on the surface. This number is not known in general, but it is often unimportant, as the goal is often to obtain the electric field given by the gradient of the potential, rather than the potential itself.

With no boundary conditions, the Green's function for the Laplacian (Green's function for the three-variable Laplace equation) is

G(x, x′) = −1 / ( 4π |x − x′| ).

Supposing that the bounding surface goes out to infinity, and plugging in this expression for the Green's function, this gives the familiar expression for the electric potential in terms of the electric charge density (in the CGS unit system) as

φ(x) = ∫ ρ(x′) / |x − x′| d³x′.

[edit] Example

Given the problem

Lg = g″ + g,  with g(0, s) = 0 and g(π/2, s) = 0,

find the Green's function.

First step: the Green's function for the linear operator at hand is defined as the solution to

g″(x, s) + g(x, s) = δ(x − s).

If x ≠ s, the delta function gives zero, and the general solution is

g(x,s) = c1 cos x + c2 sin x.

Sinc function
From Wikipedia, the free encyclopedia

(Redirected from Sinc)

"Sinc" redirects here. For the designation used in the United Kingdom for areas of wildlife interest, see Site of Importance for Nature Conservation.
In mathematics, the sinc function, denoted by sinc(x), has two nearly equivalent definitions[1]. In digital signal processing and information theory, the normalized sinc function is commonly defined by

sinc(x) = sin(πx) / (πx).

The normalized sinc (blue) and unnormalized sinc function (red) shown on the same scale.

It is qualified as normalized because its integral over all x is 1. All of the zeros of the normalized sinc function are integer values of x. The Fourier transform of the normalized sinc function is the rectangular function with no scaling. This function is fundamental in the concept of reconstructing the original continuous bandlimited signal from uniformly spaced samples of that signal.

In mathematics, the historical unnormalized sinc function is defined by

sinc(x) = sin(x) / x.

The only difference between the two definitions is in the scaling of the independent variable (the x-axis) by a factor of π. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is analytic everywhere. The term "sinc" (pronounced like "sink") is a contraction of the function's full Latin name, the sinus cardinalis (cardinal sine), first introduced by Phillip M. Woodward in 1953.[2][3][4]


[edit] Properties

The local maxima and minima (small white dots) of the unnormalized, red sinc function correspond to its intersections with the blue cosine function.

The zero crossings of the unnormalized sinc are at nonzero multiples of π, while zero crossings of the normalized sinc occur at nonzero integer values. The local maxima and minima of the unnormalized sinc correspond to its intersections with the cosine function. That is, sin(ξ)/ξ = cos(ξ) for all points ξ where the derivative of sin(x)/x is zero and thus a local extremum is reached. A good approximation of the x-coordinate of the n-th extremum with positive x-coordinate is

x_n ≈ (n + 1/2)π − 1/((n + 1/2)π),

where odd n lead to a local minimum and even n to a local maximum. Besides the extrema at x_n, the curve has an absolute maximum at ξ0 = (0, 1) and, because of its symmetry to the y-axis, extrema with x-coordinates −x_n. The normalized sinc function has a simple representation as the infinite product

sinc(x) = ∏_{n=1}^{∞} ( 1 − x²/n² )

and is related to the gamma function Γ(x) by Euler's reflection formula:

sin(πx) / (πx) = 1 / ( Γ(1 + x) Γ(1 − x) ).

Euler discovered that

The continuous Fourier transform of the normalized sinc (to ordinary frequency) is rect(f),

where the rectangular function is 1 for argument between −1/2 and 1/2, and zero otherwise. This corresponds to the fact that the sinc filter is the ideal (brick-wall, meaning rectangular frequency response) low-pass filter. This Fourier integral, including the special case

is an improper integral and not a convergent Lebesgue integral, as

The normalized sinc function has properties that make it ideal in relationship to interpolation of sampled bandlimited functions:

It is an interpolating function, i.e., sinc(0) = 1, and sinc(k) = 0 for nonzero integer k. The functions x_k(t) = sinc(t − k) (k integer) form an orthonormal basis for bandlimited functions in the function space L²(R), with highest angular frequency ωH = π (that is, highest cycle frequency fH = 1/2).
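A short numerical sketch of the interpolation property (the test signal and the truncation of the sum are arbitrary illustrative choices, not part of the article): a signal bandlimited below the half-sample frequency is recovered from its integer samples by the Whittaker–Shannon formula x(t) = Σ_k x[k] sinc(t − k).

```python
# Whittaker-Shannon interpolation with the normalized sinc (truncated to a finite sum).
import numpy as np

def x(t):
    # test signal bandlimited to |f| < 1/2 (frequencies 0.15 and 0.30 cycles per sample)
    return np.cos(2 * np.pi * 0.15 * t) + 0.5 * np.sin(2 * np.pi * 0.30 * t)

k = np.arange(-200, 201)                 # integer sample instants
samples = x(k)

def reconstruct(t):
    return np.sum(samples * np.sinc(t - k))   # np.sinc is the normalized sinc sin(πx)/(πx)

for t in (0.3, 2.71, 7.5):
    print(f"t = {t}:  reconstructed ~ {reconstruct(t):.4f},  exact = {x(t):.4f}")
# The small residual error comes from truncating the (infinite) interpolation sum.
```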

Other properties of the two sinc functions include:

The unnormalized sinc is the zeroth order spherical Bessel function of the first kind, j₀(x). The normalized sinc is sinc(x) = j₀(πx).

∫₀^x (sin ξ / ξ) dξ = Si(x),

where Si(x) is the sine integral.

sinc(λx) (not normalized) is one of two linearly independent solutions to the linear ordinary differential equation

x (d²y/dx²) + 2 (dy/dx) + λ² x y = 0.

The other is cos(λx)/x, which is not bounded at x = 0, unlike its sinc function counterpart.

∫_{−∞}^{∞} sinc(x) dx = ∫_{−∞}^{∞} sinc²(x) dx = 1, where the normalized sinc is meant.

[edit] Relationship to the Dirac delta distribution


The normalized sinc function can be used as a nascent delta function, meaning that the following weak limit holds:

lim_{a→0} (1/a) sinc(x/a) = δ(x).

This is not an ordinary limit, since the left side does not converge. Rather, it means that

lim_{a→0} ∫_{−∞}^{∞} (1/a) sinc(x/a) φ(x) dx = φ(0)

for any smooth function φ with compact support. In the above expression, as a approaches zero, the number of oscillations per unit length of the sinc function approaches infinity. Nevertheless, the expression always oscillates inside an envelope of ±1/(π a x), and approaches zero for any nonzero value of x. This complicates the informal picture of δ(x) as being zero for all x except at the point x = 0, and illustrates the problem of thinking of the delta function as a function rather than as a distribution. A similar situation is found in the Gibbs phenomenon.

Trigonometric integral
From Wikipedia, the free encyclopedia
(Redirected from Sine integral)

Si(x) (blue) and Ci(x) (green) plotted on the same plot.

In mathematics, the trigonometric integrals are a family of integrals which involve trigonometric functions. A number of the basic trigonometric integrals are discussed at the list of integrals of trigonometric functions.


[edit] Sine integral

Plot of Si(x) for 0 ≤ x ≤ 8.

The different sine integral definitions are

Si(x) = ∫₀^x (sin t / t) dt,
si(x) = −∫_x^∞ (sin t / t) dt = Si(x) − π/2.

Si(x) is the primitive of sin x / x which is zero for x = 0; si(x) is the primitive of sin x / x which is zero for x = ∞. Note that sin x / x is the sinc function, and also the zeroth spherical Bessel function. When x = ∞, the resulting integral Si(∞) = ∫₀^∞ (sin t / t) dt = π/2 is known as the Dirichlet integral.

In signal processing, the oscillations of the Sine integral cause overshoot and ringing artifacts when using the sinc filter, and frequency domain ringing if using a truncated sinc filter as a low-pass filter. The Gibbs phenomenon is a related phenomenon: thinking of sinc as a low-pass filter, it corresponds to truncating the Fourier series, which causes the Gibbs phenomenon.
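Si(x) is available in common numerical libraries; the sketch below cross-checks scipy.special.sici against direct quadrature (an illustration, not part of the article).

```python
# The sine integral Si(x) = ∫₀ˣ sin(t)/t dt via scipy.special.sici, checked by quadrature.
import numpy as np
from scipy.special import sici
from scipy.integrate import quad

for x in (1.0, np.pi, 10.0):
    si_val, _ci_val = sici(x)
    si_quad, _err = quad(lambda t: np.sinc(t / np.pi), 0.0, x)   # sin(t)/t = normalized sinc(t/π)
    print(f"Si({x:.4f}) = {si_val:.8f}   (quadrature: {si_quad:.8f})")
# Si(π) ~ 1.8519370 is the Wilbraham-Gibbs constant that reappears in the Gibbs phenomenon.
```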

[edit] Cosine integral

Plot of Ci(x) for 0 < x ≤ 8.

The different cosine integral definitions are

Ci(x) = γ + ln x + ∫₀^x (cos t − 1)/t dt,
ci(x) = −∫_x^∞ (cos t / t) dt.

ci(x) is the primitive of cos x / x which is zero for x = ∞. We have ci(x) = Ci(x) for x > 0.

[edit] Hyperbolic sine integral


The hyperbolic sine integral:

Shi(x) = ∫₀^x (sinh t / t) dt.

[edit] Hyperbolic cosine integral


The hyperbolic cosine integral:

Chi(x) = γ + ln x + ∫₀^x (cosh t − 1)/t dt,

where γ is the Euler–Mascheroni constant.

[edit] Nielsen's spiral

Nielsen's spiral.

The spiral formed by parametric plot of si,ci is known as Nielsen's spiral. It is also referred to as the Euler spiral, the Cornu spiral, a clothoid, or as a linear-curvature polynomial spiral. The spiral is also closely related to the Fresnel integrals.

This spiral has applications in vision processing, road and track construction and other areas.

[edit] Expansion
Various expansions can be used for evaluation of Trigonometric integrals, depending on the range of the argument.
[edit] Asymptotic series (for large argument)

These series are divergent[clarification needed], although they can be used for estimates and even precise evaluation at large Re(x).
[edit] Convergent series

These series are convergent at any complex x, although for |x| ≫ 1 the series will converge slowly initially, requiring many terms for high precision.

[edit] Relation with the exponential integral of imaginary argument

The function

E1(z) = ∫_1^∞ (e^{−zt} / t) dt    (Re(z) ≥ 0)

is called the exponential integral. It is closely related with Si and Ci:

As each involved function is analytic except the cut at negative values of the argument, the area of validity of the relation should be extended to Re(x) > 0. (Out of this range, additional terms which are integer factors of appear in the expression). Cases of imaginary argument of the generalized integroexponential function are

which is the real part of

Similarly

Borwein integral
From Wikipedia, the free encyclopedia


In mathematics, a Borwein integral is an integral studied by Borwein & Borwein (2001) involving products of sinc(ax), where the sinc function is given by sinc(x) = sin(x)/x. These integrals are notorious for exhibiting apparent patterns that eventually break down. An example they give is

∫₀^∞ sinc(x) dx = π/2,
∫₀^∞ sinc(x) sinc(x/3) dx = π/2.

This pattern continues up to

∫₀^∞ sinc(x) sinc(x/3) ⋯ sinc(x/13) dx = π/2.

However, at the next step the obvious pattern fails:

∫₀^∞ sinc(x) sinc(x/3) ⋯ sinc(x/15) dx = π/2 − δ,  where δ ≈ 2.31 × 10⁻¹¹.

In general similar integrals have value π/2 whenever the numbers 3, 5, ... are replaced by positive real numbers such that the sum of their reciprocals is less than 1. In the example above, 1/3 + 1/5 + ... + 1/13 < 1, but 1/3 + 1/5 + ... + 1/15 > 1.
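A rough numerical check of the first of these facts is easy (a sketch, not from the article; at double precision the roughly 2 × 10⁻¹¹ shortfall that appears once sinc(x/15) is included is too small for this quick quadrature to resolve):

```python
# Numerical check of Borwein integrals ∫₀^∞ sinc(x) sinc(x/3) ... dx against π/2.
import numpy as np
from scipy.integrate import quad

def borwein(denominators, upper=2000.0):
    def integrand(x):
        # unnormalized sinc(x/d) = sin(x/d)/(x/d) = np.sinc(x/(π d))
        return np.prod([np.sinc(x / (np.pi * d)) for d in denominators])
    value, _ = quad(integrand, 0.0, upper, limit=2000)
    return value

print("sinc(x) sinc(x/3) sinc(x/5) :", borwein([1, 3, 5]))
print("product up to sinc(x/13)    :", borwein([1, 3, 5, 7, 9, 11, 13]))
print("pi/2                        :", np.pi / 2)
```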

Dirichlet integral
From Wikipedia, the free encyclopedia


In mathematics, there are several integrals known as the Dirichlet integral, after the German mathematician Peter Gustav Lejeune Dirichlet. One of those is

∫₀^∞ (sin x / x) dx = π/2.

This can be derived from attempts to evaluate a double improper integral two different ways. It can also be derived using differentiation under the integral sign.
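Both routes can be sanity-checked numerically. The sketch below verifies the value π/2 through the parameterised integral I(a) = ∫₀^∞ e^{−ax} sin(x)/x dx = π/2 − arctan(a) that underlies the differentiation-under-the-integral-sign argument (the parameterisation stated here is the standard one; it is restated explicitly because the formulas were lost in this copy).

```python
# Numerical check of ∫₀^∞ e^(-a·x) · sin(x)/x dx = π/2 - arctan(a); a = 0 is the Dirichlet integral.
import numpy as np
from scipy.integrate import quad

def I(a):
    value, _ = quad(lambda x: np.exp(-a * x) * np.sinc(x / np.pi), 0.0, np.inf, limit=400)
    return value

for a in (2.0, 1.0, 0.5):
    print(f"a = {a}:  I(a) ~ {I(a):.8f},  pi/2 - arctan(a) = {np.pi / 2 - np.arctan(a):.8f}")
```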


Evaluation
Double Improper Integral Method

Pre-knowledge of properties of Laplace transforms allows us to evaluate this Dirichlet integral succinctly in the following manner:

This is equivalent to attempting to evaluate the same double definite integral in two different ways, by reversal of the order of integration, viz.,

Differentiation under the integral sign

First rewrite the integral as a function of an additional variable a. Let

f(a) = ∫₀^∞ e^{−a x} (sin x / x) dx;

then we need to find f(0). Differentiate with respect to a and apply the Leibniz integral rule to obtain:

df/da = −∫₀^∞ e^{−a x} sin x dx.

This integral was evaluated without proof, above, based on Laplace transform tables; we derive it this time. It is made much simpler by recalling Euler's formula,

then,
where ℑ represents the imaginary part.

Integrating with respect to

where

is a constant to be determined. As,

for some integers m & n. It is easy to show that has to be zero, by analyzing easily observed bounds for this integral:

End of proof. Extending this result further, with the introduction of another variable, first note that sin(x)/x is an even function and therefore

then:

Complex integration

The same result can be obtained via complex integration. Let's consider

As a function of the complex variable z, it has a simple pole at the origin, which prevents the application of Jordan's lemma, whose other hypotheses are satisfied. We shall then define a new function[1] g(z) as follows

The pole has been moved away from the real axis, so g(z) can be integrated along the semicircle of radius R centered at z=0 and closed on the real axis, then the limit should be taken. The complex integral is zero by the residue theorem, as there are no poles inside the integration path

The second term vanishes as R goes to infinity; for arbitrarily small ε, the Sokhotski–Weierstrass theorem applied to the first one yields

where P.V. indicates the Cauchy principal value. By taking the imaginary part on both sides and noting that sinc(x) is even and, by definition, sinc(0) = 1, we get the desired result

Exponential integral
From Wikipedia, the free encyclopedia

Plot of E1 function (top) and Ei function (bottom).

In mathematics, the exponential integral is a special function defined on the complex plane given the symbol Ei.


[edit] Definitions
For real, nonzero values of x, the exponential integral Ei(x) can be defined as

Ei(x) = −∫_{−x}^∞ (e^{−t} / t) dt = ∫_{−∞}^x (e^t / t) dt.

The function is given as a special function because ∫ (e^t / t) dt is not an elementary function, a fact which can be proven using the Risch algorithm. The definition above can be used for positive values of x, but the integral has to be understood in terms of the Cauchy principal value, due to the singularity in the integrand at zero. For complex values of the argument, the definition becomes ambiguous due to branch points at 0 and ∞.[1] In general, a branch cut is taken on the negative real axis and Ei can be defined by analytic continuation elsewhere on the complex plane.

The following notation is used,[2]

E1(z) = ∫_1^∞ (e^{−tz} / t) dt,   |Arg(z)| < π.

For positive values of the real part of z, this can be written[3]

E1(z) = ∫_z^∞ (e^{−t} / t) dt.

The behaviour of E1 near the branch cut can be seen by the following relation[4]:

[edit] Properties
Several properties of the exponential integral below, in certain cases, allow one to avoid its explicit evaluation through the definition above.
[edit] Convergent series

Integrating the Taylor series for e^{−t}/t, and extracting the logarithmic singularity, we can derive the following series representation for E1(x) for real x[5]:

E1(x) = −γ − ln x + Σ_{k=1}^∞ (−1)^{k+1} x^k / (k · k!)    (x > 0).

For complex arguments off the negative real axis, this generalises to[6]

E1(z) = −γ − ln z + Σ_{k=1}^∞ (−1)^{k+1} z^k / (k · k!)    (|Arg(z)| < π),

where γ is the Euler–Mascheroni constant. The sum converges for all complex z, and we take the usual value of the complex logarithm having a branch cut along the negative real axis.
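The series is straightforward to use directly; the sketch below truncates it and compares against the library routine scipy.special.exp1 (an illustration, not part of the article).

```python
# Truncated convergent series for E1(x), compared with scipy.special.exp1.
import math
from scipy.special import exp1

EULER_GAMMA = 0.5772156649015329     # Euler-Mascheroni constant

def E1_series(x, terms=60):
    # E1(x) = -γ - ln x - Σ_{k>=1} (-x)^k / (k · k!)
    s = sum((-x) ** k / (k * math.factorial(k)) for k in range(1, terms + 1))
    return -EULER_GAMMA - math.log(x) - s

for x in (0.5, 1.0, 5.0):
    print(f"x = {x}:  series ~ {E1_series(x):.10f},  scipy exp1 = {exp1(x):.10f}")
```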

[edit] Asymptotic (divergent) series

Relative error of the asymptotic approximation for different numbers of terms in the truncated sum

Unfortunately, the convergence of the series above is slow for arguments of larger modulus. For example, for x = 10 more than 40 terms are required to get an answer correct to three significant figures.[7] However, there is a divergent series approximation that can be obtained by integrating z e^z E1(z) by parts[8]:

E1(z) ~ (e^{−z}/z) Σ_{n=0}^{N−1} n! / (−z)^n,

which has error of order O(N! z^{−N}) and is valid for large values of Re(z). The relative error of the approximation above is plotted on the figure to the right for various values of N (N = 1 in red, N = 5 in pink). When x > 40, the approximation above with N = 40 is correct to within 64-bit double precision.

[edit] Exponential and logarithmic behavior: bracketing

Bracketing of E1 by elementary functions

From the two series suggested in previous subsections, it follows that E1 behaves like a negative exponential for large values of the argument and like a logarithm for small values. For positive real values of the argument, E1 can be bracketed by elementary functions as follows[9]:

(1/2) e^{−x} ln( 1 + 2/x )  <  E1(x)  <  e^{−x} ln( 1 + 1/x )    (x > 0).

The left-hand side of this inequality is shown in the graph to the left in blue; the central part E1(x) is shown in black and the right-hand side is shown in red.
[edit] Definition by Ein

Both Ei and E1 can be written more simply using the entire function Ein[10] defined as

Ein(z) = ∫₀^z (1 − e^{−t})/t dt = Σ_{k=1}^∞ (−1)^{k+1} z^k / (k · k!)

(note that this is just the alternating series in the above definition of E1). Then we have

E1(z) = −γ − ln z + Ein(z),    |Arg(z)| < π,
Ei(x) = γ + ln x − Ein(−x),    x > 0.

[edit] Relation with other functions

The exponential integral is closely related to the logarithmic integral function li(x) by the formula

li(x) = Ei(ln x)

for positive real values of x.

The exponential integral may also be generalized to

E_n(x) = ∫_1^∞ (e^{−xt} / tⁿ) dt,

which can be written as a special case of the incomplete gamma function[11]:

E_n(x) = x^{n−1} Γ(1 − n, x).

The generalized form is sometimes called the Misra function[12] m(x), defined as

Including a logarithm defines the generalized integroexponential function[13]

[edit] Derivatives

The derivatives of the generalised functions En can be calculated by means of the formula [14]

E_n′(z) = −E_{n−1}(z)    (n = 1, 2, 3, …).

Note that the function E0 is easy to evaluate (making this recursion useful), since it is just e^{−z} / z.[15]
[edit] Exponential integral of imaginary argument

E1(ix) against x; real part black, imaginary part red.

If z is imaginary, it has a nonnegative real part, so we can use the formula

E1(z) = ∫_1^∞ (e^{−tz} / t) dt

to get a relation with the trigonometric integrals Si and Ci:

E1(ix) = i [ −π/2 + Si(x) ] − Ci(x)    (x > 0).

The real and imaginary parts of E1(x) are plotted in the figure to the right with black and red curves.

[edit] Applications

Time-dependent heat transfer
Nonequilibrium groundwater flow in the Theis solution (called a well function)
Radiative transfer in stellar atmospheres
Radial diffusivity equation for transient or unsteady state flow with line sources and sinks

Logarithmic integral function


From Wikipedia, the free encyclopedia

(Redirected from Logarithmic integral) In mathematics, the logarithmic integral function or integral logarithm li(x) is a special function. It occurs in problems of physics and has number theoretic significance, occurring in the prime number theorem as an estimate of the number of prime numbers less than a given value.

Logarithmic integral function plot


[edit] Integral representation


The logarithmic integral has an integral representation defined for all positive real numbers x ≠ 1 by the definite integral:

li(x) = ∫₀^x dt / ln t.

Here, ln denotes the natural logarithm. The function 1 / ln(t) has a singularity at t = 1, and the integral for x > 1 has to be interpreted as a Cauchy principal value:

li(x) = lim_{ε→0⁺} ( ∫₀^{1−ε} dt/ln t + ∫_{1+ε}^x dt/ln t ).

[edit] Offset logarithmic integral


The offset logarithmic integral or Eulerian logarithmic integral is defined as

Li(x) = li(x) − li(2)

or

Li(x) = ∫_2^x dt / ln t.

As such, the integral representation has the advantage of avoiding the singularity in the domain of integration. This function is a very good approximation to the number of prime numbers less than x.

[edit] Series representation


The function li(x) is related to the exponential integral Ei(x) via the equation

li(x) = Ei(ln x),

which is valid for x > 1. This identity provides a series representation of li(x) as

li(e^u) = Ei(u) = γ + ln|u| + Σ_{n=1}^∞ uⁿ / (n · n!)    (u ≠ 0),

where γ ≈ 0.57721 56649 01532 ... is the Euler–Mascheroni constant. A more rapidly convergent series due to Ramanujan [1] is

[edit] Special values


The function li(x) has a single positive zero; it occurs at x ≈ 1.45136 92348 ...; this number is known as the Ramanujan–Soldner constant. li(2) ≈ 1.045163 780117 492784 844588 889194 613136 522615 578151; this value can also be expressed in terms of the incomplete gamma function Γ(a, x), understood as a Cauchy principal value.

[edit] Asymptotic expansion


The asymptotic behavior for x → ∞ is

li(x) = O( x / ln x ),

where O is the big O notation. The full asymptotic expansion is

li(x) ~ (x / ln x) Σ_{k=0}^∞ k! / (ln x)^k

or

li(x) / (x / ln x) ~ 1 + 1/ln x + 2/(ln x)² + 6/(ln x)³ + ⋯

Note that, as an asymptotic expansion, this series is not convergent: it is a reasonable approximation only if the series is truncated at a finite number of terms, and only large

values of x are employed. This expansion follows directly from the asymptotic expansion for the exponential integral.

[edit] Infinite logarithmic integral


[clarification needed]

and discussed in Paul Koosis, The Logarithmic Integral, volumes I and II, Cambridge University Press, second edition, 1998.

[edit] Number theoretic significance


The logarithmic integral is important in number theory, appearing in estimates of the number of prime numbers less than a given value. For example, the prime number theorem states that:

π(x) ~ li(x),

where π(x) denotes the number of primes smaller than or equal to x.
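The quality of this approximation is easy to see numerically; the sketch below counts primes with a simple sieve and evaluates li with mpmath (illustrative only, not part of the article).

```python
# Comparing the prime-counting function π(x) with the logarithmic integral li(x).
from mpmath import li

def prime_count(n):
    """π(n) by a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

for x in (10**3, 10**4, 10**5, 10**6):
    print(f"x = {x:>8}:  pi(x) = {prime_count(x):>7},  li(x) ~ {float(li(x)):>10.1f}")
```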

Gibbs phenomenon
From Wikipedia, the free encyclopedia

In mathematics, the Gibbs phenomenon, named after the American physicist J. Willard Gibbs, is the peculiar manner in which the Fourier series of a piecewise continuously differentiable periodic function behaves at a jump discontinuity: the nth partial sum of the Fourier series has large oscillations near the jump, which might increase the maximum of the partial sum above that of the function

itself. The overshoot does not die out as the frequency increases, but approaches a finite limit.[1] These are one cause of ringing artifacts in signal processing.


[edit] Description

Functional approximation of square wave using 5 harmonics

Functional approximation of square wave using 25 harmonics

Functional approximation of square wave using 125 harmonics

The Gibbs phenomenon involves both the fact that Fourier sums overshoot at a jump discontinuity, and that this overshoot does not die out as the frequency increases. The three pictures on the right demonstrate the phenomenon for a square wave whose Fourier expansion is

f(x) = sin(x) + (1/3) sin(3x) + (1/5) sin(5x) + ⋯

More precisely, this is the function f which equals π/4 between 2nπ and (2n + 1)π and −π/4 between (2n + 1)π and (2n + 2)π for every integer n; thus this square wave has a jump discontinuity of height π/2 at every integer multiple of π.

As can be seen, as the number of terms rises, the error of the approximation is reduced in width and energy, but converges to a fixed height. A calculation for the square wave (see Zygmund, chap. 8.5., or the computations at the end of this article) gives an explicit formula for the limit of the height of the error. It turns out that the Fourier series exceeds the height π/4 of the square wave by

(1/2) ∫₀^π (sin t / t) dt − π/4 = (π/2) · (0.089489…),

or about 9 percent. More generally, at any jump point of a piecewise continuously differentiable function with a jump of a, the nth partial Fourier series will (for n very large) overshoot this jump by approximately a · (0.089489…) at one end and undershoot it by the same amount at the other end; thus the "jump" in the partial Fourier series will be about 18% larger than the jump in the original function. At the location of the discontinuity itself, the partial Fourier series will converge to the midpoint of the jump (regardless of what the actual value of the original function is at this point). The quantity

∫₀^π (sin t / t) dt = (π/2) + π · (0.089489…) = 1.851937…

is sometimes known as the Wilbraham–Gibbs constant.
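The overshoot is easy to measure numerically. The sketch below (illustrative, not part of the article) sums the square-wave series up to harmonic N and locates the peak of the partial sum just to the right of the jump at 0; the peak approaches Si(π)/2 ≈ 0.926 rather than π/4 ≈ 0.785, i.e. an overshoot of about 9% of the jump π/2.

```python
# Measuring the Gibbs overshoot for the square-wave Fourier series sum of sin(kx)/k over odd k.
import numpy as np
from scipy.special import sici

def partial_sum(x, N):
    k = np.arange(1, N + 1, 2)                 # odd harmonics 1, 3, 5, ...
    return np.sum(np.sin(np.outer(x, k)) / k, axis=1)

x = np.linspace(1e-4, 0.5, 20_000)             # just to the right of the jump at x = 0
for N in (25, 125, 625):
    print(f"N = {N:4d}:  max of partial sum ~ {partial_sum(x, N).max():.6f}")

print("pi/4                     =", np.pi / 4)
print("Si(pi)/2 (limiting peak) =", sici(np.pi)[0] / 2)
print("overshoot / jump (pi/2)  =", (sici(np.pi)[0] / 2 - np.pi / 4) / (np.pi / 2))
```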


[edit] History

The Gibbs phenomenon was first noticed and analyzed by the obscure Henry Wilbraham.[2] He published a paper on it in 1848 that went unnoticed by the mathematical world. It was not until Albert Michelson observed the phenomenon via a mechanical graphing machine that interest arose. Michelson developed a device in 1898 that could compute and re-synthesize the Fourier series. When the Fourier coefficients for a square wave were input to the machine, the graph would oscillate at the discontinuities. This would continue to occur even as the number of Fourier coefficients increased. Because it was a physical device subject to manufacturing flaws, Michelson was convinced that the overshoot was caused by errors in the machine. In 1898 J. Willard Gibbs published a paper on Fourier series in which he discussed the example of what today would be called a sawtooth wave, and described the graph obtained as a limit of the graphs of the partial sums of the Fourier series. Interestingly, in this paper he failed to notice the phenomenon that bears his name, and the limit he described was incorrect. In 1899 he published a correction to his paper in which he describes the phenomenon and points out the important distinction between the limit of the graphs and the graph of the function that is the limit of the partial sums of the Fourier series.[citation needed] Maxime Bôcher gave a detailed mathematical analysis of the phenomenon in 1906 and named it the Gibbs phenomenon.
[edit] Explanation

Informally, it reflects the difficulty inherent in approximating a discontinuous function by a finite series of continuous sine and cosine waves. It is important to put emphasis on the word finite because even though every partial sum of the Fourier series overshoots the function it is approximating, the limit of the partial

sums does not. The value of x where the maximum overshoot is achieved moves closer and closer to the discontinuity as the number of terms summed increases so, again informally, once the overshoot has passed by a particular x, convergence at the value of x is possible. There is no contradiction in the overshoot converging to a non-zero amount, but the limit of the partial sums having no overshoot, because where that overshoot happens moves. We have pointwise convergence, but not uniform convergence. For a piecewise C1 function the Fourier series converges to the function at every point except at the jump discontinuities. At the jump discontinuities themselves the limit will converge to the average of the values of the function on either side of the jump. This is a consequence of the Dirichlet theorem.[3] The Gibbs phenomenon is also closely related to the principle that the decay of the Fourier coefficients of a function at infinity is controlled by the smoothness of that function; very smooth functions will have very rapidly decaying Fourier coefficients (resulting in the rapid convergence of the Fourier series), whereas discontinuous functions will have very slowly decaying Fourier coefficients (causing the Fourier series to converge very slowly). Note for instance that the Fourier coefficients 1, 1/3, 1/5, ... of the discontinuous square wave described above decay only as fast as the harmonic series, which is not absolutely convergent; indeed, the above Fourier series turns out to be only conditionally convergent for almost every value of x. This provides a partial explanation of the Gibbs phenomenon, since Fourier series with absolutely convergent Fourier coefficients would be uniformly convergent by the Weierstrass M-test and would thus be unable to exhibit the above oscillatory behavior. By the same token, it is impossible for a discontinuous function to have absolutely convergent Fourier coefficients, since the function would thus be the uniform limit of continuous functions and therefore be continuous, a contradiction. See more about absolute convergence of Fourier series.
[edit] Solutions

In practice, the difficulties associated with the Gibbs phenomenon can be ameliorated by using a smoother method of Fourier series summation, such as Fejér summation or Riesz summation, or by using sigma-approximation. Using a wavelet transform with Haar basis functions, the Gibbs phenomenon does not occur.

[edit] Formal mathematical description of the phenomenon


Let f be a piecewise continuously differentiable function which is periodic with some period L > 0. Suppose that at some point x0, the left limit f(x0−) and right limit f(x0+) of the function f differ by a non-zero gap a:

f(x0+) − f(x0−) = a ≠ 0.

For each positive integer N ≥ 1, let S_N f be the Nth partial Fourier series

(S_N f)(x) = Σ_{−N ≤ n ≤ N} f̂(n) e^{2πinx/L},

where the Fourier coefficients f̂(n) are given by the usual formula

f̂(n) = (1/L) ∫₀^L f(x) e^{−2πinx/L} dx.

Then we have

and

but

More generally, if xN is any sequence of real numbers which converges to x0 as N → ∞, and if the gap a is positive then

and

If instead the gap a is negative, one needs to interchange limit superior with limit inferior, and also interchange the ≤ and ≥ signs, in the above two inequalities.

[edit] Signal processing explanation


For more details on this topic, see Ringing artifacts.

The sinc function, the impulse response of an ideal low-pass filter. Scaling narrows the function, and correspondingly increases magnitude (which is not shown here), but does not reduce the magnitude of the undershoot, which is the integral of the tail.

From the point of view of signal processing, the Gibbs phenomenon is the step response of a low-pass filter, and the oscillations are called ringing or ringing artifacts. Truncating the Fourier transform of a signal on the real line, or the Fourier series of a periodic signal (equivalently, a signal on the circle) corresponds to filtering out the higher frequencies by an ideal (brick-wall) lowpass/high-cut filter. This can be represented as convolution of the original signal with the impulse response of the filter (also known as the kernel), which is the sinc function. Thus the Gibbs phenomenon can be seen as the result of convolving a Heaviside step function (if periodicity is not required) or a square wave (if periodic) with a sinc function: the oscillations in the sinc function cause the ripples in the output.

The sine integral, exhibiting the Gibbs phenomenon for a step function on the real line.

In the case of convolving with a Heaviside step function, the resulting function is exactly the integral of the sinc function, the sine integral; for a square wave the description is not as simply stated. For the step function, the magnitude of the undershoot is thus exactly the integral of the (left) tail, integrating to the first negative zero: for the normalized sinc of unit sampling period, this is ∫_{−∞}^{−1} sinc(t) dt ≈ −0.0895, about 9% of the step. The overshoot is accordingly of the same magnitude: the integral of the right tail, or, which amounts to the same thing, the difference between the integral from negative infinity to the first positive zero, minus 1 (the non-overshooting value). The overshoot and undershoot can be understood thus: kernels are generally normalized to have integral 1, so they send constant functions to constant functions; otherwise they have gain. The value of a convolution at a point is a linear combination of the input signal, with coefficients (weights) the values of the kernel. If a kernel is non-negative, such as for a Gaussian kernel, then the value of the filtered signal will be a convex combination of the input values (the coefficients (the kernel) integrate to 1, and are non-negative), and will thus fall between the minimum and maximum of the input signal; it will

not undershoot or overshoot. If, on the other hand, the kernel assumes negative values, such as the sinc function, then the value of the filtered signal will instead be an affine combination of the input values, and may fall outside of the minimum and maximum of the input signal, resulting in undershoot and overshoot, as in the Gibbs phenomenon. Taking a longer expansion cutting at a higher frequency corresponds in the frequency domain to widening the brick-wall, which in the time domain corresponds to narrowing the sinc function and increasing its height by the same factor, leaving the integrals between corresponding points unchanged. This is a general feature of the Fourier transform: widening in one domain corresponds to narrowing and increasing height in the other. This results in the oscillations in sinc being narrower and taller and, in the filtered function (after convolution), yields oscillations that are narrower and thus have less area, but does not reduce the magnitude: cutting off at any finite frequency results in a sinc function, however narrow, with the same tail integrals. This explains the persistence of the overshoot and undershoot.

Oscillations can be interpreted as convolution with a sinc.

Higher cutoff makes the sinc narrower but taller, with the same magnitude tail integrals, yielding higher frequency oscillations, but whose magnitude does not vanish. Thus the features of the Gibbs phenomenon are interpreted as follows:
the undershoot is due to the impulse response having a negative tail integral, which is possible because the function takes negative values; the overshoot offsets this, by symmetry (the overall integral does not change under filtering); the persistence of the oscillations is because increasing the cutoff narrows the impulse response, but does not reduce its integral; the oscillations thus move towards the discontinuity, but do not decrease in magnitude.

[edit] The square wave example

Animation of the additive synthesis of a square wave with an increasing number of harmonics. The Gibbs phenomenon is visible especially when the number of harmonics is large.

We now illustrate the above Gibbs phenomenon in the case of the square wave described earlier. In this case the period L is 2π, the discontinuity x0 is at zero, and the jump a is equal to π/2. For simplicity let us just deal with the case where N is even (the case of odd N is very similar). Then we have

Substituting x = 0, we obtain

as claimed above. Next, we compute

If we introduce the normalized sinc function, sinc(x) = sin(πx)/(πx), we can rewrite this as

But the expression in square brackets is a numerical integration approximation to the integral (more precisely, it is a midpoint rule approximation with spacing 2/N). Since the sinc function is continuous, this approximation converges to the actual integral as N → ∞. Thus we have

which is what was claimed in the previous section. A similar computation gives the corresponding limiting value at the minimum on the other side of the discontinuity.
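As a quick numerical check (a Python sketch using a unit-amplitude square wave rather than the normalization above), summing the first N odd harmonics shows that the height of the first peak past the jump approaches (2/π)·Si(π) ≈ 1.179, an excess of about 9% of the full jump of 2, independently of N:

import numpy as np

def square_partial_sum(x, N):
    # Partial Fourier sum of a unit-amplitude square wave: first N odd harmonics.
    k = np.arange(1, N + 1, 2)
    return (4 / np.pi) * np.sum(np.sin(np.outer(x, k)) / k, axis=1)

x = np.linspace(1e-4, np.pi / 2, 5000)         # just to the right of the jump at 0
for N in (9, 99, 999):
    print(N, square_partial_sum(x, N).max())   # tends to about 1.179 as N grows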

[edit] Consequences


In signal processing, the Gibbs phenomenon is undesirable because it causes artifacts, namely clipping from the overshoot and undershoot, and ringing artifacts from the oscillations. In the case of low-pass filtering, these can be reduced or eliminated by using different low-pass filters. In MRI, the Gibbs phenomenon causes artifacts in the presence of adjacent regions of markedly differing signal intensity. This is most commonly encountered in spinal MR imaging, where the Gibbs phenomenon may simulate the appearance of syringomyelia.

Runge's phenomenon

The red curve is the Runge function. The blue curve is a 5th-order interpolating polynomial (using six equally spaced interpolating points). The green curve is a 9th-order interpolating polynomial (using ten equally spaced interpolating points). At the interpolating points, the error between the function and the interpolating polynomial is (by definition) zero. Between the interpolating points (especially in the region close to the endpoints −1 and 1), the error between the function and the interpolating polynomial gets worse for higher-order polynomials.

In the mathematical field of numerical analysis, Runge's phenomenon is a problem of oscillation at the edges of an interval that occurs when using polynomial interpolation with polynomials of high degree. It was discovered by Carl David Tolmé Runge when exploring the behavior of errors when using polynomial interpolation to approximate certain functions.[1] The discovery was important because it shows that going to higher degrees does not always improve accuracy. The phenomenon is similar to the Gibbs phenomenon in Fourier series approximations.

Contents
1 Problem
1.1 Reason
2 Mitigations to the problem
2.1 Change of interpolation points
2.2 Use of piecewise polynomials
2.3 Constrained minimization
2.4 Least squares fitting
3 See also
4 References

[edit] Problem
Consider the function:

f(x) = 1 / (1 + 25x²)

Runge found that if this function is interpolated at equidistant points xi between −1 and 1, such that

xi = −1 + 2i/n,  i = 0, 1, …, n,

with a polynomial Pn(x) of degree n, the resulting interpolation oscillates toward the ends of the interval, i.e. close to −1 and 1. It can even be proven that the interpolation error tends toward infinity when the degree of the polynomial increases:

lim (n → ∞) max (−1 ≤ x ≤ 1) |f(x) − Pn(x)| = ∞

However, the Weierstrass approximation theorem states that there is some sequence of approximating polynomials for which the error goes to zero. This shows that high-degree polynomial interpolation at equidistant points can be troublesome.
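A minimal NumPy sketch of this behaviour (the degrees and grid below are arbitrary illustrative choices): it fits a degree-n polynomial through n + 1 equidistant samples of the Runge function and measures the maximum error on a fine grid.

import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)      # the Runge function
xx = np.linspace(-1, 1, 2001)                # fine grid for measuring the error

for n in (5, 9, 13, 17):
    nodes = np.linspace(-1, 1, n + 1)        # equidistant interpolation points
    coeffs = np.polyfit(nodes, f(nodes), n)  # degree-n interpolating polynomial
    err = np.max(np.abs(np.polyval(coeffs, xx) - f(xx)))
    print(n, err)                            # the maximum error grows with the degree

(np.polyfit may warn about poor conditioning at the larger degrees; that is part of the point.)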
[edit] Reason

The error between the generating function and the interpolating polynomial of order n is given by

f(x) − Pn(x) = [f^(n+1)(ξ) / (n + 1)!] · (x − x0)(x − x1)···(x − xn)

for some ξ in (−1, 1).

For the case of the Runge function shown above (a scaled Cauchy-Lorentz function),

f(x) = 1 / (1 + 25x²),

the first two derivatives are

f′(x) = −50x / (1 + 25x²)²  and  f″(x) = (3750x² − 50) / (1 + 25x²)³.

The magnitudes of the higher-order derivatives of the Runge function grow even larger. Therefore, the bound for the error (between the interpolating points) becomes larger when higher-order interpolating polynomials are used.

[edit] Mitigations to the problem


[edit] Change of interpolation points

The oscillation can be minimized by using nodes that are distributed more densely towards the edges of the interval, specifically, with asymptotic density (on the interval [−1, 1]) given by the formula 1 / (π√(1 − x²)).[2] A standard example of such a set of nodes is the Chebyshev nodes, for which the maximum error is guaranteed to diminish with increasing polynomial order. The phenomenon demonstrates that high-degree polynomials are generally unsuitable for interpolation with equidistant nodes.
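Continuing the sketch above (again with illustrative degrees only), replacing the equidistant nodes with Chebyshev nodes makes the maximum error shrink as the degree grows:

import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xx = np.linspace(-1, 1, 2001)

for n in (5, 9, 13, 17):
    k = np.arange(n + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))  # Chebyshev nodes on [-1, 1]
    coeffs = np.polyfit(nodes, f(nodes), n)
    err = np.max(np.abs(np.polyval(coeffs, xx) - f(xx)))
    print(n, err)                                        # the error now decreases with n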

[edit] Use of piecewise polynomials

The problem can be avoided by using spline curves, which are piecewise polynomials. When trying to decrease the interpolation error, one can increase the number of polynomial pieces which are used to construct the spline instead of increasing the degree of the polynomials used.
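A short sketch using SciPy's cubic spline interpolator (an illustrative choice of piecewise polynomial; the numbers of pieces are arbitrary):

import numpy as np
from scipy.interpolate import CubicSpline

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xx = np.linspace(-1, 1, 2001)

for pieces in (5, 10, 20, 40):
    nodes = np.linspace(-1, 1, pieces + 1)   # equidistant knots are fine for splines
    spline = CubicSpline(nodes, f(nodes))
    err = np.max(np.abs(spline(xx) - f(xx)))
    print(pieces, err)                       # the error keeps decreasing with more pieces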
[edit] Constrained minimization

One can also fit a polynomial of higher degree (for instance 2n instead of n + 1) and choose, among all polynomials interpolating the data, the one whose first (or second) derivative has minimal L2 norm.
[edit] Least squares fitting

Another method is fitting a polynomial of lower degree using the method of least squares. Generally, when using m equidistant points, if N < 2√m, then the least squares approximation PN(x) is well-conditioned.[3]
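A brief NumPy sketch (the sample count and degree are illustrative, chosen so the degree stays well below the number of points):

import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xx = np.linspace(-1, 1, 2001)

m = 100                                   # number of equidistant samples
x = np.linspace(-1, 1, m)
deg = 15                                  # well below 2*sqrt(m) = 20
coeffs = np.polyfit(x, f(x), deg)         # least-squares fit, not interpolation
err = np.max(np.abs(np.polyval(coeffs, xx) - f(xx)))
print(err)                                # a well-behaved fit, without the endpoint blow-up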

Sigma approximation

In mathematics, σ-approximation adjusts a Fourier summation to eliminate the Gibbs phenomenon, which would otherwise occur at discontinuities. A σ-approximated summation for a series of period T can be written as follows:

s(θ) = (1/2)a0 + Σ (k = 1 to N − 1) sinc(k/N) · [ak cos(2πkθ/T) + bk sin(2πkθ/T)],

in terms of the normalized sinc function

sinc(x) = sin(πx) / (πx).

Here, the term sinc(k/N) is the Lanczos σ factor, which is responsible for eliminating most of the Gibbs phenomenon. It does not do so entirely, however, but one can square or even cube the expression to serially attenuate the Gibbs phenomenon in the most extreme cases.
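As an illustration (a sketch rather than the article's notation), applying the Lanczos σ factors to the square-wave partial sum from the earlier example flattens the overshoot near the jump:

import numpy as np

def square_partial_sum(x, N, lanczos=False):
    # First N odd harmonics of a unit-amplitude square wave, optionally
    # weighted by the Lanczos sigma factors sinc(k/N).
    k = np.arange(1, N + 1, 2)
    coeff = (4 / np.pi) / k
    if lanczos:
        coeff = coeff * np.sinc(k / N)
    return np.sum(coeff * np.sin(np.outer(x, k)), axis=1)

x = np.linspace(1e-4, np.pi / 2, 5000)
print(square_partial_sum(x, 99).max())                # about 1.18: Gibbs overshoot
print(square_partial_sum(x, 99, lanczos=True).max())  # close to 1: overshoot suppressed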

Square wave
From Wikipedia, the free encyclopedia

Jump to: navigation, search This article does not cite any references or sources. Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (September 2009)

Sine, square, triangle, and sawtooth waveforms

A square wave is a kind of non-sinusoidal waveform, most typically encountered in electronics and signal processing. An ideal square wave alternates regularly and instantaneously between two levels. Its stochastic counterpart is a two-state trajectory.

Contents
1 Origins and uses
2 Examining the square wave
3 Characteristics of imperfect square waves
4 Other definitions
5 See also

[edit] Origins and uses


Square waves are universally encountered in digital switching circuits and are naturally generated by binary (two-level) logic devices. They are used as timing references or "clock signals", because their fast transitions are suitable for triggering synchronous logic circuits at precisely determined intervals. However, as the frequency-domain graph shows, square waves contain a wide range of harmonics; these can generate electromagnetic radiation or pulses of current that interfere with other nearby circuits, causing noise or errors. To avoid this problem in very sensitive circuits such as precision analog-to-digital converters, sine waves are used instead of square waves as timing references. In musical terms, square waves are often described as sounding hollow, and are therefore used as the basis for wind instrument sounds created using subtractive synthesis. Additionally, the distortion effect used on electric guitars clips the outermost regions of the waveform, causing it to increasingly resemble a square wave as more distortion is applied. Simple two-level Rademacher functions are square waves.

[edit] Examining the square wave


In contrast to the sawtooth wave, which contains all integer harmonics, the square wave contains only odd integer harmonics.

Using Fourier series we can write an ideal square wave as an infinite series of the form

x(t) = (4/π) Σ (k = 1 to ∞) sin((2k − 1)ωt) / (2k − 1) = (4/π) [sin(ωt) + (1/3)sin(3ωt) + (1/5)sin(5ωt) + …],

where ω = 2πf is the angular frequency.

A curiosity of the convergence of the Fourier series representation of the square wave is the Gibbs phenomenon. Ringing artifacts in non-ideal square waves can be shown to be related to this phenomenon. The Gibbs phenomenon can be prevented by the use of σ-approximation, which uses the Lanczos sigma factors to help the sequence converge more smoothly. An ideal square wave requires that the signal changes from the high to the low state cleanly and instantaneously. This is impossible to achieve in real-world systems, as it would require infinite bandwidth.

Animation of the additive synthesis of a square wave with an increasing number of harmonics.

Real-world square waves have only finite bandwidth, and often exhibit ringing effects similar to those of the Gibbs phenomenon, or ripple effects similar to those of the σ-approximation. For a reasonable approximation to the square-wave shape, at least the fundamental and third harmonic need to be present, with the fifth harmonic being desirable. These bandwidth requirements are important in digital electronics, where finite-bandwidth analog approximations to square-wave-like waveforms are used. (The ringing transients are an important electronic consideration here, as they may go beyond the electrical rating limits of a circuit or cause a badly positioned threshold to be crossed multiple times.) The ratio of the high period to the total period of a square wave is called the duty cycle. A true square wave has a 50% duty cycle, i.e. equal high and low periods. The average level of a square wave is also given by the duty cycle, so by varying the on and off periods and then averaging, it is possible to represent any value between the two limiting levels. This is the basis of pulse-width modulation.


[edit] Characteristics of imperfect square waves


As already mentioned, an ideal square wave has instantaneous transitions between the high and low levels. In practice, this is never achieved because of physical limitations of the system that generates the waveform. The times taken for the signal to rise from the low level to the high level and back again are called the rise time and the fall time respectively. If the system is overdamped, then the waveform may never actually reach the theoretical high and low levels, and if the system is underdamped, it will oscillate about the high and low levels before settling down. In these cases, the rise and fall times are measured between specified intermediate levels, such as 5% and 95%, or 10% and 90%. Formulas exist that can determine the approximate bandwidth of a system given the rise and fall times of the waveform.

[edit] Other definitions


The square wave has many definitions, which are equivalent except at the discontinuities:

It can be defined as simply the sign function of a sinusoid:

x(t) = sgn(sin(2πt/T)),

which will be 1 when the sinusoid is positive, −1 when the sinusoid is negative, and 0 at the discontinuities. It can also be defined with respect to the Heaviside step function u(t) or the rectangular function ⊓(t):

T is 2 for a 50% duty cycle. It can also be defined in a piecewise way:

x(t) = 1 when 0 ≤ t < T/2, and x(t) = −1 when T/2 ≤ t < T,

with x(t + T) = x(t) for all t.
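A small Python sketch of two of these equivalent definitions (the period and sample points are illustrative; the definitions agree everywhere except exactly at the discontinuities):

import numpy as np

T = 1.0                                          # period (illustrative)

def square_sign(t):
    return np.sign(np.sin(2 * np.pi * t / T))    # sign-of-a-sinusoid definition

def square_piecewise(t):
    return np.where((t % T) < T / 2, 1.0, -1.0)  # piecewise definition

t = np.array([0.1, 0.3, 0.6, 0.9, 1.2, 1.7])     # points away from the discontinuities
print(square_sign(t))
print(square_piecewise(t))                       # identical output at these points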

Pulse wave


The shape of the pulse wave is defined by its duty cycle D = τ/T, the ratio between the pulse duration τ and the period T.

A pulse wave or pulse train is a kind of non-sinusoidal waveform that is similar to a square wave, but does not have the symmetrical shape associated with a perfect square wave. It is a term common to synthesizer programming, and is a typical waveform available on many synths. The exact shape of the wave is determined by the duty cycle of the oscillator. In many synthesizers, the duty cycle can be modulated (sometimes called pulse-width modulation) for a more dynamic timbre. The pulse wave is also known as the rectangular wave, the periodic version of the rectangular function.

Sine wave

"Sinusoid" redirects here. For the blood vessel, see Sinusoid (blood vessel).

The graphs of the sine and cosine functions are sinusoids of different phases.

The sine wave or sinusoid is a mathematical function that describes a smooth repetitive oscillation. It occurs often in pure mathematics, as well as physics, signal processing, electrical engineering and many other fields. Its most basic form as a function of time (t) is

y(t) = A · sin(ωt + φ),

where:
A, the amplitude, is the peak deviation of the function from its center position;
ω, the angular frequency, specifies how many oscillations occur in a unit time interval, in radians per second;
φ, the phase, specifies where in its cycle the oscillation begins at t = 0. When the phase is non-zero, the entire waveform appears to be shifted in time by the amount φ/ω seconds. A negative value represents a delay, and a positive value represents an advance.

The oscillation of an undamped spring-mass system around the equilibrium is a sine wave. The sine wave is important in physics because it retains its waveshape when added to another sine wave of the same frequency and arbitrary phase. It is the only periodic waveform that has this property. This property leads to its importance in Fourier analysis and makes it acoustically unique.

Contents
1 General form
2 Occurrences
3 Fourier series
4 See also
5 References

[edit] General form


In general, the function may also have:
a spatial dimension, x (a.k.a. position), with frequency k (also called wavenumber), and
a non-zero center amplitude, D,

which looks like this:

y(x, t) = A · sin(kx − ωt + φ) + D.

The wavenumber is related to the angular frequency by

k = ω/c = 2πf/c = 2π/λ,

where λ is the wavelength, f is the frequency, and c is the speed of propagation. This equation gives a sine wave for a single dimension; thus the generalized equation given above gives the amplitude of the wave at a position x at time t along a single line. This could, for example, be considered the value of a wave along a wire.

In two or three spatial dimensions, the same equation describes a travelling plane wave if position x and wavenumber k are interpreted as vectors, and their product as a dot product. For more complex waves such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed.
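A short Python sketch evaluating this general form (the sign convention kx − ωt and all parameter values below are illustrative choices, not taken from the text):

import numpy as np

A, D, phi = 2.0, 0.5, np.pi / 4        # amplitude, center offset, phase
f = 440.0                              # frequency, Hz
c = 343.0                              # propagation speed (e.g. sound in air), m/s
w = 2 * np.pi * f                      # angular frequency
k = w / c                              # wavenumber, 2*pi/lambda

x = 0.1                                # fixed position along the line, metres
t = np.linspace(0, 1 / f, 200)         # one period in time
y = A * np.sin(k * x - w * t + phi) + D

print(y.min(), y.max())                # stays within D - A and D + A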

[edit] Occurrences

Illustrating the cosine wave's fundamental relationship to the circle.

This wave pattern occurs often in nature, including ocean waves, sound waves, and light waves.

A cosine wave is said to be "sinusoidal", because cos(x) = sin(x + π/2), which is also a sine wave with a phase shift of π/2. Because of this "head start", it is often said that the cosine function leads the sine function, or the sine lags the cosine.

The human ear can recognize single sine waves as sounding clear because sine waves are representations of a single frequency with no harmonics; some sounds that approximate a pure sine wave are whistling, a crystal glass set to vibrate by running a wet finger around its rim, and the sound made by a tuning fork. To the human ear, a sound that is made up of more than one sine wave will either sound "noisy" or will have detectable harmonics; this may be described as a different timbre.

[edit] Fourier series

Sine, square, triangle, and sawtooth waveforms

Main article: Fourier analysis


In 1822, Joseph Fourier, a French mathematician, discovered that sinusoidal waves can be used as simple building blocks to describe and approximate any periodic waveform, including square waves. Fourier used this decomposition as an analytical tool in the study of waves and heat flow. It is frequently used in signal processing and the statistical analysis of time series.

Triangle wave

A triangle wave is a non-sinusoidal waveform named for its triangular shape.

A bandlimited triangle wave pictured in the time domain (top) and frequency domain (bottom). The fundamental is at 220 Hz (A3).

Like a square wave, the triangle wave contains only odd harmonics. However, the higher harmonics roll off much faster than in a square wave (proportional to the inverse square of the harmonic number, as opposed to just the inverse). It is possible to approximate a triangle wave with additive synthesis by adding odd harmonics of the fundamental, multiplying every (4n − 1)th harmonic by −1 (or changing its phase by π), and rolling off the harmonics by the inverse square of their relative frequency to the fundamental. This infinite Fourier series converges to the triangle wave:

x(t) = (8/π²) Σ (n = 0 to ∞) (−1)ⁿ sin((2n + 1)ωt) / (2n + 1)²,

where ω = 2πf is the angular frequency.

Animation of the additive synthesis of a triangle wave with an increasing number of harmonics. See Fourier analysis for a mathematical description.


Another definition of the triangle wave, with range from −1 to 1 and period 2a, is

x(t) = (2/a) · (t − a⌊t/a + 1/2⌋) · (−1)^⌊t/a + 1/2⌋,

where the symbol ⌊n⌋ represents the floor function of n.

Also, the triangle wave can be written as the absolute value of the sawtooth wave. The triangle wave can also be expressed as the integral of the square wave.
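As a numerical cross-check (a sketch with illustrative parameters), the additive-synthesis series above and the floor-function definition agree closely once enough harmonics are summed; here a = 0.5, so the period is 1:

import numpy as np

a = 0.5
t = np.linspace(0, 2, 1000, endpoint=False)

def triangle_floor(t, a):
    m = np.floor(t / a + 0.5).astype(int)
    return (2 / a) * (t - a * m) * (-1.0) ** m

def triangle_fourier(t, a, terms=50):
    w = 2 * np.pi / (2 * a)                  # angular frequency for period 2a
    n = np.arange(terms)
    return (8 / np.pi**2) * np.sum(
        (-1.0) ** n * np.sin(w * np.outer(t, 2 * n + 1)) / (2 * n + 1) ** 2, axis=1)

print(np.max(np.abs(triangle_floor(t, a) - triangle_fourier(t, a))))  # small (series truncation error)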

Sawtooth wave

A bandlimited sawtooth wave pictured in the time domain (top) and frequency domain (bottom). The fundamental is at 220 Hz (A3).

The sawtooth wave (or saw wave) is a kind of non-sinusoidal waveform. It is named a sawtooth based on its resemblance to the teeth on the blade of a saw. The convention is that a sawtooth wave ramps upward and then sharply drops. However, there are also sawtooth waves in which the wave ramps downward and then sharply rises. The latter type of sawtooth wave is called a "reverse sawtooth wave" or "inverse sawtooth wave".

The piecewise linear function

x(t) = t − ⌊t⌋,

based on the floor function of time t, is an example of a sawtooth wave with period 1.

A more general form, in the range −1 to 1 and with period a, is

x(t) = 2 (t/a − ⌊t/a + 1/2⌋).

This sawtooth function has the same phase as the sine function. A sawtooth wave's sound is harsh and clear, and its spectrum contains both even and odd harmonics of the fundamental frequency. Because it contains all the integer harmonics, it is one of the best waveforms to use for synthesizing musical sounds, particularly bowed string instruments like violins and cellos, using subtractive synthesis. A sawtooth can be constructed using additive synthesis. The infinite Fourier series

x(t) = (2/π) Σ (k = 1 to ∞) sin(2πkft) / k

converges to an inverse sawtooth wave. A conventional sawtooth can be constructed using

x(t) = (2/π) Σ (k = 1 to ∞) (−1)^(k+1) sin(2πkft) / k.

In digital synthesis, these series are only summed over k such that the highest harmonic, Nmax, is less than the Nyquist frequency (half the sampling frequency). This summation can generally be more efficiently calculated with a fast Fourier transform. If the waveform is digitally created directly in the time domain using a non-bandlimited form, such as y = x − floor(x), infinite harmonics are sampled and the resulting tone contains aliasing distortion.
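A minimal Python sketch of such bandlimited additive synthesis (the sample rate, fundamental, and duration are illustrative):

import numpy as np

fs = 44100.0                            # sampling rate, Hz
f0 = 440.0                              # fundamental, Hz
n_max = int((fs / 2) // f0)             # highest harmonic kept below Nyquist
t = np.arange(int(0.01 * fs)) / fs      # 10 ms of samples

k = np.arange(1, n_max + 1)
saw = (2 / np.pi) * np.sum(
    (-1.0) ** (k + 1) * np.sin(2 * np.pi * f0 * np.outer(t, k)) / k, axis=1)

print(n_max, saw.min(), saw.max())      # roughly -1 to 1; peaks near +/-1.18 show the
                                        # Gibbs overshoot next to the discontinuities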

Animation of the additive synthesis of a sawtooth wave with an increasing number of harmonics.

An audio demonstration of a sawtooth played at 440 Hz (A4), 880 Hz (A5) and 1760 Hz (A6) is available below. Both bandlimited (non-aliased) and aliased tones are presented.

[edit] Applications

Sawtooth waves are perhaps best known for their use in music. The sawtooth and square waves are the most common starting points used to create sounds with subtractive analog and virtual analog music synthesizers.

The sawtooth wave is the form of the vertical and horizontal deflection signals used to generate a raster on CRT-based television or monitor screens. Oscilloscopes also use a sawtooth wave for their horizontal deflection, though they typically use electrostatic deflection.
o On the wave's "ramp", the magnetic field produced by the deflection yoke drags the electron beam across the face of the CRT, creating a scan line.
o On the wave's "cliff", the magnetic field suddenly collapses, causing the electron beam to return to its resting position as quickly as possible.
o The voltage applied to the deflection yoke is adjusted by various means (transformers, capacitors, center-tapped windings) so that the half-way voltage on the sawtooth's cliff is at the zero mark, meaning that a negative voltage will cause deflection in one direction, and a positive voltage deflection in the other; thus, a center-mounted deflection yoke can use the whole screen area to depict a trace. The frequency is 15.734 kHz on NTSC, and 15.625 kHz for PAL and SECAM. The vertical deflection system operates the same way as the horizontal, though at a much lower frequency (59.94 Hz on NTSC, 50 Hz for PAL and SECAM).

The ramp portion of the wave must appear as a straight line. If it does not, it indicates that the voltage is not increasing linearly, and therefore that the magnetic field produced by the deflection yoke is not linear. As a result, the electron beam will accelerate during the non-linear portions. This would result in a television image "squished" in the direction of the non-linearity. Extreme cases will show marked brightness increases, since the electron beam spends more time on that side of the picture. The first television receivers had controls allowing users to adjust the picture's vertical or horizontal linearity. Such controls were not present on later sets as the stability of electronic components had improved.
