
Lecture 5

Dirac delta functions, characteristic functions, and the law of large numbers
Dirac delta functions
Let's consider the following expression:

$$\int_{-\infty}^{\infty} dx\, f(x)\,\delta(x-a) = f(a)$$

The center of the delta function is $a$. Now, let's define a function whose derivative is the delta function, i.e.

$$\frac{d\theta(x-a)}{dx} = \delta(x-a)$$

The function can be written as

$$\theta(x-a) = \int_{-\infty}^{x} dx'\,\delta(x'-a)$$
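A quick numerical sketch of these two properties, approximating the delta function by a narrow normalized Gaussian (the width `eps` and the test function `cos` are my choices for illustration, not from the lecture):

```python
import numpy as np

# Approximate delta(x - a) by a narrow normalized Gaussian of width eps;
# as eps -> 0 this approaches the true delta function.
a, eps = 2.0, 1e-3
x = np.linspace(-10, 10, 2_000_001)
dx = x[1] - x[0]
delta = np.exp(-(x - a) ** 2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)

# Sifting property: the integral of f(x) delta(x - a) dx picks out f(a)
sift = (np.cos(x) * delta).sum() * dx
print(sift, np.cos(a))                  # both close to cos(2)

# The running integral of the delta is the step function theta(x - a):
# ~0 well below a, ~1 well above a
theta = np.cumsum(delta) * dx
print(theta[x < a - 0.1].max(), theta[x > a + 0.1].min())
```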

The integral is zero for values less than $a$ and one for values greater than or equal to $a$; $\theta$ is called a step function. Recall that we were interested in $P_f(f)$, the probability density associated with a function $f(x_1,\dots,x_N)$ when $P(x_1,\dots,x_N)$ is already known. The probability that the function $f(x_1,\dots,x_N)$ is less than some value $f_0$ is

$$\Pr(f < f_0) = \int dx_1 \cdots dx_N\, P(x_1,\dots,x_N)\,\theta\!\left(f_0 - f(x_1,\dots,x_N)\right) = \left\langle \theta\!\left(f_0 - f(x_1,\dots,x_N)\right) \right\rangle$$

The relationship between the probability density $P_f(f)$ and the cumulative probability $\Pr(f < f_0)$ is

$$P_f(f_0) = \frac{d}{df_0}\Pr(f < f_0) = \left\langle \delta\!\left(f_0 - f(x_1,\dots,x_N)\right) \right\rangle$$

The last step uses a general rule from calculus: the derivative of the step function is the delta function.
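These two expectation values can be estimated by Monte Carlo. The sketch below (my example, not from the lecture) takes $f(x_1, x_2) = x_1 + x_2$ with $x_1, x_2$ uniform on $[0,1]$, estimates $\Pr(f < f_0)$ as the average of the step function, and recovers $P_f(f_0)$ as its finite-difference derivative:

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x1, x2) = x1 + x2, with x1 and x2 uniform on [0, 1]
samples = rng.uniform(0.0, 1.0, size=(1_000_000, 2)).sum(axis=1)

# Pr(f < f0) = <theta(f0 - f)>: the fraction of samples below f0
f0 = 1.0
cumulative = np.mean(samples < f0)
print(cumulative)                       # exact value is 0.5 by symmetry

# Pf(f0) = d/df0 Pr(f < f0): finite-difference derivative of the cumulative
h = 0.05
density = (np.mean(samples < f0 + h) - np.mean(samples < f0 - h)) / (2 * h)
print(density)                          # exact triangular density: Pf(1) = 1
```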

Characteristic functions
Let's insert the spectral representation of the Dirac delta function,

$$\delta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, e^{ikx},$$

into the expression above:

$$P_f(f_0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, e^{ikf_0} \left\langle e^{-ik f(x_1,\dots,x_N)} \right\rangle$$

First, we pull a few terms out of the expectation value because they do not depend on $x_1,\dots,x_N$, i.e. on what the expectation value is averaged over. The new expectation value on the right is called the characteristic function; it can be viewed as the Fourier transform of the probability density and is given by

$$\left\langle e^{-ikf} \right\rangle = \int df\, P_f(f)\, e^{-ikf}$$
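As a check on this definition, the characteristic function of a Gaussian variable has the closed form $e^{-ik\mu - k^2\sigma^2/2}$ (a standard result, not derived in the lecture). A sampling estimate of $\langle e^{-ikx}\rangle$ should match it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Characteristic function <exp(-i k x)> of a Gaussian variable, estimated by
# averaging over samples and compared with exp(-i k mu - k**2 sigma**2 / 2).
mu, sigma = 1.5, 0.7
x = rng.normal(mu, sigma, size=1_000_000)

for k in (0.5, 1.0, 2.0):
    estimate = np.mean(np.exp(-1j * k * x))
    exact = np.exp(-1j * k * mu - 0.5 * k**2 * sigma**2)
    print(k, estimate, exact)           # estimate and exact agree closely
```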

Fourier transforms connect many physical quantities in a complementary manner. In spectroscopy, for example, the spectral lineshape $I(\omega)$ depends on the angular frequency $\omega$, whereas its Fourier transform, the relaxation function $\phi(t)$, depends on time $t$ and tells us about the dynamics of a system:

$$I(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dt\, e^{-i\omega t}\, \phi(t)$$

In scattering experiments, the scattering function or structure factor $S(k)$ depends on the wave vector $k$, whereas its Fourier transform, the density of particles $\rho(x)$, depends on position $x$.

The Gaussian (continuous) distribution


The normal distribution is

$$P(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$

where $x$ is a variable, and $\mu$ and $\sigma$ are parameters. The Gaussian distribution is the limiting case of the binomial distribution for $N \gg 1$ and $Np \gg 1$. For the Poisson distribution, $Np$ is finite. The binomial distribution is

$$P(n) = \frac{N!}{n!\,(N-n)!}\, p^n (1-p)^{N-n}$$

We also know that the Gaussian distribution is normalized, i.e.

$$\int_{-\infty}^{\infty} dx\, P(x) = 1$$
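The limiting relationship can be verified numerically: for large $N$ the binomial distribution with mean $\mu = Np$ and variance $\sigma^2 = Np(1-p)$ lies almost on top of the corresponding Gaussian, and both sum to one. The particular values of `N` and `p` below are my choices for illustration:

```python
import numpy as np
from math import comb, sqrt, pi

# Binomial P(n) = C(N, n) p**n (1-p)**(N-n) versus the Gaussian with
# mu = N p and sigma**2 = N p (1 - p), for N >> 1 and Np >> 1.
N, p = 1000, 0.3
mu, sigma = N * p, sqrt(N * p * (1 - p))

n = np.arange(N + 1)
binom = np.array([comb(N, k) * p**k * (1 - p) ** (N - k) for k in n])
gauss = np.exp(-(n - mu) ** 2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

print(np.abs(binom - gauss).max())      # small: the two curves nearly coincide
print(binom.sum(), gauss.sum())         # both normalized, i.e. close to 1
```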

The law of large numbers


Consider $N$ identically distributed random variables that are independent:

$$\{x_1, x_2, \dots, x_N\}$$

Let's define the sum of these variables, normalized by $N$, to be $y$:

$$y = \frac{1}{N}\sum_{i=1}^{N} x_i$$

Now consider $M$ sets of values of $x_1,\dots,x_N$. This produces $M$ values of $y$:

$$y^{(1)} = \frac{1}{N}\left(x_1^{(1)} + x_2^{(1)} + \cdots + x_N^{(1)}\right)$$

$$y^{(2)} = \frac{1}{N}\left(x_1^{(2)} + x_2^{(2)} + \cdots + x_N^{(2)}\right)$$

etc.

$$y^{(M)} = \frac{1}{N}\left(x_1^{(M)} + x_2^{(M)} + \cdots + x_N^{(M)}\right)$$

An important question to ask is: how is $y$ distributed across the $M$ different samples? The law of large numbers tells us that for any realization of $N$ random numbers $\{x_1,\dots,x_N\}$,

$$y \to \langle x \rangle \quad \text{as } N \to \infty$$

For a set of independent variables the probability density can be factorized as follows:

$$P(x_1,\dots,x_N) = p(x_1)\,p(x_2)\cdots p(x_N)$$

The mean is

$$\langle y \rangle = \frac{1}{N}\sum_{i=1}^{N} \langle x_i \rangle = \langle x \rangle$$

The law of large numbers basically says the following: as $N$ approaches infinity, all the $y^{(i)}$ will be equal (to the mean $\langle x \rangle$). We can use this law in situations where the variables are identically distributed and independent, i.e. where the probability density is factorizable.
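This concentration is easy to see numerically. The sketch below (exponential variables with mean 1.0 are my example choice, not from the lecture) computes $M$ values of $y$ for increasing $N$ and watches their spread shrink:

```python
import numpy as np

rng = np.random.default_rng(2)

# Law of large numbers: the normalized sum y = (x1 + ... + xN) / N of i.i.d.
# variables concentrates around the mean <x> = 1.0 as N grows.
M = 200                                 # number of independent sets of values
for N in (10, 100, 10_000):
    y = rng.exponential(1.0, size=(M, N)).mean(axis=1)
    print(N, y.mean(), y.std())         # spread shrinks like 1/sqrt(N)
```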
