
THE GAUSSIAN INTEGRAL

KEITH CONRAD

Let
$$I = \int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2}\,dx, \qquad J = \int_0^{\infty} e^{-x^2}\,dx, \qquad \text{and} \qquad K = \int_{-\infty}^{\infty} e^{-\pi x^2}\,dx.$$
These numbers are positive, and $J = I/(2\sqrt{2})$ and $K = I/\sqrt{2\pi}$.

Theorem. With notation as above, $I = \sqrt{2\pi}$, or equivalently $J = \sqrt{\pi}/2$, or equivalently $K = 1$.

We will give multiple proofs of this result. (Other lists of proofs are in [3] and [8].) The theorem is subtle because there is no simple antiderivative for $e^{-\frac{1}{2}x^2}$ (or $e^{-x^2}$ or $e^{-\pi x^2}$). For comparison, $\int_0^{\infty} xe^{-\frac{1}{2}x^2}\,dx$ can be computed using the antiderivative $-e^{-\frac{1}{2}x^2}$: this integral is 1.
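
As a quick numerical sanity check of the theorem (an illustrative Python sketch, assuming SciPy is available for the quadrature), the three integrals can be approximated directly:

    import math
    from scipy.integrate import quad

    # Approximate I, J and K as defined above.
    I, _ = quad(lambda x: math.exp(-x**2 / 2), -math.inf, math.inf)
    J, _ = quad(lambda x: math.exp(-x**2), 0, math.inf)
    K, _ = quad(lambda x: math.exp(-math.pi * x**2), -math.inf, math.inf)

    print(I, math.sqrt(2 * math.pi))   # ~ 2.506628 and 2.506628
    print(J, math.sqrt(math.pi) / 2)   # ~ 0.886227 and 0.886227
    print(K)                           # ~ 1.0
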
1. First Proof: Polar coordinates
The most widely known proof uses multivariable calculus: express $J^2$ as a double integral and then pass to polar coordinates:
$$J^2 = \int_0^{\infty} e^{-x^2}\,dx \int_0^{\infty} e^{-y^2}\,dy = \int_0^{\infty}\!\!\int_0^{\infty} e^{-(x^2+y^2)}\,dx\,dy.$$

This is a double integral over the first quadrant, which we will compute by using polar coordinates. In polar coordinates, the first quadrant is $\{(r,\theta) : r \ge 0 \text{ and } 0 \le \theta \le \pi/2\}$. Writing $x^2 + y^2 = r^2$ and $dx\,dy = r\,dr\,d\theta$,
$$J^2 = \int_0^{\pi/2}\!\!\int_0^{\infty} e^{-r^2}\,r\,dr\,d\theta = \frac{\pi}{2}\int_0^{\infty} re^{-r^2}\,dr = \frac{\pi}{2}\cdot\left(-\frac{1}{2}e^{-r^2}\right)\bigg|_0^{\infty} = \frac{\pi}{2}\cdot\frac{1}{2} = \frac{\pi}{4}.$$
Taking square roots, $J = \sqrt{\pi}/2$. This method is due to Poisson [8, p. 3].
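
Numerically, the polar computation can be reproduced with one-dimensional quadrature (a Python sketch, assuming SciPy): the $\theta$-integral contributes $\pi/2$ and the radial integral contributes $1/2$.

    import math
    from scipy.integrate import quad

    # Radial integral of r*exp(-r^2) over [0, oo); its exact value is 1/2.
    radial, _ = quad(lambda r: r * math.exp(-r**2), 0, math.inf)
    J_squared = (math.pi / 2) * radial
    print(J_squared, math.pi / 4)                        # ~ 0.785398 and 0.785398
    print(math.sqrt(J_squared), math.sqrt(math.pi) / 2)  # ~ 0.886227 and 0.886227
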

2. Second Proof: Another change of variables


Our next proof uses another change of variables to compute $J^2$, but this will only rely on single-variable calculus. As before, we have
$$J^2 = \int_0^{\infty}\!\!\int_0^{\infty} e^{-(x^2+y^2)}\,dx\,dy,$$
but instead of using polar coordinates we make a change of variables $x = yt$ with $dx = y\,dt$, so
$$J^2 = \int_0^{\infty}\!\!\int_0^{\infty} e^{-y^2(t^2+1)}\,y\,dt\,dy = \int_0^{\infty}\!\!\int_0^{\infty} ye^{-y^2(t^2+1)}\,dy\,dt.$$
Since $\int_0^{\infty} ye^{-ay^2}\,dy = \frac{1}{2a}$ for $a > 0$, we have
$$J^2 = \int_0^{\infty} \frac{dt}{2(t^2+1)} = \frac{1}{2}\cdot\frac{\pi}{2} = \frac{\pi}{4},$$
so $J = \sqrt{\pi}/2$. This approach is due to Laplace [6, pp. 94–96] and historically precedes the more familiar technique in the first proof above. We will see in our seventh proof that this was not Laplace's first method.
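
The single-variable integral that replaces the polar step is easy to check numerically (a Python sketch, assuming SciPy):

    import math
    from scipy.integrate import quad

    # J^2 = integral over t in [0, oo) of dt / (2(t^2 + 1)), which should be pi/4.
    J_squared, _ = quad(lambda t: 1 / (2 * (t**2 + 1)), 0, math.inf)
    print(J_squared, math.pi / 4)   # both ~ 0.785398
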
3. Third Proof: Differentiating under the integral sign
For $t > 0$, set
$$A(t) = \left(\int_0^t e^{-x^2}\,dx\right)^2.$$
The integral we want to calculate is $A(\infty) = J^2$, and then we take a square root.
Differentiating $A(t)$ with respect to $t$,
$$A'(t) = 2\int_0^t e^{-x^2}\,dx\cdot e^{-t^2} = 2e^{-t^2}\int_0^t e^{-x^2}\,dx.$$

Let $x = ty$, so
$$A'(t) = 2e^{-t^2}\int_0^1 te^{-t^2y^2}\,dy = \int_0^1 2te^{-(1+y^2)t^2}\,dy.$$

The function under the integral sign is easily antidifferentiated with respect to $t$:
$$A'(t) = \int_0^1 -\frac{\partial}{\partial t}\left(\frac{e^{-(1+y^2)t^2}}{1+y^2}\right)dy = -\frac{d}{dt}\int_0^1 \frac{e^{-(1+y^2)t^2}}{1+y^2}\,dy.$$
Letting
$$B(t) = \int_0^1 \frac{e^{-t^2(1+x^2)}}{1+x^2}\,dx,$$

we have $A'(t) = -B'(t)$ for all $t > 0$, so there is a constant $C$ such that
$$(3.1)\qquad A(t) = -B(t) + C$$
for all $t > 0$. To find $C$, we let $t \to 0^{+}$ in (3.1). The left side tends to $\left(\int_0^0 e^{-x^2}\,dx\right)^2 = 0$ while the right side tends to $-\int_0^1 dx/(1+x^2) + C = -\pi/4 + C$. Thus $C = \pi/4$, so (3.1) becomes
$$\left(\int_0^t e^{-x^2}\,dx\right)^2 = \frac{\pi}{4} - \int_0^1 \frac{e^{-t^2(1+x^2)}}{1+x^2}\,dx.$$
Letting $t \to \infty$ in this equation, we obtain $J^2 = \pi/4$, so $J = \sqrt{\pi}/2$.


A comparison of this proof with the first proof is in [17].
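
Relation (3.1) with $C = \pi/4$ says $A(t) + B(t) = \pi/4$ for every $t > 0$, which can be checked numerically (a Python sketch, assuming SciPy; the helper names A and B mirror the functions above):

    import math
    from scipy.integrate import quad

    def A(t):
        # A(t) = (integral of exp(-x^2) for x in [0, t])^2
        v, _ = quad(lambda x: math.exp(-x**2), 0, t)
        return v**2

    def B(t):
        # B(t) = integral of exp(-t^2 (1 + x^2)) / (1 + x^2) for x in [0, 1]
        v, _ = quad(lambda x: math.exp(-t**2 * (1 + x**2)) / (1 + x**2), 0, 1)
        return v

    for t in (0.5, 1.0, 2.0, 5.0):
        print(t, A(t) + B(t), math.pi / 4)   # the sum is ~ 0.785398 for every t
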

4. Fourth Proof: A volume integral


Our next proof is due to T. P. Jameson [4] and it was rediscovered by A. L. Delgado [2]. Revolve the curve $z = e^{-\frac{1}{2}x^2}$ in the $xz$-plane around the $z$-axis to produce the bell surface $z = e^{-\frac{1}{2}(x^2+y^2)}$. See below, where the $z$-axis is vertical and passes through the top point, the $x$-axis lies just under the surface through the point 0 in front, and the $y$-axis lies just under the surface through the point 0 on the left. We will compute the volume $V$ below the surface and above the $xy$-plane in two ways.

First we compute $V$ by horizontal slices, which are discs: $V = \int_0^1 A(z)\,dz$, where $A(z)$ is the area of the disc formed by slicing the surface at height $z$. Writing the radius of the disc at height $z$ as $r(z)$, $A(z) = \pi r(z)^2$. To compute $r(z)$, the surface cuts the $xz$-plane at a pair of points $(x, e^{-\frac{1}{2}x^2})$ where the height is $z$, so $e^{-\frac{1}{2}x^2} = z$. Thus $x^2 = -2\ln z$. Since $x$ is the distance of these points from the $z$-axis, $r(z)^2 = x^2 = -2\ln z$, so $A(z) = \pi r(z)^2 = -2\pi\ln z$. Therefore
$$V = \int_0^1 -2\pi\ln z\,dz = -2\pi(z\ln z - z)\Big|_0^1 = 2\pi\Big(1 + \lim_{z\to 0^{+}} z\ln z\Big).$$
By L'Hospital's rule, $\lim_{z\to 0^{+}} z\ln z = 0$, so $V = 2\pi$. (A calculation of $V$ by shells is in [10].)


Next we compute the volume by vertical slices in planes $x = \text{constant}$. Vertical slices are scaled bell curves: look at the black contour lines in the picture. The equation of the bell curve along the top of the vertical slice with $x$-coordinate $x$ is $z = e^{-\frac{1}{2}(x^2+y^2)}$, where $y$ varies and $x$ is fixed. Then $V = \int_{-\infty}^{\infty} A(x)\,dx$, where $A(x)$ is the area of the $x$-slice:
$$A(x) = \int_{-\infty}^{\infty} e^{-\frac{1}{2}(x^2+y^2)}\,dy = e^{-\frac{1}{2}x^2}\int_{-\infty}^{\infty} e^{-\frac{1}{2}y^2}\,dy = e^{-\frac{1}{2}x^2} I.$$
Thus
$$V = \int_{-\infty}^{\infty} A(x)\,dx = \int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2} I\,dx = I\int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2}\,dx = I^2.$$
Comparing the two formulas for $V$, we have $2\pi = I^2$, so $I = \sqrt{2\pi}$.
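
Both volume computations can be reproduced numerically (a Python sketch, assuming SciPy); the disc integral and $I^2$ should both come out to $2\pi$:

    import math
    from scipy.integrate import quad

    # Horizontal discs: V = integral over z in (0, 1] of -2*pi*ln(z) dz.
    V_discs, _ = quad(lambda z: -2 * math.pi * math.log(z), 0, 1)

    # Vertical slices: V = I^2, with I the integral of exp(-x^2/2) over the real line.
    I, _ = quad(lambda x: math.exp(-x**2 / 2), -math.inf, math.inf)

    print(V_discs, I**2, 2 * math.pi)   # all ~ 6.283185
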


5. Fifth Proof: The Γ-function

For any integer $n \ge 0$, we have $n! = \int_0^{\infty} t^n e^{-t}\,dt$. For $x > 0$ we define
$$\Gamma(x) = \int_0^{\infty} t^x e^{-t}\,\frac{dt}{t},$$
so $\Gamma(n) = (n-1)!$ when $n \ge 1$. Using integration by parts, $\Gamma(x+1) = x\Gamma(x)$. One of the basic properties of the $\Gamma$-function [13, pp. 193–194] is
$$(5.1)\qquad \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} = \int_0^1 t^{x-1}(1-t)^{y-1}\,dt.$$
Set $x = y = 1/2$:
$$\Gamma\left(\frac{1}{2}\right)^2 = \int_0^1 \frac{dt}{\sqrt{t(1-t)}}.$$
Note that, with the substitution $t = x^2$,
$$\Gamma\left(\frac{1}{2}\right) = \int_0^{\infty} \frac{e^{-t}}{\sqrt{t}}\,dt = \int_0^{\infty} \frac{e^{-x^2}}{x}\,2x\,dx = 2\int_0^{\infty} e^{-x^2}\,dx = 2J,$$
so $4J^2 = \int_0^1 dt/\sqrt{t(1-t)}$. With the substitution $t = \sin^2\theta$,
$$4J^2 = \int_0^{\pi/2} \frac{2\sin\theta\cos\theta\,d\theta}{\sin\theta\cos\theta} = 2\cdot\frac{\pi}{2} = \pi,$$
so $J = \sqrt{\pi}/2$. Equivalently, $\Gamma(1/2) = \sqrt{\pi}$. Any method that proves $\Gamma(1/2) = \sqrt{\pi}$ is also a method that calculates $\int_0^{\infty} e^{-x^2}\,dx$.
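
Since Python's standard library exposes the Γ-function, the identity $\Gamma(1/2) = \sqrt{\pi} = 2J$ is easy to check numerically (a sketch; SciPy is used only for the integral defining $J$):

    import math
    from scipy.integrate import quad

    J, _ = quad(lambda x: math.exp(-x**2), 0, math.inf)
    # Gamma(1/2), sqrt(pi), and 2J should all print as ~ 1.7724538509
    print(math.gamma(0.5), math.sqrt(math.pi), 2 * J)
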

6. Sixth Proof: Asymptotic Estimates

We will show $J = \sqrt{\pi}/2$ by a technique whose steps are based on [14, p. 371].


For $x \ge 0$, power series expansions show $1 + x \le e^x \le 1/(1-x)$ (the upper bound requires $0 \le x < 1$). Reciprocating and replacing $x$ with $x^2$, we get
$$(6.1)\qquad 1 - x^2 \le e^{-x^2} \le \frac{1}{1+x^2}$$
for all $x \in \mathbf{R}$.
For any positive integer $n$, raise the terms in (6.1) to the $n$th power and integrate from 0 to 1:
$$\int_0^1 (1-x^2)^n\,dx \le \int_0^1 e^{-nx^2}\,dx \le \int_0^1 \frac{dx}{(1+x^2)^n}.$$


Under the changes of variables $x = \sin\theta$ on the left, $x = y/\sqrt{n}$ in the middle, and $x = \tan\theta$ on the right,
$$(6.2)\qquad \int_0^{\pi/2} (\cos\theta)^{2n+1}\,d\theta \le \frac{1}{\sqrt{n}}\int_0^{\sqrt{n}} e^{-y^2}\,dy \le \int_0^{\pi/4} (\cos\theta)^{2n-2}\,d\theta.$$
Set $I_k = \int_0^{\pi/2} (\cos\theta)^k\,d\theta$, so $I_0 = \pi/2$, $I_1 = 1$, and (6.2) implies
$$(6.3)\qquad \sqrt{n}\,I_{2n+1} \le \int_0^{\sqrt{n}} e^{-y^2}\,dy \le \sqrt{n}\,I_{2n-2}.$$

We will show that as $k \to \infty$, $kI_k^2 \to \pi/2$. Then
$$\sqrt{n}\,I_{2n+1} = \sqrt{\frac{n}{2n+1}}\,\sqrt{2n+1}\,I_{2n+1} \to \frac{1}{\sqrt{2}}\sqrt{\frac{\pi}{2}} = \frac{\sqrt{\pi}}{2}$$
and
$$\sqrt{n}\,I_{2n-2} = \sqrt{\frac{n}{2n-2}}\,\sqrt{2n-2}\,I_{2n-2} \to \frac{1}{\sqrt{2}}\sqrt{\frac{\pi}{2}} = \frac{\sqrt{\pi}}{2},$$
so by (6.3) $\int_0^{\sqrt{n}} e^{-y^2}\,dy \to \sqrt{\pi}/2$. Thus $J = \sqrt{\pi}/2$.
To show $kI_k^2 \to \pi/2$, first we compute several values of $I_k$ explicitly by a recursion. Using integration by parts,
$$I_k = \int_0^{\pi/2} (\cos\theta)^k\,d\theta = \int_0^{\pi/2} (\cos\theta)^{k-1}\cos\theta\,d\theta = (k-1)(I_{k-2} - I_k),$$
so
$$(6.4)\qquad I_k = \frac{k-1}{k}\,I_{k-2}.$$
Using (6.4) and the initial values $I_0 = \pi/2$ and $I_1 = 1$, the first few values of $I_k$ are computed and listed in Table 1.

    k   I_k                k   I_k
    0   π/2                1   1
    2   (1/2)(π/2)         3   2/3
    4   (3/8)(π/2)         5   8/15
    6   (15/48)(π/2)       7   48/105

                    Table 1.

From Table 1 we see that
$$(6.5)\qquad I_{2n}I_{2n+1} = \frac{1}{2n+1}\cdot\frac{\pi}{2}$$
for $0 \le n \le 3$, and this can be proved for all $n$ by induction using (6.4). Since $0 \le \cos\theta \le 1$ for $\theta \in [0, \pi/2]$, we have $I_k \le I_{k-1} \le I_{k-2} = \frac{k}{k-1}I_k$ by (6.4), so $I_{k-1} \sim I_k$ as $k \to \infty$. Therefore (6.5) implies
$$I_{2n}^2 \sim I_{2n}I_{2n+1} = \frac{1}{2n+1}\cdot\frac{\pi}{2},$$
so $(2n)I_{2n}^2 \to \pi/2$ as $n \to \infty$. Then
$$(2n+1)I_{2n+1}^2 \sim (2n)I_{2n}^2 \to \frac{\pi}{2}$$
as $n \to \infty$, so $kI_k^2 \to \pi/2$ as $k \to \infty$. This completes our proof that $J = \sqrt{\pi}/2$.
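
The recursion (6.4) is easy to iterate on a computer, and doing so shows $kI_k^2$ creeping up to $\pi/2$ (a Python sketch using only the standard library):

    import math

    # I_k from I_0 = pi/2, I_1 = 1 and the recursion (6.4): I_k = ((k-1)/k) I_{k-2}.
    I = {0: math.pi / 2, 1: 1.0}
    for k in range(2, 2001):
        I[k] = (k - 1) / k * I[k - 2]

    for k in (10, 100, 1000, 2000):
        print(k, k * I[k]**2, math.pi / 2)   # middle column approaches pi/2 ~ 1.570796
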


Remark 6.1. This proof is closely related to the fifth proof using the Γ-function. Indeed, by (5.1)
$$\frac{\Gamma(\frac{k+1}{2})\Gamma(\frac{1}{2})}{\Gamma(\frac{k+1}{2}+\frac{1}{2})} = \int_0^1 t^{(k+1)/2-1}(1-t)^{1/2-1}\,dt,$$
and with the change of variables $t = (\cos\theta)^2$ for $0 \le \theta \le \pi/2$, the integral on the right is equal to $2\int_0^{\pi/2}(\cos\theta)^k\,d\theta = 2I_k$, so (6.5) is the same as
$$\frac{\pi}{2(2n+1)} = I_{2n}I_{2n+1} = \frac{\Gamma(\frac{2n+1}{2})\Gamma(\frac{1}{2})}{2\Gamma(\frac{2n+2}{2})}\cdot\frac{\Gamma(\frac{2n+2}{2})\Gamma(\frac{1}{2})}{2\Gamma(\frac{2n+3}{2})} = \frac{\Gamma(\frac{2n+1}{2})\Gamma(\frac{1}{2})^2}{4\Gamma(\frac{2n+1}{2}+1)} = \frac{\Gamma(\frac{2n+1}{2})\Gamma(\frac{1}{2})^2}{4\cdot\frac{2n+1}{2}\Gamma(\frac{2n+1}{2})} = \frac{\Gamma(\frac{1}{2})^2}{2(2n+1)},$$
or equivalently $\Gamma(1/2)^2 = \pi$. We saw in the fifth proof that $\Gamma(1/2) = \sqrt{\pi}$ if and only if $J = \sqrt{\pi}/2$.

7. Seventh Proof: The original proof

The original proof that $J = \sqrt{\pi}/2$ is due to Laplace [7] in 1774. (An English translation of Laplace's article is mentioned in the bibliographic citation for [7], with preliminary comments on that article in [15].) He wanted to compute
$$(7.1)\qquad \int_0^1 \frac{dx}{\sqrt{-\log x}}.$$
Setting $y = \sqrt{-\log x}$, this integral is $2\int_0^{\infty} e^{-y^2}\,dy = 2J$, so we expect (7.1) to be $\sqrt{\pi}$.
Laplace's starting point for evaluating (7.1) was a formula of Euler:
$$(7.2)\qquad \int_0^1 \frac{x^r\,dx}{\sqrt{1-x^{2s}}}\int_0^1 \frac{x^{s+r}\,dx}{\sqrt{1-x^{2s}}} = \frac{1}{s(r+1)}\cdot\frac{\pi}{2}$$
for positive $r$ and $s$. (Laplace himself said this formula held "whatever be $r$ or $s$," but if $s < 0$ then the number under the square root is negative.) Accepting (7.2), let $r \to 0$ in it to get
$$(7.3)\qquad \int_0^1 \frac{dx}{\sqrt{1-x^{2s}}}\int_0^1 \frac{x^s\,dx}{\sqrt{1-x^{2s}}} = \frac{1}{s}\cdot\frac{\pi}{2}.$$
Now let $s \to 0$ in (7.3). Then $1 - x^{2s} \sim -2s\log x$ by L'Hopital's rule, so (7.3) becomes
$$\left(\int_0^1 \frac{dx}{\sqrt{-\log x}}\right)^2 = \pi.$$
Thus (7.1) is $\sqrt{\pi}$.
Euler's formula (7.2) looks mysterious, but we have met it before. In the formula let $x^s = \cos\theta$ with $0 \le \theta \le \pi/2$. Then $x = (\cos\theta)^{1/s}$, and after some calculations (7.2) turns into
$$(7.4)\qquad \int_0^{\pi/2} (\cos\theta)^{(r+1)/s-1}\,d\theta\int_0^{\pi/2} (\cos\theta)^{(r+1)/s}\,d\theta = \frac{1}{(r+1)/s}\cdot\frac{\pi}{2}.$$


We used the integral $I_k = \int_0^{\pi/2} (\cos\theta)^k\,d\theta$ before when $k$ is a nonnegative integer. This notation makes sense when $k$ is any positive real number, and then (7.4) assumes the form $I_{\alpha}I_{\alpha+1} = \frac{1}{\alpha+1}\cdot\frac{\pi}{2}$ for $\alpha = (r+1)/s - 1$, which is (6.5) with a possibly nonintegral index. Letting $r = 0$ and $s = 1/(2n+1)$ in (7.4) recovers (6.5). Letting $s \to 0$ in (7.3) corresponds to letting $n \to \infty$ in (6.5), so the 6th proof is in essence a more detailed version of Laplace's 1774 argument.
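
Euler's formula (7.2) can be spot-checked numerically for a few positive $r$ and $s$ (a Python sketch, assuming SciPy, whose adaptive quadrature tolerates the integrable singularity at $x = 1$; the helper name left_side is ad hoc):

    import math
    from scipy.integrate import quad

    def left_side(r, s):
        # Product of the two integrals on the left of (7.2).
        f1, _ = quad(lambda x: x**r / math.sqrt(1 - x**(2 * s)), 0, 1)
        f2, _ = quad(lambda x: x**(s + r) / math.sqrt(1 - x**(2 * s)), 0, 1)
        return f1 * f2

    for r, s in [(0.5, 1.0), (1.0, 2.0), (0.3, 0.7)]:
        print(left_side(r, s), (1 / (s * (r + 1))) * math.pi / 2)
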
8. Eighth Proof: Contour Integration
We will calculate $\int_{-\infty}^{\infty} e^{-x^2/2}\,dx$ using contour integrals and the residue theorem. However, we can't just integrate $e^{-z^2/2}$, as this function has no poles. For a long time nobody knew how to handle this integral using contour integration. For instance, in 1914 Watson [16, p. 79] wrote at the end of his book "Cauchy's theorem cannot be employed to evaluate all definite integrals; thus $\int_0^{\infty} e^{-x^2}\,dx$ has not been evaluated except by other methods." In the 1940s several contour integral solutions were published using awkward contours such as parallelograms [9], [11, Sect. 5] (see [1, Exer. 9, p. 113] for a recent appearance). Our approach will follow Kneser [5, p. 121] (see also [12, pp. 413–414] or [18]), using a rectangular contour and the function
$$\frac{e^{-z^2/2}}{1 + e^{-\sqrt{\pi}(1+i)z}}.$$
This function comes out of nowhere, so our first task is to motivate the introduction of this function.

We seek a meromorphic function $f(z)$ to integrate around the rectangular contour $\gamma_R$ in the figure below, with vertices at $-R$, $R$, $R + ib$, and $-R + ib$, where $b$ will be fixed and we let $R \to \infty$.

Suppose $f(z) \to 0$ along the right and left sides of $\gamma_R$ uniformly as $R \to \infty$. Then by applying the residue theorem and letting $R \to \infty$, we would obtain (if the integrals converge)
$$\int_{-\infty}^{\infty} f(x)\,dx - \int_{-\infty}^{\infty} f(x+ib)\,dx = 2\pi i\sum_{a} \operatorname{Res}_{z=a} f(z),$$
where the sum is over poles of $f(z)$ with imaginary part between 0 and $b$. This is equivalent to
$$\int_{-\infty}^{\infty} (f(x) - f(x+ib))\,dx = 2\pi i\sum_{a} \operatorname{Res}_{z=a} f(z).$$
Therefore we want $f(z)$ to satisfy
$$(8.1)\qquad f(z) - f(z+ib) = e^{-z^2/2},$$
where $f(z)$ and $b$ need to be determined.


Let's try $f(z) = e^{-z^2/2}/d(z)$, for an unknown denominator $d(z)$ whose zeros are poles of $f(z)$. We want $f(z)$ to satisfy
$$(8.2)\qquad f(z) - f(z+\beta) = e^{-z^2/2}$$
for some $\beta$ (which will not be purely imaginary, so (8.1) doesn't quite work, but (8.1) is only motivation). Substituting $e^{-z^2/2}/d(z)$ for $f(z)$ in (8.2) gives us
$$(8.3)\qquad e^{-z^2/2}\left(\frac{1}{d(z)} - \frac{e^{-\beta z - \beta^2/2}}{d(z+\beta)}\right) = e^{-z^2/2}.$$
Suppose $d(z+\beta) = d(z)$. Then (8.3) implies
$$d(z) = 1 - e^{-\beta z - \beta^2/2},$$
and with this definition of $d(z)$, $f(z)$ satisfies (8.2) if and only if $e^{-\beta^2} = 1$, or equivalently $\beta^2 \in 2\pi i\mathbf{Z}$. The simplest nonzero solution is $\beta = \sqrt{\pi}(1+i)$. From now on this is the value of $\beta$, so $e^{-\beta^2/2} = e^{-\pi i} = -1$ and then
$$f(z) = \frac{e^{-z^2/2}}{d(z)} = \frac{e^{-z^2/2}}{1 + e^{-\beta z}},$$
which is Kneser's function mentioned earlier. This function satisfies (8.2) and we henceforth ignore the motivation (8.1). Poles of $f(z)$ are at odd integral multiples of $\beta/2$.
We will integrate this $f(z)$ around the rectangular contour $\gamma_R$ below, whose height is $\operatorname{Im}(\beta) = \sqrt{\pi}$.

The poles of $f(z)$ nearest the origin are plotted in the figure; they lie along the line $y = x$. The only pole of $f(z)$ inside $\gamma_R$ (for $R > \sqrt{\pi}/2$) is at $\beta/2$, so by the residue theorem
$$\int_{\gamma_R} f(z)\,dz = 2\pi i\operatorname{Res}_{z=\beta/2} f(z) = 2\pi i\cdot\frac{e^{-\beta^2/8}}{(-\beta)e^{-\beta^2/2}} = \frac{2\pi i\,e^{-\pi i/4}}{\sqrt{\pi}(1+i)} = \sqrt{2\pi}.$$

As $R \to \infty$, the value of $|f(z)|$ tends to 0 uniformly along the left and right sides of $\gamma_R$, so
$$\sqrt{2\pi} = \int_{-\infty}^{\infty} f(x)\,dx - \int_{-\infty+i\sqrt{\pi}}^{\infty+i\sqrt{\pi}} f(z)\,dz = \int_{-\infty}^{\infty} f(x)\,dx - \int_{-\infty}^{\infty} f(x+i\sqrt{\pi})\,dx.$$


In the second integral, write $i\sqrt{\pi}$ as $\beta - \sqrt{\pi}$ and use (real) translation invariance of $dx$ to obtain
$$\sqrt{2\pi} = \int_{-\infty}^{\infty} f(x)\,dx - \int_{-\infty}^{\infty} f(x+\beta)\,dx = \int_{-\infty}^{\infty} (f(x) - f(x+\beta))\,dx = \int_{-\infty}^{\infty} e^{-x^2/2}\,dx$$
by (8.2).
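
Kneser's function is unfamiliar enough that a numerical check of the functional equation (8.2) is reassuring (a Python sketch using only the standard library):

    import cmath, math

    beta = math.sqrt(math.pi) * (1 + 1j)

    def f(z):
        # Kneser's function exp(-z^2/2) / (1 + exp(-beta*z)).
        return cmath.exp(-z**2 / 2) / (1 + cmath.exp(-beta * z))

    for z in (0.3, 1 - 0.5j, -2 + 0.1j):
        # |f(z) - f(z + beta) - exp(-z^2/2)| should be at the level of rounding error.
        print(abs(f(z) - f(z + beta) - cmath.exp(-z**2 / 2)))
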

9. Ninth Proof: Stirling's Formula

Besides the integral formula $\int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2}\,dx = \sqrt{2\pi}$ that we have been discussing, another place in mathematics where $\sqrt{2\pi}$ appears is in Stirling's formula:
$$n! \sim \frac{n^n}{e^n}\sqrt{2\pi n} \quad \text{as } n \to \infty.$$
In 1730 De Moivre proved $n! \sim C(n^n/e^n)\sqrt{n}$ for some positive number $C$ without being able to determine $C$. Stirling soon thereafter showed $C = \sqrt{2\pi}$ and wound up having the whole formula named after him. We will show that determining that the constant $C$ in Stirling's formula is $\sqrt{2\pi}$ is equivalent to showing that $J = \sqrt{\pi}/2$ (or, equivalently, that $I = \sqrt{2\pi}$).


Applying (6.4) repeatedly,
$$I_{2n} = \frac{2n-1}{2n}\,I_{2n-2} = \frac{(2n-1)(2n-3)}{(2n)(2n-2)}\,I_{2n-4} = \cdots = \frac{(2n-1)(2n-3)(2n-5)\cdots(5)(3)(1)}{(2n)(2n-2)(2n-4)\cdots(6)(4)(2)}\,I_0.$$
Inserting $(2n-2)(2n-4)(2n-6)\cdots(6)(4)(2)$ in the top and bottom,
$$I_{2n} = \frac{(2n-1)(2n-2)(2n-3)(2n-4)(2n-5)\cdots(6)(5)(4)(3)(2)(1)}{(2n)\big((2n-2)(2n-4)\cdots(6)(4)(2)\big)^2}\cdot\frac{\pi}{2} = \frac{(2n-1)!}{2n\,(2^{n-1}(n-1)!)^2}\cdot\frac{\pi}{2}.$$

Applying De Moivre's asymptotic formula $n! \sim C(n/e)^n\sqrt{n}$,
$$I_{2n} \sim \frac{C((2n-1)/e)^{2n-1}\sqrt{2n-1}}{2n\,2^{2(n-1)}\,C^2((n-1)/e)^{2(n-1)}(n-1)}\cdot\frac{\pi}{2} = \frac{(2n-1)^{2n-1}\sqrt{2n-1}}{2n\,2^{2(n-1)}\,Ce\,(n-1)^{2n-1}}\cdot\frac{\pi}{2}$$
as $n \to \infty$. For any $a \in \mathbf{R}$, $(1+a/n)^n \to e^a$ as $n \to \infty$, so $(n+a)^n \sim e^a n^n$. Substituting this into the above formula with $a = -1$ and $n$ replaced by $2n$,
$$(9.1)\qquad I_{2n} \sim \frac{e^{-1}(2n)^{2n}\frac{1}{\sqrt{2n}}}{2n\,2^{2(n-1)}\,Ce\,(e^{-1}n^n)^2\frac{1}{n}}\cdot\frac{\pi}{2} = \frac{\pi}{C\sqrt{2n}}.$$
Since $I_{k-1} \sim I_k$, the outer terms in (6.3) are both asymptotic to $\sqrt{n}\,I_{2n} \sim \pi/(C\sqrt{2})$ by (9.1). Therefore
$$\int_0^{\sqrt{n}} e^{-y^2}\,dy \to \frac{\pi}{C\sqrt{2}}$$
as $n \to \infty$, so $J = \pi/(C\sqrt{2})$. Therefore $C = \sqrt{2\pi}$ if and only if $J = \sqrt{\pi}/2$.
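
Relation (9.1) can be watched numerically by iterating the recursion (6.4) and comparing $I_{2n}$ with $\pi/(C\sqrt{2n})$ for $C = \sqrt{2\pi}$ (a standard-library Python sketch):

    import math

    I = {0: math.pi / 2, 1: 1.0}
    for k in range(2, 4001):
        I[k] = (k - 1) / k * I[k - 2]   # recursion (6.4)

    C = math.sqrt(2 * math.pi)          # Stirling's constant
    for n in (10, 100, 1000, 2000):
        print(n, I[2 * n], math.pi / (C * math.sqrt(2 * n)))   # last two columns converge together
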



10. Tenth Proof: Fourier transforms


For a continuous function f : R C that is rapidly decreasing at , its Fourier transform is
the function Ff : R C defined by
Z
f (x)eixy dx.
(Ff )(y) =

R
For example, (Ff )(0) = f (x) dx.
Here are three properties of the Fourier transform.

If $f$ is differentiable, then after using differentiation under the integral sign on the Fourier transform of $f$ we obtain
$$(\mathcal{F}f)'(y) = \int_{-\infty}^{\infty} ixf(x)e^{ixy}\,dx = i(\mathcal{F}(xf(x)))(y).$$

Using integration by parts on the Fourier transform of $f$, with $u = f(x)$ and $dv = e^{ixy}\,dx$, we obtain
$$\mathcal{F}(f')(y) = -iy(\mathcal{F}f)(y).$$

If we apply the Fourier transform twice then we recover the original function up to interior and exterior scaling:
$$(10.1)\qquad (\mathcal{F}^2 f)(x) = 2\pi f(-x).$$

Let's show the appearance of $2\pi$ in (10.1) is equivalent to the evaluation of $I$ as $\sqrt{2\pi}$.

Fixing $a > 0$, set $f(x) = e^{-ax^2}$, so
$$f'(x) = -2axf(x).$$
Applying the Fourier transform to both sides of this equation implies $-iy(\mathcal{F}f)(y) = -2a\frac{1}{i}(\mathcal{F}f)'(y)$, which simplifies to $(\mathcal{F}f)'(y) = -\frac{1}{2a}y(\mathcal{F}f)(y)$. The general solution of $g'(y) = -\frac{1}{2a}yg(y)$ is $g(y) = Ce^{-y^2/(4a)}$, so
$$(\mathcal{F}f)(y) = Ce^{-y^2/(4a)}$$
for some constant $C$. Letting $a = \frac{1}{2}$, so $f(x) = e^{-x^2/2}$, we obtain
$$(\mathcal{F}f)(y) = Ce^{-y^2/2} = Cf(y).$$
Setting $y = 0$, the left side is $(\mathcal{F}f)(0) = \int_{-\infty}^{\infty} e^{-x^2/2}\,dx = I$, so $I = Cf(0) = C$.

Applying the Fourier transform to both sides of the equation $(\mathcal{F}f)(y) = Cf(y)$, we get $2\pi f(-x) = C(\mathcal{F}f)(x) = C^2 f(x)$. At $x = 0$ this becomes $2\pi = C^2$, so $I = C = \pm\sqrt{2\pi}$. Since $I > 0$, the number $I$ is $\sqrt{2\pi}$. If we didn't know the constant on the right side of (10.1) were $2\pi$, whatever its value is would wind up being $C^2$, so saying $2\pi$ appears on the right side of (10.1) is equivalent to saying $I = \sqrt{2\pi}$.
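
As a final numerical illustration (a Python sketch, assuming SciPy), the Fourier transform of $e^{-x^2/2}$ can be computed by quadrature and compared with $\sqrt{2\pi}\,e^{-y^2/2}$; only the cosine part is needed because the integrand is even.

    import math
    from scipy.integrate import quad

    def fourier_transform(y):
        # Real part of the transform of exp(-x^2/2); the sine part vanishes by symmetry.
        re, _ = quad(lambda x: math.exp(-x**2 / 2) * math.cos(x * y), -math.inf, math.inf)
        return re

    for y in (0.0, 0.7, 1.5, 3.0):
        print(y, fourier_transform(y), math.sqrt(2 * math.pi) * math.exp(-y**2 / 2))
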
References

[1] C. A. Berenstein and R. Gay, Complex Variables, Springer-Verlag, New York, 1991.
[2] A. L. Delgado, A Calculation of $\int_0^{\infty} e^{-x^2}\,dx$, The College Math. J. 34 (2003), 321–323.
[3] H. Iwasawa, Gaussian Integral Puzzle, Math. Intelligencer 31 (2009), 38–41.
[4] T. P. Jameson, The Probability Integral by Volume of Revolution, Mathematical Gazette 78 (1994), 339–340.
[5] H. Kneser, Funktionentheorie, Vandenhoeck and Ruprecht, 1958.
[6] P. S. Laplace, Théorie Analytique des Probabilités, Courcier, 1812.
[7] P. S. Laplace, Mémoire sur la probabilité des causes par les évènemens, Oeuvres Complètes 8, 27–65. (English trans. by S. Stigler as Memoir on the Probability of Causes of Events, Statistical Science 1 (1986), 364–378.)
[8] P. M. Lee, http://www.york.ac.uk/depts/maths/histstat/normal_history.pdf.
[9] L. Mirsky, The Probability Integral, Math. Gazette 33 (1949), 279. Online at http://www.jstor.org/stable/3611303.
[10] C. P. Nicholas and R. C. Yates, The Probability Integral, Amer. Math. Monthly 57 (1950), 412–413.
[11] G. Polya, Remarks on Computing the Probability Integral in One and Two Dimensions, pp. 63–78 in Berkeley Symp. on Math. Statist. and Prob., Univ. California Press, 1949.
[12] R. Remmert, Theory of Complex Functions, Springer-Verlag, 1991.
[13] W. Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw-Hill, 1976.
[14] M. Spivak, Calculus, W. A. Benjamin, 1967.
[15] S. Stigler, Laplace's 1774 Memoir on Inverse Probability, Statistical Science 1 (1986), 359–363.
[16] G. N. Watson, Complex Integration and Cauchy's Theorem, Cambridge Univ. Press, Cambridge, 1914.
[17] http://gowers.wordpress.com/2007/10/04/when-are-two-proofs-essentially-the-same/#comment-239.
[18] http://math.stackexchange.com/questions/34767/int-infty-infty-e-x2-dx-with-complex-analysis.
