
Name: Jesper Toft Kristensen

Spring 2014
Chapter: 2

Problem 2.1 (Random walks in grade space):


The random walk is applicable in many settings. Let us try applying it to a
multiple-choice test! We will perform a random walk in grade space. Cool!
This is how we think about this problem. Consider a single student. At each
question she can answer correctly with 70 % probability and incorrectly with
30 % probability. The reason for an increased probability of a right answer
is that she has studied for this exam. The probabilities would be 50/50 if
she is just guessing (we will get back to this later).
Now, there are 10 questions on the exam. A correct answer to a question
gives 10 points. An incorrect answer gives nothing, zero points. So, she can
answer everything wrong and she ends where she started: at zero points. If
she answers everything wrong except one question she has taken one step
to the right (I imagine the x-axis where we can stand on multiples of 10
up to 100, those are the possible scores for one person) and stands at 10
points. So this is a random walk where we can either step to the right or
stay where we are. It is thus different from the walk in the book, where we
could move right or left; here we cannot move left. But that doesn't matter.

a)
What is the expected mean for the exam? We can find the expected size of
the first step l1 . With 70 % probability the step is of size 10. With 30 %
probability it is of size zero. The expected value of this variable is the sum
over all the values it can take weighted by the respective probabilities of
happening:
⟨l_1⟩ = (7/10) × 10 + (3/10) × 0 = 7.
So, on average, each question makes the student move 7 points up in grade
space. Now, the exam score S after N = 10 questions is found by summing
up each step (question) we took (answered):
S_N = Σ_{i=1}^{N} l_i.

The mean of the exam is:


⟨S_N⟩ = ⟨Σ_{i=1}^{N} l_i⟩ = Σ_{i=1}^{N} ⟨l_i⟩ = N⟨l_1⟩ = 7N = 7 × 10 = 70,
where we have used that hli i = 7 for all i. So indeed the mean is 70. What
is the standard deviation of the exam? To get this, consider the mean value
of the square of the quiz score after N steps:

⟨S_N²⟩.

We can decompose the quiz after N steps into the quiz score after N − 1
steps plus the last step lN . In other words, and more particularly, the quiz
score after 10 steps is the score after 9 steps plus what we score on the last,
tenth, question:
⟨S_N²⟩ = ⟨(S_{N−1} + l_N)²⟩ = ⟨S_{N−1}² + l_N² + 2 S_{N−1} l_N⟩
       = ⟨S_{N−1}²⟩ + ⟨l_N²⟩ + 2⟨S_{N−1} l_N⟩.

We can compute the second term: l_N² is the Nth step squared. With 70 %
probability the step is 10, and with the remaining probability it is zero, so:

⟨l_N²⟩ = (7/10) × 10² + (3/10) × 0² = 70.
Let us consider the last term. We can explicitly write out the sum of the
scores S_{N−1}:

2⟨S_{N−1} l_N⟩ = 2⟨Σ_{i=1}^{N−1} l_i l_N⟩ = 2 Σ_{i=1}^{N−1} ⟨l_i l_N⟩.

We know that the steps are uncorrelated. That is, step 1 is uncorrelated
with step 9. When two variables are uncorrelated we know that:

⟨XY⟩ = ⟨X⟩⟨Y⟩.

We know that the mean of each step is 7. So, for any i (from 1 to N − 1):

⟨l_i l_N⟩ = ⟨l_i⟩ × ⟨l_N⟩ = 7 × 7 = 49.

So (with N = 10):
2⟨S_{N−1} l_N⟩ = 2 Σ_{i=1}^{N−1} ⟨l_i l_N⟩ = 2 × 49 × (N − 1) = 98(N − 1).
Going back to the expression for the squared total quiz score ⟨S_N²⟩ we get:

⟨S_N²⟩ = ⟨S_{N−1}²⟩ + ⟨l_N²⟩ + 2⟨S_{N−1} l_N⟩
       = ⟨S_{N−1}²⟩ + 70 + 98(N − 1).

Carrying this out for N steps (summing the increments 70 + 98(k − 1) over
k = 1, …, N, and using Σ_{k=1}^{N}(k − 1) = N(N − 1)/2) gives:

⟨S_N²⟩ = · · · = 70N + 49N(N − 1) = 49N² + 21N.

From this, the standard deviation is:

σ_s = √(⟨S_N²⟩ − ⟨S_N⟩²).

For ⟨S_N⟩² decompose this sum:

⟨S_N⟩² = (⟨S_{N−1}⟩ + ⟨l_N⟩)² = ⟨S_{N−1}⟩² + ⟨l_N⟩² + 2⟨S_{N−1}⟩⟨l_N⟩
       = ⟨S_{N−1}⟩² + 49 + 2⟨Σ_{i=1}^{N−1} l_i⟩ × 7
       = ⟨S_{N−1}⟩² + 49 + 2 × (7(N − 1)) × 7
       = ⟨S_{N−1}⟩² + 49 + 98(N − 1) = · · · = 49N + 49N(N − 1) = 49N²,

which is just (7N)², as it must be. Thus:

σ_s = √(49N² + 21N − 49N²) = √(21N).

For N = 10 questions on the quiz we get:

σ_s = √210 ≈ 14.49.

This answer is indeed close to the 15 given in the problem text as a hint.
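We can quickly check these numbers in Python. The score is 10 points times a Binomial(10, 0.7) variable, so the exact mean and spread follow by enumeration (a small sketch; the helper name exam_stats is mine):

```python
import math

def exam_stats(N=10, p=0.7, points=10):
    """Exact mean and standard deviation of the exam score,
    where the score is `points` times a Binomial(N, p) variable."""
    probs = [math.comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)]
    scores = [points * k for k in range(N + 1)]
    mean = sum(q * s for q, s in zip(probs, scores))
    var = sum(q * (s - mean)**2 for q, s in zip(probs, scores))
    return mean, math.sqrt(var)

mean, sigma = exam_stats()
print(mean, sigma)  # 70.0 and sqrt(210) ≈ 14.49
```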

b)
Now let us compute the standard deviation of a student taking the test and
guessing. The probability of answering right is 50 %. We can easily extend
the previous exercise (the mean of a step is now 5, the mean of a step squared
is 50):
⟨S_N²⟩ = ⟨S_{N−1}²⟩ + ⟨l_N²⟩ + 2⟨S_{N−1} l_N⟩
       = ⟨S_{N−1}²⟩ + 50 + 2⟨S_{N−1} l_N⟩.

And we have (notice the 5² instead of 7²):

2⟨S_{N−1} l_N⟩ = 2 Σ_{i=1}^{N−1} ⟨l_i l_N⟩ = 2 × 5² × (N − 1) = 50(N − 1).

Giving:

⟨S_N²⟩ = ⟨S_{N−1}²⟩ + 50 + 50(N − 1)
       = ⟨S_{N−1}²⟩ + 50N = · · · = 25N² + 25N,

where the last step sums the increments 50k over k = 1, …, N. And:

⟨S_N⟩² = (⟨S_{N−1}⟩ + ⟨l_N⟩)² = ⟨S_{N−1}⟩² + ⟨l_N⟩² + 2⟨S_{N−1}⟩⟨l_N⟩
       = ⟨S_{N−1}⟩² + 25 + 2⟨Σ_{i=1}^{N−1} l_i⟩ × 5
       = ⟨S_{N−1}⟩² + 25 + 2 × (5(N − 1)) × 5
       = ⟨S_{N−1}⟩² + 25 + 50(N − 1) = · · · = 25N²,

which is just (5N)². Finally, for the standard deviation:

σ_s^random = √(25N² + 25N − 25N²) = √(25N),

which for N = 10 is:

σ_s^random = √250 ≈ 15.8.
Thus, the standard deviation of the exam, if every student guesses, is larger
than if the students show up with knowledge! The ratio is (let the observed
standard deviation be σ_s):

σ_s^random/σ_s = 15.8/14.49 ≈ 1.09.
You can also derive a result in terms of the probability of getting the answers
right. I did this and found that:

σ_s = √(100 N p(1 − p)),

where p is the probability of answering correctly. Notice that σ_s = 0 if p = 1,
as it should be: every student answers correctly, so the mean is 100 and each
individual score is 100 as well (no spread around 100). The same goes for
p = 0 (every student answers everything wrong). Anyway, this function peaks at
p = 50 %, so random guessing leads to the largest standard deviation.

This is because there are more ways to end up with a mean of 50 than with
a mean of 70 on an exam with a fixed number of questions. The standard
deviation is a measure of how much we spread around the mean. It sort of
measures the number of different ways the students can get particular scores
while keeping the mean at 70 (meaning: consider an ensemble of students.
Collectively they have a mean of 70, but each student has a different
particular score, of course. If they all guess, the particular scores can
vary more than if they do not guess).
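The closed-form expression above is a one-liner to evaluate (a quick sketch; the function name sigma_exam is mine):

```python
import math

def sigma_exam(p, N=10, points=10):
    """Standard deviation of the total score: sqrt(points^2 * N * p * (1 - p))."""
    return math.sqrt(points**2 * N * p * (1 - p))

print(sigma_exam(0.7))  # sqrt(210) ≈ 14.49 (prepared students)
print(sigma_exam(0.5))  # sqrt(250) ≈ 15.81 (pure guessing, the maximum)
```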

Problem 2.2 (Photon diffusion in the Sun):


The Sun has a radius of 7 × 105 km. In the inner part the photons travel via
random walks. This happens out to a radius of L = 5 × 105 km. Now, the
mean free path of the photon in the inner part of the Sun is assumed to be
l = 50 µm. That is not far compared to the Sun’s radius. We know that the
photons do a random walk. This also means that the distance they travel
can be written in terms of the mean free path l and the number of steps N
it takes to go from the center to the given radius as:
L = √N l = √N × 50 × 10⁻⁶ m

⇒ N = (L/l)² = (5 × 10⁸ m / 5 × 10⁻⁵ m)² = (10¹³)² = 10²⁶.
So the photon has to take 10²⁶ steps to reach the convection part of the
Sun (assuming it starts from the center). Ignoring the index of refraction,
it travels at the speed of light, so each step takes a time:

∆t = l/c = (50 × 10⁻⁶ m)/(3 × 10⁸ m s⁻¹) ≈ 167 fs.
Wow, it does not take a long time to take each step! However, the photon
has to take an enormous number of steps, so the total time T is:

T = N ∆t = 10²⁶ × 167 fs = 1.67 × 10¹³ s.

A year has about 3 × 10⁷ s, so we get:

T = (1.67 × 10¹³ s)/(3 × 10⁷ s yr⁻¹) ≈ 5.6 × 10⁵ yrs = 560,000 yrs.
Thus, the photon travels for (approx.) half a million years from the center
of the Sun to the part where convection takes over!
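The arithmetic above is easy to reproduce (a small sketch with the numbers from the text; the variable names are mine):

```python
L = 5e8      # radius of the radiative zone, m
l = 50e-6    # photon mean free path, m
c = 3e8      # speed of light, m/s
year = 3e7   # seconds per year, rough

N = (L / l)**2     # number of steps, from L = sqrt(N) * l
T = N * (l / c)    # total diffusion time, s
print(N)           # 1e26 steps
print(T / year)    # ≈ 5.6e5 years
```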

Problem 2.3 (Molecular motors and random walks):

a)
If the motor stalls it must mean that there is no net force pushing it forward
anymore. This means that the overall tilt in the energy landscape has
disappeared. In other words, we are undoing the push the motor gets from
burning energy by pulling the little bead at the other end of it. A flat
landscape implies δ = 0: there is no difference between wells.

For the next part, let us assume that there are no external forces push-
ing the motor forward. In this scenario a motor is just doing a random walk
on the DNA (say). So we can use the tools developed in the text book.
We know that, since the motor performs a random walk, it will satisfy the
diffusion equation (this is sloppy language; rather, it is the ensemble of
end points of a large collection of motors that satisfies this). Now, there is
indeed a non-zero force pushing the motor in one direction along the DNA
so we need the diffusion equation developed in the presence of a constant
(meaning space-independent) force F . It takes the form (see the text book
for how this came about):

∂ρ/∂t = −γF ∂ρ/∂x + D ∂²ρ/∂x².
Let us consider equilibrium, that is, we have released a bunch of motors on
a DNA string and let them go for a while. We wait and wait. Finally, the
density does not change in time anymore. It does vary in space, however. But
if we go to one particular point in space we won't, on average, see any change
in the density. This means that the time derivative of the density is zero:
0 = −γF ∂ρ/∂x + D ∂²ρ/∂x²

⇒ ρ(x) = A exp(−(γ/D) F x).
So we see that we won’t just find a constant density of motors on the DNA.
Well, we will if F = 0 of course, but it is in reality non-zero. Now, from
footnote 20 we have the Einstein relation:
footnote 20 we have the Einstein relation:

D/γ = k_B T,
leaving us with the density of motors:

ρ(x) = A exp(−F x/k_B T).
The numerator is the energy (force times distance) associated with moving
a distance x in a force field F . Now, consider a well in the energy landscape,
say the right well of the two shown. What is the flux out of this well to the
left one? We can find this by computing the density of motors that reach
the top of the hill they must climb. In other words, we compute the number
of motors that make a step with enough energy to climb the hill. For motors
in the right well it takes an energy of V + δ/2 (the height of the hill as seen
from there) to get to the top. This density is how many motors leave the
well to go against the force gradient. Subscript these with a minus:
ρ_− = A exp(−(V + δ/2)/k_B T),
where A is the density in this well in the absence of external forces. Consider
then the left well. The left well sends motors to the right well if they climb
a potential hill of height V − δ/2. Thus, the influx of motors to the right
well is:
ρ_+ = B exp(−(V − δ/2)/k_B T).

So, the ratio of the forward (in) to the backward (out) flux is:
ρ_+/ρ_− = (B/A) exp(−(V − δ/2)/k_B T) exp(+(V + δ/2)/k_B T)

        = (B/A) exp(δ/k_B T).
If δ = 0 the density in each well must be the same, so A = B:

ρ_+/ρ_− = exp(δ/k_B T).
Indeed, this value is larger than one for δ > 0, which makes sense: when
δ > 0 there is a nonzero force pushing the little motors to the right (so more
go to the right, the plus direction, than to the left, the minus direction).

Under the assumption that the motor stalled under the influence of the
known force from the bead we have a good estimate of how much force it
takes to change the energy landscape over a given distance. In other words,
we can estimate the energy δ:

δ = 27 × 10⁻¹² N × 0.34 × 10⁻⁹ m = 9.18 × 10⁻²¹ J.

We then get for the ratio:

ρ_+/ρ_− = exp(9.18 × 10⁻²¹ J / (1.381 × 10⁻²³ J/K × 300 K)) ≈ 9.2.
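These two numbers follow directly from the constants; a quick check in Python (the variable names are mine):

```python
import math

F = 27e-12      # stall force, N
step = 0.34e-9  # distance per step, m
kB = 1.381e-23  # Boltzmann constant, J/K
T = 300.0       # temperature, K

delta = F * step                    # energy tilt per step, J
ratio = math.exp(delta / (kB * T))  # forward/backward flux ratio
print(delta)  # 9.18e-21 J
print(ratio)  # ≈ 9.2
```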

b)
Given that we computed the force-times-distance (δ) in part a), we find the
efficiency to range from

(9.18 × 10⁻²¹ J)/(5 × 10⁻²⁰ J) = 18.4 %

to

(9.18 × 10⁻²¹ J)/(2 × 10⁻²⁰ J) = 46 %.

Problem 2.5 (Generating random walks):

a)
I wrote a small Python program for generating the 10,000-step random walk.
I have plotted the result in Fig. 1. Then, I plotted a 2-dimensional random
walk, y(t) versus x(t), keeping the aspect ratio equal in the plot; the Python
script is given below as well, and the result is shown in Fig. 2. For N = 10
the walk length is on the order of 1. This is indeed multiplied by 10 to get
on the order of 10 (along x) and 20 (along y) in the N = 1000 case (so, N has
increased by a factor of 100, which roughly changes the length of the walk by
a factor of 10; at least it is close enough to be true). And yes, when
increasing the number of steps to 100,000 we do see that the length now
becomes of order 150 (so it has roughly gone up by a factor of 10 from 20,
close enough at least).

Figure 1: Random walk generated in Python with 10,000 steps. The walk
length is indeed on the order of √10000 × L = 100 × 1/2 = 50, where L is
the (maximum) step size of 1/2, as expected.

Figure 2: 2D random walk. x(t) ≡ x_t is shown on the abscissa and y(t) ≡ y_t
on the ordinate.

The Python code for solving part a) is given below:

"""
Author: Jesper Kristensen, Spring 2014
"""
import numpy as np
import matplotlib.pylab as plt


def generate_rand_numbers(N, d):
    """
    Simply generates an Nxd matrix of random numbers in the range
    [-0.5, 0.5). This represents a d-dimensional random walk of N steps.
    """
    # [0, 1) - 0.5 --> [-0.5, 0.5)
    return np.subtract(np.random.rand(N, d), 0.5)


def compute_xt(steps):
    """
    Computes the path versus step in a random walk, the latter given in
    the array "steps". The random walk is of N steps in d dimensions so
    "steps" is assumed to be an Nxd numpy array.
    """
    return np.cumsum(steps, axis=0)


def plot_x_of_t_random_walk(xt, sp):
    """
    Plots x(t) of a 1D random walk.
    Uses "sp" as a subplot call, e.g., sp = '111'.
    """
    plt.subplot(sp)
    plt.plot(xt, linewidth=2)
    plt.tick_params(labelsize=24)
    plt.xlabel('time t (step number)', fontsize=24)
    plt.ylabel('xt', fontsize=24)


def plot_y_vs_x_random_walk(xt, yt, sp):
    """
    Plots y(t) versus x(t) in a 2D random walk.
    Uses "sp" as a particular subplot (e.g., sp = '111').
    """
    plt.subplot(sp, aspect='equal')
    plt.plot(xt, yt, linewidth=2)
    plt.tick_params(labelsize=24)
    plt.xlabel('xt', fontsize=24)
    plt.ylabel('yt', fontsize=24)


"""
Solve problem 2.5 in Sethna's Statistical Mechanics book.
"""
if __name__ == '__main__':

    # === 2.5 a):
    N = 10000
    d = 1  # Just in 1D
    random_steps = generate_rand_numbers(N, d)
    xt = compute_xt(random_steps)
    plt.figure(1)
    plot_x_of_t_random_walk(xt, '111')
    plt.title('N=%g' % N)

    # Now to the 2D walk:
    plt.figure(2)
    d = 2
    i = 1
    Nvalues = [10, 1000, 100000]  # How many random steps?
    for N in Nvalues:
        random_steps = generate_rand_numbers(N, d)
        xt = compute_xt(random_steps[:, 0])
        yt = compute_xt(random_steps[:, 1])
        plot_y_vs_x_random_walk(xt, yt, '1' + str(len(Nvalues)) + str(i))
        plt.title('N=%g' % N)
        i += 1

    plt.show()

Figure 3: End points of W = 10000 random walks of N = 1 step (red
square) and N = 10 steps (blue circular region).

b)
In Fig. 3 we plot the end points of W = 10000 random walks. The square
shows the walks for N = 1. We see that this has the square symmetry.
However, for N = 10 a circular symmetry, not apparent from the microscopic
step (diagonal steps are longer than other steps), emerges!

c)
Plotting the distribution of end points for W = 10000 1D random walks of
various number of steps within each walk gives the result in Fig. 4. The
Gaussian distribution which predicts the distribution of end points is
overlaid on each plot and given analytically by:

ρ(x) = (1/(√(2π) σ)) exp(−x²/2σ²).

I first find a, which is the RMS of a set of numbers uniformly distributed
in (−0.5, 0.5). Since the mean of this distribution is zero we can also just
compute the standard deviation. We know from chapter 1 that the standard
deviation of a uniform distribution over an interval of range 1 is a = 1/√12.
Then σ = √N a, which defines the Gaussian for each N. From Fig. 4 we see
that already at N = 2 (2 steps in the random walk, and the histogram shows
10000 such walks) the Gaussian is a good approximation to the distribution
of end points!

Figure 4: Distribution of end points from random walks with number of
random walks W = 10000 and various values of number of steps N in each
walk. The steps are uniformly distributed in (−0.5, 0.5).
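The claim σ = √N a with a = 1/√12 can also be checked without plotting (a short sketch; the seed and walk count are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
W, N = 100000, 2                    # number of walks, steps per walk
endpoints = rng.uniform(-0.5, 0.5, size=(W, N)).sum(axis=1)
sigma_pred = np.sqrt(N / 12.0)      # sqrt(N) * a with a = 1/sqrt(12)
print(endpoints.std(), sigma_pred)  # both ≈ 0.408
```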

Problem 2.6 (Fourier and Green):

a)
Say the initial profile (t = 0) is a cosine: cos(k₀x) = cos(10x). What shape
does it take at 10 s? For this, we use the formula:

ρ̃(k, t) = ρ̃(k, 0) G̃(k, t) = [(δ(k − k₀) + δ(k + k₀))/2] exp(−Dk²t),

where we used that the Fourier transform of a pure cosine with wavenumber
k₀ is a pair of delta functions at ±k₀: ρ̃(k, 0) = (δ(k − k₀) + δ(k + k₀))/2.
What happens to the cosine profile in Fourier space? Well, we see that we
are multiplying by a decaying exponential in time. If the profile vanishes
in Fourier space (meaning: the amplitudes of the two traveling waves at ±k₀
are being squeezed to zero) it will also vanish in real space. But we see that
the frequency stays unaltered. Indeed, think about what we do when
transferring this profile back to real space: we are going to integrate over k.
Because of the delta functions, all that happens is that k₀ is picked up in
the exponential over k, exp(−Dk₀²t), but we still get a cosine back (because
k² is the same for both ±k₀). Its amplitude has been altered, not its frequency.

b)
Next, let us check how long it takes for a delta function to spread as
much as the Gaussian centered at 5 in Fig. 2.11 in the text book. Let the
density start as a delta function at time t = −t₀; then we know that the
density at the later time t = 0 and position x is:

ρ(x, t = 0) = ∫ dy ρ(y, −t₀) G(y − x, t₀) = ∫ dy δ(y) (1/√(4πDt₀)) exp(−(y − x)²/4Dt₀)

            = (1/√(2π)) (1/√(2Dt₀)) exp(−x²/(2 × 2Dt₀)).

So we see that the particular Gaussian distribution is achieved when (notice
that Dt has units of m²):

2Dt₀ = 1 m²

⇒ t₀ = 1 m²/(2D) = 1 m²/(2 × 0.001 m²/s) = 500 s.

Thus, it takes 500 seconds for a delta function density profile to reach the
shape of the Gaussian distribution on the left in Fig. 2.11 in the problem
text. In other words, 10 s is a short time on the scale of evolution of the
Gaussian shape.

c)
We have found that a Gaussian does not change notably in a time frame
of 10 s. Thus, the Gaussian on the left and the Gaussian envelope stay
(approximately) constant. But the cosine decays in amplitude by a factor
exp(−Dk₀²t) = exp(−0.001 × 10² × 10) = e⁻¹ ≈ 0.4, so it should be roughly
halved in size. Thus, Fig. (c) is the right choice.
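Both numbers used in this problem are one-liners to verify (a quick check with D and k₀ as in the problem; variable names are mine):

```python
import math

D, k0, t = 0.001, 10.0, 10.0
amplitude = math.exp(-D * k0**2 * t)  # cosine amplitude factor after 10 s
t0 = 1.0 / (2 * D)                    # time for a delta to spread to sigma = 1
print(amplitude)  # e^-1 ≈ 0.37
print(t0)         # 500.0 s
```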

Problem 2.7 (Periodic Diffusion):


a)
In general, the density at any time t is the sum of all the Gaussians evolving
each from the position jL where j is any integer. For small times, and when
looking at −L/2 < x < L/2 we only need to worry about the Gaussian
starting at x = 0 because the other Gaussians have not broadened enough
to influence the particular region we look at. In particular, there is a Gaus-
sian starting at x = L (just one example) which is diffusing into our region,
but initially it won't affect us much, if at all, because it is very narrow (its
standard deviation, the broadness, is proportional to √t). So the answer is the
Gaussian:
ρ(x, t) = ∫ dy ρ(y, 0) (1/√(4πDt)) exp(−(y − x)²/4Dt)

        = ∫ dy (Σ_{n=−∞}^{∞} δ(y − nL)) (1/√(4πDt)) exp(−(y − x)²/4Dt)

        = (1/√(4πDt)) exp(−x²/4Dt)
in the region −L/2 < x < L/2 (where we only pick up one term from the
sum over delta functions: the δ(y − 0) one – all the others are outside the
given domain) and at small times t.

b)
The Fourier method shows us what the k-space representation of ρ looks like
at any time. In particular:

ρ̃(k, t) = ρ̃(k, 0) exp(−Dk²t).
What is the Fourier spectrum at t = 0? We can get this as follows. First,
apply the Fourier transform operator to the real space density. We get:
ρ̃(k, 0) = ∫_{x=−∞}^{x=∞} ρ(x, 0) exp(−ikx) dx.

We know that the initial density ρ(x, 0) is a train of delta functions, a
so-called Dirac comb (which could represent a 1D lattice where each delta
function is a lattice point):
ρ(x, 0) = Σ_{n=−∞}^{∞} δ(x − nL).

Plug this into the Fourier transform formula and get:



ρ̃(k, 0) = ∫_{−∞}^{∞} (Σ_{n=−∞}^{∞} δ(x − nL)) exp(−ikx) dx

         = Σ_{n=−∞}^{∞} exp(−ikLn).

Before we continue let us just pause and think about this problem. If you
have been exposed to any kind of solid state physics you will know the
qualitative feature of the solution at this point. Since the Dirac comb represents
a 1D lattice the Fourier transform is like looking at the reciprocal lattice.
So we expect another Dirac comb (to represent this reciprocal lattice which
will of course also be in 1D, we just don’t know the lattice points as of yet.
Well knowing the relation between real space and reciprocal space you can
actually get that as well right now, but let us pretend we don’t). The point
is that we expect the result to be another Dirac comb.

Now, this sum might seem a bit strange at first, but we can realize that
it is actually just a Dirac-comb in disguise. In particular, let us use the
identity:
2πδ(y) = Σ_{m=−∞}^{∞} exp(−iym),

which holds on the interval [−π, π]. We can scale y by L in this expression
and simultaneously scale the interval:
2πδ(yL) = Σ_{m=−∞}^{∞} exp(−iyLm) = 2πδ(y).

(the last equality because we will pick out y = 0 no matter the scaling,
that is δ(yL) is going to give the same result as δ(y)) now on the interval
[−π/L, π/L] (this is a Dirac comb but we will use the Dirac delta function
to cast it in the more familiar form below). Thus, using this result in the
expression for ρ̃(k, 0) we get:

ρ̃(k, 0) = 2πδ(k),

on the interval [−π/L, π/L]. The period of this function is thus 2π/L. There-
fore, considering the entire line of k values, using the Dirac delta function,
we have:
ρ̃(k, 0) = 2π Σ_{m=−∞}^{∞} δ(k − 2πm/L).

And indeed, we get another Dirac comb with delta functions at the points
2πm/L. In other words, the Fourier transform of a Dirac comb is another
Dirac comb.

Now, back to the opening statement we have that, at any future time t:
ρ̃(k, t) = 2π Σ_{m=−∞}^{∞} δ(k − 2πm/L) exp(−Dk²t).

We see that larger k-values die off rapidly (exponentially, indeed) with
increasing time. Surely, the k = 0 value never dies off, simply because
exp(0) = 1. This is equivalent to a "conservation of mass" statement. The
larger the magnitude of k (because of the k-squared term in the exponential),
the faster that term dies off, so very rapid oscillations in the real-space
density die off very quickly. Therefore, at large times the smallest k-values
(in terms of magnitude) are zero and ±2π/L. So keeping just the smallest
k's we get (three terms, because ±2π/L have the same magnitude so we must
keep both), at large t:

ρ̃(k, large t) = 2π[δ(k) + δ(k − 2π/L) exp(−Dk²t) + δ(k + 2π/L) exp(−Dk²t)].

We can perform an inverse Fourier transform and see that the real space
density evolves as:

ρ(x, large t) = (1/2π) ∫_{−∞}^{∞} ρ̃(k, large t) exp(ikx) dk

             = ∫_{−∞}^{∞} [δ(k) + δ(k − 2π/L) exp(−Dk²t) + δ(k + 2π/L) exp(−Dk²t)] exp(ikx) dk

             = 1 + 2 exp(−4π²(D/L²)t) cos(2πx/L).

The last equality tells us that at infinite time the real-space density will be
constant everywhere. This makes sense: the oscillating second term vanishes
with time, leaving only the constant background level.

The expression from part (a) was valid in the range −L/2 to L/2. We can
make the range independent of L by writing the expression in terms of a
normalized variable x̂ = x/L:

ρ(x, short t) = (1/√(4πDt)) exp(−x̂²/(4(D/L²)t)),

now valid from −1/2 to 1/2. Let us further introduce a π in the short-time
expression (and let x̂ be redefined to absorb it as well), reducing the range
to [−1/2π, 1/2π]:

ρ(x, short t) = (1/√(4πDt)) exp(−x̂²/(4π²(D/L²)t)).

The point is that we can now directly compare the short- and long-time
expressions and see that the characteristic time should satisfy:

4π²Dτ/L² = 1

(the characteristic scale of an exponential is obtained when its argument
takes the value one, so I just equated the argument to 1). Now we get for
the characteristic time:

τ = L²/(4π²D).
Does this expression have the right units? The diffusion constant D has
units of length squared per time, and L is a length, so yes it does. When t is
much smaller than this number the short-time expression for ρ(x, t) is valid,
and when t is much larger the long-time expression for ρ(x, t) is valid.
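For the special case L = 1 the two regimes can be compared numerically: the exact density is the sum of Gaussians from part (a), one per source point nL, and at t of a few τ it should match the long-time two-mode formula. A sketch (the grid, time, and truncation of the sum are my own choices):

```python
import numpy as np

D, L = 1.0, 1.0
tau = L**2 / (4 * np.pi**2 * D)   # characteristic time, ≈ 0.025
t = 2 * tau
x = np.linspace(-0.5, 0.5, 101)

# Exact solution: sum of Gaussians centered at every lattice point n*L.
rho_exact = sum(np.exp(-(x - n * L)**2 / (4 * D * t))
                for n in range(-50, 51)) / np.sqrt(4 * np.pi * D * t)

# Long-time formula keeping only the k = 0 and k = ±2*pi/L modes.
rho_long = 1 + 2 * np.exp(-4 * np.pi**2 * D * t / L**2) * np.cos(2 * np.pi * x / L)

print(np.max(np.abs(rho_exact - rho_long)))  # tiny: higher modes have died off
```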

Problem 2.8 (Thermal Diffusion):


a)
We can readily solve this from the equations given. First, notice that

E = c_p ρ T
⇒ ∂E/∂t = c_p ρ ∂T/∂t.

We are also given that

∂E/∂t = −∇ · J,

and that

J = −k_t ∇T
⇒ ∇ · J = −k_t ∇²T,

assuming k_t is spatially constant. Now, by using these results, we
immediately see that:

c_p ρ ∂T/∂t = k_t ∇²T,

and thus that

∂T/∂t = (k_t/c_p ρ) ∇²T.

b)
We showed in part (a) that the temperature T follows the diffusion equa-
tion. We can use the Fourier transform technique to get the solution of the
temperature, in Fourier space, at any future time t by:

T̃ (k, t) = T̃ (k, 0) exp(−Dk 2 t).


The Fourier transform of the initial profile, which is just a sine, is readily
obtained (we should already know it is a pair of delta functions at plus/minus
its wavenumber k′):

T̃(k, 0) = ∫_{−∞}^{∞} sin(k′x) exp(−ikx) dx = iπ(δ(k + k′) − δ(k − k′)).

And thus we get, at any time t:

T̃ (k, t) = iπ(δ(k + k 0 ) − δ(k − k 0 )) exp(−Dk 2 t).

where we see that the modulation dies off exponentially in time (last factor).
We can write the decay in terms of the wavelength λ (using k = 2π/λ) as:

exp(−Dk²t) = exp(−D(2π/λ)²t),

i.e., the decay rate is proportional to 1/λ².

That is, longer-wavelength modulations decay more slowly than
shorter-wavelength modulations. In other words, a temperature profile that
wiggles very rapidly decays very fast into something smoother looking (and
eventually to the constant zero), whereas a profile that already looked almost
constant (and thus didn't wiggle as much) decays slowly.
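The wavelength dependence of the decay is a one-line function (a small sketch; D, t, and the wavelengths are arbitrary choices of mine):

```python
import math

def decay_factor(lam, D=1e-4, t=100.0):
    """Amplitude factor exp(-D k^2 t) for a modulation of wavelength lam."""
    k = 2 * math.pi / lam
    return math.exp(-D * k**2 * t)

print(decay_factor(1.0))  # the longer wavelength keeps most of its amplitude
print(decay_factor(0.5))  # half the wavelength: four times the decay rate
```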

Problem 2.9 (Frying Pan):


a)
I would guess that the pan would not be useful if we were talking about a
few minutes. But it is also an iron pan, so it is not hours (cooking something
on the pan takes on the order of 1 hour; if it had been multiple hours there
would be no problem, but I know from experience that it does get too hot).
Let "too hot to touch" mean 100 °C (the boiling temperature of water). My
estimate is 1 hour.

b)
We need to transport this amount of heat:

c_p ρ V ∆T_need,

where ∆Tneed is the difference between the temperature where the handle
gets too hot and the initial temperature: room temperature. The diffusion
equation tells us that, roughly:
∆T_stove/∆t = (k_t/c_p ρ) ∆T_stove/(∆x)²,

where ∆T_stove is the difference between the stove temperature and the
initial temperature of the handle: room temperature. From this, rearrange
to get (using V = A∆x):

c_p ρ V ∆T_stove/∆t = A k_t ∆T_stove/∆x.
This is the amount of heat we transport per unit time through area A under
a temperature gradient of ∆T_stove/∆x. The time δt needed is (we change
∆T_stove in the numerator to the difference we need: ∆T_need):

δt = c_p ρ V ∆T_need/(A k_t ∆T_stove/∆x)

   = (c_p ρ (∆x)²/k_t) × (∆T_need/∆T_stove)

   = ((450 J kg⁻¹ °C⁻¹ × 7900 kg m⁻³ × (0.3 m)²)/(80 J s⁻¹ m⁻¹ °C⁻¹)) × ((100 − 30) K/(400 − 30) K)

   ≈ 13 minutes.

So this method predicts roughly 13 minutes of stove time before the handle
gets too hot to hold.
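The estimate can be reproduced directly (the variable names are mine; the material constants are those from the text):

```python
cp = 450.0     # specific heat of iron, J / (kg C)
rho = 7900.0   # density of iron, kg / m^3
kt = 80.0      # thermal conductivity, J / (s m C)
dx = 0.3       # handle length, m
dT_need = 100.0 - 30.0   # K: handle from room temperature to too hot
dT_stove = 400.0 - 30.0  # K: stove above room temperature

dt = (cp * rho * dx**2 / kt) * (dT_need / dT_stove)
print(dt / 60.0)  # ≈ 12.6, i.e. roughly 13 minutes
```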

c)
We start off with the diffusion equation in the general form:

∂T/∂t = D ∂²T/∂x².

Comparing this to the particular diffusion equation in the problem:

∂T/∂t = (k_t/c_p ρ) ∂²T/∂x²,

we see that:

D = k_t/(c_p ρ).
The question now asks us to start off a small amount of energy at the origin
along the infinite rod and let it evolve in time. Analytically, this corresponds
to having a delta function at (x, t) = (0, 0) and then evolving it according to
the diffusion equation. This scenario is actually developed in the main text.
We duplicate the derivation here for convenience.

The Fourier transform of the initial delta function density at x = 0 is:

T̃_k(0) = ∫ T(x, 0) exp(−ikx) dx = ∫ δ(x) exp(−ikx) dx = 1.

Now we can use the Green's function method to get the density at any later
time t:

G(x, t) = (1/2π) ∫ exp(ikx) T̃_k(0) exp(−Dk²t) dk

        = (1/√(4πDt)) exp(−x²/4Dt).

The last equation comes from inverse Fourier transforming a Gaussian which
is another Gaussian (the textbook derives this inverse Fourier transform).
The variance of this Gaussian can be read off directly:

σ² = 2Dt
⇒ t = σ²/2D.
Assume the length of the handle to be 30 cm. Then we get for the time:

D = k_t/(c_p ρ) = (80 J s⁻¹ m⁻¹ °C⁻¹)/((450 J kg⁻¹ °C⁻¹) × (7900 kg m⁻³)) = 2.25 × 10⁻⁵ m² s⁻¹

⇒ t = (0.3 m)²/(2 × 2.25 × 10⁻⁵ m² s⁻¹) = 2000 s ≈ 33 min.

So about half an hour.
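And the same check for part c) (again a small sketch with the constants from the text; variable names are mine):

```python
cp, rho, kt = 450.0, 7900.0, 80.0
L = 0.3                 # handle length, m
D = kt / (cp * rho)     # thermal diffusivity, m^2/s
t = L**2 / (2 * D)      # time for sigma to reach L, from sigma^2 = 2 D t
print(D)        # ≈ 2.25e-5 m^2/s
print(t / 60.0) # ≈ 33 minutes
```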
