
Measuring thermodynamic length

Gavin E. Crooks
Physical Bioscience Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
(Dated: February 5, 2008)
Thermodynamic length is a metric distance between equilibrium thermodynamic states. Among
other interesting properties, this metric asymptotically bounds the dissipation induced by a finite
time transformation of a thermodynamic system. It is also connected to the Jensen-Shannon
divergence, Fisher information and Rao's entropy differential metric. Therefore, thermodynamic
length is of central interest in understanding matter out-of-equilibrium. In this paper, we will
consider how to define thermodynamic length for a small system described by equilibrium
statistical mechanics and how to measure thermodynamic length within a computer simulation.
Surprisingly, Bennett's classic acceptance ratio method for measuring free energy differences also
measures thermodynamic length.

arXiv:0706.0559v2 [cond-mat.stat-mech] 7 Sep 2007

PACS numbers: 05.70.Ln, 05.40.-a

INTRODUCTION

Thermodynamic length is a natural measure of the distance between equilibrium thermodynamic states [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. It equips the surface of thermodynamic states with a Riemannian metric and defines the length of a quasi-static transformation as the number of natural fluctuations along that path. Unlike the entropy or free energy change, which are state functions, the thermodynamic length explicitly depends on the path taken through thermodynamic state space. Thermodynamic length is of fundamental interest to the generalization of thermodynamics to finite time (rather than infinitely slow) transformations. Minimum distance paths are geodesics on the Riemannian manifold and minimize the dissipation for slow, but finite time transformations [3, 8]. These insights have been employed to optimize fractional distillation and other thermodynamic processes [12, 13, 14].

The study of thermodynamic length has largely been restricted to the field of macroscopic, endoreversible thermodynamics. However, there are deep connections between thermodynamic length, information theory and the statistical physics of small systems far-from-equilibrium. In this paper we will consider the most appropriate definition of thermodynamic length for small systems and how to measure this distance in a computer simulation. These considerations reveal a surprising connection between thermodynamic length, Jensen-Shannon divergence and Bennett's acceptance ratio method for free energy calculations [15]. Bennett's method is an optimal measure of free energy differences, but it also indirectly places a lower bound on the thermodynamic length between neighboring thermodynamic states.

THERMODYNAMIC LENGTH

Consider a physical system, possibly microscopically small, in equilibrium with a large thermal reservoir. The configurational probability distribution is given by the Gibbs ensemble [16],

    p(x|\lambda) = \frac{1}{Z} e^{-\beta H(x,\lambda)} = \frac{1}{Z} e^{-\lambda^i(t) X_i(x)}    (1)

where x is the configuration, t is time, \beta = 1/k_B T is the reciprocal temperature (T) of the environment in natural units (k_B is the Boltzmann constant), Z is the partition function, and H is the Hamiltonian of the system. This total Hamiltonian is split into a collection of collective variables X_i and conjugate generalized forces \lambda^i, \beta H = \lambda^i(t) X_i(x). We use the Einstein convention that repeated upper/lower indices are implicitly summed. The sub-Hamiltonians X are time-independent functions of the configuration, whereas the conjugate variables \lambda are time dependent and configuration independent. Note that the conjugate variables include a factor of inverse temperature.

The \lambda's are the experimentally controllable parameters of the system and define the accessible thermodynamic state space. For example, in the isothermal-isobaric ensemble we have X = \{U, V\} and \lambda = \{\beta, \beta p\}, where U is the internal energy, V is the volume and p is the external pressure. Modern experimental techniques have broadened the range of controllable parameters beyond those considered in standard thermodynamics. For instance, optical tweezers can apply a constant force to the ends of a single DNA molecule. The equilibrium description of this system includes the extension of the polymer, with the tension as conjugate variable. In computer simulations we have much greater flexibility: the configuration functions can be rather arbitrary collective variables delineating high dimensional manifolds of equilibrium thermodynamic states.

The partition function that normalizes the probability distribution, Z, is directly related to the free energy F (Gibbs potential), the free entropy \psi (Massieu potential) and entropy S:

    \ln Z = -\beta F = \psi = S - \lambda^i \langle X_i \rangle    (2)
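As a concrete illustration (not from the paper), the following minimal Python sketch builds a Gibbs ensemble of the form of Eq. (1) on a hypothetical four-configuration system with a single collective variable, and checks the free entropy identity of Eq. (2), \psi = S - \lambda \langle X \rangle. The configuration space and the value 0.7 for \lambda are arbitrary assumptions for illustration.

```python
import math

# Hypothetical collective variable X(x) on four discrete configurations.
X = [0.0, 1.0, 2.0, 3.0]

def gibbs(lam):
    """Return p(x|lam) = exp(-lam * X(x)) / Z and the free entropy psi = ln Z."""
    weights = [math.exp(-lam * x) for x in X]
    Z = sum(weights)
    return [w / Z for w in weights], math.log(Z)

lam = 0.7
p, psi = gibbs(lam)
meanX = sum(pi * x for pi, x in zip(p, X))

# Gibbs entropy of the ensemble.
S = -sum(pi * math.log(pi) for pi in p)

# Eq. (2): psi = S - lam * <X> holds identically for this ensemble.
assert abs(psi - (S - lam * meanX)) < 1e-12
```

The identity holds for any choice of \lambda and any discrete configuration space, since \ln p(x) = -\lambda X(x) - \psi term by term.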
Angled brackets indicate an average over the appropriate equilibrium ensemble. The first derivatives of the free entropy give the first moments of the collective variables,

    \frac{\partial \psi}{\partial \lambda^i} = -\langle X_i \rangle    (3)

and the second derivative yields the covariance matrix,

    g_{ij} = \frac{\partial^2 \psi}{\partial \lambda^i \partial \lambda^j} = -\frac{\partial \langle X_i \rangle}{\partial \lambda^j} = \big\langle (X_i - \langle X_i \rangle)(X_j - \langle X_j \rangle) \big\rangle    (4)

The covariance matrix g_{ij} is positive semi-definite and varies smoothly from point to point, except at macroscopic phase transitions. Therefore, we can use the covariance matrix as a metric tensor and naturally equip the manifold of thermodynamic states with a Riemannian metric. Recall that a metric provides a measure of distance between points. It is a real function d(a, b) such that (1) distances are non-negative, d(a, b) >= 0, with equality if and only if a = b; (2) it is symmetric, d(a, b) = d(b, a); and (3) it is generally shorter to go directly from point a to c than to go by way of b, d(a, b) + d(b, c) >= d(a, c) (the triangle inequality). Moreover, with a Riemannian metric we can measure the distance along curves connecting different points. The length of a curve parameterized by t, from 0 to \tau, is

    \mathcal{L} = \int_0^\tau \sqrt{ g_{ij} \frac{d\lambda^i}{dt} \frac{d\lambda^j}{dt} } \, dt    (5)

and the point-to-point distance is the length of the shortest curve. Curves of locally minimal distance are called geodesics, and are the closest analogs of straight lines in a curved space. Because of the connection to fluctuations [Eq. (4)], the lengths of curves in thermodynamic state space are measured by the number of natural fluctuations along the path. The larger the fluctuations, the closer points are together [17, 18].

Originally, Weinhold [1] defined the thermodynamic length \mathcal{L} using the second derivatives of the internal energy U(S, V, N) with respect to the extensive variables as a metric tensor, and Ruppeiner [2] used the corresponding derivatives of the entropy, S(U, V, N). Using intensive variable derivatives of the free energy was first discussed by Schlögl [5, 10, 11]. For macroscopic thermodynamic systems these different definitions of the metric are essentially equivalent [4, 5], analogously to the macroscopic equivalence of ensembles. However, in small systems these metrics are in general different, and the Weinhold and Ruppeiner metrics may not exist, since the second derivatives of the energy and entropy are not guaranteed to be positive. The definition adopted in this paper [Eq. (4)], essentially that of Schlögl, does not require the thermodynamic limit.

Moreover, with this definition we can make an important connection to statistical estimation theory, since the thermodynamic metric tensor Eq. (4) is then identical to the Fisher information matrix [19],

    g_{ij}(\lambda) = \sum_x p(x|\lambda) \frac{\partial \ln p(x|\lambda)}{\partial \lambda^i} \frac{\partial \ln p(x|\lambda)}{\partial \lambda^j}    (6)
                    = \sum_x p(x|\lambda) \left( X_i + \frac{\partial \psi}{\partial \lambda^i} \right) \left( X_j + \frac{\partial \psi}{\partial \lambda^j} \right)
                    = \big\langle (X_i - \langle X_i \rangle)(X_j - \langle X_j \rangle) \big\rangle

According to the Cramér-Rao inequality, the variance of any unbiased estimator is at least as high as the inverse of the Fisher information [19].

In 1945 Rao introduced the entropy differential metric, the distance between two distributions arising from the Riemannian metric over the parameter space with the Fisher information metric tensor [20, 21]. This entropy differential metric is identical to the thermodynamic length when, as here, the variables are conjugate parameters of a Gibbs ensemble [11]. Note that if we plug the Fisher information metric tensor [Eq. (6)] into the curve length [Eq. (5)] we can rewrite the entropy differential metric as [6, 17]

    \mathcal{L} = \int_0^\tau \sqrt{ \sum_x \frac{1}{p(x)} \left( \frac{dp(x)}{dt} \right)^2 } \, dt    (7)

We should probably consider Rao's definition as more general and fundamental than the thermodynamic definition, just as the statistical definition of entropy is widely considered more general and fundamental than the original thermodynamic definition. In particular, the entropy differential metric naturally extends to situations where the Hamiltonian is not a linear function of the control parameters, or where the system is not in thermal equilibrium.

We can also define a related quantity, the thermodynamic divergence of the path,

    \mathcal{J} = \tau \int_0^\tau g_{ij} \frac{d\lambda^i}{dt} \frac{d\lambda^j}{dt} \, dt    (8)

In Riemannian geometry \mathcal{J}/2\tau is called the energy, or action, of the curve, due to its similarity with the kinetic energy integral in classical mechanics. The length and divergence are related by the inequality

    \mathcal{J} \ge \mathcal{L}^2    (9)

which can be derived as a consequence of the Cauchy-Schwarz inequality \int_0^\tau f^2 \, dt \int_0^\tau g^2 \, dt \ge \left( \int_0^\tau f g \, dt \right)^2 with g(t) = 1. The value of the divergence depends on the parametrization. The minimum value \mathcal{L}^2 is attained only when the integrand is constant along the path.

Thermodynamic length and divergence control the dissipation of finite time thermodynamic transformations as we approach the infinitely slow quasi-static limit [3, 6, 8].
Consider a protocol that perturbs the conjugate variables of the system from \lambda_1 to \lambda_N in a series of discrete steps [8]. After each step we pause and allow the system to reequilibrate. After we get to the final thermodynamic state, we run the protocol in reverse, until we again reach the initial thermodynamic state.

The total average change in entropy of a single step is \Delta S_{total} = \Delta S_{system} - \lambda^i_{t+1} [\langle X_i \rangle_{t+1} - \langle X_i \rangle_t] [6, 8]. Thus the hysteresis \mathcal{H}, the total average dissipation of the combined forward and backwards protocols, is

    \mathcal{H} = -\sum_{t=1}^{N-1} \left( \lambda^i_{t+1} [\langle X_i \rangle_{t+1} - \langle X_i \rangle_t] + \lambda^i_t [\langle X_i \rangle_t - \langle X_i \rangle_{t+1}] \right)
       = -\sum_{t=1}^{N-1} (\lambda^i_{t+1} - \lambda^i_t)(\langle X_i \rangle_{t+1} - \langle X_i \rangle_t)
       = -\sum_{t=1}^{N-1} \Delta\lambda^i \, \Delta\langle X_i \rangle    (10)

which we can also write as

    \mathcal{H} = -\frac{\tau}{N} \sum_{t=1}^{N-1} \frac{\Delta\lambda^i}{\Delta t} \frac{\Delta\langle X_i \rangle}{\Delta\lambda^j} \frac{\Delta\lambda^j}{\Delta t} \, \Delta t    (11)

where \Delta t = \tau/N. In the continuum limit we can replace the sum by an integral and find that

    \lim_{N\to\infty} N \mathcal{H} = \tau \int_0^\tau g_{ij} \frac{d\lambda^i}{dt} \frac{d\lambda^j}{dt} \, dt = \mathcal{J}    (12)

As the number of steps along a path increases we approach a reversible, quasi-static process. In this limit, the hysteresis scales as the thermodynamic divergence and inversely with the number of steps. (Note that this expression differs by a factor of 2 from Ref. [8] because here we have considered the hysteresis, the combined dissipation of the forward and reversed protocols, rather than the dissipation along a single direction.) Similar reasoning relates the divergence and the hysteresis of a slow, finite time transformation [3].

The asymptotic hysteresis and thermodynamic divergence of a protocol depend on the parametrization of the path. However, thanks to the length-divergence inequality \mathcal{J} \ge \mathcal{L}^2 [Eq. (9)], we know that the minimum thermodynamic divergence of the path is the square of the thermodynamic length. Repeating the previous analysis, we find that the thermodynamic length is related to the cumulative root mean single-step hysteresis,

    \lim_{N\to\infty} \sum_{t=1}^{N-1} \sqrt{ -\Delta\lambda^i \, \Delta\langle X_i \rangle } = \mathcal{L}    (13)

Consequently, we can locate optimal, minimal dissipation paths connecting two thermodynamic states by measuring and optimizing the thermodynamic length.

MEASURING THERMODYNAMIC LENGTH

Thermodynamic length and divergence are clearly of fundamental interest and importance to non-equilibrium thermodynamics. Therefore, we shall consider how best to measure these quantities. The relation between dissipation and divergence [Eqs. (12) and (13)] suggests one obvious approach: we run equilibrium simulations at a series of points along the path and examine the scaling of the dissipation with the number of steps. Since length and divergence are properties of the path taken through thermodynamic state space, but are independent of the underlying dynamics of the system, one can measure thermodynamic length in a computer simulation using whatever dynamics is most convenient, be it Metropolis Monte Carlo, Langevin dynamics or deterministically thermostated molecular dynamics. The only condition is that the chosen dynamics reproduce the correct equilibrium ensemble, Eq. (1).

Concretely, we must measure \Delta\langle X_i \rangle, the mean change of the collective variables between neighboring thermodynamic states. Given K uncorrelated measurements from an equilibrated computer simulation we can estimate this value as

    \Delta\langle X_i \rangle = \sum_x p(x|\lambda_2) X_i(x) - \sum_x p(x|\lambda_1) X_i(x)
                    = \sum_x p(x|\lambda_1) X_i(x) \left[ \frac{p(x|\lambda_2)}{p(x|\lambda_1)} - 1 \right]
                    \approx \frac{1}{K} \sum_{k=1}^{K} X_{i,1,k} \left( \exp\!\left[ -\Delta\psi_{12} - (\lambda^j_2 - \lambda^j_1) X_{j,1,k} \right] - 1 \right)    (14)

In the second line we rewrite the difference of the means as the mean difference. (We should not estimate the difference of the means directly, since this will lead to large statistical errors that become larger as the number of steps increases.) The final line follows from the definition of the Gibbs ensemble, Eq. (1). Here, X_{i,t,k} is the kth measurement of the ith collective variable, X_i(x), taken from an equilibrium system defined by the conjugate variables \lambda_t, and \Delta\psi_{12} = \psi_2 - \psi_1 is the difference in free entropy.

To employ Eq. (14) we need to know the free entropy change, \Delta\psi_{12}, which can be optimally estimated using Bennett's acceptance ratio method [15, 22, 23]. Given K measurements from each of two neighboring states, X_{i,1,k} and X_{i,2,k}, the log likelihood that the free entropy change has a particular value is [22, 23]

    \ell(\Delta\psi_{12}) = \frac{1}{K} \sum_{k=1}^{K} \ln \frac{1}{1 + \exp\!\left[ -\Delta\psi_{12} - (\lambda^i_2 - \lambda^i_1) X_{i,1,k} \right]}
                  + \frac{1}{K} \sum_{k=1}^{K} \ln \frac{1}{1 + \exp\!\left[ -\Delta\psi_{21} - (\lambda^i_1 - \lambda^i_2) X_{i,2,k} \right]}    (15)

and the Bennett optimal estimate of \Delta\psi_{12} maximizes this likelihood.
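As a self-consistency check (illustrative, on an assumed toy ensemble; not code from the paper), the sketch below evaluates the Bennett log likelihood of Eq. (15) in the large-sample limit, where the averages over K samples become exact expectations over the two Gibbs distributions. The maximum should sit at the true free entropy difference \psi_2 - \psi_1, and its value should match the Jensen-Shannon relation 2[JS(p_1; p_2) - ln 2] discussed below as Eq. (16).

```python
import math

# Hypothetical four-configuration system with one collective variable.
X = [0.0, 1.0, 2.0, 3.0]
lam1, lam2 = 0.3, 0.9

def gibbs(lam):
    w = [math.exp(-lam * x) for x in X]
    Z = sum(w)
    return [wi / Z for wi in w], math.log(Z)

p1, psi1 = gibbs(lam1)
p2, psi2 = gibbs(lam2)
dlam = lam2 - lam1

def loglik(dpsi):
    """Expected per-sample Bennett log likelihood, Eq. (15), K -> infinity."""
    t1 = sum(a * math.log(1.0 / (1.0 + math.exp(-dpsi - dlam * x)))
             for a, x in zip(p1, X))
    t2 = sum(b * math.log(1.0 / (1.0 + math.exp(dpsi + dlam * x)))
             for b, x in zip(p2, X))
    return t1 + t2

# Scan a grid for the maximum; it should sit at the true dpsi = psi2 - psi1.
grid = [i * 1e-4 for i in range(-10000, 10000)]
best = max(grid, key=loglik)
assert abs(best - (psi2 - psi1)) < 1e-3

# The maximum likelihood reproduces 2 * (JS(p1; p2) - ln 2).
m = [(a + b) / 2 for a, b in zip(p1, p2)]
JS = 0.5 * sum(a * math.log(a / mi) for a, mi in zip(p1, m)) \
   + 0.5 * sum(b * math.log(b / mi) for b, mi in zip(p2, m))
assert abs(loglik(best) - 2 * (JS - math.log(2))) < 1e-6
```

In practice the expectations would be replaced by sample averages over simulation data, and the one-dimensional maximization could be done with any root finder; the grid scan here just keeps the sketch self-contained.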
(See [22] for a clear and concise exposition of this result.)

Rather than using this free entropy measurement to estimate the mean change in the collective variables with Eq. (14), we will instead show that the Bennett likelihood is directly related to the thermodynamic divergence. If we insert the Gibbs ensemble [Eq. (1)] into the log likelihood, then in the large sample limit we find that the likelihood scales as

    \ell(\Delta\psi_{12}) \approx 2 \left[ JS(p_1; p_2) - \ln 2 \right]    (16)

where JS(p_1; p_2) is the Jensen-Shannon divergence, the mean of the relative entropy of each distribution to the mean distribution [24],

    JS(p; q) = \frac{1}{2} \sum_i p_i \ln \frac{p_i}{\frac{1}{2}(p_i + q_i)} + \frac{1}{2} \sum_i q_i \ln \frac{q_i}{\frac{1}{2}(p_i + q_i)}    (17)

The minimum divergence is zero, for identical distributions, and the maximum is ln 2. The square root of the Jensen-Shannon divergence is a metric between probability distributions [25]. However, unlike a Riemannian metric, the Jensen-Shannon metric space is not an intrinsic length space: there may not be a mid point b between points a and c such that d(a, b) + d(b, c) = d(a, c), and consequently we cannot naturally measure path lengths. However, on any metric space we can define a new intrinsic metric by measuring the distance along continuous paths. The Jensen-Shannon divergence between infinitesimally different distributions is [26]

    JS(p; p + dp) = \frac{1}{8} \sum_i \frac{(dp_i)^2}{p_i}    (18)

If we compare with Eq. (7), we can see that in the continuum limit of a path discretized into N neighboring ensembles p_t,

    \mathcal{L} = \lim_{N\to\infty} \sqrt{8} \sum_{t=1}^{N-1} \sqrt{ JS(p_t; p_{t+1}) } \quad \text{and} \quad \mathcal{J} = \lim_{N\to\infty} 8 N \sum_{t=1}^{N-1} JS(p_t; p_{t+1})    (19)

The induced Jensen-Shannon metric is proportional to the thermodynamic (entropy differential) metric, and the induced Jensen-Shannon divergence is proportional to the thermodynamic divergence. Consequently, the square root of the Jensen-Shannon divergence between two thermodynamic states gives a lower bound on the thermodynamic length of any path between those states, and the Jensen-Shannon divergence is a lower bound to the thermodynamic divergence.

To summarize, we can measure the thermodynamic length and minimum thermodynamic divergence along a path in thermodynamic state space by adapting Bennett's method. We perform a series of equilibrium simulations along the path and find the maximum likelihood free entropy change [Eq. (15)] and Jensen-Shannon divergence [via Eq. (16)] between neighboring ensembles. The cumulative Jensen-Shannon metric along the path provides a lower bound to the thermodynamic length [Eq. (19)] and a lower bound to the minimum divergence of the path [via Eq. (9)]. This procedure is then repeated with finer discretizations of the path, until the estimates of divergence and length converge.

This research was supported by the Department of Energy, under contract DE-AC02-05CH11231.

* Electronic address: GECrooks@lbl.gov
[1] F. Weinhold, J. Chem. Phys. 63, 2479 (1975).
[2] G. Ruppeiner, Phys. Rev. A 20, 1608 (1979).
[3] P. Salamon and R. S. Berry, Phys. Rev. Lett. 51, 1127 (1983).
[4] P. Salamon, J. Nulton, and E. Ihrig, J. Chem. Phys. 80, 436 (1984).
[5] F. Schlögl, Z. Phys. B 59, 449 (1985).
[6] P. Salamon, J. D. Nulton, and R. S. Berry, J. Chem. Phys. 82, 2433 (1985).
[7] J. D. Nulton and P. Salamon, Phys. Rev. A 31, 2520 (1985).
[8] J. Nulton, P. Salamon, B. Andresen, and Q. Anmin, J. Chem. Phys. 83, 334 (1985).
[9] H. Janyszek and R. Mrugala, Phys. Rev. A 39, 6515 (1989).
[10] R. Mrugala, J. D. Nulton, J. C. Schön, and P. Salamon, Phys. Rev. A 41, 3156 (1990).
[11] D. Brody and N. Rivier, Phys. Rev. E 51, 1006 (1995).
[12] P. Salamon and J. D. Nulton, Europhys. Lett. 42, 571 (1998).
[13] M. Schaller, K. H. Hoffmann, G. Siragusa, P. Salamon, and B. Andresen, Comp. Chem. Eng. 25, 1537 (2001).
[14] J. D. Nulton and P. Salamon, J. Non-Equilib. Thermodyn. 27, 271 (2002).
[15] C. H. Bennett, J. Comput. Phys. 22, 245 (1976).
[16] H. B. Callen, Thermodynamics and an Introduction to Thermostatistics (Wiley, New York, 1985), 2nd ed.
[17] W. K. Wootters, Phys. Rev. D 23, 357 (1981).
[18] B. Andresen, R. S. Berry, R. Gilmore, E. Ihrig, and P. Salamon, Phys. Rev. A 37, 845 (1988).
[19] T. M. Cover and J. A. Thomas, Elements of Information Theory (Wiley, New York, 1991).
[20] C. R. Rao, Bull. Calcutta Math. Soc. 37, 81 (1945).
[21] J. Burbea and C. R. Rao, J. Multivariate Anal. 12, 575 (1982).
[22] M. R. Shirts, E. Bair, G. Hooker, and V. S. Pande, Phys. Rev. Lett. 91, 140601 (2003).
[23] P. Maragakis, M. Spichty, and M. Karplus, Phys. Rev. Lett. 96, 100602 (2006).
[24] J. Lin, IEEE Trans. Info. Theory 37, 145 (1991).
[25] D. M. Endres and J. E. Schindelin, IEEE Trans. Info. Theory 49, 1858 (2003).
[26] A. Majtey, P. W. Lamberti, M. T. Martin, and A. Plastino, Eur. Phys. J. D 32, 413 (2005).
