International Statistical Review (2011), 79, 1, 114–143 doi:10.1111/j.1751-5823.2011.00134.

Short Book Reviews


Editor: Simo Puntanen

Graphics for Statistics and Data Analysis with R


Kevin J. Keen
Chapman & Hall/CRC, 2010, xxxiv + 447 pages, £39.99/$69.95, hardcover
ISBN: 978-1-58488-087-5

Table of contents

1. The graphical display of information
2. Basic charts for the distribution of a single discrete variable
3. Advanced charts for the distribution of a single discrete variable
4. Exploratory plots for the distribution of a single continuous variable
5. Diagnostic plots for the distribution of a continuous variable
6. Nonparametric density estimation for a single continuous variable
7. Parametric density estimation for a single continuous variable
8. Depicting the distribution of two discrete variables
9. Depicting the distribution of one continuous variable and one discrete variable
10. Depicting the distribution of two continuous variables
11. Graphical displays for simple linear regression
12. Graphical displays for polynomial regression
13. Visualizing multivariate data

Readership: Students wanting to learn about graphical design for statistical graphics.

“This book is intended for those wanting to learn about the basic principles of graphical design
as applied to the presentation of data.” So it is about the how and not the why of graphics. It
is mainly restricted to one- and two-dimensional graphics, with just a short, and consequently
disappointing, chapter on visualizing multivariate data at the end. A lot of the recommendations
are sound, though providing twenty-one alternative versions of the fourteen data points making
up the United Nations budget for 2008–9 was a strange decision, especially as the plots are
mostly on different pages, so that comparisons are difficult. It is also surprising that three of
these versions are coloured pie charts (including one pseudo three-dimensional exploded pie
chart). Given that there are only eight pages of colour displays in the whole book, you would
think that the author would take the opportunity to present something more attractive. And there
is the rub. An unscientific, if nevertheless revealing, test of any graphics book is whether there
are graphics in it that you would show to someone else and say “Look at that, isn’t it great?”
There is not one such graphic here. What you get in the book is some sensible advice, some
snippets of R code, a number of bad graphics (which the author rightly criticises), and a number
of slightly better graphics.
Antony Unwin: unwin@math.uni-augsburg.de
Universität Augsburg, Institut für Mathematik
D-86135 Augsburg, Germany


© 2011 The Author. International Statistical Review © 2011 International Statistical Institute. Published by Blackwell Publishing
Ltd, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.

Data Analysis and Graphics Using R: An Example-based Approach, Third Edition


John Maindonald, W. John Braun
Cambridge University Press, 2010, xxvi + 525 pages, £50.00/$80.00, hardcover
ISBN: 978-0-521-76293-9

Table of contents

1. A brief introduction to R
2. Styles of data analysis
3. Statistical models
4. A review of inference concepts
5. Regression with a single predictor
6. Multiple linear regression
7. Exploiting the linear model framework
8. Generalized linear models and survival analysis
9. Time series models
10. Multi-level models and repeated measures
11. Tree-based classification and regression
12. Multivariate data exploration and discrimination
13. Regression on principal component or discriminant scores
14. The R system—additional topics
15. Graphics in R

Readership: Scientists wishing to do statistical analysis on their own data.

This is a slightly expanded edition of a well-known text. Interestingly, the authors say that they
have rewritten the treatment of one-way anova and a major part of the chapter on regression. Now
why would they have decided to do that? They have also included more on errors in predictor
variables and on random forests. Finally, there is an additional chapter on graphics in R. The
book’s strengths are its sound practical advice, its readability (mathematical symbolism is played
down), the many real datasets, and the extensive use of R. The datasets are from a variety of
applications and are generally worth studying. They are all rather small, which is not so realistic
these days, though it is appropriate for the book’s intended readership. There is an accompanying
R package for the book, which contains most of the datasets. As is typical for R help files, the
code for the examples provided in the package is only for illustration and not for analysis. If
you want to see how the authors suggest the data are analysed, you need the book. The book’s
main weakness is its graphics. They are generally disappointing and not as informative as they
could or should be. In other words, they are as good as the graphics you find in most statistics
textbooks.
Antony Unwin: unwin@math.uni-augsburg.de
Universität Augsburg, Institut für Mathematik
D-86135 Augsburg, Germany


Mathematics and Sports


Joseph A. Gallian (Editor)
Mathematical Association of America, 2010, xi + 329 pages, $39.95, softcover
ISBN: 978-0-88385-349-8

Table of contents

I Baseball
1. Sabermetrics: the past, the present, and the future (Jim Albert)
2. Surprising streaks and playoff parity: probability problems in a sports context (Rick Cleary)
3. Did humidifying the baseball decrease the number of homers at Coors field? (Howard Penn)
4. Streaking: finding the probability for a batting streak (Stanley Rothman, Quoc Le)
II Basketball
5. Bracketology: how can math help? (Tim Chartier, Erich Kreutzer, Amy Langville, Kathryn Pedings)
6. Down 4 with a minute to go (G. Edgar Parker)
7. Jump shot mathematics (Howard Penn)
III Football
8. How deep is your playbook? (Tricia Muldoon Brown, Eric B. Kahn)
9. A look at overtime in the NFL (Chris Jones)
10. Extending the Colley method to generate predictive football rankings (R. Drew Pasteur)
11. When perfect isn’t good enough: retrodictive rankings in college football (R. Drew Pasteur)
IV Golf
12. The science of a drive (Douglas N. Arnold)
13. Is Tiger Woods a winner? (Scott M. Berry)
14. G. H. Hardy’s golfing adventure (Roland Minton)
15. Tigermetrics (Roland Minton)
V NASCAR
16. Can mathematics make a difference? Exploring tire troubles in NASCAR (Cheryll E. Crowe)
VI Scheduling
17. Scheduling a tournament (Dalibor Froncek)
VII Soccer
18. Bending a soccer ball with math (Tim Chartier)
VIII Tennis
19. Teaching mathematics and statistics using tennis (Reza Noubary)
20. Percentage play in tennis (G. Edgar Parker)
IX Track and Field
21. The effects of wind and altitude in the 400m sprint with various IAAF track geometries (Vanessa Alday, Michael Frantz)
22. Mathematical ranking of the division III track and field conferences (Chris Fisette)
23. What is the speed limit for men’s 100 meter dash? (Reza Noubary)
24. May the best team win: determining the winner of a cross country race (Stephen Szydlik)
25. Biomechanics of running and walking (Anthony Tongen, Roshna E. Wunderlich)

Readership: Everyone with an interest in the sports covered in the chapters.

This is a book intended to demonstrate the illuminating power of mathematics to a larger
audience, and the chapters were solicited for the 2010 Mathematics Awareness Month. It is an
excellent, entertaining and informative work with something to satisfy every reader. Read this
book on a bus, train or plane and you will find yourself saying “Are we here already?”
Norman R. Draper: draper@stat.wisc.edu
Department of Statistics, University of Wisconsin – Madison
1300 University Avenue, Madison, WI 53706-1532, USA


A Comparison of the Bayesian and Frequentist Approaches to Estimation


Francisco J. Samaniego
Springer, 2010, xiii + 225 pages, €69.95/£62.99/$79.95, hardcover
ISBN: 978-1-4419-5940-9

Table of contents

1. Point estimation from a decision-theoretic viewpoint
2. An overview of the frequentist approach to estimation
3. An overview of the Bayesian approach to estimation
4. The threshold problem
5. Comparing Bayesian and frequentist estimators of a scalar parameter
6. Conjugacy, self-consistency and Bayesian consensus
7. Bayesian vs. frequentist shrinkage in multivariate normal problems
8. Comparing Bayesian and frequentist estimators under asymmetric loss
9. The treatment of nonidentifiable models
10. Improving on standard Bayesian and frequentist estimators
11. Combining data from “related” experiments
12. Fatherly advice
Appendix: Standard univariate probability models

Readership: Intended to be broad, including an advanced undergraduate audience, but students
may lack the necessary maturity for this endeavour; the book would more likely benefit more
senior readers.
A Comparison is pleasant to read, written in a congenial style (especially the final “fatherly
advices”!), and the decision-theoretic background is well-set. Its self-declared purpose of
“identify[ing] the boundary between Bayes estimators which tend to outperform standard
frequentist estimators and Bayes estimators which don’t” is commendable in that an objective
comparison of Bayesian versus frequentist estimators should appeal to anyone. However, the
focus of A Comparison ends up being too narrow to appeal to a wide audience, given that the
book revolves around papers written jointly or singly by the author on this topic and that it is set
within a point estimation framework where there exists a “best” unbiased estimator, a condition
absent from most estimation problems (Lehmann & Casella, 1998). (Other inferential aspects
like testing are not covered.)
For the comparison of frequentist and Bayesian procedures, since under a given prior G
the optimal procedure is always the one associated with G, A Comparison introduces a “true prior”
G0 that is meant to calibrate the comparison. Unsurprisingly, the conclusion is that if G is close
enough to G0, then the Bayesian procedure does better than the frequentist one. Since this
improvement depends on an unknown “truth”, the practical implications are limited. From a
Bayesian perspective, inference under “wrong” priors was studied in the 1990s under the heading
of Bayesian robustness (Insua & Ruggeri, 2000).
A Comparison’s insistence on using conjugate (proper) priors is inappropriate in conjunction
with shrinkage estimation, since truly Bayesian shrinkage estimators correspond to hierarchical
priors (Berger & Robert, 1990). Furthermore, the appeal of self-consistency (Chapter 6) is
limited: a prior is self-consistent if, when the prior expectation and the observation coincide,
the prior and posterior expectations are equal. This constraint focuses on a zero-probability
event and is not parameterisation-invariant, while being restricted to natural conjugate priors,
excluding, for example, mixtures of conjugate priors.
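To make the definition concrete (my illustration, not the book’s): in the conjugate normal model the posterior expectation is a precision-weighted average of the prior expectation and the observation, so the self-consistency condition can be checked directly.

```latex
% Model: x \mid \theta \sim N(\theta, \sigma^2), prior \theta \sim N(\mu, \tau^2).
% The posterior expectation is the precision-weighted average
\[
  E[\theta \mid x] = \frac{\sigma^2 \mu + \tau^2 x}{\sigma^2 + \tau^2},
\]
% so when the observation coincides with the prior expectation, x = \mu,
\[
  E[\theta \mid x = \mu] = \frac{(\sigma^2 + \tau^2)\,\mu}{\sigma^2 + \tau^2}
                         = \mu = E[\theta],
\]
% i.e. the natural conjugate normal prior is self-consistent.
```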
Chapter 9 offers a new perspective on non-identifiability, but focuses on the performance
of the Bayesian estimates of the non-identifiable part. The appeal of the Bayesian approach is
rather that it allows inference on the identifiable part by integrating out the non-identifiable
parameters. Chapters

10–11 about combining experiments are interesting, but a modern Bayesian analysis would resort
to non-parametric modelling rather than to empirical Bayes techniques.
In conclusion, A Comparison does not revolutionise the age-old debate about the relevance
of Bayesian procedures with respect to frequentist efficiency, or about relying on frequentist
estimates under weak prior information. Given my reservations, I would have difficulty
recommending it as a textbook.
Christian P. Robert: christian.robert@ceremade.dauphine.fr
Ceremade—Université Paris-Dauphine, Bureau B638
Place du Maréchal de Lattre de Tassigny, 75775 PARIS Cedex 16, FRANCE

References
Berger, J.O. & Robert, C. (1990). Subjective hierarchical Bayes estimation of a multivariate normal mean: on the
frequentist interface. Ann. Statist., 18, 617–651.
Insua, D.R. & Ruggeri, F. (Eds.) (2000). Robust Bayesian Analysis. New York: Springer.
Lehmann, E.L. & Casella, G. (1998). Theory of Point Estimation, 2nd ed. New York: Springer.

Design and Analysis of Experiments with SAS


John Lawson
Chapman & Hall/CRC, 2010, xiii + 582 pages, £63.99/$99.95, hardcover
ISBN: 978-1-4200-6060-7

Table of contents

1. Introduction
2. Completely randomized designs with one factor
3. Factorial designs
4. Randomized block designs
5. Designs to study variances
6. Fractional factorial designs
7. Incomplete and confounded block designs
8. Split-plot designs
9. Crossover and repeated measure designs
10. Response surface designs
11. Mixture experiments
12. Robust parameter design experiments
13. Experimental strategies for increasing knowledge

Readership: Experimenters and their statistical colleagues.

This is specifically a book on response surface methodology written for those who use the
SAS computing system. Consequently, its appeal is somewhat limited, because all explanations
of experimental designs and their uses quickly merge into the consequent SAS programming
methods required to get such designs and perform the appropriate analyses. The exposition
throughout is first rate. The presentation and organization, the coverage of the topics, and
the discussions of the examples are all excellent. If you are an SAS user needing help with
experimental design, you will certainly profit from this text.
Norman R. Draper: draper@stat.wisc.edu
Department of Statistics, University of Wisconsin – Madison
1300 University Avenue, Madison, WI 53706-1532, USA


Principles and Theory for Data Mining and Machine Learning


Bertrand Clarke, Ernest Fokoué, Hao Helen Zhang
Springer, 2009, xv + 781 pages, €64.95/£58.99/$79.95, hardcover
ISBN: 978-0-387-98134-5

Table of contents

1. Variability, information, and prediction
2. Local smoothers
3. Spline smoothing
4. New wave nonparametrics
5. Supervised learning: partition methods
6. Alternative nonparametrics
7. Computational comparisons
8. Unsupervised learning: clustering
9. Learning in high dimensions
10. Variable selection
11. Multiple testing

Readership: PhD level students, and researchers and practitioners in statistical learning and
machine learning.

Data Mining may be seen as a response to the new demands that have been generated by large
high-dimensional (many variables) data sets, by new methodologies that take advantage of the
power of modern computing systems, and by the emergence of new data analysis techniques
that are a marked departure from more classical approaches. Machine Learning emphasizes the
use of formal structures that allow machines (computers) to automate important components
of inferential procedures. With “high-dimensional” data, model uncertainty often becomes a
dominant issue for the analyst. The first chapter has insightful and interesting comment on the
curse of dimensionality, sparsity, exploding numbers of models, multicollinearity and concurvity,
the effect of noise, local dimension, and parsimony. There is a note on the selection of design
points for computer experiments.
This text assumes a thorough training in undergraduate statistics and mathematics. Computed
examples that include R code are scattered through the text. There are numerous exercises, many
with commentary that sets out guidelines for exploration.
As with most texts in this area, the independent, symmetric unimodal error model is assumed
throughout. In comparing nonparametric regression with linear regression, the authors comment
that people tend to put less emphasis on the error structure than on uncertainty in estimates of
the functional form f. They argue that:

“This is reasonable because, outside of large departures from independent, symmetric, unimodal ε_i’s,
the dominant sources of uncertainty come from estimating f.”

This grossly downplays the importance of temporal and spatial error structures, and of layered
structures of variation, in many of the large high-dimensional data sets that analysts nowadays
encounter. The overriding reason for staying with the independent, symmetric, unimodal error
model is surely that no one book can cover everything! Within these bounds, this book gives a
careful treatment that is encyclopedic in its scope.
The book divides into three parts. Part I (chapters 1–4) is on nonparametric regression; Part
II (chapters 5–7) is a mix of classification and recent nonparametric methods that includes
computational comparisons; Part III (chapters 8–11) covers high-dimensional problems that
include clustering, dimension reduction, variable selection and multiple comparisons.

This is a challenging text that is thorough in its coverage of technical issues.


John H. Maindonald: john.maindonald@anu.edu.au
Centre for Mathematics & Its Applications
Australian National University, Canberra ACT 0200, Australia

Bayesian Model Selection and Statistical Modeling


Tomohiro Ando
Chapman & Hall/CRC, 2010, xiv + 286 pages, £63.99/$89.95, hardcover
ISBN: 978-1-4398-3614-9

Table of contents

1. Introduction
2. Introduction to Bayesian analysis
3. Asymptotic approach for Bayesian inference
4. Computational approach for Bayesian inference
5. Bayesian approach for model selection
6. Simulation approach for computing the marginal likelihood
7. Various Bayesian model selection criteria
8. Theoretical development and comparisons
9. Bayesian model averaging

Readership: Statistics graduate students and researchers in Bayesian model choice.

While Bayesian model selection is one of my favourite research topics, I am alas disappointed
after reading this book. First, the innovative part of the book is mostly based on papers written
by the author over the past five years, revolving around the Bayesian predictive information
criterion (BPIC, Ando, 2007). Second, the more general picture constitutes a regression when
compared with existing books like Chen et al. (2000). The coverage of the existing literature is
often incomplete and sometimes confusing. This is especially true for the computational aspects
that are generally poorly treated or at least not treated in a way from which a newcomer to the field
would benefit. For instance, the Metropolis–Hastings algorithm (page 66) is first introduced in
a Metropolis-within-Gibbs framework; however, the acceptance probability fails to account
for the other components of the parameter. And Chapter 6 opts for the worst possible choice
among the “Gelfand–Day’s” (sic!) and bridge sampling estimators by considering the harmonic
mean version, with the sole warning that it “can be unstable in some applications” (page 172).
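The harmonic mean estimator the reviewer warns about can be sketched in a few lines (my illustration, not from the book): in a conjugate normal toy model where the marginal likelihood is known in closed form, posterior draws give an unbiased estimate of the reciprocal marginal likelihood, but the averaged terms 1/L(θ) are heavy-tailed, which is the source of the instability.

```python
import numpy as np

# Conjugate toy model: x | theta ~ N(theta, 1), theta ~ N(0, 1).
# The true marginal likelihood is then m(x) = N(x; 0, 2), known exactly,
# and the posterior is theta | x ~ N(x/2, 1/2).
rng = np.random.default_rng(42)
x = 1.0
true_marginal = np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)

# Harmonic mean estimator: draw from the posterior and average 1/likelihood;
# E_posterior[1/L(theta)] = 1/m(x), so the reciprocal of the sample mean is
# simulation-consistent for m(x), but the 1/L terms have very heavy tails.
theta = rng.normal(loc=x / 2, scale=np.sqrt(0.5), size=100_000)
lik = np.exp(-(x - theta) ** 2 / 2) / np.sqrt(2 * np.pi)
est = 1.0 / np.mean(1.0 / lik)

print(true_marginal, est)
```

Since 1/L(θ) is bounded below but essentially unbounded above, a single draw far out in the posterior tail can swamp the average, which is exactly the instability the quoted warning understates.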
The author often uses complex econometric models as illustrations, which is nice; however,
he does not pursue the details far enough for a reader to replicate the study without further
reading. The few exercises in each chapter are rarely helpful, more like appendices. Take, for
example, Exercise 6, page 196, which (re-)introduces the Metropolis–Hastings algorithm, even
though it has already been defined on page 66, and then asks the reader to derive a marginal
likelihood estimator. Another exercise on page 164 covers the theory of DNA micro-arrays and
gene expression in ten lines (repeated verbatim on page 227), then asks the reader to identify
marker genes responsible for a certain trait.
The quality of the editing is quite poor, with numerous typos throughout the book. For instance,
as a short sample of those, Gibbs sampling is spelled Gibb’s sampling (only) in Chapter 6, the
bibliography is not printed in alphabetical order and contains erroneous entries, like Jacquier,
Nicolas and Rossi (2004) or Tierney and Kanade (1986), some sentences are not grammatically
correct (for example, “the posterior has multimodal”, page 55) or meaningful (for example, “the
accuracy of this approximation on the tails may not be accurate”, page 49).
After reading this book, I feel the contribution to the field of Bayesian Model Selection and
Statistical Modeling is too limited and disorganised for the book to be recommended as “helping

you choose the right Bayesian model” (as advertised on the back-cover). It certainly falls short
of being an appropriate textbook for most audiences.
Christian P. Robert: christian.robert@ceremade.dauphine.fr
Ceremade—Université Paris-Dauphine, Bureau B638
Place du Maréchal de Lattre de Tassigny, 75775 PARIS Cedex 16, FRANCE

References
Ando, T. (2007). Bayesian predictive information criterion for the evaluation of hierarchical Bayesian and empirical
Bayes models. Biometrika, 94, 443–458.
Chen, M., Shao, Q. & Ibrahim, J. (2000). Monte Carlo Methods in Bayesian Computation. New York: Springer.

Statistical Inference: An Integrated Bayesian/Likelihood Approach


Murray Aitkin
Chapman & Hall/CRC, 2010, xvii + 236 pages, £57.99/$89.95, hardcover
ISBN: 978-1-4200-9343-8

Table of contents

1. Theories of statistical inference
2. The integrated Bayes/likelihood approach
3. t-Tests and normal variance tests
4. Unified analysis of finite populations
5. Regression and analysis of variance
6. Binomial and multinomial data
7. Goodness of fit and model diagnostics
8. Complex models

Readership: Graduate or advanced undergraduate statisticians of all philosophies, especially
Bayesians.

This book describes an approach to inference based on using the likelihood function as the
primary measure of evidence for parameters and models. The emphasis on evidence rather than
decision theory makes the book especially relevant to scientific investigations.
It gives interesting and thoughtful comparisons with alternative approaches to inference, arguing
that the one presented here has particular strengths. In place of Bayes factors to compare models, a
strategy using the full posterior distribution of the likelihood is described. It also shows that the
approach provides a natural strategy for finite population inference.
The author describes the overall result as providing a “general integrated Bayesian/likelihood
analysis of statistical models”, to serve as an alternative to standard Bayesian inference and
as a foundation “for a course sequence” in modern Bayesian theory. The very deep and solid
inferential foundations the book lays support a matching, carefully thought-out and impressive
superstructure covering variance component models, finite mixtures, regression, anova, complex
survey designs, and other topics.
It would provide a valuable and thought-provoking volume for advanced students studying
the foundations of inference and their practical implications. It would make a particularly good
book for a reading group.
David J. Hand: d.j.hand@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK


Charming Proofs: A Journey into Elegant Mathematics


Claudi Alsina, Roger B. Nelsen
Mathematical Association of America, 2010, xxiv + 295 pages, $59.95, hardcover
ISBN: 978-0-88385-348-1

Table of contents

Introduction
1. A garden of integers
2. Distinguished numbers
3. Points in the plane
4. The polygonal playground
5. A treasury of triangle theorems
6. The enchantment of the equilateral triangle
7. The quadrilaterals’ corner
8. Squares everywhere
9. Curves ahead
10. Adventures in tiling and coloring
11. Geometry in three dimensions
12. Additional theorems, problems and proofs
Solutions to the challenges

Readership: Secondary school, college, and university teachers, or indeed anyone who enjoys
the aesthetics of mathematics.
This is a collection of remarkable proofs, all using elementary mathematical or geometrical
arguments, and all very simple but often extraordinarily powerful. While some will be well
known, I imagine that almost every reader will find material here that they have not encountered
before.
Although the book is a mathematics book, I feel sure that some of the theorems would have
direct relevance to statistics. For example, how about this: for any even number of different
points distributed inside a circle, it is always possible to draw a line across the circle missing
every point and such that exactly half lie on each side of the line. Surely this can find application
in segmentation analysis, in marketing and other areas.
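A constructive version of the quoted theorem is short enough to sketch (my illustration, not from the book): project the points onto a random direction, which with probability 1 gives pairwise-distinct projections, and cut perpendicular to that direction midway between the two middle projections.

```python
import numpy as np

# Generate an even number of points inside the unit circle.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(40, 2))
pts = pts[(pts ** 2).sum(axis=1) < 1.0][:10]  # keep 10 points inside the circle

n = len(pts) // 2
# A random direction d gives pairwise-distinct projections with probability 1,
# so the line {z : z . d = c}, with c midway between the n-th and (n+1)-th
# sorted projections, misses every point and leaves exactly n on each side.
d = rng.normal(size=2)
d /= np.linalg.norm(d)
proj = pts @ d
order = np.sort(proj)
c = (order[n - 1] + order[n]) / 2.0

left = int((proj < c).sum())
right = int((proj > c).sum())
print(left, right)
```

Since every point lies inside the unit circle, the cut value c satisfies |c| < 1, so the halving line really does cross the circle as the theorem requires.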
In addition to the proofs themselves, there are over 130 “challenges” aimed at stimulating the
reader to create similar such “charming proofs”. Solutions to these challenges appear at the end
of the book.
I cannot help but feel that working carefully through the proofs in this book would materially
improve one’s creative powers and ability to think laterally.
David J. Hand: d.j.hand@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK


Maximum Penalized Likelihood Estimation, Volume II: Regression


Paul P. Eggermont, Vincent N. LaRiccia
Springer, 2009, xx + 571 pages, €74.95/£67.99/$99.00, hardcover
ISBN: 978-0-387-40267-3

Table of contents

12. Nonparametric regression
13. Smoothing splines
14. Kernel estimators
15. Sieves
16. Local polynomial estimators
17. Other nonparametric regression problems
18. Smoothing parameter selection
19. Computing nonparametric estimators
20. Kalman filtering for spline smoothing
21. Equivalent kernels for smoothing splines
22. Strong approximation and confidence bands
23. Nonparametric regression in action
Appendix 4. Bernstein’s inequality
Appendix 5. The TVDUAL implementation
Appendix 6. Solutions to some critical exercises

Readership: Specialized readers and graduate students interested in the theory, computation
and application of nonparametric regression to real data, and in the new contributions of the
authors.

A strong mathematical background is required of the reader, even though the authors try
to make the presentation intuitively plausible before embarking on rigorous arguments. For
mathematically mature readers, the book would be a delight to read. Many others, with a
good background in cubic splines, may want to read this book to see the generalizations via
Reproducing Kernel Hilbert Space (RKHS) and Kalman’s State Space models. Two real life data
sets are analyzed both by the old and new methods.
This is the second volume of what is likely to be a trilogy, the third volume discussing inverse
problems. The first volume is a relatively sophisticated introduction to classical parametric and
nonparametric methods. The second volume, that is, the volume under review, begins with an
introductory chapter on nonparametric regression, splines, and RKHS. Splines are developed
more fully in the second chapter. The next six chapters provide fairly detailed coverage of sieves,
local polynomial estimators, non-smooth nonparametric regression, and computation.
The remaining four chapters provide what is the core of the book and its major new
contribution. Two chapters show how Kalman’s state space models lead to a convenient method
for computing higher-order splines when data cannot be modeled with cubic splines. The next chapter
provides confidence bands. The last chapter returns to the two data sets introduced right at
the beginning. Much of this new work is connected to earlier work on (diffuse) Gaussian process
priors. As Bayesians know well, diffuse priors are improper, and hence a source of many technical
difficulties, but they are too useful to be given up.
The authors have not only written a scholarly and very readable book but also provide major new
methods and insights. Nonetheless, it cannot be easy to offer a graduate course based on it. But
if there were a workshop based on the book at SAMSI (the NSF-funded Statistical and Applied
Mathematical Sciences Institute), it would help evaluate the methods as well as lead to teachable
notes for a graduate course.
Jayanta K. Ghosh: ghosh@stat.purdue.edu
Department of Statistics, Purdue University
West Lafayette, IN 47909, USA


Probability, Statistics, & Financial Math


Peter Caithamer
Peter Caithamer, 2010, vii + 667 pages, $180.00, hardcover
ISBN: 978-0-9830011-0-2

Table of contents

I Probability Theory
1. Logic & set theory
2. Probability measures
3. Random variables and their distributions
4. Stochastic processes
II Mathematical Statistics
5. Estimation
6. Hypothesis tests
7. Linear regression
III Financial Mathematics
8. Interest theory
9. Life contingencies
10. Options pricing
Solutions
Tables & charts

Readership: Undergraduate and graduate students in mathematical statistics, actuarial science,
and finance. Actuarial candidates preparing for actuarial exams. Actuaries and other professionals
in finance.

The textbook is written with the material for the core actuarial exams in mind, and this strongly
influenced its scope, which is truly staggering. Just to quote from the introduction: “It may
be used for 6 to 8 complete undergraduate/graduate courses on probability theory, stochastic
process, mathematical statistics, regression & time series, credibility theory, interest theory,
life contingencies, and option pricing.” One gets skeptical about the outcome when the goals are
set so broadly in terms of both subject and audience—it is obvious that something has to be
sacrificed along the way. Thus I was initially very doubtful about the final result, wondering
whether it is possible at all to write a coherent exposition that would speak to an inexperienced
undergraduate student about the fundamentals of probability and statistics and at the same time
address a graduate-level audience by explaining how Girsanov’s Theorem allows the no-arbitrage
rule in finance to be reformulated. Surprisingly, despite this daunting task, the book delivers on
its promises. It is written carefully and in a notationally consistent manner. While discussing
topics from a wide spectrum of difficulty, it does so in a bold but honest manner, not avoiding
mathematically advanced language when it cannot be avoided.
So where did it have to give ground? By necessity it had to be very brief, and by going
directly to the definitions and theorems it is short on motivation and background. This is
to a great extent remedied by a well-balanced set of exercises and problems, with complete
solutions provided in an appendix. However, the topics discussed are not placed in the broader
perspective of the fields to which they belong. For these reasons it is not a book that one
would pick up to read for enjoyment in one’s free time. It will, however, definitely be appreciated
when efficiency is needed during preparation for an examination in any of the covered fields
of broadly understood probability theory and its applications. Since there is currently no other
textbook available with this scope, it is an important and quite successful first effort to fill this
gap.
It is evident that the book has been tested in the classroom and thus could be utilized as
a textbook. When used in this way, however, it should be accompanied by some additional
enhancing texts, or extra effort should be made to give more motivation to a potential
audience. The book would also serve well as a study guide that can be used by students and
professionals alike.
Krzysztof Podgórski: krys@maths.lth.se
Centre for Mathematical Sciences
Lund Institute of Technology/Lund University
Box 118, 22100 Lund, Sweden

The Pleasures of Statistics: The Autobiography of Frederick Mosteller


Frederick Mosteller, Edited by Stephen E. Fienberg, David C. Hoaglin, and Judith M. Tanur
Springer, 2010, xvi + 344 pages, €42.75/£35.99/$39.95, softcover
ISBN: 978-0-387-77955-3

Table of contents

Part I. Examples of Quantitative Studies 9. Carnegie Institute of Technology


Introduction 10. Graduate schools: Carnegie and Princeton
1. Why did Dewey beat Truman in the 11. Magic
pre-election polls of 1948? 12. Beginning research
2. Sexual behavior in the United States: The 13. Completing the doctorate
Kinsey report 14. Coming to Harvard University
3. Learning theory: Founding mathematical 15. Organizing statistics
psychology Part III. Continuing Activities
4. Who wrote the disputed Federalist papers, 16. Evaluation
Hamilton or Madison? 17. Teaching
5. The safety of anesthetics: The national 18. Group writing
halothane study 19. The Cape
6. Equality of educational opportunity: The 20. Biostatistics
Coleman report 21. Health policy and management
Part II. Early Life and Education 22. Health science policy
7. Childhood 23. Editors’ epilogue
8. Secondary school

Readership: All interested in statistical research, statistics in society, and academic life,
particularly those for whom it is not too late to benefit from the wisdom in this book.

There are very few book-length autobiographies of statisticians, and so the appearance of one
will be of interest to most readers of this Review, as it was to me. However, I would be surprised
if many people other than those who knew Frederick Mosteller personally, or were already
very familiar with his work, could anticipate the pleasure they will get from reading this book.
Neither the title, the cover, a quick skim, nor even a familiarity with the basics of Mosteller’s
career would lead one to expect such a good book. It was pure joy to read, though I must say I
did not begin at the beginning. Mosteller’s childhood during the Great Depression, his secondary
school, college and graduate education, his activities during World War 2, and his early career
steps, were all much more interesting to me than the statistics case studies which occupy the first
third of the book. It seemed to me to be a very American story (work hard, develop good habits,
take opportunities when they present themselves, treat others well, etc.), and while it is not quite
“rags to riches”, it has that feel about it. I could not help but think of Mark Twain. Full of wit,
wisdom and sage advice, the book has much to offer those interested in the teaching of statistics,
cross-disciplinary as well as narrow sense statistical research and academic administration, but
perhaps most of all, it is written for those who believe in the power of statistics to help make the
world a better place. The entire book has a pedagogical rather than a tell-all style, which doesn’t
seem out of place. On the contrary, it gives Mosteller the opportunity to explain a variety of
aspects of statistics to the lay reader, to put them in a larger context, and to comment on some
of the people involved, at the same time revealing something about himself. This combination
of textbook and personal memoir is very appealing.
Terry Speed: terry@stat.berkeley.edu
Department of Statistics, 367 Evans Hall #3860
University of California, Berkeley, CA 94720-3860, USA

Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction
Bradley Efron
Cambridge University Press, 2010, xii + 263 pages, £40.00/$65.00, hardcover
ISBN: 978-0-521-19249-1

Table of contents

Introduction and foreword 7. Estimation accuracy


1. Empirical Bayes and the James–Stein estimator 8. Correlation questions
2. Large-scale hypothesis testing 9. Sets of cases (enrichment)
3. Significance testing algorithms 10. Combination, relevance, and comparability
4. False discovery rate control 11. Prediction and effect size estimation
5. Local false discovery rates A. Exponential families
6. Theoretical, permutation, and empirical null B. Programs and data sets
distributions

Readership: Everyone interested in large-scale inference, and people interested in what’s new
and different about 21st century statistics in comparison with the 20th.

As we all will have noticed, a number of fields have emerged over the last few decades which
present inferential challenges not adequately met by the methods developed by Pearson, Fisher,
Neyman, and their immediate successors. Mapping disease risk, brain imaging, and analysing
microarray data are just three examples. Put very briefly, these challenges involve estimating,
testing or predicting many things, that is, large-scale inference, the title of Bradley Efron’s latest
book. Although there are clear precursors, notably R. von Mises in 1942 studying water quality,
and R. A. Fisher and colleagues in 1943 surveying butterfly species, the story presented in this
book begins in 1956 with papers presented to the Third Berkeley Symposium by H. Robbins
on an empirical Bayes approach to Statistics, and C. Stein on the inadmissibility of the usual
estimator of the mean of the multivariate normal distribution. It was not, and is not, immediately
obvious that these two papers have a lot in common, much less with the theory of simultaneous
hypothesis testing with linear models, and with R. A. Fisher’s butterflies. But they do, and as
best I can tell—the book is not clear on this point—the connection was made around 1974–1975
by Efron himself, perhaps in collaboration with others. What was a modest sub-field of statistics
two or three decades ago has become much more important now that many has come to mean
not just 10 or 100, but 10 thousand or 100 thousand or more, which is where we are now.
In the last decade, Efron has played a leading role in laying down the foundations of large-
scale inference, not only in bringing back and developing old ideas, but also linking them with
more recent developments, including the theory of false discovery rates and Bayes methods. We
are indebted to him for this timely, readable and highly informative monograph, a book he is
uniquely qualified to write. It is not a comprehensive text on the topics in its title, for example,
some important works on empirical Bayes methods are not mentioned. Rather, it is a synthesis
of many of Efron’s own contributions over the last decade with closely related material,
together with some connecting theory, valuable comments, and challenges for the future. His
avowed aim is “not to have the last word” but to help us deal “with the burgeoning statistical
problems of the 21st century”. He has succeeded admirably.
Terry Speed: terry@stat.berkeley.edu
Department of Statistics, 367 Evans Hall #3860
University of California, Berkeley, CA 94720-3860, USA

Visualizing Data Patterns with Micromaps


Daniel B. Carr, Linda Williams Pickle
Chapman & Hall/CRC, 2010, xvii + 164 pages, £44.99/$69.95, hardcover
ISBN: 978-1-4200-7573-1

Table of contents

1. An introduction to micromaps 6. Comparative micromaps


2. Research influencing micromap design 7. Putting it all together
3. Data visualization design principles Appendix 1. Data sources and notes
4. Linked micromaps Appendix 2. Symmetric perceptual groupings
5. Conditioned micromaps

Readership: Scientists wishing to explore and present spatial data with maps.

Charting geographic data is difficult. Polygon map displays of data recorded by area are
commonly used, though they suffer from large areas often having tiny populations while small
areas have large populations, so that assessing spatial patterns can be tricky. Interactive software is
one approach to making the displays more flexible and useful. Spatial displays called micromaps,
a form of display using small multiples, have been suggested by Dan Carr, and applied and
developed in collaboration with Linda Pickle, primarily to US health data. Graphics like these
are immediately recognisable to people familiar with the areas shown, and the book mainly uses
data for the fifty US States plus the District of Columbia. Overall, micromaps are an effective
tool and the book explains them at length, with lots of examples, so that non-statisticians can
understand and use them. Of course, this means that there is little statistical depth and although
the authors frequently and properly recommend caution in overinterpreting the displays, they
occasionally indulge in it themselves, for instance in discussing Figure 4.15 and Figure 5.3.
Dealing with graphics requires many different skills and it is a strength of the book that the
relevant topics in perception and cognition are well summarised in the second chapter.
While the book is attractively presented in full colour and there are many real examples, it is a
bit surprising that these are not more striking. The authors have been using micromaps for many
years and you would expect them to present their most insightful graphics in their book. When
they stray from the US, as in Figure 6.3 where they display yield spreads for government bonds
for a number of countries, they are not successful at all (and don’t appear to have noticed that in
the first of seven maps of the world, continental Europe and most of Asia are missing). In the
final chapter there is a lengthy discussion of a fascinating dataset for Louisiana before and after
Hurricane Katrina. Various types of micromaps are used and reveal interesting information. In
some cases other kinds of display would have been more effective and it is an opportunity missed
not to show micromaps in conjunction with other displays. There are several references to the
availability of code, maps and data on the book’s website. At the time of writing, this material is
not yet available. Some excellent material is available on Dan Carr’s own website (for instance, the
Katrina dataset) and elsewhere, but not all that is promised in the book.
In general the graphic displays in this book are clear and straightforward; they are not
cluttered with unnecessary decoration. Reasons for poor display, and how poor display may
be avoided, are well covered in the second chapter. This suggested a way of presenting my
final recommendation, which has been hidden in the layout of the review. In the form you are
currently reading, this will be difficult to spot. If you want a clue, think of the Book of Kells,
and if you don’t, look at the version of the review on my website.
Antony Unwin: unwin@math.uni-augsburg.de
Universität Augsburg, Institut für Mathematik
D-86135 Augsburg, Germany

Statistics for Archaeologists: A Common Sense Approach, Second Edition


Robert D. Drennan
Springer, 2010, xv + 333 pages, €49.94/£44.99/$49.95, softcover
ISBN: 978-1-4419-6071-9

Table of contents

Part I. Numerical Exploration 14. Comparing proportions of different samples


1. Batches of numbers 15. Relating a measurement variable to another
2. The level or center of a batch measurement variable
3. The spread or dispersion of a batch 16. Relating ranks
4. Comparing batches Part IV. Special Topics in Sampling
5. The shape or distribution of a batch 17. Sampling a population with subgroups
6. Categories 18. Sampling a site or region with spatial units
Part II. Sampling 19. Sampling without finding anything
7. Samples and populations 20. Sampling and reality
8. Different samples from the same population Part V. Multivariate Analysis
9. Confidence and population means 21. Multivariate approaches and variables
10. Medians and resampling 22. Similarities between cases
11. Categories and population proportions 23. Multidimensional scaling
Part III. Relationships between Two Variables 24. Principal components analysis
12. Comparing two sample means 25. Cluster analysis
13. Comparing means of more than two samples

Readership: Archeologists and others; read the next paragraph.

“This book is intended as an introduction to basic statistical principles and techniques for the
archaeologist” says the opening of the preface. “All examples and exercises are set specifically
in the context of archaeology . . . (However) physical anthropologists, sociologists, political
scientists and specialists in other fields make use of these same principles and techniques. The
mix of topics, (and the) approach . . . reflect my own view of what is most useful . . . (for) . . .
archeological data.”
For the indicated type of audience, this is a superb book, setting the use of basic statistics in a
format that makes sense of the formulas rather than just saying “compute this”. Mathematically
fluent students who scorn a specific context will complain that there is too much “talky talky”
surrounding the formulas. That talk, missing from many applied elementary texts, is especially
what the audience for this book needs and deserves, and rarely gets in class, in my experience.
To the mathematically well endowed, books like this can appear somewhat simple minded. They
are, however, very hard to write.
Robert Drennan (and his wife, acknowledged in the preface) have succeeded where others
have failed, namely in explaining, in an understandable way, the advantages of simple statistical
techniques in a specific applied context. I heartily recommend this text.
Norman R. Draper: draper@stat.wisc.edu
Department of Statistics, University of Wisconsin – Madison
1300 University Avenue, Madison, WI 53706-1532, USA

Plans d’expérience: constructions et analyses statistiques


Walter Tinsson
Springer, 2010, xv + 532 pages, €94.79/£85.90/$129.00, softcover
ISBN: 978-3-642-11471-7

Table of contents

Partie I. Généralités Partie III. Plans d’expérience pour facteurs qualitatifs


1. La notion de plan d’expérience 8. Plans d’expérience pour facteurs qualitatifs
2. Outils mathématiques pour les plans 9. Plans d’expérience en bloc pour facteur
d’expérience qualitatifs
Partie II. Plans d’expérience pour facteurs quantitatifs Partie IV. Optimalité des plans d’expérience
3. Plans d’expérience pour modèles d’ordre un 10. Critères d’optimalité
4. Plans d’expérience pour modèles à effets Partie V. Annexes
d’interactions A. Plans factoriel et représentation linéaire des
5. Plans d’expérience pour surfaces de réponse groupes
6. Plans d’expérience en blocs B. Plans d’expérience classiques
7. Plans d’expérience pour mélanges C. Notations utilisées

Readership: Statisticians, researchers in experimental design, engineers.

This book sits halfway between applied works that offer catalogues of experimental designs
and theoretical works which, being too abstract, are difficult to exploit in applications. The
many worked examples make the issues at stake easy to grasp, and the necessary mathematical
tools are well detailed.
The heart of the book, the chapters of Part II, offers a complete treatment of various
industrial problems. The development proceeds in three steps: discussion of the model to be
used, choice of the associated experimental design, and the statistical analysis that follows.
This scheme, identical in all these chapters, gives a coherent view of the topics treated. The
reader wishing to delve into the most technical details will find numerous further developments
at the end of each chapter.
The author has chosen to concentrate on orthogonality properties to justify the choice of the
experimental designs used, while the efficiency and optimality properties of designs are only
touched on in the last chapter of the book, which somewhat detracts from the overall picture.
One also notes the absence of examples of the use of software dedicated to the construction of
experimental designs, as well as of any treatment of non-linear models.
In conclusion, this book is ideal for master’s students in statistics or for practising engineers
who need to use experimental designs.
Pierre Druilhet: Pierre.Druilhet@math.univ-bpclermont.fr
Université Blaise Pascal, Laboratoire de Mathématiques
Campus des Cézeaux, B.P. 80026, 63177 Aubière cedex, France

Bayesian Analysis for Population Ecology


Ruth King, Byron J. T. Morgan, Olivier Gimenez, Stephen P. Brooks
Chapman & Hall/CRC, 2010, xiii + 442 pages, £50.99/$82.95, hardcover
ISBN: 978-1-4398-1187-0

Table of contents

Part I. Introduction to Statistical Analysis of 7. MCMC and RJMCMC computer programs


Ecological Data Part III. Ecological Applications
1. Introduction 8. Covariates, missing values and random effects
2. Data, models and likelihoods 9. Multi-state models
3. Classical inference based on the likelihood 10. State-space modelling
Part II. Bayesian Techniques and Tools 11. Closed populations
4. Bayesian inference Appendix A: Common distributions
5. Markov Chain Monte Carlo Appendix B: Programming in R
6. Model discrimination Appendix C: Programming in WinBUGS

Readership: Ecologists interested in Bayesian analysis of ecological problems, and ecologists
in general who are interested in any of the following: population ecology models, statistical
inference based on those models, and classical or Bayesian methods.

The book is divided into three parts. The first part introduces some general problems of
population ecology and a rich collection of models for solving these problems via likelihood-based
classical inference. Likelihoods are easy to derive once one has well-defined data collection
procedures and probability models for the data.
One class of major general problems is the study of extinction, abundance or runaway growth
of different species. To answer this, one has to estimate the population total in each population
for several years. Depending on the species under consideration, one may have a census, the
Common Bird Census, or data based on marked samples which are then partly recaptured in
subsequent samples. Each method will have its own model, likelihood, likelihood-based
estimates, and so on. Even the technique of marking animals differs widely from population to
population. Birds are ringed, insects are marked by specks of fat, animals may be marked by
radio transmitters and tracked by satellite. Even DNA matching is used in some cases. Part 1
contains a wealth of material on aspects of such data, models and analyses, as well as the history
and evolution of the subject.
Part 2 is a good, self-contained introduction to Bayesian analysis for the problems mentioned
above. There are good discussions of informative and not-so-informative priors, the Jeffreys prior,
and different techniques of MCMC, including Gibbs sampling, the Metropolis–Hastings algorithm,
and Reversible Jump MCMC for model selection or calculation of the posterior distribution of
parameters given a model. It appears that the ease with which Bayesians can average over
models or choose a model in a well-calibrated way has been one of the main reasons for the
popularity of Bayesian methods in ecology.
Part 3 is a collection of interesting special topics in ecological applications. They include
missing values, state-space models (without the usual linearity and normality assumptions), and
applications of model fitting, model choice and model averaging.
The authors write very well and illustrate with good examples. Both the technical and non-
technical discussions are good.
Jayanta K. Ghosh: ghosh@stat.purdue.edu
Department of Statistics, Purdue University
West Lafayette, IN 47909, USA

NIST Handbook of Mathematical Functions


Frank W. J. Olver, Daniel W. Lozier, Ronald F. Boisvert, Charles W. Clark (Editors)
Cambridge University Press, 2010, xv + 951 pages, £35.00/$50.00, softcover (also available as
hardcover)
ISBN: 978-0-521-14063-8

Table of contents

1. Algebraic and analytic methods (Ranjan Roy, (George E. Andrews)


Frank W. J. Olver, Richard A. Askey, Roderick 18. Orthogonal polynomials (Tom H. Koornwinder,
S. C. Wong) Roderick S. C. Wong, Roelof Koekoek, Rene F.
2. Asymptotic approximations (Frank W. J. Olver, Swarttouw)
Roderick S. C. Wong) 19. Elliptic integrals (Bille C. Carlson)
3. Numerical methods (Nico M. Temme) 20. Theta functions (William P. Reinhardt, Peter L.
4. Elementary functions (Ranjan Roy, Frank W. J. Walker)
Olver) 21. Multidimensional theta functions (Bernard
5. Gamma function (Richard A. Askey, Ranjan Deconinck)
Roy) 22. Jacobian elliptic functions (William P.
6. Exponential, logarithmic, sine and cosine Reinhardt, Peter L. Walker)
integrals (Nico M. Temme) 23. Weierstrass elliptic and modular functions
7. Error functions, Dawson’s and Fresnel integrals (William P. Reinhardt, Peter L. Walker)
(Nico M. Temme) 24. Bernoulli and Euler polynomials (Karl Dilcher)
8. Incomplete gamma and related functions 25. Zeta and related functions (Tom M. Apostol)
(Richard B. Paris) 26. Combinatorial analysis (David M. Bressoud)
9. Airy and related functions (Frank W. J. Olver) 27. Functions of number theory (Tom M. Apostol)
10. Bessel functions (Frank W. J. Olver, Leonard C. 28. Mathieu functions and Hill’s equation
Maximon) (Gerhard Wolf )
11. Struve and related functions (Richard B. Paris) 29. Lamé functions (Hans Volkmer)
12. Parabolic cylinder functions (Nico M. Temme) 30. Spheroidal wave functions (Hans Volkmer)
13. Confluent hypergeometric functions (Adri B. 31. Heun functions (Brian D. Sleeman, Vadim
Olde Daalhuis) Kuznetsov)
14. Legendre and related functions (T. Mark 32. Painlevé transcendents (Peter A. Clarkson)
Dunster) 33. Coulomb functions (Ian J. Thompson)
15. Hypergeometric function (Adri B. Olde 34. 3j,6j,9j symbols (Leonard C. Maximon)
Daalhuis) 35. Functions of matrix argument (Donald St. P.
16. Generalized hypergeometric functions and Richards)
Meijer G-function (Richard A. Askey, Adri B. 36. Integrals with coalescing saddles (Michael V.
Olde Daalhuis) Berry, Chris Howls)
17. q-Hypergeometric and related functions

Readership: Mathematicians, scientists (theoretical physicists, engineers, chemists, statisticians,


economists, etc.).

This is like trying to review the Bible: it would be eccentric to argue that it is not a “thoroughly
good thing”. It is the modern successor to the wonderful Handbook of Mathematical Functions,
edited by Abramowitz and Stegun, and maybe that’s enough said.
However, a few aspects deserve comment. The Preface tells us that this has been a ten-year
project, involving many technical experts both within and without the NIST. Major developments
include:
(i) the omission of tables of values of special functions, previously listed in Abramowitz and
Stegun;
(ii) the introduction of full-colour graphics;
(iii) a list of applications of the special functions in each chapter;
(iv) the availability of a web-based version.
In place of (i) there is now a Computation section in each chapter: available methods are
described, references are given, and links to sites where software can be accessed are identified.
With (ii) one can see at a glance how the function behaves, a picture painting a thousand words,
as they say. Feature (iii) might sometimes help to introduce new ways of looking at a particular
function. Facility (iv) is likely to become an everyday tool of many researchers: the online version
is the NIST Digital Library of Mathematical Functions (DLMF), accessible at . In addition there
is a CD in a pocket at the back with the whole book in a pdf file.
In summary, this splendid work doesn’t really need the approbation of a mere reviewer. And
now I’m off to look up my first unidentified integral to see if it’s a standard form.
Martin Crowder: m.crowder@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK

Hidden Markov Models for Time Series: An Introduction Using R, Second Edition
Walter Zucchini, Iain L. MacDonald
Chapman & Hall/CRC, 2009, xxii + 275 pages, £50.99/$82.95, hardcover
ISBN: 978-1-58488-573-3

Table of contents

Part One: Model Structure, Properties and Methods Part Two: Applications
1. Preliminaries: mixtures and Markov chains 9. Epileptic seizures
2. Hidden Markov models: definition and 10. Eruptions of the Old Faithful geyser
properties 11. Drosophila speed and change of direction
3. Estimation by direct maximization of the 12. Wind direction at Koeberg
likelihood 13. Models for financial series
4. Estimation by the EM algorithm 14. Births at Edendale hospital
5. Forecasting, decoding and state prediction 15. Homicides and suicides in Cape Town
6. Model selection and checking 16. Animal behavior model with feedback
7. Bayesian inference for Poisson-HMMs Appendix A: Examples of R code
8. Extensions of the basic hidden Markov model Appendix B: Some proofs

Readership: Applied statisticians, scientists, engineers, users of statistics.

This is by way of a follow-up to the authors’ previous book in 1997. There, the time series
considered were discrete-valued: Part I (50 pages) presented a survey of models, and Part II
(150 pages) concentrated on hidden Markov models. In this new book, the old Part I has been
removed, which is a slight shame, since it covered an interesting variety of models: the authors
reason that such models have not been used widely in applications. Instead, the treatment is now
extended to cover continuous as well as discrete-valued data.
In this 2009 book, data of many types are considered, including binary, categorical, counts,
continuous (univariate and multivariate) and circular. The authors point out that their book
places emphasis on applications rather than theoretical research, and mention others more
suitable for readers interested in the latter. Indeed, Part II comprises over 100 pages devoted to
eight applications, one per chapter. The data are available for download from a web site and
R-code for performing the analyses is listed in Appendix A. Nevertheless, the mathematical
underpinning for the models is set out clearly in Part I (132 pages). Here the basic definition and
properties of hidden Markov chains are covered, together with computational details, inference,
prediction, model-checking, and some extensions.
It is clear that much care has gone into this book: it has a very detailed contents list, a list of
abbreviations and notations, thoughtful data analyses, many references and a detailed index. In
fact, it would be difficult not to thoroughly recommend it to anyone interested in learning how
to tackle these types of data.
Martin Crowder: m.crowder@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK


Meta-analysis and Combining Information in Genetics and Genomics


Rudy Guerra, Darlene R. Goldstein (Editors)
Chapman & Hall/CRC, 2010, xxiii + 335 pages, £63.99/$99.95, hardcover
ISBN: 978-1-58488-522-1

Table of contents
Part 0. Introductory Material 9. Comparison of meta-analysis to combined
1. A brief introduction to meta-analysis, genetics analysis of a replicated microarray study
and genomics (Darlene R. Goldstein, Rudy (Darlene R. Goldstein, Mauro Delorenzi, Ruth
Guerra) Luthi-Carter, Thierry Sengstag)
Part I. Similar Data Types I: Genotype Data 10. Alternative probe set definitions for combining
2. Combining information across genome-wide microarray data across studies using different
linkage scans (Carol J. Etzel, Tracy J. Costello) versions of Affymetrix oligonucleotide arrays
3. Genome search meta-analysis (GSMA): a (Jeffrey S. Morris, Chunlei Wu, Kevin R.
nonparametric method for meta-analysis of Coombes, Keith A. Baggerly, Jing Wang, Li
genome-wide linkage studies (Cathryn M. Zhang)
Lewis) 11. Gene ontology-based meta-analysis of
4. Heterogeneity in meta-analysis of quantitative genome-scale experiments (Chad A. Shaw)
trait linkage studies (Hans C. van Part III. Combining Different Data Types
Houwelingen, Jérémie J. P. Lebrec) 12. Combining genomic data in human studies
5. An empirical Bayesian framework for QTL (Debashis Ghosh, Daniel Rhodes, Arul
genome-wide scans (Kui Zhang, Howard Chinnaiyan)
Wiener, T. Mark Beasley, Christopher I. Amos, 13. An overview of statistical approaches for
David B. Allison) expression trait loci mapping (Christina
Part II. Similar Data Types II: Gene Expression Data Kendziorski, Meng Chen)
6. Composite hypothesis testing: an approach 14. Incorporating GO annotation information in
built on intersection-union tests and Bayesian expression trait loci mapping (J. Blair
posterior probabilities (Stephen Erickson, Christian, Rudy Guerra)
Kyoungmi Kim, David B. Allison) 15. A misclassification model for inferring
7. Frequentist and Bayesian error pooling transcriptional regulatory networks (Ning Sun,
methods for enhancing statistical power in Hongyu Zhao)
small sample microarray data analysis (Jae K. 16. Data integration for the study of protein
Lee, Hyung Jun Cho, Michael O’Connell) interactions (Fengzhu Sun, Ting Chen,
8. Significance testing for small microarray Minghua Deng, Hyunju Lee, Zhidong Tu)
experiments (Charles Kooperberg, Aaron 17. Gene trees, species trees, and species networks
Aragaki, Charles C. Carey, Suzannah (Luay Nakhleh, Derek Ruths, Hideki Innan)
Rutherford)

Readership: Students and researchers in meta-analysis of biological data.

The theme is the pooling of information from a number of distinct data sets, specifically genomic
data. The current statistical methodology for this purpose falls under the umbrella term
meta-analysis. The stated aim of this book is to contribute to the extension of such techniques
to a wider and more diverse range of applications. This is in response to the rapidly increasing
production of large amounts of genomic data of various types.
This is an edited volume of 17 chapters contributed jointly by 45 subject experts. There is
an introductory chapter, setting out the framework and giving a brief survey of the basic ideas
and methods. There follow 16 chapters dealing with a variety of more specialised areas of
application. These are organised into three parts: Parts I and II address the treatment of different
sets of data of similar type; Part III extends this to data of different types. The chapters are largely
self-contained, so that readers can dip into the particular articles of interest to them.
The book assumes a certain level of mathematical and statistical experience. Familiarity
with probability manipulation and distributions, together with a working knowledge of standard

statistical inference and methods, is presumed. However, the editors point out that the more
technical parts can be glossed over if the reader wishes.
My impression is that the book will be most useful for students and researchers who wish
to see what developments are currently in progress in this important area. That said, there is
a wealth of material here for the non-expert wishing to move into the area. And, unlike some
edited tomes in past ages, the articles here have clearly been carefully meshed to give a coherent
picture.
Martin Crowder: m.crowder@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK

Diagnostic Measurement: Theory, Methods, and Applications
André A. Rupp, Jonathan Templin, Robert A. Henson
Guilford Press, 2010, xx + 348 pages, $75.00, hardcover (also available as softcover)
ISBN: 978-1-60623-528-7

Table of contents

1. Introduction
Part I. Theory: Principles of Diagnostic Measurement with DCMs
2. Implementation, design, and validation of diagnostic assessments
3. Diagnostic decision making with DCMs
4. Attribute specification for DCMs
Part II. Methods: Psychometric Foundations of DCMs
5. The statistical nature of DCMs
6. The statistical structure of core DCMs
7. The LCDM framework
8. Modeling the attribute space in DCMs
Part III. Applications: Utilizing DCMs in Practice
9. Estimating DCMs using Mplus
10. Respondent parameter estimation in DCMs
11. Item parameter estimation in DCMs
12. Evaluating the model fit of DCMs
13. Item discrimination indices for DCMs
14. Accommodating complex sampling designs in DCMs

Readership: Students, educators, scientists from applied statistics, psychometrics, measurement and research methodology, psychological and educational assessment, and other areas.

This book focuses on what the authors call “diagnostic classification models” (DCMs): “a
particular subset of psychometric models that yield classifications of respondents according
to multiple latent variables,” (their italics). Given the generality of that description, it is not surprising that such models have roots in other, perhaps more familiar, classes of models, including classical test theory, item response theory, confirmatory
factor analysis, structural equation modelling, and categorical data analysis. The authors say they
chose the term “diagnostic classification model” to emphasise that, although DCMs are statistical
tools, a theory about response processes grounded in applied cognitive psychology is desirable
in practical applications. The book by Skrondal and Rabe-Hesketh (2004) Generalized Latent
Variable Modeling covers similar topics from a more overtly statistical perspective.
The book is divided into three parts covering, respectively, the theory, methods, and
applications of DCMs, with the theory part essentially setting the context and framework for
such models. The methods section describes the sorts of statistical modelling tools used, such as log-linear models, latent class models, and Bayesian networks; in fact, in Chapter 7 the authors show that core DCMs can be expressed in a log-linear modelling framework.
The applications part of the book is not so much a collection of illustrative applications as an
elaboration of the methods part, looking at topics such as estimation, model fit, and complex

sample designs. However, the book does include examples, and these are demonstrated using the
Mplus system, a software package based around latent variables. Although there are no exercises
in the book, the preface does say that exercises and solutions will be given on the associated
website (though when I tried, I was unable to get past the main page; I imagine that was a passing aberration).
Given its central emphasis on diagnosis, I was a little surprised that the description of standard
measures of model fit given in Chapter 12 was not complemented by some discussion of the
parallel literature on evaluating the performance of diagnostic rules in cases when the true
diagnoses can (perhaps subsequently) be discovered. This literature is now vast – see, for
example, Zhou, Obuchowski, and McClish (2002) Statistical Methods in Diagnostic Medicine
and Pepe (2003) The Statistical Evaluation of Medical Tests for Classification and Prediction.
Perhaps routes into this related literature might be added in a second edition.
David J. Hand: d.j.hand@imperial.ac.uk
Mathematics Department, Imperial College
London SW7 2AZ, UK

Biplots in Practice
Michael Greenacre
Fundación BBVA, 2010, 237 pages, €32.00, softcover
ISBN: 978-84-923846-8-6

Table of contents

1. Biplots—the basic idea
2. Regression biplots
3. Generalized linear model biplots
4. Multidimensional scaling biplots
5. Reduced-dimension biplots
6. Principal component analysis biplots
7. Log-ratio biplots
8. Correspondence analysis biplots
9. Multiple correspondence analysis biplots I
10. Multiple correspondence analysis biplots II
11. Discriminant analysis biplots
12. Constrained biplots and triplots
13. Case study 1—biomedicine: comparing cancer types according to gene expression arrays
14. Case study 2—socio-economics: positioning the “middle” category in survey research
15. Case study 3—ecology: the relationship between fish morphology and diet
Appendix A: Computation of biplots
Appendix B: Bibliography
Appendix C: Glossary of terms
Appendix D: Epilogue

Readership: Researchers and students of both social and natural sciences interested in learning
more about multivariate data visualization.

First of all, I am happy to say that Michael Greenacre has written an extremely useful book!
Although the technique of biplots has been known for about 40 years, it has not yet become
a widely applied method, with the exception of correspondence analysis and related methods,
where it is absolutely crucial for presenting any results graphically. In general, the aim of a biplot is to visualize the maximum possible amount of information in multivariate data. That sounds like a typical aim in many other circumstances as well. Therefore I believe that biplots will eventually become a popular technique in several new fields.
Before this book, there were not many books about biplots, and perhaps those that existed were a bit too theoretical for most applications. Now, this book explains very clearly what biplots are,
how they are constructed and how they are used and interpreted in various applications, both in
social and natural sciences.

Twelve chapters go smoothly through the basics and generalizations of biplots in connection
with various statistical methods, such as multidimensional scaling, principal components
analysis, correspondence analysis, and discriminant analysis. They are followed by three
excellent case studies from different fields, demonstrating very well the possibilities of biplots
in practice.
The computational aspects are explained in an appendix and on the website of the book,
although some R code is also given within the chapters, when it is necessary to clarify some
particular details.
In all, the book is easy to read, because it follows a didactic format, with many nice figures and tables, chapter summaries, a glossary of terms, and an annotated bibliography. I
feel that this book really supports learning! I have already used it on my course, which is easy,
as the whole book is also freely available on its website.
Kimmo Vehkalahti: kimmo.vehkalahti@helsinki.fi
Department of Social Research, Statistics
FI-00014 University of Helsinki, Finland

Regression Modeling with Actuarial and Financial Applications
Edward W. Frees
Cambridge University Press, 2010, xvii + 565 pages, £35.00/$56.99, softcover (also available
as hardcover)
ISBN: 978-0-521-13596-2

Table of contents

1. Regression and the normal distribution
Part I. Linear Regression
2. Basic linear regression
3. Multiple linear regression—I
4. Multiple linear regression—II
5. Variable selection
6. Interpreting regression results
Part II. Topics in Time Series
7. Modeling trends
8. Autocorrelations and autoregressive models
9. Forecasting and time series models
10. Longitudinal and panel data models
Part III. Topics in Nonlinear Regression
11. Categorical dependent variables
12. Count dependent variables
13. Generalized linear models
14. Survival models
15. Miscellaneous regression topics
Part IV. Actuarial Applications
16. Frequency-severity models
17. Fat-tailed regression models
18. Credibility and bonus-malus
19. Claims triangles
20. Report writing: communicating data analysis results
21. Designing effective graphs
Appendix 1: basic statistical inference
Appendix 2: matrix algebra
Appendix 3: probability tables

Readership: Academic: researcher and graduate students in applied mathematics, statistics and
applied finance; Industry: Actuaries and risk management professionals. The book assumes
knowledge comparable to a one-semester introduction to probability and statistics.

This is an excellent book written by an all-round writer. He is a Fellow of both the Society of
Actuaries and the American Statistical Association. Hence, it is not surprising that the book
fills the gap between modern statistics and traditional actuarial/risk management methods.
The need for this kind of book is obvious. As Jukka Rantala, the former Chairman of the Groupe Consultatif (European Actuarial Association), stated, practicing actuaries will benefit from close co-operation between practice and research, statistical research in particular.

This book presents an overview of how statistics can be used to solve problems in actuarial and financial applications. The topics covered range in depth from elementary to fairly advanced.
The book describes applications of regression methods to important actuarial problems. The
author has chosen topics that have been relevant in his research and consultation work, that
is, Bonus-Malus systems and claim triangles. The material is supported by exercises towards
practical applications and a companion webpage containing datasets and SAS/R scripts.
This book has been developed as a textbook, but it can also be used for self-study or as reference material for an “armchair actuary” who only passively reads. I highly recommend it
to any person who wishes to learn how to use statistical methods for actuarial and financial
applications.
Lasse Koskinen: lasse.koskinen@finanssivalvonta.fi
The Finnish Financial Supervisory Authority
P.O. Box 103, FI-00101 Helsinki, Finland

Creative Minds, Charmed Lives: Interviews at Institute for Mathematical Sciences, National University of Singapore
Yu Kiang Leong (Editor)
World Scientific, 2010, xv + 333 pages, £48.00/$78.00, hardcover
ISBN: 978-981-4317-58-0

Table of contents

1. Béla Bollobás: Graphs Extremal and Random
2. Leonid Bunimovich: Stable Islands, Chaotic Seas
3. Tony Fan-Cheong Chan: On Her Majesty’s (the Queen of Science’s) Service
4. Sun-Yung Alice Chang: Analyst in Conformal Land
5. Jennifer Tour Chayes: Basic Research, Hidden Returns
6. Carl de Boor: On Wings of Splines
7. Persi Diaconis: The Lure of Magic and Mathematics
8. David Donoho: Sparse Data, Beautiful Mine
9. Robert F. Engle: Archway to Nobel
10. Hans Föllmer: Efficient Markets, Random Paths
11. Avner Friedman: Mathematician in Control
12. Roe Goodman: Mathematics, Music, Masters
13. Bryan T. Grenfell: Viral Visitations, Epidemic Models
14. Takeyuki Hida: Brownian Motion, White Noise
15. Roger Howe: Exceptional Lie Group Theorist
16. Wilfrid Kendall: Dancing with Randomness
17. Lawrence Klein: Economist for All Seasons
18. Brian E. Launder: Modeling and Harnessing Turbulence
19. Fanghua Lin: Revolution, Transitions, Partial Differential Equations
20. Pao Chuen Lui: Of Science in Defense
21. Eric Maskin: Game Theory Master
22. Eduardo Massad: Infectious Diseases, Vaccines, Models
23. Daniel McFadden: Choice Models, Maximal Preferences
24. Keith Moffatt: Magnetohydrodynamic Attraction
25. Stanley Osher: Mathematician with an Edge
26. Doug Roble: Computer Vision, Digital Magic
27. Ron Shamir: Unraveling Genes, Understanding Diseases
28. Albert Nikolaevich Shiryaev: On the Shoulder of a Giant
29. David O. Siegmund: Change-Point, a Consequential Analysis
30. Theodore Slaman and W. Hugh Woodin: Logic and Mathematics
31. Terry Speed: Good Gene Hunting
32. Charles Stein: The Invariant, the Direct and the “Pretentious”
33. Gilbert Strang: The Changing Face of Applied Mathematics
34. Eitan Tadmor: Zen of Computational Attraction
35. Michael Todd: Optimization, an Interior Point of View
36. Sergio Verdú: Wireless Communications, at the Shannon Limit
37. Michael S. Waterman: Breathing Mathematics into Genes


Readership: Mathematicians, statisticians, mathematical scientists, historians of mathematics, historians of ideas, students and general public.

This interview volume is a collection of all the interviews of prominent visitors to the Institute
for Mathematical Sciences, National University of Singapore, conducted by Yu Kiang Leong
and published in the Institute’s newsletter Imprints from 2003 to 2009.
In the Preface the Editor writes as follows: “Research papers very rarely give any inkling
as to how an important idea was conceived; the crucial idea is almost always presented as if it
appeared out of the blue in its final form to the discoverer. If he or she could be persuaded during
an interview to shed some light on the process that led to the discovery of the idea, it would have
been worth the interview.” I am happy to congratulate the Editor on reaching this goal: this is a very well prepared collection of views of influential mathematical scientists. Nicely printed, great photographs! Students considering a possible academic career would certainly enjoy finding the human beings behind the famous names, some of them almost legends.

— If a graduate student has to choose a field of research, what kind of advice would you give
him?
— What led you into formulating the innovative ARCH model?
— Do you think that there is still a gap in communication, if not in interaction, between the
majority of economists and the majority of mathematicians?
— It is often said that this century will be the century of molecular biology. In your opinion,
how much of this is hype and how much of it is scientifically justified?

If you are interested in learning the replies of Persi Diaconis, Robert F. Engle, Lawrence Klein,
and Terry Speed to these questions, please take a look at Creative Minds, Charmed Lives.
Simo Puntanen: simo.puntanen@uta.fi
School of Information Sciences
FI-33014 University of Tampere, Finland

Design of Experiments: An Introduction Based on Linear Models
Max D. Morris
Chapman & Hall/CRC, 2011, xviii + 355 pages, £39.99/$89.95, hardcover
ISBN: 978-1-58488-923-6

Table of contents

1. Introduction
2. Linear statistical models
3. Completely randomized designs
4. Randomized complete blocks and related designs
5. Latin squares and related designs
6. Some data analysis for CRDs and orthogonally blocked designs
7. Balanced incomplete block designs
8. Random block effects
9. Factorial treatment structure
10. Split-plot designs
11. Two-level factorial experiments: basics
12. Two-level factorial experiments: blocking
13. Two-level factorial experiments: fractional factorials
14. Factorial group screening experiments
15. Regression experiments: first-order polynomial models
16. Regression experiments: second-order polynomial models
17. Introduction to optimal design
Appendix A: Calculations using R
Appendix B: Solution notes for selected exercises


Readership: Graduate or advanced undergraduate statisticians and experienced practitioners.

This book discusses experimental design with a strong emphasis on linear models. Although written from a very pragmatic, practitioner-oriented point of view, it is intended as a thorough and deep introduction to experimental design. It dwells on the derivation of estimates, decomposition of error, and precision and sample size (among other issues) for each class of classical experimental designs. It also introduces some more advanced concepts, namely the various types of optimality for an experimental design.
The first chapter deals with the basic concepts of creating an experiment. It starts with the notions of experimental unit, treatment, and block, among other basics. The concepts of randomization, control, and information are also treated.
The second chapter is a keystone of the book, since it deals with the theory of linear models. Estimation in both simple and partitioned models is treated, as well as the calculation of the information associated with designs. Orthogonality conditions and hypothesis testing on the model’s parameters are also covered.
The three following chapters treat completely randomized designs (CRDs), randomized complete block designs (RCBDs), and Latin squares and related designs. A data-driven example is always presented, followed by the linear model that describes it. Besides the usual estimation and analytical approach, a graphical analysis is made. Efficiency and precision are calculated for each model, and comparisons between families of models are made.
The two subsequent chapters treat data analysis, tests of model assumptions, and treatment comparison for the above-mentioned models. Next, balanced incomplete block designs and related models are treated. As with the previously mentioned model classes, examples, model formulation, analysis, and efficiency are covered, as well as conditions for the existence of these designs. Random effects for blocks are considered and presented, with intra- and inter-block estimation and inter-block information recovery. Application of these techniques to the previously discussed models is presented and discussed.
Chapter 9 discusses the general structure of factorial models, presenting the general framework for this class of models. In subsequent chapters (11, 12, and 13) the base-2 factorial is presented in detail, followed by blocking and fractional replication for this sub-class of factorial models. Split-plot models are discussed in Chapter 10.
The last four chapters deal with more advanced subjects. Chapter 14 studies factorial group screening, used when the number of binary factors in an experiment is large and the significance of groups of factors needs to be inferred. Chapters 15 and 16 analyze polynomial regression models of first and second order, respectively, covering both analysis and experimental design with these models. The last chapter presents a brief introduction to optimal design.
Miguel Fonseca: fmig@fct.unl.pt
Center of Mathematics and Applications
Faculty of Sciences and Technology, New University of Lisbon
2829-516 Caparica, Portugal


Algorithms and Theory of Computation Handbook, Second Edition—2 Volume Set
Mikhail J. Atallah, Marina Blanton (Editors)
Chapman & Hall/CRC, 2010, xv + 988 and xvi + 950 pages, £119.00/$185.95, hardcover
ISBN: 978-1-58488-818-5

Table of contents

Volume 1: General Concepts and Techniques
1. Algorithm design and analysis techniques (Edward M. Reingold)
2. Searching (Ricardo Baeza-Yates, Patricio V. Poblete)
3. Sorting and order statistics (Vladimir Estivill-Castro)
4. Basic data structures (Roberto Tamassia, Bryan Cantrill)
5. Topics in data structures (Giuseppe F. Italiano, Rajeev Raman)
6. Multidimensional data structures for spatial applications (Hanan Samet)
7. Basic graph algorithms (Samir Khuller, Balaji Raghavachari)
8. Advanced combinatorial algorithms (Samir Khuller, Balaji Raghavachari)
9. Dynamic graph algorithms (Camil Demetrescu, David Eppstein, Zvi Galil, Giuseppe F. Italiano)
10. External memory algorithms and data structures (Lars Arge, Norbert Zeh)
11. Average case analysis of algorithms (Wojciech Szpankowski)
12. Randomized algorithms (Rajeev Motwani, Prabhakar Raghavan)
13. Pattern matching in strings (Maxime Crochemore, Christophe Hancart)
14. Text data compression algorithms (Maxime Crochemore, Thierry Lecroq)
15. General pattern matching (Alberto Apostolico)
16. Computational number theory (Samuel S. Wagstaff, Jr.)
17. Algebraic and numerical algorithms (Ioannis Z. Emiris, Victor Y. Pan, Elias P. Tsigaridas)
18. Applications of FFT and structured matrices (Ioannis Z. Emiris, Victor Y. Pan)
19. Basic notions in computational complexity (Tao Jiang, Ming Li, Bala Ravikumar)
20. Formal grammars and languages (Tao Jiang, Ming Li, Bala Ravikumar, Kenneth W. Regan)
21. Computability (Tao Jiang, Ming Li, Bala Ravikumar, Kenneth W. Regan)
22. Complexity classes (Eric Allender, Michael C. Loui, Kenneth W. Regan)
23. Reducibility and completeness (Eric Allender, Michael C. Loui, Kenneth W. Regan)
24. Other complexity classes and measures (Eric Allender, Michael C. Loui, Kenneth W. Regan)
25. Parameterized algorithms (Rodney G. Downey, Catherine McCartin)
26. Computational learning theory (Sally A. Goldman)
27. Algorithmic coding theory (Atri Rudra)
28. Parallel computation: models and complexity issues (Raymond Greenlaw, H. James Hoover)
29. Distributed computing: a glimmer of a theory (Eli Gafni)
30. Linear programming (Vijay Chandru, M. R. Rao)
31. Integer programming (Vijay Chandru, M. R. Rao)
32. Convex optimization (Florian Jarre, Stephen A. Vavasis)
33. Simulated annealing techniques (Albert Y. Zomaya, Rick Kazman)
34. Approximation algorithms for NP-hard optimization problems (Philip N. Klein, Neal E. Young)
Volume 2: Special Topics and Techniques
1. Computational geometry I (D. T. Lee)
2. Computational geometry II (D. T. Lee)
3. Computational topology (Afra Zomorodian)
4. Robot algorithms (Konstantinos Tsianos, Dan Halperin, Lydia Kavraki, Jean-Claude Latombe)
5. Vision and image processing algorithms (Concettina Guerra)
6. Graph drawing algorithms (Peter Eades, Carsten Gutwenger, Seok-Hee Hong, Petra Mutzel)
7. Algorithmics in intensity-modulated radiation therapy (Danny Z. Chen, Chao Wang)
8. VLSI layout algorithms (Andrea S. LaPaugh)
9. Cryptographic foundations (Yvo Desmedt)
10. Encryption schemes (Yvo Desmedt)
11. Cryptanalysis (Samuel S. Wagstaff, Jr.)
12. Crypto topics and applications I (Jennifer Seberry, Chris Charnes, Josef Pieprzyk, Rei Safavi-Naini)
13. Crypto topics and applications II (Jennifer Seberry, Chris Charnes, Josef Pieprzyk, Rei Safavi-Naini)
14. Secure multiparty computation (Keith B. Frikken)
15. Voting schemes (Berry Schoenmakers)
16. Auction protocols (Vincent Conitzer)


17. Pseudorandom sequences and stream ciphers (Andrew Klapper)
18. Theory of privacy and anonymity (Valentina Ciriani, Sabrina De Capitani di Vimercati, Sara Foresti, Pierangela Samarati)
19. Database theory: query languages (Nicole Schweikardt, Thomas Schwentick, Luc Segoufin)
20. Scheduling algorithms (David Karger, Cliff Stein, Joel Wein)
21. Computational game theory: an introduction (Paul G. Spirakis, Panagiota N. Panagopoulou)
22. Artificial intelligence search algorithms (Richard E. Korf)
23. Algorithmic aspects of natural language processing (Mark-Jan Nederhof, Giorgio Satta)
24. Algorithmic techniques for regular networks of processors (Russ Miller, Quentin F. Stout)
25. Parallel algorithms (Guy E. Blelloch, Bruce M. Maggs)
26. Self-stabilizing algorithms (Sébastien Tixeuil)
27. Theory of communication networks (Gopal Pandurangan, Maleq Khan)
28. Network algorithmics (George Varghese)
29. Algorithmic issues in grid computing (Yves Robert, Frédéric Vivien)
30. Uncheatable grid computing (Wenliang Du, Mummoorthy Murugesan, Jing Jia)
31. DNA computing: a research snapshot (Lila Kari, Kalpana Mahalingam)
32. Computational systems biology (T. M. Murali, Srinivas Aluru)
33. Pricing algorithms for financial derivatives (Ruppa K. Thulasiram, Parimala Thulasiraman)

Readership: Computer professionals and engineers.

Developments and applications in the design and analysis of algorithms continue to emerge at an ever-increasing pace, and keeping abreast of new research and published work is not a trivial task. In editing the current text, the editors present a compilation that will appeal to professionals and students alike, together with those engaged in research, especially those contemplating embarking upon research.
The field has expanded substantially since the first edition appeared and has resulted in
the current two-volume format. This second edition contains twenty-one new chapters and a thorough updating and revision of many of the chapters from the first edition. A consistent style
has been adopted but a common format has not been possible for each chapter. This does not
detract from the usefulness of the text and the sections detailing research issues and sources for
further information will ensure that this edition remains an excellent reference source for years
to come. I recommend this text as both a teaching aid and a reference source whose utility can
only but increase in the coming years.
Carl M. O’Brien: carl.obrien@cefas.co.uk
Centre for Environment, Fisheries & Aquaculture Science
Pakefield Road, Lowestoft, Suffolk NR33 0HT, UK


Stochastic Control and Mathematical Modeling: Applications in Economics
Hiroaki Morimoto
Cambridge University Press, 2010, xiii + 325 pages, £70.00/$110.00, hardcover
ISBN: 978-0-521-19503-4

Table of contents

Part I. Stochastic Calculus and Optimal Control Theory
1. Foundations of stochastic calculus
2. Stochastic differential equations: weak formulation
3. Dynamic programming
4. Viscosity solutions of Hamilton–Jacobi–Bellman equations
5. Classical solutions of Hamilton–Jacobi–Bellman equations
Part II. Applications to Mathematical Models in Economics
6. Production planning and inventory
7. Optimal consumption/investment models
8. Optimal exploitation of renewable resources
9. Optimal consumption models in economic growth
10. Optimal pollution control with long-run average criteria
11. Optimal stopping problems
12. Investment and exit decisions
Part III. Appendices
A. Dini’s theorem
B. The Stone–Weierstrass theorem
C. The Riesz representation theorem
D. Rademacher’s theorem
E. Vitali’s covering theorem
F. The area formula
G. The Brouwer fixed point theorem
H. The Ascoli–Arzelà theorem

Readership: Graduate students and scientists in applied mathematics, economics, and engineering theory.

What is the optimal magnitude of a choice variable at each time in a dynamical system under uncertainty? In addressing this fundamental question the author, in his mathematical treatise, provides the reader with a description of stochastic control theory and its applications to dynamic optimization. Applications in economics, mathematical finance, and engineering are covered in a highly readable, if technical, style.
The text is self-contained with the necessary mathematical requisites contained in the
preliminary chapters and in the appendices. The topic of the text is one that will appeal to
those engaged in theoretical research, and is appropriate both for private study and for taught
courses.
As the title indicates, the author’s presentation is clearly focussed on economics and provides
neither exercises nor questions for further investigation. However, the chapters should allow the
reader to identify future research areas. As such the text has much to commend it and the time
taken to understand the material presented will reap benefits. For those interested in theoretical
expositions, this is a text that I can recommend.
Carl M. O’Brien: carl.obrien@cefas.co.uk
Centre for Environment, Fisheries & Aquaculture Science
Pakefield Road, Lowestoft, Suffolk NR33 0HT, UK

