Alia Maw
Sep 3, 2015
Hello colleagues,
I sent the following to the full-time faculty in the mathematics department last spring
(3/25/2015). Some discussion ensued and I did a demo of teaching hypothesis testing using
simulation-based methods. I'd like to continue this discussion this school year and hope that we
can implement significant curricular changes for the coming school year.
I view the statewide changes in the curriculum of the MATH/STAT 1040 class as an opportunity
to align our curriculum with the current international reform efforts incorporating simulation-based methods in the introductory statistics course, and to draw closer to current practice in the field of Statistics. I propose that we now move our course curriculum to emphasize simulation,
permutation tests, and randomization-based inference.
I think it will be helpful to approach this opportunity with some background about the field of
Statistics.
Early on in Statistics, the Central Limit Theorem put the Normal model at the center of all
statistical inference. Adjustments to this model (using Student's t distribution, adjusting for
unequal standard deviations, etc.) have made it increasingly complicated. In about 1937,
Fisher and Pitman found a significantly simpler model for inference, based on randomization.
However, the model couldn't be used effectively because we didn't have the computing power.
This is no longer the case. (Allan Rossman, California Polytechnic State University, and George
Cobb, Mount Holyoke College, "Interview with George Cobb," Journal of Statistics Education,
Volume 23, Number 1 (2015), www.amstat.org/publications/jse/v23n1/rossmanint.pdf)
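To make the randomization idea concrete, here is a minimal sketch in Python (my own illustration with made-up numbers, not taken from the interview or from any of the curricula discussed below) of a two-sample permutation test: shuffle the group labels many times and read the p-value off as the fraction of shuffles giving a difference at least as extreme as the observed one.

    # Sketch of a permutation (randomization) test; the data are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    treatment = np.array([12.1, 14.3, 11.8, 15.0, 13.6])   # made-up scores
    control = np.array([10.2, 11.9, 12.4, 9.8, 11.1])

    observed_diff = treatment.mean() - control.mean()
    pooled = np.concatenate([treatment, control])
    n_treat, n_reps, count = len(treatment), 10_000, 0

    for _ in range(n_reps):
        shuffled = rng.permutation(pooled)                  # re-randomize the group labels
        diff = shuffled[:n_treat].mean() - shuffled[n_treat:].mean()
        if abs(diff) >= abs(observed_diff):                 # two-sided comparison
            count += 1

    p_value = count / n_reps                                # p-value as a relative frequency
    print(f"observed diff = {observed_diff:.2f}, permutation p-value = {p_value:.4f}")

The entire inference rests on re-randomizing and counting; no distributional tables are required.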
In 1992 (over 20 years ago), George Cobb published an article in the MAA (Mathematical
Association of America) volume Heeding the Call for Change: Suggestions for Curricular
Action reporting on recommendations for teaching statistics from a focus group of statisticians
across the nation. The recommendations from this group were: 1) Teach statistical thinking,
2) More data and concepts; less theory and fewer recipes, 3) Foster active learning. This charge to
emphasize data and concepts over a theoretical approach mirrors the work of professional
statisticians, where technology has informed theory for years.
In 2005 the ASA (American Statistical Association) endorsed the GAISE (Guidelines for
Assessment and Instruction in Statistics Education) report. This report had six recommendations,
expanding those of the earlier report:
1. Emphasize statistical literacy and develop statistical thinking.
2. Use real data.
3. Stress conceptual understanding rather than mere knowledge of procedures.
4. Foster active learning in the classroom.
5. Use technology for developing conceptual understanding and analyzing data.
6. Use assessments to improve and evaluate student learning.
A key point is this: Technology has changed the way statisticians work and should change what
and how we teach. For example, statistical tables such as a normal probability table are no longer
needed to find p-values, and we can implement computer-intensive methods. It is important to
view the use of technology not just as a way to compute numbers but as a way to explore
conceptual ideas and enhance student learning as well.
http://www.amstat.org/education/gaise/GaiseCollege_Full.pdf
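As a small illustration of that point (my own, not an example from the GAISE report), a single library call replaces the printed normal table when converting a test statistic to a p-value:

    # Hypothetical standardized test statistic; software replaces the normal table.
    from scipy.stats import norm

    z = 2.13
    p_two_sided = 2 * norm.sf(abs(z))   # sf() gives the upper-tail area
    print(f"z = {z}, two-sided p-value = {p_two_sided:.4f}")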
Now, ten years post-GAISE, there are two NSF (National Science Foundation) grant-funded
randomization-based inference curricula currently under development in the nation, as well as
other smaller initiatives. I have spent considerable time with both of these curricula, including
attending conferences and workshops, participating in research studies, and reading professional
journals.
The first is the CATALYST project out of the University of Minnesota
(http://www.tc.umn.edu/~catalst/about), headed by Joan Garfield, Bob DelMas, et al.
CATALYST has been in development for several years. While I like this curriculum (especially
their metaphor that many introductory statistics classes teach students how to follow recipes, but
not how to really cook; that is, even if students leave a class able to perform routine procedures
and tests, they do not have the big picture of the statistical process that will allow them to solve
unfamiliar problems and to articulate and apply their understanding), I don't think this will be the
best fit for us. The course depends strongly upon a particular learning software called TinkerPlots
(http://www.tinkerplots.com/) and on a group-learning focus in the classroom.
I recommend that we adopt much of the second NSF project: ISI (Introduction to Statistical
Investigations), http://www.math.hope.edu/isi/. This large project
involves key leaders in Statistics education across four universities. The materials developed
include a text, applets and datasets, instructional materials, and assessment.
The text differs from traditional texts in both content and pedagogy. Statistical inference is
introduced using simulation-based methods in Chapter 1. This allows students to intuitively
understand the process of inference from the very beginning of their course. Concepts of
statistical inference are then explored throughout the entire text, rather than only in the last half,
as in many traditional texts.
A semester might be organized like this:
Statistical Significance (One Proportion)
Normal Distributions
Sampling
Rossman, A., Chance, B., & Medina, E., "Some key comparisons between Statistics and
Mathematics and why teachers should care," chapter in the 2006 National Council of Teachers
of Mathematics Yearbook, Thinking and Reasoning with Data and Chance.
"Combating anti-statistical thinking through the use of simulation-based methods throughout the
undergraduate curriculum," a white paper by the ISI team discussing the role of simulation-based
inference throughout the undergraduate statistics curriculum.
http://www.math.hope.edu/isi/presentations/white_paper_sim_inf_thru_curriculum.pdf
Here are some additional responses I sent out to the faculty last spring (4/12/2015).
First, quantitative literacy
The main elements of nearly any QL (Quantitative Literacy) examination would focus on
different things than a standard mathematics multiple-choice exam. Instead, QL assessments
would place great emphasis on determining whether the student could identify a method for
solving a problem or making a decision, do the math involved, and then communicate this in
narrative, numeric, and graphic form as needed.
http://www.slcc.edu/assessment/docs/Quantitative%20Literacy%20Rubric%20Development%20Guide%202014.pdf
Mathematics is not what makes an introductory Statistics course a Quantitative Literacy course,
either at our college or across the nation.
Second, the place for probability
While I included the Moore/Cobb article as a reference, there are many other papers, as well
as the GAISE standards and other more recent work in statistics education, all arriving at a
strong consensus that probability theory is not the focus of an introductory statistics course. This
course needs to be data-centered, focused on reasoning in the face of uncertainty. As we've
discussed previously, a formal presentation of topics like conditional probability is superfluous
to inferential statistics at this level.
However, I disagree that the reformed curriculum removes probability from the course.
Understanding probabilities is crucial to modeling, simulations, and inference, including
confidence intervals and hypothesis tests. Perhaps you are responding to not seeing it as a
separate topic in my list of topics for the course. The list was meant to provide an idea of the
ordering of topics in the course, not to be an exhaustive list of the student learning outcomes.
For example, the proposed ordering of topics starts with Statistical Significance (One
Proportion). We could start the course with some reading of articles, discussing the statistical
investigation method. We'd need to run through the very basics of probability and how to
interpret a probability as a long-run frequency. Using articles, we can work on defining
observational units and variables, types of variables (quantitative or categorical), and introduce
concepts of distribution, shape, center, and variability. Then we start working on inference on the
proportion, and introduce the tool of a hypothesis test. By starting with concrete simulations
(cards, dice, other simple probability models), then moving to more abstract (computerized
simulations), and lastly to the very abstract (traditional parametric theoretical models), we help
advance students' understanding. Students will start with interpreting p-values as a relative
frequency probability from their simulation, then move to the parametric test and interpret the
model-based p-value.
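As a rough sketch of that progression (my own, with made-up class data rather than anything from the ISI materials), the same one-proportion question can be answered first by simulating the null model and reading the p-value as a relative frequency, and then by the normal-approximation test students meet later.

    # Hypothetical class data: 33 "successes" out of 50 trials, null proportion 0.5.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n, observed = 50, 33
    p_null = 0.5

    # Simulation-based p-value: fraction of simulated samples at least as extreme.
    sims = rng.binomial(n, p_null, size=10_000)
    p_sim = np.mean(sims >= observed)

    # Model-based p-value from the normal approximation to the sampling distribution.
    se = np.sqrt(p_null * (1 - p_null) / n)
    z = (observed / n - p_null) / se
    p_model = norm.sf(z)

    print(f"simulation p-value = {p_sim:.3f}, normal-approximation p-value = {p_model:.3f}")

The point of the comparison is that the two answers agree closely, so the theoretical model appears as a shortcut for the simulation rather than as an unrelated recipe.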
I assert that this curriculum will ensure that students have a lot of exposure to probability
throughout the course.
Third, introductory statistics as an applied course versus a theory course
The current intro statistics course offers very, very little in statistical theory. For example, we
declare the Central Limit Theorem to exist, but do nothing to prove it. This is because we
cannot; students would need at least three semesters of Calculus and an Analysis course before