Chris Malone, Winona State University
I started teaching at Winona State University in 2002. I am fortunate to be able to teach a variety of statistics courses to students with an even wider variety of backgrounds here at Winona State. About 10 years ago, on the advice of a senior faculty member, I began doing statistical consulting work to balance my teaching duties. This had a remarkable impact on my teaching at the time, and consulting continues to shape my approach to teaching statistics to this day.
[pullquote]The traditional sequence was too compartmentalized and did not allow much time for students to conduct a statistical analysis from start-to-finish.[/pullquote]
Completing a thorough analysis for a consulting client requires that one (1) identify the client's research hypothesis, (2) assemble the data in a format appropriate for analysis, (3) obtain appropriate summary statistics and graphics, and (4) often apply an inferential procedure to formally test the research hypothesis. This procedural approach to analyzing data for a client did not match the approach I had been using to teach statistics. My teaching followed the traditional sequence: (1) methods of collecting data, (2) summary statistics, (3) probability, (4) sampling distributions, and (5) inferential procedures. Soon after I started consulting, I realized that a substantial change was needed in my approach to teaching. The traditional sequence was too compartmentalized and did not allow much time for students to conduct a statistical analysis from start-to-finish.
Recommendation #1 of the College Report of the Guidelines for Assessment and Instruction in Statistics Education (GAISE) encourages the promotion of statistical thinking, defined as the type of thinking that statisticians use when approaching and solving statistical problems. Thus, to promote statistical thinking, students should be given ample opportunities to complete an analysis from start-to-finish. Malone et al. (2010) discuss a resequencing of topics for an introductory course that more closely follows the approach taken by an applied statistician when analyzing data for a client. An introductory course using this new sequencing would start with methodology for a single categorical variable (i.e., a single proportion). This new sequencing did have one substantial roadblock: how could one teach inference for a single proportion without the normal approximation granted by the Central Limit Theorem? Simulation was my answer, and it has been my tool for overcoming this roadblock ever since.
Initially, simulation was introduced so that students could obtain empirical results that approximated the binomial distribution. My intent was to motivate the use of the binomial distribution and its associated exact test for a single proportion, which was necessary given that students had not yet been introduced to normal-based methods. I now believe that a simulation-based curriculum should be introduced not simply to defend a test or procedure, but to develop and enhance statistical thinking. One such example is provided next.[pullquote]A simulation-based curriculum should be introduced not simply to defend a test or procedure, but to develop and enhance statistical thinking.[/pullquote]
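To make this concrete, here is a minimal sketch (my own illustration, not from the original course materials) of how simulation can approximate the binomial sampling distribution and yield an empirical tail probability for a single proportion. The sample size, null proportion, and observed count are hypothetical numbers chosen for the example.

```python
import random
from math import comb

def simulate_counts(n=50, p0=0.5, reps=10_000, seed=1):
    """Empirical sampling distribution of the number of successes
    in n trials, generated under the null proportion p0."""
    rng = random.Random(seed)
    return [sum(rng.random() < p0 for _ in range(n)) for _ in range(reps)]

def exact_upper_tail(k, n=50, p0=0.5):
    """Exact binomial P(X >= k) under the null, for comparison
    with the simulated approximation."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

counts = simulate_counts()
observed = 32  # hypothetical observed number of successes out of 50
# Proportion of simulated null outcomes at least as large as the observed count
empirical_p = sum(c >= observed for c in counts) / len(counts)
print(empirical_p, exact_upper_tail(observed))
```

With enough repetitions the empirical tail proportion settles near the exact binomial probability, which is precisely the motivation described above: students see the binomial distribution emerge before any formula is presented.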
An understanding of statistical inference requires one to be able to evaluate a study's outcome from the perspective of the null hypothesis. Simulation is the mechanism by which this is done in my teaching. Simulation does not require burdensome mathematics. Simulation does not require taxing language. Simulation permits students to create, construct, and witness for themselves outcomes under the null hypothesis. The intrinsic details of statistical inference are presented through a discussion of likely versus unlikely outcomes. For example, a test's direction is determined by identifying the outcomes that support the research hypothesis, not simply by the direction of the inequality in the alternative hypothesis. Simulation allows the concept of a p-value to be developed naturally, and students more easily recognize it as a reasonable measure of evidence against the null hypothesis. More recently, I have used simulation-based approaches to introduce the concepts associated with the margin of error. I have started to use simulation-based methods in my upper division courses as well (e.g., to provide a more conceptual understanding of the standard error of a prediction from a regression model).
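The margin-of-error idea above can likewise be developed by simulation rather than formula. The sketch below (my own illustration, with hypothetical data of 32 successes in 50 trials) resamples counts using the observed proportion as if it were the truth, and reports roughly two standard deviations of the resampled proportions as an approximate 95% margin of error.

```python
import random

def simulated_margin_of_error(successes, n, reps=10_000, seed=1):
    """Approximate 95% margin of error for a sample proportion:
    simulate many counts of successes using the observed proportion
    as the assumed truth, then take about two standard deviations
    of the resulting simulated proportions."""
    rng = random.Random(seed)
    p_hat = successes / n
    props = [sum(rng.random() < p_hat for _ in range(n)) / n for _ in range(reps)]
    mean = sum(props) / reps
    sd = (sum((p - mean) ** 2 for p in props) / (reps - 1)) ** 0.5
    return 2 * sd

moe = simulated_margin_of_error(32, 50)
print(round(moe, 3))
```

Students can compare this simulated value to the textbook formula 2·sqrt(p̂(1−p̂)/n) and see that the two agree, which makes the formula a summary of something they have already watched happen rather than a rule handed down.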
I encourage you to consider the use of simulation-based methods in your teaching. Tintle et al. (2011) and others have provided direct evidence that simulation-based curricula add value to the learning of statistics. One important question remains unanswered: how much simulation-based curriculum is necessary? Its answer has implications for how the current simulation-based movement should evolve.
- Malone, C., Gabrosek, J., Curtiss, P., and Race, M. (2010). "Resequencing Topics in an Introductory Applied Statistics Course". The American Statistician. 64(1): 52-58.
- Tintle, N., VanderStoep, J., Holmes, V.-L., Quisenberry, B., and Swanson, T. (2011). "Development and Assessment of a Preliminary Randomization-Based Introductory Statistics Curriculum". Journal of Statistics Education. 19(1).