Are Normal-Based Methods “Smothering” Our Students’ Understanding of Inference?

Tisha Hooks, Winona State University

As an undergraduate student, I learned a lot in my intro stat course about what formulas/tables to use and when to use them; unfortunately, I learned very little about why these methods worked. As a young professor, I set out to give my students an experience that was very different from mine. Fortunately for me, I landed at Winona State University, where I was able to work with Chris Malone, who had recently revamped his intro course. One of the first papers Chris encouraged me to read was written by George Cobb (referenced below), and the following quote hit home: “Our curriculum is needlessly complicated because we put the normal distribution, as an approximate sampling distribution for the mean, at the center of our curriculum, instead of putting the core logic of inference at the center.” Early on in my career, I’m pretty sure that Chris and I talked at least once a day about how to center our curriculum on core inferential concepts, and I started using a simulation-based curriculum, which has allowed me to get to these core concepts early and often.

Even though I use a simulation-based approach, I haven’t completely abandoned traditional or normal-based methods. My course begins with simulation studies that allow me to introduce inference for a single proportion. Students estimate p-values by finding probabilities associated with a distribution simulated under the null hypothesis, and I then transition to having them find exact p-values with binomial probabilities. Essentially, I teach them that we can replace “the dots” (i.e., the simulated outcomes) with the binomial distribution to set up the null model (instead of setting up a simulation).

Once we have discussed inference for a single categorical variable, I move on to tests for two categorical variables. Here, students use randomization procedures to obtain a distribution of outcomes that are expected under the null and then estimate p-values from this null distribution. As before, I transition to an exact test for finding p-values (in this case, Fisher’s exact test) by replacing “the dots” with the hypergeometric distribution.

Finally, we move on to inference for a single mean, and I use a simulation-based approach for the last time. Students sample from various hypothetical finite populations set up under the null hypothesis to investigate the distribution of sample means and quickly see that, most of the time, a bell-shaped distribution emerges. At this point, I replace “the dots” one last time and introduce normal distribution theory. I discuss how the t-distribution can be used to set up the null model (instead of setting up a simulation) and how the t-test can be used to determine whether our observed result is extreme enough to be deemed statistically significant.
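To make the single-proportion step concrete, here is a minimal Python sketch (the counts and hypotheses are hypothetical, not data from my course) showing how the empirical p-value from a simulated null distribution lines up with the exact binomial p-value that replaces “the dots”:

```python
import random
import math

# Hypothetical example: test H0: p = 0.5 vs. Ha: p > 0.5
# after observing 16 successes in n = 25 trials.
n, observed, p0 = 25, 16, 0.5

# Simulation: generate many samples under the null and estimate the
# p-value as the share of simulated counts at least as extreme as 16.
random.seed(1)
reps = 10_000
counts = [sum(random.random() < p0 for _ in range(n)) for _ in range(reps)]
sim_p = sum(c >= observed for c in counts) / reps

# "Replacing the dots": the exact p-value P(X >= 16) from the
# binomial distribution, X ~ Binomial(25, 0.5).
exact_p = sum(math.comb(n, k) * p0**k * (1 - p0)**(n - k)
              for k in range(observed, n + 1))

print(round(sim_p, 3), round(exact_p, 3))
```

With enough repetitions the two answers agree to a couple of decimal places, which is exactly the bridge from empirical to theoretical probability described above.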
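The same “replace the dots” move for two categorical variables can be sketched as follows — a randomization test next to Fisher’s exact test built from hypergeometric probabilities (the 2×2 table below is made up for illustration):

```python
import random
import math

# Hypothetical 2x2 table: 11 of 15 improve on treatment, 5 of 15 on
# control; Ha: the treatment group improves more often.
n_trt, n_ctl = 15, 15
improved_trt = 11
total_improved = 11 + 5          # 16 improvements among 30 subjects
n_total = n_trt + n_ctl

# Randomization: shuffle the 30 outcomes across the two groups and
# count how often the treatment arm gets at least 11 improvements.
random.seed(1)
outcomes = [1] * total_improved + [0] * (n_total - total_improved)
reps = 10_000
hits = 0
for _ in range(reps):
    random.shuffle(outcomes)
    if sum(outcomes[:n_trt]) >= improved_trt:
        hits += 1
rand_p = hits / reps

# "Replacing the dots": Fisher's exact test sums hypergeometric
# probabilities P(X >= 11), where X counts improvements that land
# in the treatment arm under the null.
def hyper(k):
    return (math.comb(total_improved, k)
            * math.comb(n_total - total_improved, n_trt - k)
            / math.comb(n_total, n_trt))

exact_p = sum(hyper(k) for k in range(improved_trt,
                                      min(total_improved, n_trt) + 1))
print(round(rand_p, 3), round(exact_p, 3))
```

Again, the randomization estimate and the exact hypergeometric answer track each other closely, so students see the theoretical distribution as a stand-in for the shuffling they just did.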
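For the single-mean step, a short sketch of the kind of simulation described above (the population values here are invented, not ones I actually use in class) shows sample means piling up in a bell shape around the null mean — the picture that motivates swapping the dots for t theory:

```python
import random
import statistics

# Hypothetical finite population set up under the null; its mean
# plays the role of the null value mu0.
random.seed(1)
population = [random.randint(30, 70) for _ in range(1000)]
mu0 = statistics.mean(population)

# Draw many samples and record each sample mean -- these are "the dots."
n, reps = 20, 5000
means = [statistics.mean(random.sample(population, n)) for _ in range(reps)]

# The simulated null distribution of sample means is roughly
# bell-shaped and centered at mu0, which is what the t-distribution
# then models without any simulation.
print(round(statistics.mean(means), 2), round(mu0, 2))
```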
When I move on to tests for comparing two means, I no longer use simulation or randomization-based methods; instead, I rely on the fact that by this point most students have developed a strong, intuitive understanding of p-values, and I jump right into the t-test, using software to compute p-values. At this point I also rely on normal theory for constructing confidence intervals.
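A two-sample t statistic is easy enough to compute by hand, as in this sketch with invented data. One caveat: Python’s standard library has no t CDF, so the p-value below uses the normal distribution as a stand-in reference; the statistical software students actually use reports the p-value from the t distribution with the appropriate (Welch) degrees of freedom:

```python
import math
import statistics
from statistics import NormalDist

# Hypothetical two-sample data (illustrative values only).
group_a = [52.1, 48.3, 55.0, 49.7, 51.2, 53.8, 50.4, 54.1, 47.9, 52.6]
group_b = [46.5, 49.0, 44.2, 47.8, 45.9, 48.4, 43.7, 46.1, 47.3, 45.0]

def welch_t(x, y):
    """Two-sample t statistic with an unpooled (Welch) standard error."""
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    se = math.sqrt(vx / len(x) + vy / len(y))
    return (mx - my) / se

t_stat = welch_t(group_a, group_b)

# Normal approximation to the two-sided p-value; software would use
# the t distribution here instead.
approx_p = 2 * (1 - NormalDist().cdf(abs(t_stat)))
print(round(t_stat, 2), round(approx_p, 4))
```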

In summary, my transition to normal-based methods involves using simulations/randomizations to introduce the logic of inference, connecting the empirical probabilities obtained from simulation studies to theoretical probabilities used in traditional tests, and then forgoing the use of simulations and focusing instead on only the traditional t-test once students have developed an understanding of core inferential concepts. I can’t help but wonder what George Cobb would think of my approach. In the same paper referenced earlier, he wrote, “There’s a vital logical connection between randomized data production and inference, but it gets smothered by the heavy, sopping wet blanket of normal-based approximations to sampling distributions.” Is my use of the t-distribution for testing and confidence intervals “smothering” my students’ understanding? Would it be better to always take a conceptual approach and to never introduce normal-based theory in the intro course?

My gut feeling is no. First of all, I think that professors from our client disciplines might revolt if their students left my class having never heard of a t-test. Many statisticians might say, “Who cares? We’re the experts!” I agree to an extent, but I also believe that we have to serve our client disciplines well and that we have to prepare students for future courses in statistics or research methods. More importantly (though this is also a debatable issue), I often use normal-based approaches as a consulting statistician. Why shy away from teaching students methods we actually use or from tests that our software routinely conducts? Normal theory is a large part of the foundation of our discipline. I agree wholeheartedly with Cobb that it shouldn’t be the center of our curriculum, but I see no problem with it being present in our curriculum.

In conclusion, I do sometimes worry that teaching students multiple approaches to solve the same problem (e.g., using both simulations and a more traditional exact or normal-based test to test a claim regarding a certain parameter) might be overwhelming to some. Just trying to keep up, they might end up worrying more about the little details and focusing less on the big conceptual ideas that are important to me. Also, I do see some validity in the argument that normal-theory approximations are no longer needed now that we are equipped with appropriate computing power. At this point, however, I am going to continue using both approaches. I believe that simulation/randomization procedures are essential for developing core concepts of inference, and as I argued above, I also believe that normal-based methods have a rightful place in our curriculum. What I’m doing in my intro course may not be perfect, but it’s certainly better than what I was doing ten years ago. Hopefully I can say the same ten years from now.

Reference:
Cobb, G. (2007), “The Introductory Statistics Course: A Ptolemaic Curriculum?” Technology Innovations in Statistics Education, 1(1), Article 1. http://repositories.cdlib.org/uclastat/cts/tise/vol1/iss1/art1
