Bruce Evan Blaine, St. John Fisher College, Rochester NY
I teach simulation-based statistical inference methods (using R) in my 100-level Introduction to Data Science course. This course is the required first course for all Data Science minors and a service course for numerous departments. I love teaching statistical inference this way because it reconnects me (and my students) with Fisher’s original ideas and methods, and it expresses Tukey’s idea that we learn about populations by being in dialogue with data. In the context of this welcome return to the empirical framework through which we understand and teach statistical inference, I wonder why we still teach students null hypothesis significance testing (NHST) in the same old way. I expect we’re all aware of the vast literature accumulated over the past 40 years that is critical of NHST and of its role in the reproducibility crises in many disciplines. I feel that an introductory statistics or data science course that embraces simulation-based inference should also move away from teaching students conventional NHST methods for learning about populations.
I’m just encouraging us to think about whether the formal, reflexive method of classical NHST fits within an SBI pedagogical framework. Cohen and many others have urged us to replace NHST with inferential tools such as parameter estimation, effect size estimation, replication, and meta-analysis: tools that help us learn much more about our population of interest.
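The estimation-oriented alternatives Cohen advocates also live comfortably inside a simulation-based course. As a rough sketch (in Python rather than the R used in the course above, and with invented scores for two hypothetical groups), a bootstrap percentile confidence interval reports an effect estimate with uncertainty instead of a yes/no significance verdict:

```python
import random

random.seed(1)

# Invented scores for two hypothetical groups (illustrative only)
group_a = [78, 85, 90, 73, 88, 95, 81, 84]
group_b = [71, 80, 76, 69, 83, 78, 74, 72]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(group_a) - mean(group_b)

# Bootstrap: resample each group with replacement and recompute the
# difference in means many times
boot_diffs = []
for _ in range(10_000):
    resample_a = [random.choice(group_a) for _ in group_a]
    resample_b = [random.choice(group_b) for _ in group_b]
    boot_diffs.append(mean(resample_a) - mean(resample_b))

# 95% percentile interval: middle 95% of the bootstrap distribution
boot_diffs.sort()
ci_low, ci_high = boot_diffs[249], boot_diffs[9749]
print(f"estimate: {observed_diff:.2f}, 95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```

The point of the sketch is pedagogical: the same resampling machinery students already use for tests delivers an interval estimate, so shifting emphasis from NHST to estimation requires no new computational ideas.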
Mine Cetinkaya-Rundel, Duke University
Just a couple of years ago I would have answered the question “Why simulation-based?” with the following:
- opportunity to introduce inference before (or without) discussing details of probability distributions
- conceptual understanding of p-values – both the “assume the null hypothesis is true” part and the “observed or more extreme” part
Being able to introduce computation as an essential tool for conducting statistical inference is a huge benefit of simulation-based inference.
These are the reasons why, in the first chapter of OpenIntro Statistics (link), a textbook I co-authored, we decided to include a section on randomization tests. The Introductory Statistics with Randomization and Simulation (link) textbook takes these ideas a step further and provides an introduction to statistical inference entirely from a simulation-based perspective. I believe these are important reasons for teaching simulation-based inference, and many have already discussed them at length. However, for this post I’d like to focus on a lesser-discussed reason for teaching simulation-based inference: it provides an opportunity to teach computation.
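Both parts of the p-value logic mentioned in the bullets above can be made concrete in a few lines of code. As a hypothetical sketch (Python here, with invented treatment and control outcomes), a randomization test literally enacts “assume the null hypothesis is true” by shuffling group labels, and “observed or more extreme” by counting shuffled results at least as large as the real one:

```python
import random

random.seed(42)

# Invented outcomes for a treatment and a control group (illustrative only)
treatment = [12, 15, 14, 17, 16, 18]
control = [10, 11, 13, 12, 9, 14]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treatment) - mean(control)

# "Assume the null hypothesis is true": under the null, group labels are
# arbitrary, so shuffle them and recompute the statistic many times
pooled = treatment + control
n = len(treatment)
null_diffs = []
for _ in range(10_000):
    random.shuffle(pooled)
    null_diffs.append(mean(pooled[:n]) - mean(pooled[n:]))

# "Observed or more extreme": the proportion of shuffled differences at
# least as large as the one actually observed (one-sided)
p_value = sum(d >= observed for d in null_diffs) / len(null_diffs)
print(f"observed difference: {observed:.2f}, simulated p-value: {p_value:.4f}")
```

Every conceptual piece of the test corresponds to a visible line of code, which is exactly the opening for teaching computation alongside inference.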
Robin Lock, St Lawrence University
Around 1998, Allan Rossman and Beth Chance asked me to help out with a new edition of their popular Workshop Statistics book that would be adapted to use a new software package called Fathom that was being developed by Bill Finzer, then at KCP Technologies.
But I could detect light bulbs going on with students thinking, “Oh, that’s what he means by seeing what would happen if the null hypothesis is true!”
Fathom has a lot of neat tools designed to let students explore statistical concepts, including a facility that lets students easily select a sample from a dataset, define any statistic for that sample, and then quickly generate a new dataset containing the values of that statistic for many new samples.
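That Fathom workflow — sample, compute a statistic, collect the statistic over many new samples — can be mimicked in a few lines. As a rough sketch (Python with a made-up population of values, not Fathom itself), here is a simulated sampling distribution of the sample median:

```python
import random

random.seed(7)

# A made-up population to sample from (Fathom would show this as a collection)
population = list(range(1, 101))  # the values 1..100

def median(xs):
    s = sorted(xs)
    n = len(s)
    return (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

# Mimic Fathom's "collect measures": draw many samples and record a
# statistic from each one
sample_size = 20
medians = [median(random.sample(population, sample_size))
           for _ in range(5_000)]

# The collected medians form the simulated sampling distribution
print(f"mean of sample medians: {sum(medians) / len(medians):.1f}")
```

The appeal of the point-and-click version is that students see this loop happen visually; the code version makes the same loop explicit.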
Jill Vanderstoep, Hope College
To be completely honest, I didn’t choose to teach with simulation-based methods. Nathan Tintle made a curricular change, and I followed. To this day, I am so glad I did, because the changes have been the most refreshing and fun changes I have made in my Statistics classes over the twenty-plus years I have been teaching.
The CATALST group, University of Minnesota
Inspired by George Cobb’s plenary address at the first USCOTS in 2005, we began to explore ways to turn his ideas into an actual curriculum. We decided to explore the use of models and modeling in the course and, funded by an NSF grant, developed the CATALST curriculum.
Our guiding principle was to teach students to really cook, rather than follow recipes.
Our goal was to develop a course that focused on randomization methods and random sampling, taking away the traditional focus on the two-sample t-test. The CATALST course went through many iterations and had input from a team of great collaborators, including courageous graduate students who taught early versions of this radically different course. The cooking method uses randomization and repeated sampling to make statistical inferences. Even though there were many challenges, we feel we developed a course that engages students and stimulates them to think, build, and test models, and to understand the core ideas of statistical inference.
Josh Tabor, Canyon del Oro High School
Short answer: I teach with simulation-based methods because I believe they make it easier for students to understand the logic of inference and see statistics as a complete investigative process from asking questions to drawing conclusions.
I settled on two guiding principles that would inform the way I designed the course:
- Emphasize that Statistics is an investigative process, not a set of isolated skills.
- Stay focused on the logic of inference.
Chris Malone, Winona State University
I started teaching at Winona State University in 2002. I am fortunate to be able to teach a variety of statistics courses to students with an even wider variety of backgrounds here at Winona State. About 10 years ago, at the advice of a senior faculty member, I started doing statistical consulting work to balance my teaching duties. This had a remarkable impact on my teaching at that time. Consulting continues to this day to shape my approach to teaching statistics.
The traditional sequence was too compartmentalized and did not allow much time for students to conduct a statistical analysis from start-to-finish.
Dave Klanderman, Trinity Christian College
It would be accurate to say that I was a skeptic when I showed up at a week-long MAA workshop at Dordt College in June 2013. At the urging of my departmental colleague, who is now serving as our Provost, I signed up to learn more about a new statistics textbook, a new paradigm for teaching and learning statistics, and a chance to connect with both friends and family in Sioux Center, Iowa.
Additional sessions convinced me that this approach had merit, and the comparison data from the CAOS assessment provided the final piece of assessment evidence.
Lacey Echols, Butler University
I knew I was in trouble teaching statistics when, despite always thinking I was doing such a great job, the students were totally lost during the last four weeks of the semester! It didn’t matter how many times I used active learning in my classroom or how many great lectures I thought I presented; they just did not get the concept of a hypothesis test.
It is wonderful to present the ideas of how to think about research questions, the proper way to write hypotheses, the meaning of a p-value, and the meaning of statistical significance within the first four days of a semester.
The early part of the semester with data description, probability, and experimental design had lulled them into a false sense of security.