Bruce Evan Blaine, St. John Fisher College, Rochester, NY
I teach simulation-based statistical inference methods (using R) in my 100-level Introduction to Data Science course. This course is the required first course for all Data Science minors and a service course for numerous departments. I love teaching statistical inference this way because it reconnects me (and my students) with Fisher’s original ideas and methods, and expresses Tukey’s idea that we learn about populations by being in dialogue with data. In the context of this welcome return to the empirical framework through which we understand and teach statistical inference, I wonder why we still teach students null hypothesis significance testing (NHST) in the same old way. I expect we’re all aware of the vast literature accumulated over the past 40 years that is critical of NHST and its role in the reproducibility crises in many disciplines. I feel that an introductory statistics or data science course that embraces simulation-based inference should also move away from teaching students conventional NHST methods for learning about populations.
[pullquote]I’m just encouraging us to think about whether the formal, reflexive method of classical NHST fits within an SBI pedagogical framework. Cohen and many others have urged us to replace NHST with inferential tools such as parameter estimation, effect size estimation, replication, and meta-analysis—tools that help us learn much more about our population of interest.[/pullquote]
Continue reading →
Kie Van Ivanky Saputra, Head of the Applied Mathematics Program
We met Buzz and Doris when we wanted to learn statistics. Buzz and Doris are dolphins who could earn rewards if they managed to communicate with each other, and we used their trials to learn how to test statistically whether they were communicating. In 16 trials, Doris signaled to Buzz which button to press, and Buzz pushed the correct button in 15 of the 16 trials. Still not convinced that they were communicating, we assumed it was just a lucky day for them and simulated the experiment by tossing 16 coins to see whether we could get 15 heads out of 16 tosses. The first time we got only 9 heads out of 16, the second time 8 heads, and we continued until we had done 100 repetitions. The most we ever got was 12 heads out of 16 tosses. We continued the repetitions to 1,000, and out of 1,000 only one simulation gave us 15 heads out of 16 tosses. It now seemed implausible that the dolphins had simply had a lucky day; they were doing something more than guessing which button to press. Since that day, we have known more about the p-value and the null hypothesis.
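The coin-tossing simulation described above is easy to script once the tactile version has made its point. Here is a minimal sketch in Python (the posts in this blog mostly use R; the function and variable names here are my own):

```python
import random

def heads_in_16_tosses(n_reps=1000, seed=1):
    """Toss 16 fair coins, count the heads; repeat n_reps times."""
    rng = random.Random(seed)
    return [sum(rng.random() < 0.5 for _ in range(16)) for _ in range(n_reps)]

heads = heads_in_16_tosses()
# Approximate p-value: how often does pure guessing give 15 or more heads?
p_value = sum(h >= 15 for h in heads) / len(heads)
```

Under pure guessing, 15 or more heads out of 16 has probability 17/65536 (about 0.0003), so most runs of 1,000 repetitions will see it at most once, which matches the experience described above.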
[pullquote]The above was my first experience in teaching statistics with simulation-based inference.[/pullquote]
Continue reading →
Karsten Luebke, FOM, Germany
This post is based on joint work with Oliver Gansser, Matthias Gehrke, Bianca Krol, and Norman Markgraf.
The FOM is a private university of applied sciences in Germany for people who study while working. We offer several bachelor's and master's programs, mainly related to economics, at 29 study centers across Germany. The size of the courses with statistical content varies: from 15 to 150 students – or even more. [pullquote]We used a relaunch of our BA degree in Summer 2016 to rethink and rebuild our curriculum in the different introductory statistics courses.[/pullquote] Continue reading →
Matt Beckman, Penn State University
What this is & what this isn’t
This post is intended to share some pragmatic thoughts on teaching SBI in a large class, not on converting your curriculum to the SBI framework. A number of suggestions on the latter have been published in this blog and elsewhere. Besides, my colleague, Kari Lock Morgan, had already done a remarkable job accomplishing that feat in the course described here before I arrived. What follows are simply remarks about rubber-meets-the-road strategies from teaching an SBI course with 225 students to either capitalize on large class size or at least help navigate some logistical challenges that surface with increased enrollment.
[pullquote]What follows are simply remarks about rubber-meets-the-road strategies from teaching an SBI course with 225 students to either capitalize on large class size or at least help navigate some logistical challenges that surface with increased enrollment.[/pullquote]
Continue reading →
Karsten Maurer, Assistant Professor of Statistics, Miami University
As statisticians, we tend to think that if we just have enough data in front of us, we can get at the heart of what is going on in any scenario, and many statistics educators want to know what is going on with student learning outcomes under different curricula. So the solution is simple, right? Just collect a bunch of data on our students' learning outcomes under different curricula and identify the strongest pedagogy. We can even get fancy and toss in some experimental design to structure the application of treatments to our experimental units to support causal conclusions about impacts on learning outcomes. Alright, I am being facetious here. It is never that straightforward. I will admit that this was my first instinct when I set out to do educational research as a graduate student. There are a number of issues that constrain plans for what would be a tidy and straightforward educational experiment: defining the curricular treatments, assigning students to curricula, applying the curricular treatments, and measuring learning outcomes.
[pullquote]In order to reinforce the analyses from small-scale educational experiments like ours, we need to find a way to either eliminate or account for the classroom-based dependence structures.[/pullquote]
Continue reading →
David Diez, OpenIntro
The percentile bootstrap approach has made inroads into introductory statistics courses, sometimes accompanied by the incorrect declaration that it can be used without checking any conditions. Unfortunately, the percentile bootstrap performs worse than methods based on the t-distribution for small samples of numerical data. I would wager that the large majority of statisticians preach the opposite to be true, and I think this misplaced faith has created a small epidemic.
[pullquote]The percentile bootstrap is nothing new, but its weaknesses remain largely unknown in the community. I find myself wrestling with several considerations whenever I think about this topic.[/pullquote]
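The small-sample claim is easy to probe with a quick coverage simulation. Below is a minimal sketch in Python (my own illustration, not code from the post): draw many small samples from a skewed exponential population, then check how often a 95% t-interval and a 95% percentile bootstrap interval for the mean actually cover the true mean.

```python
import math
import random

def t_interval(x, tcrit=2.262):
    """95% t-interval for the mean (tcrit is t* for df = 9)."""
    n = len(x)
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
    half = tcrit * s / math.sqrt(n)
    return m - half, m + half

def percentile_bootstrap(x, B=500, rng=None):
    """95% percentile bootstrap interval for the mean."""
    rng = rng or random.Random(0)
    n = len(x)
    means = sorted(
        sum(rng.choice(x) for _ in range(n)) / n for _ in range(B)
    )
    return means[int(0.025 * B)], means[int(0.975 * B) - 1]

rng = random.Random(42)
reps = 300
cover_t = cover_boot = 0
for _ in range(reps):
    sample = [rng.expovariate(1.0) for _ in range(10)]  # true mean = 1
    lo, hi = t_interval(sample)
    cover_t += lo <= 1 <= hi
    lo, hi = percentile_bootstrap(sample, rng=rng)
    cover_boot += lo <= 1 <= hi
# Both intervals fall short of 95% coverage for skewed, small-n data;
# the percentile bootstrap typically falls further short.
```

The exact coverage numbers depend on the seed and the population, but the qualitative pattern described in the post shows up readily in simulations like this one.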
Continue reading →
Kari Lock Morgan, Assistant Professor of Statistics, Penn State University
Computers (or miniature versions such as smartphones) are necessary to do simulation-based inference. How, then, can we assess knowledge and understanding of these methods without computers? Never fear, this can be done! I personally choose to give exams without technology, despite teaching in a computer classroom once a week, largely to avoid the headache of proctoring a large class with internet access. Here are some general tips I’ve found helpful for assessing SBI without technology:
Much of the understanding to be assessed is NOT specific to SBI. In any given example, calculating a p-value or interval is but one small part of a larger context that often includes scope of inference, defining parameter(s), stating hypotheses, interpreting plot(s) of the data, calculating the statistic, interpreting the p-value or interval in context, and making relevant conclusions. The assessment of this content can be largely independent of whether SBI is used.
In lieu of technology, give pictures of randomization and bootstrap distributions. Eyeballing an interval or p-value from a picture of a bootstrap or randomization distribution can be difficult for students, difficult to grade, and an irrelevant skill to assess. Here are several alternative approaches to get from a picture and observed statistic to a p-value or interval without technology:
[pullquote]Choose examples with obviously small or not small p-values.[/pullquote]
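One low-tech route from a picture to a p-value is to have the figure caption report how many simulated statistics fall at or beyond the observed one, and let students do the division. A minimal sketch in Python (the counts here are made up for illustration):

```python
# An exam figure shows a randomization distribution built from
# 1000 simulated statistics; the caption notes that 23 of them
# fall at or beyond the observed statistic.
n_simulations = 1000
count_at_or_beyond = 23
p_value = count_at_or_beyond / n_simulations  # one-sided p-value of 0.023
```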
Continue reading →
Robin Lock, Burry Professor of Statistics, St. Lawrence University
I have the luxury of teaching in a computer classroom with 28 workstations that are embedded in desks with glass tops to show the monitor below the work surface. This setup has several advantages (in addition to enforcing a maximum class size of 28): computing is readily available at any point in class, yet I can easily see all of the students, they can see me (no peeking around monitors), and they still have a nice big flat surface to spread out notes, handouts, and, occasionally, a textbook (although many students now use an e-version of the text). I also have software on the instructor’s station (Smart Sync) that shows a thumbnail view of what’s on all student screens. Since the class is set up to use technology whenever needed and appropriate, it is natural to extend this to quizzes and exams, so my students routinely expect to use software as part of those activities.
[pullquote]Ideally I’d like to see what each student produces on the screen and how they interpret the output to make statistical conclusions, but it’s not practical to look over everyone’s shoulder as they work.[/pullquote] Continue reading →
Jo Hardin – Pomona College, Claremont, CA
Many of us will agree that using tactile demonstrations is super fun and can also be an excellent way to teach a particular concept. Students engage with the material differently when they can touch, smell, or taste the objects as opposed to only seeing or listening to a demonstration. The SBI blog has had many excellent articles describing in-class tactile simulations; see here, here, and here.
However, sometimes the logistical constraints of setting up the demonstration take away too much from an already packed 50-minute class session. And those details get even harder with large classes. One of the biggest challenges comes from collecting data or getting results back from the students. Although some classes have sophisticated clickers that make data collection easier, setting up and using clickers is also a logistical challenge (well worth it for using all semester, but not for a one-day class demonstration).
[pullquote]The conversation that ensues about the experimental design is incredibly valuable for understanding paired design (and the motivation for the pairing) or survival analysis (and the need for tools to analyze censored data). [/pullquote]
Continue reading →
When I am attempting to test understanding of carrying out a simulation test about a single proportion, I like to use the following problem, or some variation of it. I am fond of animals and of studies showing that animals are clever, so this study, and others like it, appeal to me.
A chimpanzee named Sarah was the subject in a study of whether chimpanzees can solve problems. Sarah was shown 30-second videos of a human actor struggling with one of several problems (for example, not being able to reach bananas hanging from the ceiling). Then Sarah was shown two photographs, one that depicted a solution to the problem (like stepping onto a box) and one that did not match that scenario. Researchers watched Sarah select one of the photos, and they kept track of whether Sarah chose the correct photo depicting a solution to the problem. Sarah chose the correct photo in 7 of the 8 scenarios she was presented with. In order to judge whether Sarah understands how to solve problems, we will define π to be the probability that Sarah will pick the photo of the correct solution. [pullquote]I don’t let them get away with just claiming that the p-value is some particular number – they have to explain how they know it is that number.[/pullquote] Continue reading →
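For Sarah's 7-of-8 result, the simulation test parallels the dolphin study earlier in this digest: under the null hypothesis π = 0.5, how often does guessing alone produce 7 or more correct choices? A sketch in Python (my own code, not the author's; the exact binomial value is included as a benchmark):

```python
import random

rng = random.Random(2024)
n_reps = 10000
# Count simulated 'guessing' chimps that get 7 or 8 of the 8 scenarios right
extreme = sum(
    sum(rng.random() < 0.5 for _ in range(8)) >= 7
    for _ in range(n_reps)
)
p_value = extreme / n_reps
# Exact binomial benchmark: P(X >= 7 | n = 8, pi = 0.5) = (8 + 1) / 2**8
exact = 9 / 256  # about 0.035
```

The simulated p-value lands close to the exact value of about 0.035, small enough to cast doubt on pure guessing, which is exactly the kind of explanation (count beyond the observed statistic, divided by total repetitions) that students are asked to give.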