One of the biggest challenges we all face as teachers of statistics is testing students’ statistical knowledge. For example, how do we know if the assessment questions we write are assessing students’ understanding of the statistical material and not some irrelevant construct? How do we know how many questions should be on an assessment to truly see if students are “getting it”? But these questions are only the tip of the iceberg. I know we also grapple with finding interesting contexts and datasets for assessing a particular statistical method; it is hard and time-consuming! I have often found an exciting example and dataset only to discover, once I start digging into the data, that it is beyond the level expected of my introductory statistics students (*sigh*…back to the drawing board). When I was asked to write this blog post, I thought it would be great to share an interesting question so that the task of assessment development isn’t so burdensome for others.[pullquote] I think it is important to note that I am not just asking them to conduct a randomization test, but am also asking them for interpretations and to think about how study design affects our conclusions. Asking them a multitude of questions for one context also saves me time coming up with different contexts and reduces cognitive load for my students.[/pullquote]

# Data Sets

The following data sets have been submitted by members of the SBI listserv.

- Hope College students (in 2003) wondered whether there are gender differences in how long people talk on their cell phones. They surveyed a sample of fellow students, recording each respondent’s gender (0 = female, 1 = male) and the length of their last cell phone call in seconds (which they could read off the phone). Dealing with the outlier in this data set makes it interesting. cellphonedata
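To give a flavor of the randomization test students might run on data like this, here is a minimal sketch in Python. The numbers below are made up for illustration (including an exaggerated outlier); they are not the actual Hope College data.

```python
import random

# Hypothetical call lengths in seconds (NOT the actual Hope College data)
female = [45, 120, 300, 60, 15, 600, 90, 30, 180, 240]
male = [30, 60, 15, 90, 45, 1200, 20, 10, 75, 50]  # note the outlier (1200 s)

observed = sum(female) / len(female) - sum(male) / len(male)

random.seed(1)
combined = female + male
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(combined)                 # re-randomize the group labels
    new_f = combined[:len(female)]
    new_m = combined[len(female):]
    diff = sum(new_f) / len(new_f) - sum(new_m) / len(new_m)
    if abs(diff) >= abs(observed):           # two-sided: as or more extreme
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.1f} s, p-value ~ {p_value:.3f}")
```

Because the test shuffles group labels rather than assuming a theoretical sampling distribution, the outlier’s influence is easy to explore: students can rerun the test with and without the 1200-second call and compare the p-values.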

# Random Sampling to Support Informal Inference using Census at School

**Alison Gibbs, University of Toronto**

In Canada, school curricula differ by province, but most Canadian mathematics curricula include glimpses of statistical thinking, typically in the middle grades. In the province of Ontario, tracing the statistics part of the curriculum through the grades reveals a progression in sophistication of tools for summarizing data, with some scattered mentions of the ideas of informal inference. Students are encouraged to make inferences from their observations, but typically without tools to support their generalizability. Teachers are aware that there are important statistical ideas their students need to understand to do this well. For example, they know that a larger sample size is usually better, but they don’t know how to show their students the effects of sample size on the inferences they can make. In addition, teachers often have the challenge of irregular access to technology and uneven expertise and support. In this context, I recently worked with a group of 15 middle school teachers on an activity that uses multiple random samples to better understand the effect of sample size, with only minimal need for technology.[pullquote]With the random sampler, students can draw random samples of data from the accumulated databases of questionnaire responses from students from participating countries. [/pullquote]

# Introducing resampling in Grade 10 in Tasmania, Australia

**Jane Watson, University of Tasmania**

I am a statistics educator in the Australian state of Tasmania. Recently I collaborated with a Grade 10 math teacher on a unit on statistics and probability to challenge her advanced mathematics class. The students’ backgrounds were traditional and procedural. There were eight extended lessons of 1½ hours using the *TinkerPlots* software.

[pullquote]…we gave students a variation on the famous Hospital problem: “Ted and Jed are each tossing a fair coin. Ted tosses his 10 times and Jed tosses his 30 times. Which one of them is more likely to get more than 60% heads or do they have the same chance?” Almost unanimously they said, “the same of course.”[/pullquote]
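The students’ intuition is easy to check by simulation. Here is a short Python sketch (not from the lesson itself, which used *TinkerPlots*) estimating each boy’s chance of getting more than 60% heads:

```python
import random

def prob_more_than_60pct_heads(n_tosses, trials=100_000, seed=42):
    """Estimate the chance of strictly more than 60% heads in n_tosses fair flips."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
        if heads / n_tosses > 0.60:
            hits += 1
    return hits / trials

ted = prob_more_than_60pct_heads(10)   # Ted: 10 tosses
jed = prob_more_than_60pct_heads(30)   # Jed: 30 tosses
print(f"Ted (10 tosses): {ted:.3f}   Jed (30 tosses): {jed:.3f}")
```

The simulation shows Ted, with the smaller sample, is noticeably more likely to exceed 60% heads, because proportions from small samples vary more. That extra variability is exactly the point of the Hospital problem.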

# Using simulation-based methods in New Zealand

**Stephanie Budgett, University of Auckland**

At the University of Auckland, our first year statistics course is large. By large we mean about 4500 students per year, with approximately 300 in our summer school semester (lasting 6 weeks, starting early January), 2500 in our first semester (lasting 12 weeks, starting early March) and 1800 in our second semester (lasting 12 weeks, starting mid-July). Apart from summer school, we teach in multiple streams with class sizes ranging from 100 to 400 students. Most of our students will not major in statistics and are taking the course because it is a requirement. Over one-half of our students will have taken a statistics course in their last year at school. These students will most likely have taken the *Use statistical methods to make a formal inference* standard, which includes bootstrap confidence intervals. A smaller percentage, say about 20%, may have taken the *Conduct an experiment to investigate a situation using experimental design principles* standard, which includes randomization tests.[pullquote] From a teaching perspective, we believe that the concept of the tail proportion in the randomization test enhances student understanding of *p*-values.[/pullquote]
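For readers less familiar with the bootstrap confidence intervals mentioned above, here is a minimal percentile-bootstrap sketch in Python. The data are invented for illustration and are not from the Auckland course:

```python
import random

# Hypothetical sample (e.g., hours of study per week); invented for illustration
sample = [4, 7, 2, 9, 5, 6, 3, 8, 5, 7, 10, 4, 6, 5, 8]
n = len(sample)

rng = random.Random(0)
boot_means = []
for _ in range(10_000):
    resample = [rng.choice(sample) for _ in range(n)]   # sample WITH replacement
    boot_means.append(sum(resample) / n)

boot_means.sort()
lower = boot_means[int(0.025 * len(boot_means))]        # 2.5th percentile
upper = boot_means[int(0.975 * len(boot_means))]        # 97.5th percentile
print(f"95% bootstrap CI for the mean: ({lower:.2f}, {upper:.2f})")
```

The same resampling machinery underlies the randomization test: there the tail proportion of the shuffled statistics serves as the p-value, which is why the two methods reinforce each other in teaching.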

# Teaching computation as an argument for simulation-based inference

**Mine Cetinkaya-Rundel, Duke University**

Just a couple years ago I would have answered the question “Why simulation based?” with the following:

- opportunity to introduce inference before (or without) discussing details of probability distributions
- conceptual understanding of p-values – both the “assume the null hypothesis is true” part and the “observed or more extreme” part

[pullquote]Being able to introduce computation as an essential tool for conducting statistical inference is a huge benefit of simulation-based inference. [/pullquote]These are the reasons why in the first chapter of OpenIntro Statistics (link), a textbook I co-authored, we decided to include a section on randomization tests. The Introductory Statistics with Randomization and Simulation (link) textbook takes these ideas a step further and provides an introduction to statistical inference completely from a simulation-based perspective. I believe these are important reasons for teaching simulation-based inference, and many have already discussed them at length. However, for this post I’d like to focus on a lesser-discussed reason for teaching simulation-based inference: it provides an opportunity to teach computation.

# Reflections after two years of simulation-based inference in AP statistics

**Andrew Walter, Shawnee Mission East High School**

I am in my second year of implementing simulation-based methods, and I’m thrilled with how they have enhanced my AP Statistics course. My struggles teaching the course are probably familiar to others: difficulty teaching vocabulary, difficulty spiraling review topics, and difficulty helping students grasp some of the key topics in ways that indicate true understanding. Using simulation-based inference methods throughout the school year has helped me address all of these concerns and more. I will briefly explain how I use this method in my class, and then comment specifically about how it has helped.[pullquote]Simulation activities are a perfect way to blend “hands-on” learning with using technology.[/pullquote]

# Archived webinar/e-conference sessions

**Listed below is a series of webinar/e-conference-style presentations given by faculty using, or considering the use of, SBI methods.**

- Batting for power (using a simulation-based approach). Allan Rossman and Beth Chance, Cal Poly San Luis Obispo. October 27, 2015.
- Reflections on making the switch to a simulation-based inference curriculum. Panelists: Julie Clark (Hollins), Lacey Echols (Butler), Dave Klanderman (Trinity), Laura Schultz (Rowan); moderator: Nathan Tintle. September 8, 2015.
- Teaching the statistical investigation process with randomization-based inference. Nathan Tintle (Dordt College) and Beth Chance (Cal Poly San Luis Obispo). eCOTS 2014.
- Teaching Randomization-based Methods in an Introductory Statistics Course: The CATALST Curriculum. Bob delMas, University of Minnesota. eCOTS 2014.
- StatKey – Online Tools for Teaching Bootstrap Intervals and Randomization Tests. Robin Lock, St. Lawrence University. August 27, 2013.
- Using Simulation to Introduce Inference for Regression. Josh Tabor, Canyon del Oro High School. May 28, 2013.
- Introducing inference with bootstrapping and randomization. Kari Lock Morgan, Duke University. eCOTS 2012.
- Using Simulation Methods to Introduce Inference. Kari Lock Morgan, Duke University. December 13, 2011.
- Bootstrapping and randomization: Seeing all the moving parts. Chris Wild, University of Auckland, New Zealand. November 22, 2011.
- Create an Iron Chef in statistics classes? Rebekah Isaak, Laura Le, Laura Ziegler, and the CATALST Team. June 14, 2011.
- Golfballs In The Yard – Using Simulation To Teach Hypothesis Testing. Randall Pruim, Calvin College. January 25, 2011.
- Using baboon “mothering” behavior to teach permutation tests. Thomas Moore, Grinnell College. September 14, 2010.
- Pedagogical simulations with StatCrunch. Webster West, Texas A&M University. July 13, 2010.
- Concepts of Statistical Inference: A Randomization-Based Curriculum. Allan Rossman & Beth Chance, Cal Poly – San Luis Obispo; and John Holcomb, Cleveland State University. April 14, 2009.
- Teaching Statistical Inference via Simulation using R. Daniel Kaplan, Macalester College. October 14, 2008.

# Dragged kicking and screaming by an Algebraist!

I teach in a very small department (we just increased from 3.5 to 4.5 tenure-track positions this year), but the support for statistics at Cornell College is pretty amazing. Consider, for instance, that for at least 30 years one of those tenure-track positions in the math department has been held by a statistician (me for the last 22 years). I’m also proud of the fact that for 40+ years the college has had a single introductory statistics course with multiple sections. This course is required for several majors and is the prerequisite for courses across the curriculum. Finally, when I was hired, the department and I agreed that when I taught math, I’d teach it the way the mathematicians wanted, and when they taught stat, they’d teach it the way I wanted. This agreement continues today, though I rarely teach math anymore.[pullquote]…in workshop fashion, I helped my math colleagues to explore the new material.[/pullquote]

# There’s no convincing necessary if you’re the boss: implementing the simulation-based approach with TA instructors

**Erin Blankenship, University of Nebraska-Lincoln**

Like many statistics faculty who completed their graduate training during the last century, I was prepared for teaching roughly as follows: I was handed a book (Moore & McCabe, 2^{nd} edition; it’s still on my shelf). While TA training, at least at my institution, has evolved since then, it had to adapt further to prepare TAs for the simulation-based inference approach.[pullquote]… [TAs] also attended the large class meetings and so could see how I was implementing the simulation methods.[/pullquote]