Monthly Archives: January 2017

Don’t Forget the Conditions for the Bootstrap!

David Diez, OpenIntro

The percentile bootstrap approach has made inroads into introductory statistics courses, sometimes with the incorrect declaration that it can be used without checking any conditions. Unfortunately, for small samples of numerical data the percentile bootstrap performs worse than methods based on the t-distribution. I would wager that a large majority of statisticians proselytize the opposite, and I think this misplaced faith has created a small epidemic.

The percentile bootstrap is nothing new, but its weaknesses remain largely unknown in the community. I find myself wrestling with several considerations whenever I think about this topic.
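To make the method under discussion concrete, here is a minimal sketch of a percentile bootstrap interval for a mean. The data values and number of resamples are invented for illustration; this is the basic procedure whose small-sample coverage the post questions, not an endorsement of it.

```python
# Sketch of the percentile bootstrap for a mean.
# The sample values and replicate count below are hypothetical.
import random
import statistics

random.seed(1)

sample = [4.1, 5.3, 2.8, 6.0, 3.9, 5.5, 4.7, 3.2]  # a small sample (n = 8)
B = 10000  # number of bootstrap resamples

boot_means = []
for _ in range(B):
    # Resample the data with replacement and record the resample mean
    resample = random.choices(sample, k=len(sample))
    boot_means.append(statistics.mean(resample))

boot_means.sort()
# 95% percentile interval: the 2.5th and 97.5th percentiles
# of the bootstrap distribution
lower = boot_means[int(0.025 * B)]
upper = boot_means[int(0.975 * B)]
print(f"95% percentile bootstrap CI: ({lower:.2f}, {upper:.2f})")
```

With n = 8, intervals built this way tend to be too narrow, which is exactly the small-sample weakness the post is pointing at.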

Continue reading

Assessing Knowledge and Understanding of Simulation-Based Inference Without Technology

Kari Lock Morgan, Assistant Professor of Statistics, Penn State University

Computers (or miniature versions such as smartphones) are necessary to do simulation-based inference. How, then, can we assess knowledge and understanding of these methods without computers? Never fear, this can be done! I personally choose to give exams without technology, despite teaching in a computer classroom once a week, largely to avoid the headache of proctoring a large class with internet access. Here are some general tips I’ve found helpful for assessing SBI without technology:

Much of the understanding to be assessed is NOT specific to SBI. In any given example, calculating a p-value or interval is but one small part of a larger context that often includes scope of inference, defining parameter(s), stating hypotheses, interpreting plot(s) of the data, calculating the statistic, interpreting the p-value or interval in context, and making relevant conclusions. The assessment of this content can be largely independent of whether SBI is used.

In lieu of technology, give pictures of randomization and bootstrap distributions. Eyeballing an interval or p-value from a picture of a bootstrap or randomization distribution can be difficult for students, difficult to grade, and an irrelevant skill to assess.  Here are several alternative approaches to get from a picture and observed statistic to a p-value or interval without technology:

Choose examples with obviously small or not small p-values.
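For readers who want to see what stands behind such a picture, here is a sketch of how a randomization distribution and its count-based p-value arise for a difference in means. The two groups and their values are invented for illustration; the point is that the p-value is just a count of re-randomized statistics as extreme as the observed one.

```python
# Hypothetical sketch of a randomization test for a difference in means.
# The group data below are invented for illustration.
import random
import statistics

random.seed(2)

group_a = [12.0, 15.1, 13.4, 16.2, 14.8]
group_b = [10.3, 11.9, 12.5, 9.8, 11.1]
observed = statistics.mean(group_a) - statistics.mean(group_b)

pooled = group_a + group_b
n_a = len(group_a)
reps = 10000
count_extreme = 0
for _ in range(reps):
    random.shuffle(pooled)  # re-randomize the group assignments
    sim_stat = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
    if sim_stat >= observed:  # one-sided: as or more extreme than observed
        count_extreme += 1

p_value = count_extreme / reps
print(f"observed difference = {observed:.2f}, one-sided p-value ~ {p_value:.4f}")
```

In an exam setting, students would see a dotplot of the `sim_stat` values with the observed statistic marked, and only need to reason about where the statistic falls relative to the distribution.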

Continue reading

Assessing Knowledge and Understanding of Simulation-Based Inference With Technology

Robin Lock, Burry Professor of Statistics, St. Lawrence University

I have the luxury of teaching in a computer classroom with 28 workstations embedded in desks with glass tops, so each monitor sits below the work surface. This setup has several advantages (in addition to enforcing a maximum class size of 28): computing is readily available at any point in class, yet I can easily see all of the students, they can see me (no peeking around monitors), and they still have a nice big flat surface to spread out notes, handouts, and, occasionally, a textbook (although many students now use an e-version of the text). I also have software on the instructor’s station (Smart Sync) that shows a thumbnail view of what’s on all student screens. Since the class is set up to use technology whenever needed and appropriate, it is natural to extend this to quizzes and exams, so my students routinely expect to use software as part of those activities.

Ideally I’d like to see what each student produces on the screen and how they interpret the output to make statistical conclusions, but it’s not practical to look over everyone’s shoulder as they work.

Continue reading