USCOTS 2009 - Breakout Session #2

  1. Allan Rossman & Beth Chance, Cal Poly - San Luis Obispo

    "Which Topics Can Be Let Go in Stat 101?"

    Is more less? Does depth beat breadth? Can our students in introductory statistics develop a better understanding of key concepts if we aim to cover fewer topics? Can we help these students learn to interpret, evaluate, and critique statistical studies more effectively if we stop trying to teach so many techniques? Looking at a standard introductory textbook reveals a huge number of topics, seemingly the union of all topics that an instructor could possibly want. In this session we will lead a discussion about which topics can be let go, in order to help students better understand the big ideas of statistics.

  2. Jessica Utts, University of California, Irvine

    "Seeing Through Statistics by Letting Go of Math"

    In April 2008 a study was published in the Proceedings of the Royal Society B titled "You Are What Your Mother Eats," asserting that children born to mothers who eat breakfast cereal are more likely to be boys than are children born to mothers who do not eat breakfast cereal. In January 2009, in the online Proceedings of the Royal Society B, statistician Stan Young and colleagues published an analysis showing that the result was probably a false positive. The original study tested 132 food items over two time periods, for a total of 264 tests. Young et al. noted that if there were in fact no true relationships between foods consumed and the baby's sex, we should expect about 13 false positives, and the study indeed found 13 "positive" results. Many of our students who take introductory statistics come away from the course able to compute a standard deviation, yet unable to spot an egregious example of poor statistical reporting such as this one. I think that means we are doing a poor job of educating the next generation of medical researchers, journal referees, policy-makers, journalists, and so on. We can do better, but how do we make the shift? Come prepared to discuss your ideas.
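    The multiple-testing arithmetic behind the "about 13 false positives" figure can be sketched numerically; this assumes the conventional 0.05 significance level, which the abstract implies but does not state:

    ```python
    # Expected number of false positives when running many hypothesis tests,
    # assuming every null hypothesis is true (no real food/sex relationships).
    n_tests = 264  # 132 food items tested over 2 time periods (from the study)
    alpha = 0.05   # assumed conventional significance level

    expected_false_positives = n_tests * alpha
    print(f"Expected false positives: {expected_false_positives:.1f}")  # about 13
    ```

    This is just the expectation of a Binomial(264, 0.05) count, which is why 13 "significant" results out of 264 tests is exactly what chance alone would predict.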

  3. Chris Wild, University of Auckland, New Zealand

    "Description, Inference and Animation"

    In this breakout session we will do two things related to Chris Wild's talk on informal inference. First, we will think about the language we use when describing data by deconstructing some GAISE examples given for the school level. We will ask, "What are they doing right now?" and "How would/could students know that?" These examples will be taken as exhibits of the way we statisticians think and write about data, and our goal is to understand how that might affect students. Participants will also experience an activity that provides a transition between hands-on activities and computer images, by sampling from populations of kiwi birds (actually cards in plastic bags) and transforming the samples into dot plots and box plots on overhead transparencies, which can be made to relate very directly to the computer animations in the talk.

  4. Bob delMas, University of Minnesota; & Marsha Lovett, Carnegie Mellon University

    "From Research to Learning Activities: Letting Go of the Old, Adding the New"

    Activities can be an effective way for students to learn, but they also take time. How do we identify what to remove and what to put into an activity to have the most impact on student learning? The purpose of this session is for participants to learn about designing instructional activities that are informed by statistics education and learning research. Participants will take part in a sample lesson excerpted from the NSF-funded Adapting and Implementing Innovative Material in Statistics (AIMS) curriculum. With this common ground of experience, participants will work in groups to identify aspects of the lesson that support student learning. The presenters will facilitate a discussion of how the identified instructional supports stem from learning and educational research findings to help participants generate a list of best practices. As a final activity, participants will work in groups to critique and revise another sample lesson, this time drawing on and applying the list of best practices to make the activity as effective as possible.

  5. Dennis Pearl, The Ohio State University

    "SCROUNGE: Statistics Can Respond to Opportunities Under NSF Grants for Education"

    Have an idea about teaching and learning that needs funding to implement? Let go of the idea that proposal writing is too difficult to attempt. In this session I will describe some of the key lessons I have learned about writing competitive grants for NSF's main college education funding source, the CCLI program. Working with ideas for topics arising from the audience, participants will be guided in preparing a brief outline of the goals and work plan for a grant, exploring possible partners in the endeavor, and drafting an outline of an evaluation plan.

  6. Amy Froelich, Iowa State University

    "Using JMP Statistical Discovery Software for Building Conceptual Understanding in Introductory Statistics" *

    Simulation is an important tool in teaching topics related to sampling distributions and inference in the introductory statistics class. Many of these simulations have been developed as Java applets and made available on the web. While these applets are easy to use and readily available to statistics instructors, they usually do not match classroom, laboratory, or homework activities and are generally different from the software used for data analysis. As a result, students can struggle with the transitions among classroom activities, assignments, data analysis, and computer simulation activities. In this session, I will lead participants through classroom lessons, homework, and laboratory activities that take advantage of JMP's powerful interface features to teach sampling and inference.

    * - Based on joint work with William M. Duckworth, Creighton University, and Wayne Levin and Brian McFarlane, Predictum, Inc., and funded by JMP Statistical Discovery Software, a division of SAS Institute, Inc.