Friday, May 17th, 2019 – 8:30am - 9:30am
Jane Watson (University of Tasmania)
Abstract: The ASA GAISE Report for schools outlines the practice of statistics (or statistical problem solving!) as appropriate at the school level, and it gives examples. But what actually happens in the classroom? This talk will introduce the Big Ideas underlying the practice of statistics at school and give some examples of Australian research in classrooms and the associated outcomes. In the classroom we want students to think like statisticians, but of course they do not have the tools that statisticians have for evaluating the evidence arising from the data they collect. Three activities will be presented from Australian research. In Grade 3, students were introduced to the Big Idea of Variation through comparing a hand-made product with a manufactured one. In Grade 5, a different cohort of students carried out the practice of statistics by exploring the question, “Are we environmentally friendly?” They first answered the question for their class, then sampled from a “population” of Australian Grade 6 students. In Grade 10, a class was introduced to TinkerPlots and resampling to collect and evaluate evidence about a difference between two groups and a relationship in a two-way table.
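The Grade 10 resampling activity (carried out in TinkerPlots in the study) can be sketched in code as a simple permutation test for a difference between two group means. The data and group sizes below are invented for illustration, not taken from the research.

```python
import random

def perm_test_mean_diff(group_a, group_b, n_perm=10_000, seed=1):
    """Permutation test: how often does a random relabelling of the
    pooled data produce a mean difference at least as large (in absolute
    value) as the one actually observed?"""
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = group_a + group_b          # new list; inputs are not mutated
    rng = random.Random(seed)           # seeded for reproducibility
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)             # random relabelling of all values
        diff = (sum(pooled[:len(group_a)]) / len(group_a)
                - sum(pooled[len(group_a):]) / len(group_b))
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm               # approximate two-sided p-value

# Hypothetical measurements from two groups of students
a = [158, 162, 165, 170, 171, 168]
b = [155, 159, 160, 163, 161, 158]
print(perm_test_mean_diff(a, b))
```

The point of the activity is the logic, not the arithmetic: a small p-value means a random relabelling rarely reproduces the observed gap, which is evidence against "just chance."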
Jane Watson studied at the Australian National University on a Fulbright Scholarship from Oklahoma University in 1967, later working there as a Research Assistant. She began as a Tutor in Mathematics at the University of Tasmania in 1972. During her time there she completed a PhD in Mathematics Education at Kansas State University, in her home town of Manhattan, Kansas. She joined the Faculty of Education at UTas in 1985, where she taught courses in mathematics and mathematics education for preservice teachers and masters courses for in-service teachers. Her major research focus since 1993 has been statistics education, with continuous Australian Research Council funding. Longitudinal data on students' understanding were collected for 10 years; studies considered higher-order statistical thinking when students worked collaboratively and the influence of cognitive conflict on students' statistical reasoning. From 2000, interest focused on the importance of student understanding of variation, including collaboration with Mike Shaughnessy in the US. Interest then turned to teachers and the professional development required to implement the curriculum given the results of research with students. Most recently, research returned to the classroom with intervention projects on beginning inference in Grades 4-6 and data modelling in relation to STEM in Grades 3-6.
Friday, May 17th, 2019 – 1:45pm - 2:45pm
Ron Wasserstein (American Statistical Association) & Allen Schirm (Mathematica Policy Research; retired)
Abstract: The speakers will discuss what they learned from editing (with Nicole Lazar) the 43 articles in the special issue of The American Statistician on “Statistical Inference in the 21st Century: A World Beyond P < 0.05.” After briefly reviewing the “Don’ts” set forth in the ASA Statement on P-Values and Statistical Significance—and adding a new one—they offer their distillation of the wisdom of the many voices in the special issue (and the broader literature): specific Do’s for teaching, doing research, and informing decisions as well as underlying principles for good statistical practice.
Ron Wasserstein is the executive director of the American Statistical Association, a position he has held since 2007. Previously he was a faculty member and the academic vice president at Washburn University. He is a fellow of the American Statistical Association and the American Association for the Advancement of Science. He also is a co-editor of the special issue of The American Statistician. Prior to becoming executive director of the ASA, he served in numerous volunteer positions for the association and was a member of the ASA Board of Directors. Wasserstein received the BA in mathematics from Washburn University and the MS and PhD in statistics from Kansas State University.
Allen Schirm retired from Mathematica Policy Research in 2016 after more than 27 years, during which he held several positions, including Vice President, Director of Human Services Research, Director of Methods, and Senior Fellow. He is a fellow of the American Statistical Association and a former chair of its Social Statistics Section. In 2016, Allen was designated a National Associate of the National Academies of Sciences, Engineering, and Medicine “in recognition of extraordinary service” to the National Academies. Recently, he served as co-editor of a special issue of The American Statistician entitled “Statistical Inference in the 21st Century: A World Beyond ‘P<0.05’,” published in March 2019. Allen received an A.B., summa cum laude, in statistics from Princeton University and a Ph.D. in economics from the University of Pennsylvania.
Saturday, May 18th, 2019 – 8:30am - 9:30am
John Kruschke, Provost Professor of Psychological and Brain Sciences, (Indiana University)
Abstract: Frequentist and Bayesian statistical methods are typically taught in separate courses. I propose instead that frequentist and Bayesian methods can be fruitfully taught together. By setting the two approaches side by side, the goals and concepts of each approach are delineated more clearly. The teaching framework incorporates the essential topics of (i) hypothesis testing and (ii) estimation with uncertainty. The progression of topics is guided by a two-by-two table (based on the framework of Kruschke & Liddell, 2018), with columns for frequentist and Bayesian approaches and rows for hypothesis testing and estimation with uncertainty. Teaching and learning are facilitated by an interactive Shiny app arranged in a corresponding table. The app juxtaposes the different information provided by the different approaches and interactively reveals each approach's dependence on different assumptions. The teaching framework integrates the introduction of Bayesian methods and clarifies frequentist ideas.
Reference cited: Kruschke, J. K. and Liddell, T. M. (2018). The Bayesian new statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychonomic Bulletin & Review, 25, 178-206. DOI: https://doi.org/10.3758/s13423-016-1221-4
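The "estimation with uncertainty" row of the two-by-two table can be illustrated with a toy binomial example (not from the talk): a frequentist Wald confidence interval placed next to a Bayesian central posterior interval under a uniform Beta(1,1) prior, the posterior evaluated on a simple grid rather than with a statistics library. The data (14 successes in 20 trials) are invented.

```python
import math

def wald_interval(successes, n, z=1.96):
    """Frequentist ~95% confidence interval (normal approximation)."""
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

def beta_posterior_interval(successes, n, prior_a=1, prior_b=1,
                            mass=0.95, grid=100_001):
    """Bayesian central 95% posterior interval for a binomial proportion
    under a Beta(prior_a, prior_b) prior, computed on a grid of p values."""
    a, b = prior_a + successes, prior_b + (n - successes)
    ps = [i / (grid - 1) for i in range(grid)]
    # unnormalised Beta(a, b) posterior density at each grid point
    dens = [p ** (a - 1) * (1 - p) ** (b - 1) for p in ps]
    total = sum(dens)
    cdf, c = [], 0.0
    for d in dens:
        c += d / total
        cdf.append(c)
    lo = next(p for p, cd in zip(ps, cdf) if cd >= (1 - mass) / 2)
    hi = next(p for p, cd in zip(ps, cdf) if cd >= 1 - (1 - mass) / 2)
    return lo, hi

# Hypothetical data: 14 successes in 20 trials
print(wald_interval(14, 20))            # frequentist column of the table
print(beta_posterior_interval(14, 20))  # Bayesian column of the table
```

The two intervals are numerically similar here but answer different questions, which is exactly the contrast the side-by-side table is meant to surface.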
John K. Kruschke is Provost Professor of Psychological and Brain Sciences at Indiana University in Bloomington. He is an eight-time winner of Teaching Excellence Recognition Awards from Indiana University and was named Provost Professor for achieving distinction in both teaching and research. He has authored tutorial articles on Bayesian data analysis published in scientific journals, and he is the author of the textbook Doing Bayesian Data Analysis, 2nd Edition: A Tutorial with R, JAGS, and Stan (2015). Kruschke has taught introductory applied Bayesian or frequentist data analysis for 30 years to graduate students from many disciplines across the sciences. He has also presented 45 workshops on Bayesian data analysis, most of them extended multiday courses, to diverse audiences from academia, government, business, and industry.
Kruschke has a bachelor’s degree in mathematics (with high distinction in general scholarship) from the University of California at Berkeley, and he has a doctoral degree in Psychology also from U. C. Berkeley. For his early theoretical and empirical work on mathematical models of attention in learning, Kruschke won a FIRST Award from NIMH and the Troland Research Award from the National Academy of Sciences (USA). He then switched focus to disseminating Bayesian methods for data analysis. In recent years his research has turned to the science of moral judgment, for which he won the Remak Distinguished Scholar Award from Indiana University.
(For a more lighthearted biographical sketch, see http://www.indiana.edu/~kruschke/ProvostBioSketchComparisonForWeb2.html)
Saturday, May 18th, 2019 – 1:45pm - 2:45pm
Kari Lock Morgan, Department of Statistics, Pennsylvania State University
Abstract: There is a lot a statistician can (and should) consider when evaluating evidence, and not all of it can be mastered after a single course. What aspects of evaluating evidence are most important for our introductory students to understand? I’ll provide my answer here, drawing from backgrounds in causal inference and statistics education, and will open this up for more general discussion in the breakout to follow. Suppose we want to evaluate evidence that A causes better outcomes than B. If better outcomes are observed in the A group, I want my students to deeply understand that there are (at least) three possible explanations for this:
- (a) random chance
- (b) the groups differed at baseline (confounding)
- (c) A causes better outcomes than B.
Therefore, evaluating evidence for (c) requires first evaluating evidence against (a) and (b). While inference and confounding are certainly not new to introductory courses, I will argue that they should remain cornerstones of an introductory course, but with modern approaches that focus on understanding and unite under the umbrella of evaluating evidence.
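Explanation (b) can be sketched with a tiny hypothetical dataset (not from the talk): outcomes depend only on a baseline variable, yet the groups differ in their baseline mix, so a naive A-versus-B comparison suggests A "works." Comparing within baseline strata makes the gap vanish. All values below are invented for illustration.

```python
# Each record: (group, baseline, outcome). Outcome depends only on
# baseline, so explanation (c) is false; any raw A-vs-B gap is pure
# confounding, i.e. explanation (b).
data = [
    ("A", "high", 10), ("A", "high", 10), ("A", "high", 10), ("A", "low", 4),
    ("B", "high", 10), ("B", "low", 4), ("B", "low", 4), ("B", "low", 4),
]

def mean_outcome(records):
    vals = [y for _, _, y in records]
    return sum(vals) / len(vals)

a = [r for r in data if r[0] == "A"]
b = [r for r in data if r[0] == "B"]
naive_gap = mean_outcome(a) - mean_outcome(b)   # looks like A helps

# Compare like with like: within each baseline stratum the gap is zero.
stratum_gaps = []
for level in ("high", "low"):
    ga = [r for r in a if r[1] == level]
    gb = [r for r in b if r[1] == level]
    stratum_gaps.append(mean_outcome(ga) - mean_outcome(gb))

print(naive_gap, stratum_gaps)
```

Random assignment makes such baseline imbalance unlikely by design, which is why ruling out (b) is so much easier in a randomized experiment than in observational data.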
Bio: Kari Lock Morgan received her PhD in statistics from Harvard University and is now an assistant professor of statistics at Penn State. She was the recipient of the national 2018 Robert V. Hogg Award for Excellence in Teaching Introductory Statistics. She is a past program chair of eCOTS, Associate Director for Professional Development for CAUSE, on the ASA/MAA Joint Committee for Undergraduate Statistics Education, and has served on the Executive Committee for the ASA Section on Statistics Education. As one of the five Locks, she is a co-author of Statistics: Unlocking the Power of Data, and co-developer of StatKey. Her research interests include causal inference, simulation-based inference, and statistics education.