What are the Clues for Intuitive Assessment of Variability?


Book: 
Papers Presented at SRTL-3
Authors: 
Lann, A. & Falk, R.
Editors: 
Lee, C. & Satterlee, A.
Year: 
2003
Publisher: 
SRTL-3, Lincoln, Nebraska
Abstract: 

The most prominent characteristic of people's dealings with variability (that we are aware of to date) is their tendency to eliminate or underestimate the dispersion of the data (e.g., Kareev, Arnon & Horowitz-Zeliger, 2002), that is, the differences among individual observations and among means of samples from a population (Tversky & Kahneman, 1971). One typically focuses on the average and forgets about the individual differences in the material.

Shaughnessy and Pfannkuch (2002) report that when asked to analyze a set of data, many students simply calculated a mean or a median. They claim that past teaching and textbooks concentrated heavily on such measures and neglected variation. Shaughnessy and Pfannkuch maintain, however, that variability is important: it exists in all processes, and understanding variation is the central element of any definition of statistical thinking. They quote David Moore's slogan "variation matters" (p. 255).

In the history of statistics, the tendency to eliminate human variability was represented (in the first half of the 19th century) by Quetelet, who focused on regularities. According to Gigerenzer et al. (1989), Quetelet understood variation within species as something akin to measurement or replication error: the average expressed the "essence" of humankind. "Variations from the average man were accidental - matters of chance - in the same sense that measurement errors were" (p. 142). Quetelet's conception of variation was diametrically opposed to Darwin's, who focused on variability itself and regarded variations from the mean as the crucial material of evolution by natural selection.

One example of the tendency to ignore variability is obtained when assignment of probabilities (or weights) to a set of possible outcomes is called for. People often tend to distribute these probabilities equally over the available options (Falk, 1992; Lann & Falk, 2002; Pollatsek, Lima & Well, 1981; Zabell, 1988), employing what we call the uniformity heuristic. Equi-probability, or zero variability among the probabilities, is the simplest and easiest choice to fall back on.
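As a brief illustration of that last point (ours, not from the paper): if equal weights are assigned to $n$ possible outcomes, the variance among the assigned probabilities is exactly zero, which is the precise sense in which the uniformity heuristic eliminates variability.

\[
p_i = \frac{1}{n}, \qquad
\bar{p} = \frac{1}{n}\sum_{i=1}^{n} p_i = \frac{1}{n}, \qquad
\operatorname{Var}(p) = \frac{1}{n}\sum_{i=1}^{n}\bigl(p_i - \bar{p}\bigr)^2 = 0 .
\]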
