Theory

  • In describing the work of the nineteenth-century statistician Quetelet, Porter (1986) suggested that his major contribution was in "...persuading some illustrious successors of the advantage that could be gained in certain cases by turning attention away from the concrete causes of individual phenomena and concentrating instead on the statistical information presented by the larger whole" (p. 55). This observation describes the essence of a statistical perspective: attending to features of aggregates as opposed to features of individuals. In attending to where a collection of values is centered and how those values are distributed, statistics deals for the most part with features belonging not to any of the individual elements, but to the aggregate which they comprise. While statistical assertions such as "50% of marriages in the U.S. result in divorce" or "the life expectancy of women born in the U.S. is 78.3 years" might be used to make individual forecasts, they are more typically interpreted as group tendencies or propensities. In this article, we raise the possibility that some of the difficulty people have in formulating and interpreting statistical arguments results from their not having adopted such a perspective, and that they make sense of statistics by interpreting them using more familiar, but inappropriate, comparison schemes.

  • In everyday teaching, the mathematical meaning of new knowledge is frequently devalued during the course of ritualized formats of communication, such as the "funnel pattern", and is replaced by social conventions. Problems of understanding occurring during the interactively organized elaboration of the new knowledge require an analysis of the interplay between the social constraints of the communicative process and the epistemological structure of the mathematical knowledge. Specific aspects of the problem of meaning development are investigated in the course of two exemplary second-grade teaching episodes. These are then used to develop and discuss decisive requirements for the maintenance of an interactive constitution of meaning for mathematical knowledge.

  • Counterbalanced designs are ubiquitous in cognitive psychology. Researchers, however, rarely perform optimal analyses of these designs and, as a result, reduce the power of their experiments. In the context of a simple priming experiment, several idealized data sets are used to illustrate the possible costs of ignoring counterbalancing, and recommendations are made for more appropriate analyses. These recommendations apply to assessment of both reliability of effects over subjects and reliability of effects over stimulus items.

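A minimal sketch of the kind of analysis this abstract alludes to, not taken from the paper: the counterbalancing list is included as an explicit factor in the subjects analysis instead of being collapsed over, so that list-related variance is removed from the error term. The design, variable names, and effect sizes below are illustrative assumptions, and the ordinary-least-squares fit is a simplification of a full repeated-measures analysis.

```python
# Hypothetical two-list counterbalanced priming design (illustrative values only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for subj in range(40):
    cb_list = subj % 2                    # counterbalancing list (0 or 1)
    for prime in (0, 1):                  # 0 = unrelated prime, 1 = related prime
        rt = 600 - 30 * prime + 20 * cb_list + rng.normal(0, 15)
        rows.append(dict(subject=subj, cb_list=cb_list, prime=prime, rt=rt))
df = pd.DataFrame(rows)

# Common practice: ignore the counterbalancing list entirely.
naive = smf.ols("rt ~ C(prime)", data=df).fit()
# Alternative: include the list as a factor, removing its variance
# (and its interaction with priming) from the error term.
full = smf.ols("rt ~ C(prime) * C(cb_list)", data=df).fit()

print(anova_lm(naive, typ=2))
print(anova_lm(full, typ=2))  # smaller residual mean square -> more powerful test of priming
```
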
  • Twenty-two university students who did not initially know the quantitative rule for predicting whether a configuration of weights placed on a balance beam would cause the beam to balance, tip left, or tip right were asked to induce the rule in a training procedure adapted from Siegler (1976). For each of a series of balance beam problems, subjects predicted the action of the beam and explained how they arrived at their prediction. Protocols revealed that although all subjects realized early on that both weight and distance were relevant to their predictions, they used a variety of heuristics prior to inducing the correct quantitative rule. These heuristics included instance-based reasoning, qualitative estimation of distance, and the use of quantitative rules of limited generality. The common use of instance-based reasoning suggests that learning to understand the balance beam cannot be described completely in terms of a simple rule acquisition theory. Also, the variability in the use of heuristics across subjects suggests that no simple theory that depicts subjects as linearly progressing through a hierarchy of levels can adequately describe the development of balance understanding.

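The quantitative rule at issue is the familiar torque rule: compare weight times distance on the two sides of the fulcrum. A minimal sketch of that rule (the function name and example values are hypothetical, not drawn from the study):

```python
def predict_balance(left_weight, left_distance, right_weight, right_distance):
    """Predict a balance beam's action using the torque (weight x distance) rule."""
    left_torque = left_weight * left_distance
    right_torque = right_weight * right_distance
    if left_torque > right_torque:
        return "tip left"
    if right_torque > left_torque:
        return "tip right"
    return "balance"

# Example: 3 weights at distance 2 on the left vs. 2 weights at distance 4 on the right.
print(predict_balance(3, 2, 2, 4))  # -> "tip right" (torque 6 < torque 8)
```
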
  • Measures of biologic and behavioural variables on a patient often estimate longer term latent values, with the two connected by a simple response error model. For example, a subject's measured total cholesterol is an estimate (equal to the best linear unbiased estimate (BLUE)) of the subject's latent total cholesterol. With known (or estimated) variances, an alternative estimate is the best linear unbiased predictor (BLUP). We illustrate and discuss when the BLUE or BLUP will be a better estimate of a subject's latent value given a single measure on a subject, concluding that the BLUP estimator should be routinely used for total cholesterol and per cent kcal from fat, with a modified BLUP estimator used for large observed values of leisure time activity. Data from a large longitudinal study of seasonal variation in serum cholesterol form the backdrop for the illustrations. Simulations that mimic the empirical and response error distributions are used to guide the choice of an estimator. We use the simulations to describe criteria for estimator choice, to identify parameter ranges where BLUE or BLUP estimates are superior, and to discuss key ideas that underlie the results.

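Under the simple response error model the abstract describes (observed value = latent value + measurement error, with latent values varying around a population mean), the BLUE from a single measure is the measure itself, while the BLUP shrinks the measure toward the population mean by the reliability ratio. A minimal sketch under that assumed model; the parameter values are illustrative, not the study's estimates:

```python
def blue_single_measure(y):
    """BLUE of a subject's latent value from one measurement: the measurement itself."""
    return y

def blup_single_measure(y, mu, var_between, var_error):
    """BLUP: shrink the measurement toward the population mean mu by the
    reliability ratio var_between / (var_between + var_error)."""
    shrinkage = var_between / (var_between + var_error)
    return mu + shrinkage * (y - mu)

# Illustrative values (not from the study): population mean total cholesterol 200 mg/dL,
# between-subject variance 900, response error variance 400.
y_observed = 260.0
print(blue_single_measure(y_observed))                       # 260.0
print(blup_single_measure(y_observed, 200.0, 900.0, 400.0))  # 200 + (900/1300) * 60 ~ 241.5
```
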
  • Higher education faces an environment of financial constraints, changing customer demands, and loss of public confidence. Technological advances may at last bring widespread change to college teaching. The movement for education reform also urges widespread change. What will be the state of statistics teaching at the university level at the end of the century? This article attempts to imagine plausible futures as stimuli to discussion. It takes the form of provocations by the first author, with responses from the others on three themes: the impact of technology, the reform of teaching, and challenges to the internal culture of higher education.

  • This article begins with some context setting on new views of statistics and statistical education. These views are reflected, in particular, in the introduction of exploratory data analysis (EDA) into the statistics curriculum. Then, a detailed example of an EDA learning activity in the middle school is introduced, which makes use of the power of the spreadsheet to mediate students' construction of meanings for statistical conceptions. Through this example, I endeavor to illustrate how an attempt at serious integration of computers in teaching and learning statistics brings about a cascade of changes in curriculum materials, classroom praxis, and students' ways of learning. A theoretical discussion follows that underpins the impact of technological tools on teaching and learning statistics by emphasizing how the computer lends itself to supporting cognitive and sociocultural processes. Subsequently, I present a sample of educational technologies, which represents the sorts of software that have typically been used in statistics instruction: statistical packages (tools), microworlds, tutorials, resources (including Internet resources), and teachers' metatools. Finally, certain implications and recommendations for the use of computers in the statistical educational milieu are suggested.

  • The community of statisticians and statistics educators should take responsibility for the evaluation and improvement of software quality from the perspective of education. The paper will develop a perspective, an ideal system of requirements, to critically evaluate existing software and to produce future software more adequate both for learning and doing statistics in introductory courses. Different kinds of tools and microworlds are needed. After discussing general requirements for such programs, a prototypical ideal software system will be presented in detail. It will be illustrated how such a system could be used to construct learning environments and to support elementary data analysis with an exploratory working style.

  • Twenty-five years ago, the term "technology" had a rather different meaning than it does today. Anything other than chalk-and-talk or paper-and-pencil was considered technology for teaching. This might have included anything from fuzzy-felt boards to mechanical gadgets, as well as the multimedia of that period (i.e., television, tape recordings, films, and 35mm slides). The title of this Round Table talk refers to "technology"; however, the papers are concerned mainly with computers and software. The occasional reference to calculators is really only a variation on this theme, because they are essentially hand-held computers. This is merely an observation, not a criticism. The re-invention of the meaning of the term 'technology' is something to which we have all been a party. The developments in computers and computing during the past quarter of a century have been so profound that it is not surprising that they replaced other technological teaching aids. This does not mean that we should forget such alternative aids altogether, nor the need to research their effective use. However, it is obvious that computers have significantly increased the range, sophistication, and complexity of possible classroom activities. Computer-based technology has also brought with it many new challenges for the teacher who seeks to determine what it has to offer and how that should be delivered to students. Innovations in this area tend to be accompanied by a number of myths that have crept into our folklore and belief systems. Myths are not necessarily totally incorrect: They often have some valid foundation. However, if allowed to go unchallenged, a myth may influence our strategies in inappropriate ways. This Round Table conference provides a timely opportunity to recognize and examine the myths that govern innovations and implementations of technology in the classroom, and to establish the extent to which our approaches are justified.

  • Based on a review of research and a cognitive development model (Biggs & Collis, 1991), we formulated a framework for characterizing elementary children's statistical thinking and refined it through a validation process. The 4 constructs in this framework were describing, organizing, representing, and analyzing and interpreting data. For each construct, we hypothesized 4 thinking levels, which represent a continuum from idiosyncratic to analytic reasoning. We developed statistical thinking descriptors for each level and construct and used these to design an interview protocol. We refined and validated the framework using data from protocols of 20 target students in Grades 1 through 5. Results of the study confirm that children's statistical thinking can be described according to the 4 framework levels and that the framework provides a coherent picture of children's thinking, in that 80% of them exhibited thinking that was stable on at least 3 constructs. The framework contributes a domain-specific theory for characterizing children's statistical thinking and for planning instruction in data handling.
