• Mathematicians from the Greeks on have used simple physical or visual models to understand and create new mathematics. The history of innovation in geometry, probability, and calculus is full of examples of commonplace or mundane models explicating and motivating new ideas. Modern research statisticians use the same strategies. Ask an expert in experimental design what he knows about, and how he thinks about, an industrial experiment. Often you will get an extraordinarily naive answer. You discover that he has cheerfully ignored important, even critical, physical details of the industrial process, and yet industry amply compensates our apparently naive experimental design colleagues. Perhaps industry has learned some lessons that we as teachers of statistics have forgotten. In this paper we argue that our undergraduate students need to be able to view, construct, and manipulate mundane models, and that this is a critical part of undergraduate mathematics and statistics education. All this may seem obvious, but in recent decades a number of forces have contributed to a decline in our students' ability to approach statistics through visual models of mathematics.

  • In the best academic tradition, we start with a definition: the "statistically literate" individual can "dis-aggregate and re-aggregate" to operate effectively as a statistical consumer. That is, he/she can extract, transform, and interpret information, and can transmit it in terms your father -- your boss -- your client can understand and use. Evidently, this is a specialized version of the criteria for the label "quantitatively literate," which we will also use. We have been concerned for some time about the ways in which introductory statistics courses can better contribute to the goal of quantitative literacy for all adults and, in particular, for those whose secondary school experience may have left them mathematically dysfunctional or underskilled. In particular, we have wondered about the large number of students in the two-year colleges.

  • This is the report of a working group on Technology and Statistics that explores the role of technology in improving how students model and analyze data to understand the world around them. The paper first addresses curricular goals for students at different grade levels. A section on modeling issues examines data analysis as an investigative process in which students construct, examine, and interpret models of the world. Other sections of the paper describe the unique capabilities technology offers for teaching and learning statistics, provide examples of existing technology designed to help students develop important concepts, and offer recommendations for technology to be developed over the next decade. Research issues and teacher preparation concerns are also addressed.

  • Statistics education has become a significant part of the school mathematics curriculum. Now that it is well established, it is timely to examine aspects of it to ensure high-quality results; this requires research in statistics education, as many of us are not aware of the work that is being done. The intention of this paper is to initiate discussion, to establish a research agenda, and to use this framework as a means of relating present and desired research activity.

  • This paper offers an overview of research on teaching and learning statistics, of what research is needed, and for what purpose. The author suggests that future research must concentrate on establishing the best ways to teach statistical concepts so that meaningful long-term learning takes place.

  • This article describes three heuristics that are employed in making judgments under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics, and of the biases to which they lead, could improve judgments and decisions in situations of uncertainty.

  • Current curricular thinking in mathematics, science, and computing displays a recurring theme: the value of working with data and the importance of learning the skills and concepts associated with such work. Recent statements issued by the National Council of Teachers of Mathematics (NCTM, 1987) and the Mathematical Sciences Education Board (Ralston, 1988) advocate a sharply increased emphasis on data analysis in school mathematics at all levels. The concern with computer literacy of the early 1980s is maturing into a discussion of the kinds of "information studies" that are needed to prepare students for a society in which information technologies play an essential and ever-expanding role (White, 1987). The call for the use of real data in the natural and social sciences goes back considerably farther (Hawkins, 1964; Morrison, 1964; Taba, 1967), but has recently gained new impetus from technological advances which multiply the potential for powerful, realistic investigation by science students (Hawkins, Brunner, et al., 1987; Tinker, 1987). In support of curricular developments such as these, new technologies offer a potential that is largely untapped. The large bitmapped screens and fast processors which are available on today's new workstations, and will be available on the school computers of the middle to late 1990s, make possible a whole new class of tools for working with data, tools whose transparency and rich interactivity can support qualitatively new styles of inquiry and bring unprecedented analytic power to students of all ages. We have designed and partially prototyped an exemplar of this new class: a highly visual, highly interactive environment for creating, organizing, exploring, and analyzing "attribute data" -- the kinds of data that are used in statistics and many of the sciences, and which conventional database systems are designed to store. The environment achieves a striking combination of simplicity, directness, power, and flexibility.
We are truly excited by the potential for tools of this kind to support a new level of data analysis and theory building in mathematics and the sciences. More than tools are needed, however. Essential to all of these curricular trends, it seems, is a fundamental set of concepts and skills about which more needs to be known.

  • This paper proposes a framework for the development of instruments to measure content learning and problem-solving skills for the introductory statistics course. This framework is based upon a model of the problem-solving process central to statistical reasoning. The framework defines and interrelates six measurement tasks: (1) subjective reports; (2) reports concerning truth, falsity, or equivalence; (3) supply the appropriate missing information in a message; (4) answer a question based upon a specific message; (5) reproduce a message; and (6) carry out a procedure.

  • Artificial data sets are often used to demonstrate statistical methods in applied statistics courses and textbooks. We believe that this practice removes much of the intrinsic interest in learning to do good data analysis and contributes to the myth that statistics is dry and dull. In this article, we argue that artificial data sets should be eliminated from the curriculum and replaced with real data sets. Real data, supplemented by suitable background material, enable students to acquire analytic skills in an authentic research context and enable instructors to demonstrate how statistical analysis is used to model real phenomena. To help instructors incorporate real data into applied statistics curricula, we identify seven characteristics that make data sets particularly good for instructional use and present an annotated bibliography of more than 100 primary and secondary data sources.

  • There is a growing body of evidence indicating that people often overestimate the similarity between characteristics of random samples and those of the populations from which they are drawn. In the first section of the paper, we review some studies that have attempted to determine whether the basic heuristic employed in thinking about random samples is passive and descriptive or whether it is deducible from a belief in active balancing. In the second section, we discuss the influence of sample size on judgments about the characteristics of random samples.
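The sample-size effect discussed in that abstract can be illustrated with a short simulation (not from the paper itself; the population, sample sizes, and function name below are illustrative choices): smaller samples stray far more from the population value than intuition typically allows.

```python
import random
import statistics

def sample_mean_spread(n, trials=2000, seed=0):
    """Draw `trials` samples of size n from a fair 0/1 population
    (population mean 0.5) and return the standard deviation of the
    resulting sample means -- a measure of how far a sample's mean
    typically strays from the population value."""
    rng = random.Random(seed)
    means = [
        statistics.mean(rng.choice([0, 1]) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

# Theory says the spread is sqrt(0.25 / n): quadrupling the
# sample size only halves it, so small samples are far noisier
# than people tend to expect.
print(sample_mean_spread(10))   # roughly 0.16
print(sample_mean_spread(160))  # roughly 0.04
```

A sample of 10 coin flips routinely shows proportions as lopsided as 0.3 or 0.7, which is exactly the variability that the "belief in active balancing" described above leads people to underestimate.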