Research

  • This report discusses the goals and accomplishments of PROJECT STARC over Year 1.

  • My dissatisfaction with the grounds I had for believing that software I had produced was efficacious in developing probabilistic understanding led to the small research investigation reported in this paper.

  • This study involves an experiment that gauges the reaction of students in a university 100-level statistics course. The experimental group was randomly assigned to work with the Minitab program (analysis and graphing) in addition to the normal course work; the control group did only the course work. Questionnaires measuring students' attitudes towards computers were given to both groups at the beginning and end of term. Results indicate that students thought computers were useful both before and after the course, and students who had actually used the computers were more inclined to agree strongly. It is important that computer use be taught properly; otherwise students become frustrated. Differences between the final exam scores of the two groups were not significant (the computer group was slightly higher). Volunteers who participated in either group of the experiment did have better final grades than students who did not participate. Students in the computer-use group may have had better marks simply because of greater exposure to the material and instructors. In conclusion, students can benefit from the use of a statistical data analysis package, although much of the benefit may derive from the students' increased understanding of the computer system rather than the analysis package itself. Students stated that the use of a computing package did not increase their understanding of statistics but was useful as an ancillary to the statistics course.
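    As an illustration only (not taken from the cited study), the group comparison described above amounts to a two-sample test on final exam scores. The sketch below uses hypothetical score arrays and Welch's t-test from scipy.stats; the data values and group sizes are invented for demonstration.

        # Illustrative sketch only: hypothetical final exam scores, not data from the cited study.
        from scipy import stats

        computer_group = [72, 68, 80, 75, 77, 83, 70, 74, 79, 81]   # hypothetical scores
        control_group  = [70, 66, 78, 73, 76, 80, 69, 72, 77, 78]   # hypothetical scores

        # Two-sample t-test (Welch's, no equal-variance assumption) comparing group means.
        t_stat, p_value = stats.ttest_ind(computer_group, control_group, equal_var=False)
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")   # p > 0.05 would mirror a "not significant" finding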

  • We are attempting to identify conceptual challenges that students encounter as they design, collect and analyze data about a real situation. We propose the term "data modeling" to describe this process and present a new computerized tool for working with data called the Tabletop. While the Tabletop is a tool for analyzing data, we conjecture that it can help students become better designers of data. Examples from clinical research in progress help to show how closely intertwined are the phases of data modeling, and thus begin to resolve the apparent paradox of how a technological tool for one phase can benefit others.

  • A three-stage model was used in developing and evaluating an instructional unit on probability. The focus of this paper is on the first and third stages of the model, both of which depend on the design of ways to identify misconceptions. In previous studies, researchers have used changes in performance on individual items to evaluate the effectiveness of instructional interventions. The instrument used in the present study borrows heavily from earlier research; it differs from previous instruments not in the content of the items but in the way responses to items are analyzed. Pairs of items are designed so that meaningful error patterns can be identified when responses to both items are considered. The identification of error patterns allows assessment that goes beyond the reporting of gain scores. Once error patterns are identified, an intervention can be evaluated according to the types of misconceptions (i.e., error patterns) that are affected.
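    A minimal sketch of the paired-item idea (my own illustration, not the instrument from the paper): responses to the two items of a pair are considered jointly, and particular joint response patterns are mapped to named misconceptions. The item codes and pattern labels below are hypothetical.

        # Illustrative sketch: classify joint responses to a designed item pair.
        # Response codes and error-pattern labels are hypothetical.
        from collections import Counter

        # Each tuple holds one student's responses to the two items of a pair.
        paired_responses = [("A", "A"), ("A", "C"), ("B", "A"), ("A", "C"), ("B", "B")]

        # Only the joint pattern is interpreted, not either item on its own.
        ERROR_PATTERNS = {
            ("A", "A"): "correct on both items",
            ("A", "C"): "representativeness error",
            ("B", "A"): "equiprobability error",
        }

        counts = Counter(ERROR_PATTERNS.get(pair, "other/unclassified") for pair in paired_responses)
        for pattern, n in counts.items():
            print(f"{pattern}: {n}")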

  • This article discusses reasons why it is not sufficient to provide teachers with one-day workshops or brief refresher courses and expect them to acquire the knowledge they need. It then describes new courses that must be designed to let teachers acquire expertise in statistical problem solving.

  • The topics discussed in this volume are of interest for several disciplines. The impact of the contributions presented in this volume on DECISION RESEARCH (FISCHHOFF), COGNITIVE PSYCHOLOGY (ZIMMER), DEVELOPMENTAL PSYCHOLOGY (WALLER), SOCIAL PSYCHOLOGY (BORCHERDING), ECONOMIC THEORY (SELTEN), and MATHEMATICS EDUCATION (STEINER) is outlined by researchers from these disciplines who were present at the Symposium. Of course, other disciplines that were not represented, e.g. medicine, the social sciences, and mathematics, might also be affected or challenged by the results and propositions documented in this volume. Naturally, the representatives of the different disciplines emphasize different aspects. Whereas from a decision-theoretic perspective, methodological problems and questions of research strategy (e.g. top-down vs. bottom-up) seem most significant, conceptual issues about the nature of human knowledge are regarded as creating important research problems in other fields. The comments made from the mathematics education perspective show the innovative power of a growing discipline. The methodology of mathematization is shown to be inextricably connected with the social dimensions of learning and instruction. Mathematics in general (and not only statistics and probability theory) is losing its unique feature of always being either right or wrong when put into a social context (e.g. the classroom). Furthermore, as several of the papers point out, if the dynamic views were to be emphasized, we may expect not only decision research to have an impact on mathematics research but also the other way round.

  • In Experiment 1, subjects estimated (a) the mean of a random sample of ten scores consisting of nine unknown scores and a known score that was divergent from the population mean, and (b) the mean of the nine unknown scores. The modal answer (about 40% of the responses) for both sample means was the population mean. The results extend the work of Tversky and Kahneman by demonstrating that subjects hold a passive, descriptive view of random sampling rather than an active balancing model. This result was explored further in in-depth interviews, in which subjects solved the problem while explaining their reasoning. The interview data replicated Experiment 1 and further showed (a) that subjects' solutions were fairly stable: when presented with alternative solutions, including the correct one, few subjects changed their answer; (b) that there was little evidence of a balancing mechanism; and (c) that acceptance of both means as 400 is largely a result of the perceived unpredictability of "random samples."
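    For concreteness, a worked illustration of the arithmetic behind the task (my own sketch; the abstract gives the population mean of 400, but the value of the known divergent score below is hypothetical): the normative expectation for the nine unknown scores is the population mean, while the expected mean of the full ten-score sample is pulled toward the known score.

        # Illustrative arithmetic only; the known divergent score value is hypothetical.
        population_mean = 400        # stated in the abstract
        known_score = 600            # hypothetical divergent score

        # Normative expectations: the nine unknown scores are expected to average the
        # population mean, so the ten-score sample mean is (9*mu + known_score) / 10.
        expected_mean_nine = population_mean
        expected_mean_ten = (9 * population_mean + known_score) / 10

        print(expected_mean_nine)    # 400
        print(expected_mean_ten)     # 420, not the 400 that most subjects gave for both means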

  • In this short paper, important strands of research on probabilistic notions are critically presented, followed by an indication of the author's own research.

  • This packet is a collection of several separate papers from ICOTS III. It includes the following papers:
