Journal Article

  • In this paper we introduce a new and powerful algorithm that trivializes an extensive class of discrete stochastic processes. The algorithm was discovered by the author in March 1974 while he was trying to teach some nontrivial probability to a below-average 4th-grade class in Carbondale, Illinois.

  • In the article "The Probabilistic Abacus" by A. Engel (Educational Studies in Mathematics 6, 1975, pp. 1-22), we introduced the probabilistic abacus and applied it to absorbing Markov chains. We explained in detail how to compute absorption probabilities and expected times to absorption, but we gave no proofs. In this paper we give more applications of the abacus and supply proofs for most of its properties.
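
For orientation, the sketch below computes the two quantities the abstracts above mention - absorption probabilities and expected times to absorption - for a small absorbing Markov chain, using the standard fundamental-matrix route. It is not Engel's chip-moving algorithm (which the abstracts do not spell out), and the transition matrix is a made-up example.

```python
import numpy as np

# Canonical form of an absorbing Markov chain: transient states first.
# Hypothetical example with 3 transient and 2 absorbing states.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.25],
              [0.0, 0.5, 0.0]])      # transient -> transient
R = np.array([[0.5, 0.0],
              [0.0, 0.25],
              [0.0, 0.5]])           # transient -> absorbing

N = np.linalg.inv(np.eye(len(Q)) - Q)   # fundamental matrix (I - Q)^{-1}
B = N @ R                               # absorption probabilities
t = N @ np.ones(len(Q))                 # expected steps until absorption

print(B)   # row i: probability of ending in each absorbing state from transient state i
print(t)   # entry i: expected number of steps to absorption from transient state i
```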

  • The Woods Hole conference of September 1959 was outstanding of its kind: a meeting of about 35 people interested in education - educationists, psychologists, medical men, and mathematicians. The results of the meeting were summarised by J. S. Bruner in a chairman's report, which, infused with his own ideas, evolved into his booklet The Process of Education. In the last decade this work has strongly influenced curriculum development, in particular in mathematics. Bruner's contribution seems to me of particular interest for instruction in stochastics, which is now entering our schools. On the one hand, advocates of this subject are advancing its fundamental (or central) ideas; on the other hand, stress is laid on the importance of tying instruction in stochastics to intuitive experiences. Both points, however, are rarely elaborated or made concrete. In particular, the following questions deserve attention: (a) What would a list of fundamental stochastic ideas look like? (b) Why should intuition mean so much for stochastics? (c) What does "(stochastic) intuition" mean? (d) How does it develop, and how can it be improved? In the following we advance some ideas on (a) and also touch on the other points.

  • The study was conducted to (a) determine the development of children's understanding of seven properties of the arithmetic mean and (b) assess the effects of the material used in the testing (continuous, discontinuous) and the medium of presentation (story, concrete, and numerical). Twenty children were selected at each of the ages 8, 10, 12, and 14 years. Different developmental courses of the children's reasoning were found on some of the tasks measuring properties of the average. No significant effects were found for the materials used or the medium of presentation. The findings are discussed in terms of their importance for developmental psychology and educational practice.
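
The abstract does not list the seven properties tested; as an illustration of the kind of property at issue, the snippet below checks, on made-up data, two properties of the mean that are frequently cited in this literature.

```python
import numpy as np

# Two frequently cited properties of the arithmetic mean, checked on invented data
# (illustrative only; not necessarily among the seven properties in the study).
x = np.array([3, 7, 4, 10, 6], dtype=float)
mean = x.mean()

assert x.min() <= mean <= x.max()      # the mean lies between the extreme values
assert abs((x - mean).sum()) < 1e-12   # deviations from the mean sum to zero
print(mean)                            # 6.0
```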

  • In this study, the schema-theoretic perspective of understanding general discourse was extended to include graph comprehension. Fourth graders (n = 204) and seventh graders (n = 185) were given a prior-knowledge inventory, a graph test, and the SRA Reading and Mathematics Achievement Tests during four testing sessions. The unique predictors of graph comprehension for Grade 4 included reading achievement, mathematics achievement, and prior knowledge of the topic, mathematical content, and form of the graph. The unique predictors for Grade 7 were the same, except that prior knowledge of topic and graphical form were not included. The results suggest that children should be involved in graphing activities to build and expand relevant schemata needed for comprehension.

  • This study investigates which formal principles govern subjective probability, and whether the validity of these principles depends on age. Two types of tasks were administered to 144 subjects aged 3;8 (years;months) to 19 years: a gambling task (with objective probabilities) and a sporting task (without objective probabilities). Six formal principles of the mathematical concept of qualitative probability (a non-numerical concept based on ordinal scale properties) were tested. Results indicate that these principles are valid as principles of subjective probability for all age groups. Only the youngest age group (4 years or younger) had a smaller degree of confirmation.
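
The six principles tested are not stated in the abstract; for orientation, the block below gives the standard axioms of a qualitative (comparative) probability ordering, the mathematical concept the abstract refers to. They are typical examples, not necessarily the paper's exact list.

```latex
% Standard axioms for "at least as probable as" ($\succsim$) on events of a
% sample space $S$ (requires amssymb); illustrative only.
\begin{align*}
  &\text{(completeness)}  && A \succsim B \ \text{or} \ B \succsim A \\
  &\text{(transitivity)}  && A \succsim B \ \text{and} \ B \succsim C \implies A \succsim C \\
  &\text{(nonnegativity)} && A \succsim \varnothing, \qquad S \succ \varnothing \\
  &\text{(additivity)}    && A \cap C = B \cap C = \varnothing \implies
      \bigl(A \succsim B \iff A \cup C \succsim B \cup C\bigr)
\end{align*}
```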

  • Children's understanding of what variables and relations are important in problem structures, and their use of these variables and relations in problem solving, were examined. One hypothesis suggests that knowledge of relevant solution variables is a prerequisite for encoding those variables, which in turn is a prerequisite for learning new strategies that use those variables. An alternative hypothesis holds that knowledge of relevant variables is an outcome, rather than a precursor, of efforts to invent new strategies. In the current studies, children between the ages of 5 and 13 years were given Piaget and Inhelder's (1975, The origin of the idea of chance in children, New York: Norton) two-set alternative-choice probability problems. In Experiment 1, problem understanding was assessed by asking children to construct two-set problems that could test whether a learner understood how to solve a model problem type. In Experiment 2, understanding was assessed by asking children to modify model problems to make them harder for a learner to solve. In both experiments, children modified or reproduced only those properties of the model problems that they had used, correctly or incorrectly, in solving the models. These results partially support both hypotheses and suggest a mechanism by which problem-solving knowledge develops.
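
For readers unfamiliar with the task format, the sketch below encodes a hypothetical two-set alternative-choice item of the kind described above: the solver must say which of two sets of chips gives the better chance of drawing a target chip, which amounts to comparing two proportions. The numbers are invented.

```python
from fractions import Fraction

def better_set(targets_a, total_a, targets_b, total_b):
    """Return 'A', 'B', or 'equal' by comparing the two target proportions."""
    p_a = Fraction(targets_a, total_a)
    p_b = Fraction(targets_b, total_b)
    if p_a > p_b:
        return "A"
    if p_b > p_a:
        return "B"
    return "equal"

# Example item: 2 target chips out of 5 versus 3 target chips out of 7.
print(better_set(2, 5, 3, 7))   # -> "B", since 3/7 > 2/5
```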

  • Research on human judgment demonstrates that people's theories often bias their evaluation of evidence and suggests that people might be more accurate if they were unbiased by prior beliefs. Rather than comparing people's judgments of data when they do or do not have a prior theory, most studies compare people's estimates to conventional statistical standards, even though the status of these measures as normative criteria is controversial. We propose that people's theories may have beneficial consequences not examined in previous research. In two paradigms (the covariation estimation problem and the t-test problem), we compare judgments made by people who do and do not have potentially biasing prior information. We vary the quality of the data, presenting subjects with data that are either well-behaved or contaminated with outliers. In both paradigms, people's judgments approximated robust statistical measures rather than the conventional measures typically used as normative criteria. We find the usual biasing effects of prior beliefs but also find an advantage for subjects who have prior theories - even incorrect ones - over subjects who are completely "objective." Potentially biasing beliefs both enhanced people's sensitivity to the bulk of the data and reduced the influence of atypical scores on their estimates. Evidence is provided that this robustness results from the fact that prior theories make judgment problems more meaningful. We discuss the conditions under which prior beliefs are likely to help and hinder human judgment.
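
To make the contrast between conventional and robust measures concrete, the sketch below (made-up data, not from the paper) shows how a single outlier moves the mean substantially while the median and a 20% trimmed mean barely move; judgments that track the latter two are what the abstract calls robust.

```python
import numpy as np

def trimmed_mean(x, prop=0.2):
    """Mean after dropping the lowest and highest `prop` fraction of scores."""
    x = np.sort(x)
    k = int(len(x) * prop)
    return x[k:len(x) - k].mean() if k > 0 else x.mean()

clean = np.array([4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3])
contaminated = np.append(clean, 25.0)    # one wild score

for label, data in [("clean", clean), ("contaminated", contaminated)]:
    print(label, round(data.mean(), 2), round(np.median(data), 2),
          round(trimmed_mean(data), 2))
# clean:        mean 5.0,  median 5.0, trimmed mean 5.0
# contaminated: mean 7.22, median 5.0, trimmed mean 5.04
```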

  • Several strategies are proposed as bases for judgments of covariation between events. Covariation problems were structured in such a way that patterns of correct and incorrect judgments would index the judgment rule being used by a given subject. In two experiments, 10th-grade or college subjects viewed a set of covariation problems, each consisting of observations in which each of two events was defined as present or absent. Subjects were asked to identify the relationship between the events. Subjects' response patterns suggested that the modal strategy was to compare the frequencies of confirming and disconfirming events in defining the relationship. Response accuracy was influenced by pretraining on the concept of covariation and by response format. Instructions to sort the observations did not influence judgment accuracy.
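
The sketch below works through a hypothetical 2x2 frequency table to show how the modal strategy reported above (weighing confirming against disconfirming cases) can disagree with a conditional-probability index; the particular index shown (delta-p) is a common normative choice in this literature, not necessarily the one used for scoring in the paper.

```python
# Hypothetical frequencies for two present/absent events:
#                    event 2 present   event 2 absent
# event 1 present          a = 12           b = 6
# event 1 absent           c = 8            d = 4
a, b, c, d = 12, 6, 8, 4

# Modal strategy reported above: weigh "confirming" cases (a + d)
# against "disconfirming" cases (b + c).
heuristic = (a + d) - (b + c)

# A common normative index: difference of conditional probabilities (delta-p).
delta_p = a / (a + b) - c / (c + d)

print(heuristic)           # 2   -> heuristic suggests a positive relationship
print(round(delta_p, 2))   # 0.0 -> delta-p indicates no relationship
```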

  • Almost all studies of adult notions of correlation between dichotomous variables show that people do not incorporate the two conditional probabilities as they should according to normative definitions. However, these studies disagree considerably about what correlational notions people do have. This paper identifies three factors that contribute to the variability in research results. The first two factors have been mentioned in the literature, and the evidence concerning them is summarized: (1) the way the data are presented and (2) the instructions subjects receive. A third factor is suggested and studied here: the type of variables between which the correlation is judged may affect subjects' notion of correlation. Specifically, asymmetric, present/absent variables (e.g., symptom: present, absent) may strengthen the incorrect notion of correlation as the tendency of two events to coexist (e.g., presence of symptom and presence of disease), disregarding the complementary events. In three experiments, subjects were asked to choose among five interpretations of the sentence "A strong (or no) relationship exists between (two variables)." The above prediction was confirmed.
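
The "coexistence" notion criticized in the last abstract is easy to see in a worked 2x2 example. In the hypothetical table below the symptom and the disease co-occur often, yet the phi coefficient, a standard normative index that uses all four cells, is exactly zero, because the disease is just as frequent without the symptom as with it.

```python
from math import sqrt

# Hypothetical 2x2 table: symptom (present/absent) x disease (present/absent).
#                    disease present   disease absent
# symptom present         a = 80            b = 20
# symptom absent          c = 80            d = 20
a, b, c, d = 80, 20, 80, 20

joint_occurrences = a   # 80 cases where symptom and disease coexist

# Phi coefficient computed from all four cells.
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(joint_occurrences, phi)   # 80 0.0 -> many co-occurrences, yet no association
```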
