Literature Index

Displaying 2951 - 2960 of 3326
  • Author(s):
Fletcher, M. & Mooney, C.
    Editors:
    Goodall, G.
    Year:
    2003
    Abstract:
This article discusses optimal strategies for contestants in a well-known television game show.
  • Author(s):
    Shkedy, Z., Aerts, M., & Callaert, H.
    Editors:
    Stephenson, W. R.
    Year:
    2006
    Abstract:
Classical regression models, ANOVA models, and linear mixed models are just three examples (out of many) in which the normal distribution of the response is an essential assumption of the model. In this paper we use a dataset of 2000 euro coins, containing information (to the milligram) about the weight of each coin, to illustrate that the normality assumption might be incorrect. As the physical coin production process is subject to a multitude of (very small) sources of variability, it seems reasonable to expect that the empirical distribution of the weight of euro coins agrees with the normal distribution. Goodness-of-fit tests, however, show that this is not the case. Moreover, some outliers complicate the analysis. As alternative approaches, mixtures of normal distributions and skew-normal distributions are fitted to the data and reveal that the distribution of the weight of euro coins is not as normal as expected.
  • Author(s):
    Richardson, A. M.
    Year:
    2001
    Abstract:
This article describes 'The World of Chance', an unconventional statistics course taught at the University of Canberra since 1998, modeled on the Chance courses devised at Dartmouth College. Statistical concepts are introduced via a mixture of lectures, class discussions of news stories, and activities.
  • Author(s):
    Barbieri, G. A., & Giacché, P.
    Editors:
    Rossman, A., & Chance, B.
    Year:
    2006
    Abstract:
    Istat, the Italian national statistical institute, in co-operation with professors of statistics, scientific societies and experts in web communication, produced The Worth of Data, hypertext materials for promoting and improving statistical literacy. We present the experience from two viewpoints: (i) the process for designing and implementing hypertext; (ii) and the ways selected for improving statistical literacy. The first aspect involved the decision to focus on the concept of awareness: not only as to when and how to use statistical data, but also on how to be discerning about sources, their quality and reliability … The second aspect concerned the language and confirmed that to deliver content in plain language, without losing scientific precision, is indeed a difficult task. To achieve good results, it is necessary to make use of the various skills within a good team. Each expert should give up a little turf and contribute knowledge to attain a common outcome worth communicating.
  • Author(s):
    Lesser, L. M.
    Year:
    1999
    Abstract:
This article presents a sequence of explorations and responses to student questions (Why not use perpendicular deviations? Why not minimize the sum of the vertical deviations? Why not minimize the sum of the absolute deviations? Why minimize the sum of the squared deviations?) about the rationale for the commonly used tool of line of best fit. A noncalculus-based motivation is more feasible than is often assumed for each aspect of the least-squares criterion "minimize the sum of the squares of the vertical deviations between the fitted line and the observed data points."
  • Author(s):
Haley, M. R.
    Year:
    2013
    Abstract:
    This paper describes a flexible paradigm for creating an electronic “Core Concepts Plus” textbook (CCP-text) for a course in Introductory Business and Economic Statistics (IBES). In general terms, “core concepts” constitute the intersection of IBES course material taught by all IBES professors at the author’s university. The “Plus” component of the paradigm is embodied in self-written, professor-specific sections that are combined with the core-concepts material to produce professor-specific versions of the IBES CCP-text. The paradigm entails a vertically integrated text creation process with two primary aspects: first, non-IBES faculty members that ultimately receive former IBES students are included in the text-writing process; second, some former IBES students (e.g., tutors) are included in the text-writing process. Student learning experiences with the CCP-text are summarized with survey results; the learning outcomes are assessed using three semesters of pre- and post-test data; and a textbook cost study is used to contextualize the savings to students. The CCP-text appears to be efficacious in all three of these areas. Recommendations concerning how and where the paradigm might be replicated are also presented.
  • Author(s):
    Kelly, A. E.
    Year:
    2003
    Abstract:
Inspired by the seminal work of Ann Brown, Allan Collins, Roy Pea, and Jan Hawkins, a growing number of researchers have begun to adopt the metaphors and methods of the design and engineering fields. This special issue highlights the work of some of these active researchers and provides a number of commentaries on it.
  • Author(s):
    Abrahamson, D., Janusz, R. M., & Wilensky, U.
    Editors:
    Stephenson, W. R.
    Year:
    2006
    Abstract:
ProbLab is a probability-and-statistics unit developed at the Center for Connected Learning and Computer-Based Modeling, Northwestern University. Students analyze the combinatorial space of the 9-block, a 3-by-3 grid of squares, in which each square can be either green or blue. All 512 possible 9-blocks are constructed and assembled in a "bar chart" poster according to the number of green squares in each, resulting in a narrow and very tall display. This combinations tower is the same shape as the normal distribution received when 9-blocks are generated randomly in computer-based simulated probability experiments. The resemblance between the display and the distribution is key to student insight into relations between theoretical and empirical probability and between determinism and randomness. The 9-block also functions as a sampling format in a computer-based statistics activity, where students sample from a "population" of squares and then input and pool their guesses as to the greenness of the population. We report on an implementation of the design in two Grade 6 classrooms, focusing on student inventions and learning as well as emergent classroom socio-mathematical behaviors in the combinations-tower activity. We propose an application of the 9-block framework that affords insight into the Central Limit Theorem in science.
  • Author(s):
    Shaughnessy, J. M., & Bergman, B.
    Editors:
    Wilson, P. S.
    Year:
    1993
    Abstract:
This chapter discusses attempts to include probability and statistics in the curriculum at the secondary level.
  • Author(s):
    Pfannkuch, M.
    Year:
    2005
    Abstract:
    This article discusses five papers focused on "Research on Reasoning about Variation and Variability", by Hammerman and Rubin, Ben-Zvi, Bakker, Reading, and Gould, which appeared in a special issue of the Statistics Education Research Journal (No. 3(2) November 2004). Three issues emerged from these papers. First, there is a link between the types of tools that students use and the type of reasoning about variation that is observed. Second, students' reasoning about variation is interconnected to all parts of the statistical investigation cycle. Third, learning to reason about variation with tools and to understand phenomena are two elements that should be reflected in teaching. The discussion points to the need to expand instruction to include both exploratory data analysis and classical inference approaches and points to directions for future research.


The CAUSE Research Group is supported in part by a member initiative grant from the American Statistical Association’s Section on Statistics and Data Science Education