This chapter explains the structure/steps of hypothesis testing, the concept of significance, the relationship between confidence intervals and hypothesis testing, and Type I/II errors.
This text explains the differences between t-tests, z-tests, tests with proportions, and tests of correlation.
Analysis of variance (ANOVA) is used to test hypotheses about differences between two or more means. The t-test based on the standard error of the difference between two means can only be used to test differences between two means. When there are more than two means, it is possible to compare each mean with each other mean using t-tests. However, conducting multiple t-tests can lead to severe inflation of the Type I error rate. Analysis of variance can be used to test differences among several means for significance without increasing the Type I error rate. This chapter covers designs with between-subjects variables.
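The Type I error inflation mentioned above can be sketched with a quick calculation (a hypothetical illustration, not part of the chapter): if k independent comparisons are each run at alpha = .05, the chance of at least one false positive grows rapidly with k.

```python
def familywise_error_rate(k, alpha=0.05):
    """Probability of at least one Type I error across k independent tests,
    each run at significance level alpha: 1 - (1 - alpha)^k."""
    return 1 - (1 - alpha) ** k

# Comparing 5 means pairwise requires C(5, 2) = 10 t-tests.
# Even though each test uses alpha = .05, the familywise rate is ~.40.
rate = familywise_error_rate(10)
print(round(rate, 3))  # 0.401
```

This is why ANOVA, which tests all the means in a single procedure, keeps the overall Type I error rate at the nominal level.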
When an experimenter is interested in the effects of two or more independent variables, it is usually more efficient to manipulate these variables in one experiment than to run a separate experiment for each variable. Moreover, only in experiments with more than one independent variable is it possible to test for interactions among variables. Experimental designs in which every level of every variable is paired with every level of every other variable are called factorial designs.
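A minimal sketch of what "every level of every variable is paired with every level of every other variable" means in practice; the variable names and levels below are hypothetical, not from the chapter.

```python
from itertools import product

# Two hypothetical independent variables:
dose = ["low", "high"]              # variable 1: 2 levels
task = ["easy", "medium", "hard"]   # variable 2: 3 levels

# A 2 x 3 factorial design crosses every level with every other level,
# yielding 2 * 3 = 6 treatment conditions.
conditions = list(product(dose, task))
print(len(conditions))  # 6
```

Because every combination is present, the design supports tests of the interaction between the two variables, not just their separate effects.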
This article describes an activity that illustrates contingency table (two-way table) analysis. Students use contingency tables to analyze the "unusual episode" (the sinking of the ocean liner Titanic) data (from Dawson 1995) and attempt to use their analysis to deduce the origin of the data. The activity is appropriate for use in an introductory college statistics course or in a high school AP statistics course. Key words: contingency table (two-way table), conditional distribution
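A brief sketch of the kind of conditional-distribution comparison the activity involves. The counts below are hypothetical placeholders, not the Dawson 1995 data.

```python
# Hypothetical two-way table: passenger class (rows) by outcome (columns).
table = {
    "first": {"survived": 200, "died": 120},
    "third": {"survived": 180, "died": 530},
}

# Conditional distribution of outcome within each row: the survival
# rate given passenger class.
survival_rate = {}
for pclass, counts in table.items():
    total = counts["survived"] + counts["died"]
    survival_rate[pclass] = counts["survived"] / total
```

Comparing the conditional distributions across rows is what reveals the association between the two variables in the table.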
In these activities designed to introduce sampling distributions and the Central Limit Theorem, students generate several small samples and note patterns in the distributions of the means and proportions that they themselves calculate from these samples. Outside of class, students generate samples of dice rolls and coin spins and draw random samples from small populations for which data are given on each individual. Students report their sample means and proportions to the instructor, who then compiles the results into a single data file for in-class exploration of sampling distributions and the Central Limit Theorem. Key words: Sampling distribution, sample mean, sample proportion, central limit theorem
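A small simulation in the spirit of the dice-roll activity (a hypothetical sketch, not the classroom materials themselves): roll a fair die n times, record the sample mean, repeat many times, and inspect the distribution of the means.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

def sample_mean(n_rolls):
    """Mean of n_rolls fair six-sided die rolls."""
    return statistics.mean(random.randint(1, 6) for _ in range(n_rolls))

# 1000 samples of size 30 each.
means = [sample_mean(30) for _ in range(1000)]

# The sampling distribution of the mean centers near the population mean
# of a fair die, (1 + 2 + ... + 6) / 6 = 3.5, and is roughly normal.
print(round(statistics.mean(means), 2))
```

Pooling each student's sample mean into one class data file corresponds to the list `means` above.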
Within-subjects designs are designs in which one or more of the independent variables are within-subjects variables. They are often called repeated-measures designs, since within-subjects variables always involve taking repeated measurements from each subject. Within-subjects designs are extremely common in psychological and biomedical research.
When two variables are related, it is possible to predict a person's score on one variable from their score on the other with better-than-chance accuracy. This section describes how these predictions are made and what developing a prediction equation can reveal about the relationship between the variables.
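The prediction equation referred to here is the least-squares regression line y' = a + bx. A minimal sketch, with hypothetical data points:

```python
def regression_line(xs, ys):
    """Return intercept a and slope b of the least-squares line y' = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: sum of cross-products of deviations over sum of squared x deviations.
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical paired scores:
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]
a, b = regression_line(xs, ys)
predicted = a + b * 3  # predicted score for x = 3
```

The sign and size of the slope b summarize the relationship that makes better-than-chance prediction possible.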
This resource gives a thorough definition of confidence intervals. It shows the user how to compute confidence intervals and how to interpret them, and it explains in detail how to construct confidence intervals for the difference between means, for correlations, and for proportions. It also gives a detailed explanation of Pearson's correlation and includes exercises for the user.
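A sketch of the simplest case the resource covers, a confidence interval for a single mean using the t distribution; the sample values below are hypothetical.

```python
import math
import statistics

data = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]  # hypothetical sample
n = len(data)
mean = statistics.mean(data)
sem = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean

t_crit = 2.365  # t critical value for a 95% CI with n - 1 = 7 df
lower, upper = mean - t_crit * sem, mean + t_crit * sem
```

The interpretation is about the procedure, not any single interval: intervals constructed this way capture the population mean in 95% of repeated samples.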
This chapter discusses a collection of tests called distribution-free tests, or nonparametric tests, that do not make any assumptions about the distribution from which the numbers were sampled. The main advantage of distribution-free tests is that they provide more power than traditional tests when the samples are from highly skewed distributions.
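One simple distribution-free procedure is the sign test, which uses only the signs of paired differences, so no assumption about the shape of the underlying distribution is needed. A minimal sketch with hypothetical paired scores:

```python
from math import comb

# Hypothetical before/after measurements for eight subjects.
before = [140, 155, 160, 172, 150, 166, 158, 145]
after = [135, 150, 163, 160, 148, 160, 150, 140]

diffs = [a - b for a, b in zip(after, before)]
n_pos = sum(d > 0 for d in diffs)      # positive differences
n = sum(d != 0 for d in diffs)         # nonzero differences (ties dropped)

# Two-sided exact p-value under H0: P(positive difference) = 0.5,
# from the binomial distribution with p = 0.5.
k = min(n_pos, n - n_pos)
p_value = min(1.0, 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n)
```

Because the test reduces each pair to a sign, skewness and outliers in the raw scores cannot distort it.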