Sandbox


Odds are, it's wrong

Odds are, it's wrong
by Tom Siegfried, Science News, 27 March 2010

This is a provocative essay on the limitations of significance testing in scientific research. The main themes are that it is easy to do such tests incorrectly and that, even when they are done correctly, the results are subject to widespread misinterpretation.

For example, the article cites the following misinterpretation of significance at the 5% level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.” Indeed, versions of this are all too often seen in print!
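To see why this reading fails, here is a minimal simulation sketch (not from the article; the 10 percent rate of real effects and 80 percent power are assumed numbers chosen for illustration). Even with every test done correctly, the share of "significant" findings that correspond to real effects can be far below 95 percent.

```python
# Minimal sketch with assumed numbers: 10% of tested hypotheses have a real
# effect, the test has 80% power, and significance is declared at alpha = 0.05.
import numpy as np

rng = np.random.default_rng(0)
n_tests = 100_000
real_effect = rng.random(n_tests) < 0.10          # 10% of effects are real
power, alpha = 0.80, 0.05

# A test comes out "significant" with probability `power` when the effect is
# real, and with probability `alpha` (a false positive) when it is not.
significant = np.where(real_effect,
                       rng.random(n_tests) < power,
                       rng.random(n_tests) < alpha)

# Among the significant results, what fraction reflect a real effect?
print(significant[real_effect].sum() / significant.sum())
# Roughly 0.64 here, nowhere near the "95 percent certain" of the misreading.
```

The answer depends on how many of the hypotheses being tested are true to begin with and on the power of the tests, which is exactly why a P value alone cannot tell you how likely it is that an observed difference is real.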

Also discussed are the problems that arise when multiple tests are run simultaneously.
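The scale of the multiple-testing problem can be seen with a back-of-the-envelope calculation (again not from the article; the choice of 20 independent tests is just an illustration):

```python
# Assumed setup: 20 independent tests at the 5% level, with no real effects
# anywhere. The chance of at least one spurious "significant" result is large.
n_tests, alpha = 20, 0.05
prob_at_least_one_false_positive = 1 - (1 - alpha) ** n_tests
print(round(prob_at_least_one_false_positive, 2))  # about 0.64
```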

Submitted by Bill Peterson, based on a suggestion from Scott Pardee

Discussion Questions

1. [suggested by Bill Jefferys] Box 2, paragraph 1 of the article states "Actually, the P value gives the probability of observing a result if the null hypothesis is true, and there is no real effect of a treatment or difference between groups being tested. A P value of .05, for instance, means that there is only a 5 percent chance of getting the observed results if the null hypothesis is correct." Why is this statement wrong?