Sandbox: Difference between revisions

From ChanceWiki
==Odds are, it's wrong==
[http://www.sciencenews.org/view/feature/id/57091/title/Odds_Are,_Its_Wrong Odds are, it's wrong]<br>
by Tom Siegfried, ''Science News'', 27 March 2010


This is a long and provocative essay on the limitations of significance testing in scientific research. The main themes are that it is easy to do such tests incorrectly and that, even when they are done correctly, the results are subject to widespread misinterpretation.
For example, the article cites the following misinterpretation of significance at the 5% level:  “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”  Indeed, versions of this are all too often seen in print!
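To see why the quoted claim is a misinterpretation, one can simulate a world where the null hypothesis is true: even when two groups are drawn from exactly the same distribution, a test at the 5% level declares a "significant" difference about 5% of the time. A minimal sketch in Python (the sample size and the two-sided z-test are illustrative assumptions, not details from the article):

```python
import math
import random

def significant_difference(n=50, z_crit=1.96):
    """Draw two samples of size n from the SAME normal distribution
    (so the null hypothesis is true) and run a two-sided z-test on
    the difference in means, with known sigma = 1."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    return abs(z) > z_crit

random.seed(1)
trials = 10_000
false_alarms = sum(significant_difference() for _ in range(trials))
print(false_alarms / trials)  # close to 0.05, despite no real effect
```

The rejection rate hovers near 5% precisely because that is what "significance at the 5% level" controls: the chance of a false alarm when there is no effect, not the probability that an observed effect is real.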
Also discussed are the multiple comparisons problem, the challenges of interpreting meta-analyses, and the disagreements between frequentists and Bayesians.
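The multiple comparisons problem is easy to quantify: if each of k independent tests is run at the 5% level, the chance of at least one false positive grows quickly with k. A quick illustration (the choice of k values is ours, not the article's):

```python
# Familywise error rate for k independent tests, each at alpha = 0.05:
# P(at least one false positive) = 1 - (1 - alpha)^k
alpha = 0.05
for k in (1, 10, 20, 100):
    fwer = 1 - (1 - alpha) ** k
    print(k, round(fwer, 3))
```

With 20 comparisons the chance of at least one spurious "discovery" is already about 64%, which is why unadjusted testing of many hypotheses so often produces findings that fail to replicate.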
 
Submitted by Bill Peterson, based on a suggestion from Scott Pardee
 
'''Discussion Questions'''
 
1.  [suggested by Bill Jefferys] Box 2, paragraph 1 of the article states "Actually, the P value gives the probability of observing a result if the null hypothesis is true, and there is no real effect of a treatment or difference between groups being tested. A P value of .05, for instance, means that there is only a 5 percent chance of getting the observed results if the null hypothesis is correct." Why is this statement wrong?
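As background for this question, recall that the textbook definition of a P value concerns results ''at least as extreme as'' the one observed, not the exact observed result. The distinction can be made concrete with a hypothetical example (60 heads in 100 flips of a fair coin; the numbers are ours, chosen only for illustration):

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n trials."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n, observed = 100, 60  # hypothetical: 60 heads in 100 flips of a fair coin

# Probability of exactly the observed count under the null (fair coin)
p_exact = binom_pmf(observed, n)

# Two-sided p-value: probability of a count AT LEAST as far from 50
# as the observed count, in either direction
p_value = sum(binom_pmf(k, n) for k in range(n + 1)
              if abs(k - n / 2) >= abs(observed - n / 2))

print(round(p_exact, 4), round(p_value, 4))
```

The tail probability is several times larger than the probability of the exact outcome, so the two quantities should not be conflated.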

Revision as of 16:44, 4 May 2010
