Chance News (September-October 2005)

From ChanceWiki

Revision as of 20:53, 2 September 2005

== Why Medical Studies are Often Wrong ==

Why medical studies are often wrong; John Allen Paulos explains how bad math haunts health research <BR>
Who's Counting, ABCNews.com, 7 August 2005

In this installment of his online column, Paulos discusses a recent JAMA article about contradictions in health research (Ioannidis, J.P.A. Contradicted and initially stronger effects in highly cited clinical research. JAMA, 14 July 2005; 294:218-228). You can find an abstract of the study [http://jama.ama-assn.org/cgi/content/abstract/294/2/218 here].

The JAMA article followed up on 45 studies that appeared in JAMA, the New England Journal of Medicine, and the Lancet over the years 1990-2003. All led to widely publicized claims of positive effects for some medical treatment. Hormone replacement therapy for post-menopausal women is a prominent example. For seven of these studies, later research contradicted the original claims; for seven others, later research found the benefits to be substantially smaller than originally stated. Popular news stories summarized these results by saying that one third of medical studies are wrong (14 of the 45, or about 31 percent; for example, see this [http://www.livescience.com/othernews/ap_050714_medical_studies.html Associated Press report]).

Paulos cites a number of reasons for the problems. A single study is rarely definitive, but headlines and soundbites usually don't wait for scientific consensus to develop. People fail to appreciate differences in the quality of research. Experiments are stronger than observational studies; in particular, surveys that depend on patients' self-reporting of lifestyle habits can obviously be unreliable. Finally, he discusses some conflicting psychological responses. Wishful thinking can make people overly ready to believe in a new treatment. On the other side of the coin, in what he calls the "tyranny of the anecdote," people also overreact to stories of negative side-effects, even though such incidents may be isolated.

DISCUSSION QUESTION

On the last point, Paulos writes:<blockquote>
A distinction from statistics is marginally relevant. We're said to commit a Type I error when we reject a truth and a Type II error when we accept a falsehood. In listening to news reports people often have an inclination to suspend their initial disbelief in order to be cheered and thereby risk making a Type II error. In evaluating medical claims, however, researchers generally have an opposite inclination to suspend their initial belief in order not to be beguiled and thereby risk making a Type I error.
</blockquote>

Do you understand the distinction being drawn? To what hypotheses does this discussion refer?
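To make the Type I/Type II distinction concrete, here is a small simulation sketch in Python (not from Paulos's column; the effect size, sample size, and significance level are illustrative assumptions). Under the null hypothesis that a treatment has no effect, a Type I error rejects a true null (declaring an ineffective treatment effective), while a Type II error fails to reject a false null (missing a real effect).

```python
import random
import statistics

def one_trial(effect, n=50, rng=None):
    """Simulate a two-group trial with unit-variance normal outcomes and
    run a one-sided z-test of H0: no treatment effect, at alpha = 0.05."""
    rng = rng or random.Random()
    control = [rng.gauss(0, 1) for _ in range(n)]
    treated = [rng.gauss(effect, 1) for _ in range(n)]
    diff = statistics.fmean(treated) - statistics.fmean(control)
    se = (2 / n) ** 0.5          # standard error of the difference in means
    return diff / se > 1.645     # True means we reject H0

def rejection_rate(effect, reps=2000, seed=0):
    """Fraction of simulated trials in which H0 is rejected."""
    rng = random.Random(seed)
    return sum(one_trial(effect, rng=rng) for _ in range(reps)) / reps

# With no real effect, rejections are Type I errors (rate should be near 0.05).
type1 = rejection_rate(effect=0.0)
# With a real effect, failures to reject are Type II errors.
power = rejection_rate(effect=0.5)
print(f"Type I error rate  ~ {type1:.3f}")
print(f"Type II error rate ~ {1 - power:.3f}")
```

In Paulos's framing, a credulous news consumer in effect lowers the evidence bar and risks the Type I side (believing a claimed effect that isn't there), while a skeptical researcher raises the bar and accepts a higher Type II risk.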

== Do Car Seats Really Work? ==

Freakonomics: the seat-belt solution <BR>
New York Times, 10 July 2005 <BR>
Stephen J. Dubner and Steven D. Levitt

Dubner and Levitt are the authors of Freakonomics: A Rogue Economist Explains the Hidden Side of Everything (HarperCollins, 2005), which raises a host of provocative questions, including "Why do drug dealers still live with their mothers?" and "What do schoolteachers and sumo wrestlers have in common?"

In the present article, the authors challenge the conventional wisdom on car seats.