Chance News (September-October 2005)

Why Medical Studies are Often Wrong

Why medical studies are often wrong; John Allen Paulos explains how bad math haunts health research
Who's Counting, ABCNews.com, 7 August 2005

In this installment of his online column, Paulos discusses a recent JAMA article about contradictions in health research (Ioannidis, J.P.A., "Contradicted and initially stronger effects in highly cited clinical research," JAMA, 14 July 2005; 294:218-228). You can find an abstract of the study here.

The JAMA article followed up on 45 studies that appeared in JAMA, the New England Journal of Medicine, and the Lancet over the years 1990-2003. All led to widely publicized claims of positive effects for some medical treatment. Hormone replacement therapy for post-menopausal women is a prominent example. For seven of these studies, later research contradicted the original claims; for seven others, later research found the benefits to be substantially smaller than originally stated. Popular news accounts summarized these results by saying one third of medical studies are wrong (for example, see this Associated Press report)!
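Where does the one-third figure come from? Combining the contradicted studies with those whose effects shrank gives

\[
\frac{7 + 7}{45} \;=\; \frac{14}{45} \;\approx\; 0.31,
\]

or roughly one third of the 45 highly cited studies examined.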

Paulos cites a number of reasons for the problems. A single study is rarely definitive, but headlines and soundbites usually don't wait for scientific consensus to develop. People fail to appreciate differences in the quality of research: randomized experiments provide stronger evidence than observational studies, and surveys that depend on patients' self-reporting of lifestyle habits can be especially unreliable. These points echo responses made by the medical journals themselves. Finally, Paulos discusses some conflicting psychological responses to medical news. People can be overly eager to believe that a new treatment will work. On the other side of the coin, in what he calls the "tyranny of the anecdote," people also overreact to stories of negative side effects, even though such incidents may be isolated.

DISCUSSION QUESTIONS:

(1) On the last point, Paulos writes:

A distinction from statistics is marginally relevant. We're said to commit a Type I error when we reject a truth and a Type II error when we accept a falsehood. In listening to news reports people often have an inclination to suspend their initial disbelief in order to be cheered and thereby risk making a Type II error. In evaluating medical claims, however, researchers generally have an opposite inclination to suspend their initial belief in order not to be beguiled and thereby risk making a Type I error.

Do you understand the distinction being drawn? To what hypotheses does this discussion refer? (A small simulation sketch appears after these questions.)

(2) Should we wait for a subsequent analysis to see if the one-third figure stands up?
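On question (1): in Paulos's framing, the hypothesis at stake is the null hypothesis that a treatment has no effect. Rejecting it when it is true is a Type I error; failing to reject it when it is false is a Type II error. The following minimal simulation sketch (in Python; the effect size of 0.3, sample size of 30, and 5% significance level are illustrative assumptions, not values from the article) estimates both error rates for a two-sided z-test:

```python
import math
import random

def z_test_rejects(sample, mu0=0.0, sigma=1.0, z_crit=1.96):
    """Two-sided z-test of H0: population mean = mu0, with known sigma.
    z_crit = 1.96 is the critical value for alpha = 0.05.
    Returns True when H0 is rejected."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return abs(z) > z_crit

random.seed(1)
trials, n = 10_000, 30

# H0 true (the mean really is 0): any rejection is a Type I error.
type1 = sum(z_test_rejects([random.gauss(0.0, 1.0) for _ in range(n)])
            for _ in range(trials)) / trials

# H0 false (the true mean is 0.3): any failure to reject is a Type II error.
type2 = sum(not z_test_rejects([random.gauss(0.3, 1.0) for _ in range(n)])
            for _ in range(trials)) / trials

print(f"Estimated Type I error rate:  {type1:.3f}  (should be near 0.05)")
print(f"Estimated Type II error rate: {type2:.3f}  (depends on effect size and n)")
```

With these choices the Type I rate should land near the nominal 5%, while the Type II rate is much larger, which illustrates Paulos's point that the eager reader and the cautious researcher are exposed to different risks.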
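On question (2): one way to gauge whether the one-third figure might stand up is to treat the 14 problematic results out of 45 as a binomial sample and attach a confidence interval to the underlying proportion. A short sketch follows; the choice of the Wilson score interval is ours, not the article's:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 for ~95%)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

low, high = wilson_interval(14, 45)
print(f"14/45 = {14/45:.3f}; 95% Wilson interval: ({low:.3f}, {high:.3f})")
```

The resulting interval runs from roughly 0.20 to 0.46, so a follow-up tally based on a comparably small sample could easily land well away from one third.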

Do Car Seats Really Work?

Freakonomics: the seat-belt solution
New York Times, 10 July 2005
Stephen J. Dubner and Steven D. Levitt

Dubner and Levitt are the authors of Freakonomics: A Rogue Economist Explains the Hidden Side of Everything (HarperCollins, 2005), which raises a host of provocative questions, including "Why do drug dealers still live with their mothers?" and "What do schoolteachers and sumo wrestlers have in common?"

In the present article, the authors challenge the conventional wisdom on car seats.