Chance News 75

Quotations

"I suspect the amygdala did not evolve to store odds ratios and heterogeneity P scores, but when an adverse event has prompted me to review the literature, I come away with a clearer understanding. There’s nothing like a baby free-floating in the abdomen to drive home the lessons from a prospective study of risk factors for uterine rupture. And that clarity of understanding will serve the next at-risk patient I encounter."

Alison M. Stuebe

As quoted in her article posted on the New England Journal of Medicine Health Policy and Reform blog.

Submitted by Steve Simon.

Forsooth

Discussion of Ariely

A post in Chance News 74 described Dan Ariely's 2008 book Predictably Irrational: The Hidden Forces That Shape Our Decisions as a great summer read [1], while pointing out that it was not written as an academic work. Paul Alper wrote to say that he had occasion to review the book in the context of some related work, and had identified some statistical concerns. As Paul writes:

Ariely enjoys concocting experiments to demonstrate this irrationality. For example, he finds that satisfaction with a product depends on the price paid for it--Bayer aspirin versus the identical generic, say. Or, the enticing but utterly misleading “Free gift” will alter a decision. Reviewers loved his book. Nonetheless, there are some serious shortcomings.

  • He invariably gives the average value of one group (e.g., satisfaction of Bayer aspirin users) compared to the other group (e.g., satisfaction of generic aspirin users), but he almost never indicates the variability. Averages alone are meaningless.
  • Almost never does he state how many subjects are involved in each arm of a study.
  • Almost all of his samples are convenience ones, rather than random samples.
  • Almost all of his samples are MIT students, but his implicit inference is to the world at large.
  • His examples of predictable irrationality appear unfailingly successful, leading me to suspect a “file-drawer” issue--experiments which showed nothing in particular, or the negative of what he theorizes, are put aside and not counted.
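
To make Paul's first point concrete, here is a small numerical sketch. The numbers are invented for illustration (they are not from Ariely's experiments): two groups whose average satisfaction scores sit close together, but where one group's scores are far more spread out, so the bare difference in averages tells a reader very little on its own.

```python
# Hypothetical satisfaction scores (1-10 scale); invented numbers,
# not taken from Ariely's experiments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

brand   = rng.normal(loc=7.0, scale=0.6, size=30)   # e.g., brand-name aspirin users
generic = rng.normal(loc=6.8, scale=2.5, size=30)   # e.g., generic aspirin users

print(f"averages:  {brand.mean():.2f} vs {generic.mean():.2f}")
print(f"std devs:  {brand.std(ddof=1):.2f} vs {generic.std(ddof=1):.2f}")
print(f"n per arm: {len(brand)} vs {len(generic)}")

# Welch's t-test: with the spreads and sample sizes in view, we can ask
# whether the gap in averages is larger than chance variation would produce.
t, p = stats.ttest_ind(brand, generic, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```

Reported alone, the two averages suggest a clean difference; with the spreads and sample sizes alongside, a reader can judge whether that gap is anything more than noise.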

The earlier post also noted that Ariely has a new book, The Upside of Irrationality: The Unexpected Benefits of Defying Logic at Work and at Home. This was reviewed by the New York Times; you can find a link to the review and read Ariely's reaction on his blog here. He notes that he had consciously adopted a more conversational style in the book, and that this had drawn some criticism from the Times. He invited readers to submit their own opinions on this. Readers come down on both sides, and it is interesting to read the comments. One statistically minded reader wrote:

Of course your [sic] irrationally asking for personal thoughts in comments instead of a (slightly) more accurate poll or a (very) accurate scientific survey.

Submitted by Bill Peterson, based on a message from Paul Alper

The perils of genetic testing

How Bright Promise in Cancer Testing Fell Apart by Gina Kolata, The New York Times, July 7, 2011.

We have seen a lot of advances in genetics recently, and there has been hope that these would translate into better clinical care. But making the bridge from the laboratory to clinical practice has been much more difficult than expected. A program at Duke, for example, was supposed to identify weak spots in a cancer's genome so that drugs could be targeted to those weak spots rather than just trying a range of different drugs in sequence.

But the research at Duke turned out to be wrong. Its gene-based tests proved worthless, and the research behind them was discredited. Ms. Jacobs died a few months after treatment, and her husband and other patients’ relatives are suing Duke.

The problems at Duke are not an isolated problem.

The Duke case came right after two other claims that gave medical researchers pause. Like the Duke case, they used complex analyses to detect patterns of genes or cell proteins. But these were tests that were supposed to find ovarian cancer in patients’ blood. One, OvaSure, was developed by a Yale scientist, Dr. Gil G. Mor, licensed by the university and sold to patients before it was found to be useless.

The other, OvaCheck, was developed by a company, Correlogic, with contributions from scientists from the National Cancer Institute and the Food and Drug Administration. Major commercial labs licensed it and were about to start using it before two statisticians from M. D. Anderson discovered and publicized its faults.

The two statisticians, Keith Baggerly and Kevin Coombes, have made a career of debunking medical claims. In 2004, they (along with another M.D. Anderson statistician, Jeffrey Morris) published a paper (http://bioinformatics.oxfordjournals.org/content/20/5/777.long) that demonstrated serious flaws in the use of proteomic mass spectra to distinguish early ovarian cancer from normal tissue. The complex method proposed by Petricoin et al. in 2002 (http://www.ncbi.nlm.nih.gov/pubmed/11867112) was apparently an artifact of equipment drift that could have been prevented if the original researchers had taken simple steps like randomizing the order in which cancer and normal tissues were analyzed.
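
The run-order point is easy to see in a small simulation. The sketch below uses invented numbers, not the Petricoin data: the measurements contain no biological difference at all, only a slow upward instrument drift, yet analyzing all cancer specimens before all normal ones can manufacture an apparently significant group difference, while randomizing the run order spreads the drift across both groups.

```python
# Simulated intensities with NO true group difference, only slow instrument
# drift over the course of the run. Invented numbers, not the Petricoin data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40                                    # specimens per group
slots = np.arange(2 * n)                  # position in the run order
drift = 0.02 * slots                      # instrument signal creeps upward
noise = rng.normal(0.0, 0.5, size=2 * n)
measured = drift + noise                  # what the machine reports per slot

# Design 1 (non-randomized): all cancer specimens run first, all normal
# specimens afterward, so drift piles up in one group.
cancer, normal = measured[:n], measured[n:]
print("blocked run order:    p =", round(stats.ttest_ind(cancer, normal).pvalue, 4))

# Design 2: randomize which specimen occupies each run slot, so drift
# affects both groups about equally.
labels = np.array(["cancer"] * n + ["normal"] * n)
rng.shuffle(labels)
print("randomized run order: p =", round(
    stats.ttest_ind(measured[labels == "cancer"],
                    measured[labels == "normal"]).pvalue, 4))
```

Under the blocked design the drift alone separates the groups; under the randomized design whatever group difference remains is ordinary sampling noise.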

Baggerly and Coombes had also found problems with the data supporting the Duke test.

Dr. Baggerly and Dr. Coombes found errors almost immediately. Some seemed careless — moving a row or a column over by one in a giant spreadsheet — while others seemed inexplicable. The Duke team shrugged them off as "clerical errors."
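
For readers wondering how a one-row shift could matter, here is a tiny made-up illustration (not Duke's actual spreadsheets): once a single column slips by one row, nearly every sample is paired with a neighbor's measurement, and any analysis built on the sheet runs on scrambled data.

```python
# Made-up miniature "spreadsheet": five samples, their drug-sensitivity
# labels, and an expression measurement that tracks sensitivity.
samples     = ["s1", "s2", "s3", "s4", "s5"]
sensitivity = ["sensitive", "sensitive", "resistant", "resistant", "resistant"]
expression  = [2.1, 1.9, 5.3, 5.0, 5.4]

# Correct alignment: every value sits in the row of the sample it came from.
for row in zip(samples, sensitivity, expression):
    print(row)

# Off-by-one: the expression column slips down one row (a blank lands at the
# top, the last value falls off), so most samples now carry a neighbor's
# measurement and the apparent label/expression relationship is scrambled.
shifted = [None] + expression[:-1]
print()
for row in zip(samples, sensitivity, shifted):
    print(row)
```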

Even though Baggerly and Coombes published a critique in a statistics journal, the Duke team continued to promote their genetic test. In the end, it was something else entirely that led the broader research community to take the problems with the Duke test seriously.

The situation finally grabbed the cancer world’s attention last July, not because of the efforts of Dr. Baggerly and Dr. Coombes, but because a trade publication, The Cancer Letter, reported that the lead researcher, Dr. Potti, had falsified parts of his résumé. He claimed, among other things, that he had been a Rhodes scholar.

Researchers in this area have a new-found sense of humility.

With such huge data sets and complicated analyses, researchers can no longer trust their hunches that a result does — or does not — make sense.

Submitted by Steve Simon