Chance News 12
Here is a Forsooth from the January 2006 issue of RSS News with a comment by the editor.
Alcohol is now 49% more affordable than it was in 1978
20 November 2005
From the doctors' perspective, early detection has other appealing features: ordering a test is quick and easy, and it has an established billing process--unlike health promotion counseling.
--H. Gilbert Welch
For a related story, see this page.
One thing almost all people "know" is that it is prudent to be screened for disease because screening adds to longevity. However, according to H. Gilbert Welch, a medical doctor at Dartmouth College, it isn't necessarily so.
His book, Should I Be Tested For Cancer? Maybe Not And Here's Why [University of California Press, 2004], focuses on screening, a particular form of testing, and deals exclusively with cancer as opposed to other afflictions. Screening "means the systematic examination of asymptomatic people to detect and treat disease." His contention is that screening for cancer is inefficient in that very few people who actually have the particular cancer are both discovered and then cured. Moreover, the false positives result in many problems of which the general public is not aware. On the other hand, false negatives of cancer screening are barely mentioned in his book "because we do not biopsy people with negative screening tests." That is, we can't distinguish between a false negative and a rapidly-growing cancer that emerges in between screenings.
In a nutshell, randomized clinical screening trials for those cancers discussed in the book--lung cancer, cervical cancer, breast cancer, prostate cancer and colon cancer-- have statistically shown that screening has provided very little benefit in terms of mortality. Welch argues that with the new, exquisite devices such as CAT scans, MRIs, etc., now available, it is possible to detect cancer earlier so that it seems that the 5-year survival rates have improved; victims are living longer not because the treatments are better but only because the diagnoses were made earlier. Further, these devices are detecting what he calls "pseudodiseases," cancers which will never develop into a cancer that will cause a problem. It follows that this detection of cancers which would never have been discovered years ago when there was a lack of technology, further inflates the 5-year survival rate, a figure of merit which he would like to see abolished because it is so misleading.
He argues that the side effects of a false positive are not to be taken lightly. Chapters 2 and 3 are entitled "You may have a cancer 'scare' and face an endless cycle of testing" and "You may receive unnecessary treatment," respectively. Certainly, in bygone days being told that you had cancer was frightening in the extreme. Perhaps not so much in these enlightened times, but a stay in a hospital, especially for an unnecessary procedure, can definitely lead to unpleasant side effects such as infection or worse.
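The arithmetic behind the false-positive problem is worth making explicit. When a cancer is rare, even a fairly accurate screening test produces mostly false alarms, a direct consequence of Bayes' rule. The sketch below uses invented prevalence and accuracy figures purely for illustration; they are not taken from Welch's book.

```python
# Base-rate arithmetic for screening. All numbers below are hypothetical,
# chosen only to illustrate the effect, not taken from Welch's book.
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test), via Bayes' rule."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# A cancer with 0.5% prevalence, screened at 90% sensitivity, 95% specificity:
ppv = positive_predictive_value(0.005, 0.90, 0.95)
print(f"{ppv:.1%}")  # about 8%: roughly 11 of every 12 positives are false alarms
```

Even with these generous accuracy figures, fewer than one in ten positive screens reflects a real cancer, which is the kind of fact Welch argues the public is not aware of.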
Welch points out that there are vested interests in the screening industry: doctors, hospitals, clinics, insurance companies and lay organizations which depend for their existence, financial and otherwise, on keeping Americans fully screened and uninformed about the problems connected with screening. For example, although it has been statistically shown via randomized clinical screening trials that mammography, an unpleasant procedure at best, is not useful for women under 50, the "mammography lobby," made up of manufacturers, radiologists, ideologues and feminists who considered the studies to be a male plot, went ballistic and sought to substitute emotion for science. The National Cancer Institute reconsidered and decided by 17 to 1 "in favor of recommending mammography to all women in their 40s."
The same sort of situation applies to prostate cancer. The conventional wisdom in the United States is that screening must be worthwhile because it seems self-evident, even though a careful look at the data points in the opposite direction. Watchful waiting, a much-used treatment for prostate cancer in Europe, is frequently ridiculed in this country by both laymen and urologists.
Welch fully realizes his thesis--screening for most cancers is, by and large, ineffective and/or harmful--will not go over well because it "flies in the face of medical dogma." His "book is not about what to do if you know you have cancer; it is about informing the decision of whether to look for cancer when you are well." This distinction has been lost on the people I have spoken to. The conventional wisdom that cancer screening must be desirable is a notion that, as far as I can tell from my experience when discussing it with others, is unchallengeable. To be even more cynical, any doctor who doesn't order a screening test for a patient who eventually gets cancer is likely to be sued successfully, so ingrained is the conventional wisdom among the general public and judges alike.
Submitted by Paul Alper
Is the human brain a Bayesian-reasoning machine?
Bayes rules, Jan 5th 2006, The Economist.
The lead article in this week's Science & Technology section of The Economist claims that Bayesian statistics may help to explain how the mind works and even argues that the human mind is a Bayesian one.
The Economist article begins with a summary of Bayes' ideas:
[Bayes's ideas] about the prediction of future events from one or two examples were popular for a while, and have never been fundamentally challenged. But they were eventually overwhelmed by those of the frequentist school, which developed the methods based on sampling from a large population that now dominate the field and are used to predict things as diverse as the outcomes of elections and preferences for chocolate bars.
But Bayes has recently started a comeback, among computer scientists designing software with human-like intelligence, such as internet search engines and automated 'help wizards'. In many situations, the true answer cannot be determined based on the limited data available, yet common sense suggests at least a reasonable guess. For example,
- how much longer will a 60-year-old man live?
- can you identify a three-dimensional object from a two-dimensional diagram?
- what is the total gross from a movie that has made $40m at the box-office, so far?
That has prompted some psychologists to ask if the human brain itself might be a Bayesian-reasoning machine. Accounts of human perception and memory suggest that these systems effectively approximate optimal statistical inference, correctly combining new data with an accurate probabilistic model of the environment. The Economist article suggests that
The Bayesian capacity to draw strong inferences from sparse data could be crucial to the way the mind perceives the world, plans actions, comprehends and learns language, reasons from correlation to causation, and even understands the goals and beliefs of other minds.
It goes on to summarise how Bayesian reasoning works:
The key to successful Bayesian reasoning is not in having an extensive, unbiased sample, which is the eternal worry of frequentists, but rather in having an appropriate “prior”, as it is known to the cognoscenti. This prior is an assumption about the way the world works (in essence, a hypothesis about reality) that can be expressed as a mathematical probability distribution of the frequency with which events of a particular magnitude happen.
It claims that frequentism is thus a more robust approach, but one not well suited to making decisions on the basis of limited information, which is something that people have to do all the time, and this is where Bayesian statistics excels.
The article discusses four prior distributions (Gaussian, Poisson, Erlang and power-law) and an experiment that the scientists, Thomas Griffiths at Brown and Joshua Tenenbaum at MIT, conducted by giving individual nuggets of information to each of the participants in their study and asking them to draw a general conclusion.
The experiment found that people could make accurate predictions about the duration or extent of everyday phenomena, given limited data. (The authors used publicly available data to identify the true prior distributions, shown below in brackets.) Participants were asked to estimate:
- the total box-office "gross" of a movie, given its takings so far, even though they were not told how long it had been on release (power-law)
- the number of lines in a poem, given how far into the poem a single line is (power-law)
- the time it takes to bake a cake, given how long it has already been in the oven (a complex and irregular distribution, according to the authors)
- the total length of the term that would be served by an American congressman, given how long he has already been in the House of Representatives (Erlang)
- an individual's lifespan given his current age (approx Gaussian)
- the run-time of a film (approx Gaussian)
- the amount of time spent on hold in a telephone queuing system (traditionally modelled as Poisson, but the experiment's results suggest a power-law distribution, which matches other recent research)
- the length of a Pharaoh's reign (approx Erlang)
People’s prediction functions took on very different shapes in domains characterized by Gaussian, power-law, or Erlang priors, just as expected under the ideal Bayesian analysis.
There were exceptions, such as an inability of the human brain to estimate the length of the reign of an Egyptian Pharaoh in the fourth millennium BC. People consistently overestimated this. The analysis showed that the prior they were applying was an Erlang distribution, which was the correct type. They just got the parameters wrong, presumably through lack of knowledge of political and medical conditions in fourth-millennium BC Egypt.
The authors claim that
everyday cognitive judgments follow the same optimal statistical principles as perception and memory [which are often explained as optimal statistical inferences, informed by accurate prior probabilities], and reveal a close correspondence between people’s implicit probabilistic models and the statistics of the world.
How the priors are themselves constructed in the mind has yet to be investigated in detail. Obviously they are learned by experience, but the exact process is not properly understood. The Economist article finishes with a cautionary note for both Bayesians and frequentists:
Things don't always go smoothly with a Bayesian approach. Sometimes the process goes further and further off-track, and the authors speculate that this might explain the emergence of superstitious behaviour, with an accidental correlation or two being misinterpreted by the brain as causal. A frequentist way of doing things would reduce the risk of that happening. But by the time the frequentist had enough data to draw a conclusion, he might already be dead.
- Bayes rules, The Economist, 5 January 2006. The full article is worth reading.
- Optimal predictions in everyday cognition, Thomas L. Griffiths, Department of Cognitive and Linguistic Sciences, Brown University, and Joshua B. Tenenbaum, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology.
- The paper shows the empirical distributions for each of the variables being estimated, along with more details about the experiment.
Submitted by John Gavin.
Superfluous Medical Studies
When a patient volunteers for a randomized clinical trial, he or she strikes an implicit bargain with the researcher. The patient may benefit, but even if he does not, others will. That is because the study will produce new knowledge. But if the question is already settled, then the patient's sacrifice and altruism are for naught.
Steven N. Goodman, Johns Hopkins University biostatistician
Clinical trials have been the bread and butter for many a statistician. A frequent tagline to such studies is "More research needs to be done," which implies further employment for statisticians. If the results are overall underwhelming, perhaps the procedure or medication works better on women, or Hispanics, or the elderly, or some other subgroup, and so the studies proliferate. David Brown's article in the Washington Post of January 2, 2006 looks at several instances where, on the contrary, the evidence is so convincing that no more studies need or should be done. As he puts it, "What part of 'yes' don't doctors understand?" Specifically, he cites the use of aprotinin in heart surgery, SIDS (sudden infant death syndrome) prevention and the use of streptokinase to treat heart attacks.
According to Brown, there have been 64 studies of aprotinin since 1987 but by the 12th in 1992 it was clear that aprotinin reduced bleeding. "On average, each new paper listed only one-fifth of the previous studies in its references." Although "Being given a placebo long after aprotinin's value had been proved probably did not cost lives, the same cannot be said of medicine's failure to pay attention to studies of infant sleep position."
A child health expert alleges that "if researchers had pooled the results of the oldest studies [40 studies back to 1965] and analyzed them, they might have gotten a big hint by 1970 that putting babies to sleep on their stomachs raised the risk of SIDS" sevenfold. By the 1990s, "at least 50,000 excess [SIDS] deaths were attributable to harmful health advice." With regard to streptokinase, it lowered death rates by 25%; "that conclusion and the percentage, did not budge while 34,542 more patients were enrolled in 25 more trials of streptokinase over the next 15 years," from 1973 to 1988.
In order to rectify this excessive zeal on the part of researchers, "The Lancet, a British journal, announced last summer that it will require that authors submitting papers show they performed a meta-analysis of previous research or consulted an existing one." Goodman claims that "In 10 years we are going to look back on this time, and we won't believe this wasn't done as a matter of course."
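The pooling that the Lancet now asks for can be as simple as a fixed-effect, inverse-variance meta-analysis: each study's effect estimate is weighted by the reciprocal of its squared standard error, so large, precise trials dominate the pooled result. The numbers below are invented for illustration; they are not from any of the trials Brown discusses.

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance weighted pooling of study effect estimates."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three invented log-risk-ratio estimates from small trials of one treatment:
effects = [-0.35, -0.22, -0.30]
std_errors = [0.15, 0.20, 0.10]
pooled, se = fixed_effect_meta(effects, std_errors)
print(f"pooled effect {pooled:.3f}, 95% CI +/- {1.96 * se:.3f}")
```

The point of Brown's article is exactly this calculation: had anyone run it on the accumulating streptokinase or sleep-position studies, the pooled confidence interval would have excluded "no effect" decades before the trials stopped.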
Submitted by Paul Alper
Are We Descended from Cannibals?
Are We Descended from Cannibals? Michael Balter, ScienceNOW Daily News,
6 January 2006.
A study published 2 years ago in Science (25 April 2003, p. 640), led by John Collinge of University College London (UCL), claimed that modern humans harbor a gene that allowed our ancestors to engage in cannibalism. The gene, called PRNP, codes for prions, thought to be responsible for several neurodegenerative diseases, including Creutzfeldt-Jakob Disease (CJD) and kuru. Individuals with certain variations in this gene are more resistant to those diseases.
The claim was based on a sample of 1,000 people from populations around the world and suggested that variations on this gene had survived for 500,000 years. The researchers hypothesized that the gene survived due to widespread cannibalistic practices that had made early humans susceptible to prion diseases.
But a recent second paper in Genome Research, by Jaume Bertranpetit and his coworkers at Pompeu Fabra University in Barcelona, contradicts this result. It rejects the model of selection and claims that the Science paper was statistically skewed because its study ignored low-frequency variations of the gene, an error known as ascertainment bias. This second paper used a sample of 174 people from around the world.
The lead author of the Science paper, Simon Mead of UCL, stands by his original claim and argues that the paper's conclusions were based on several different lines of evidence that trump criticisms of ascertainment bias.
- Is a sample size of 1,000 people sufficient to extrapolate to the world population over the last 500,000 years?
- Is the much smaller sample size of 174 in the second paper justifiable?
Incidentally, the Wikipedia link above to prions warns 'This article has been identified as possibly containing errors', referring to a study in the journal Nature comparing Wikipedia to Britannica. This comparison was the subject of a previous Chance News item.
Submitted by John Gavin.
Data Mining 101: Finding Subversives with Amazon Wishlists
Data Mining 101: Finding Subversives with Amazon Wishlists, Tom Owad, applefritter.com, January 4, 2006.
This article explains a novel source for data mining, the information contained in the popular Amazon wishlists, and discusses the political implications of its use. It is not written from a statistical point of view but it offers an interesting case study in data-mining and exploratory data analysis (EDA).
The author uses readily-available, open-source software to access over 260,000 wishlists from U.S. citizens. He says
All the tools used in this project are standard and free. The services, likewise, are all free. The technical skills required to implement this project are well within the abilities of anybody who has done any programming.
Owad suggests that based on this information, it is possible to compile a list of people who expressed an interest in certain books. The author offers a sample of the list he compiled and invites everyone to make up their own list and explore the data. As an example he asks
What books are most dangerous? Send it to the FBI. I'm sure they'll appreciate your help in fighting terrorism.
Owad offers some examples of 'subversive' authors, such as Michael Moore (the fringe left) or Rush Limbaugh (the fringe right).
As part of his EDA, he impressively converted city and state information on each person to latitude and longitude coordinates, using the free on-line Ontok Geocoder service, and then mapped those locations using Google's Maps API. For example, you could see the locations of all people who expressed an interest in a certain book and live in a certain city, or even on a certain street. Two interactive examples are offered which plot all of the locations on a satellite image of the United States that can be zoomed in to house level.
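The core of the pipeline Owad describes is a simple join: map each wishlist's city/state string to coordinates, then filter by book. The sketch below is a toy stand-in; the lookup table and wishlist records are invented placeholders, whereas Owad queried the Ontok geocoding service and plotted the results with Google's Maps API.

```python
# Hypothetical (city, state) -> (latitude, longitude) table; Owad used the
# Ontok Geocoder service for this step instead.
GEOCODE = {
    ("Portland", "OR"): (45.52, -122.68),
    ("Austin", "TX"): (30.27, -97.74),
}

wishlists = [  # invented sample records standing in for scraped wishlist data
    {"name": "A. Reader", "city": "Portland", "state": "OR", "books": ["1984"]},
    {"name": "B. Reader", "city": "Austin", "state": "TX", "books": ["Slaughterhouse-Five"]},
]

def locate_readers(book, records, geocode):
    """Return (name, lat, lon) for every record listing the given book."""
    hits = []
    for r in records:
        coords = geocode.get((r["city"], r["state"]))
        if coords and book in r["books"]:
            hits.append((r["name"], *coords))
    return hits

print(locate_readers("1984", wishlists, GEOCODE))  # [('A. Reader', 45.52, -122.68)]
```

The resulting coordinate list is exactly what a mapping API needs to drop markers on a satellite image, which is why so little code separates a public wishlist from a dot on a map.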
There are many comments on this article posted on the same webpage.
Submitted by John Gavin.