Chance News 56
Quotations
I can calculate the motion of heavenly bodies, but not the madness of people.
-- Isaac Newton, after losing a fortune in the South Sea Company bubble of 1720
Trying is the first step towards failure. -- Homer Simpson
Forsooths
This Forsooth is from the October 2009 issue of RSS News.
Of course in those days we worked on the assumption that everything was normally distributed and we have seen in the last few months that there is no such thing as a normal distribution.
Scientific Computing World
February/March 2009
You can see the context of this comment here.
University of North Dakota researchers found that pilots who ate the fattiest foods such as butter or gravy had the quickest response times in mental tests and made fewer mistakes when flying in tricky cloud conditions.
According to a New Yorker (October 12, 2009) review [1] of Matthew Stewart's The Management Myth: Why the Experts Keep Getting It Wrong, Stewart tells a story about how "his boss taught his twenty-something[-old] trainees ... how to conduct a 'two-handed regression'":
"When a scatter plot failed to show the signifiant correlation between two variables that we all knew was there, he would place a pair of meaty hands over the offending clouds of data points and thereby reveal the straight line hiding from conventional mathematics." Management consulting isn't a science, Stewart says; it's a party trick.
Minimizing the number of coins jingling in your pocket
Do We Need a 37-Cent Coin? Steven D. Levitt, October 6, 2009, Freakonomics Blog, The New York Times.
The current system of coins in the United States is inefficient. Patrick DeJarnette studied this problem and his work was highlighted in the Freakonomics blog. Dr. DeJarnette makes two assumptions.
1. Some combination of coins must reach every integer value in [0,99].
2. The probability of a transaction resulting in change of value v is uniform on [0,99].
Under this system, the average number of coins that you would receive in change during a random transaction would be 4.7. The system that would work better is rather bizarre.
The most efficient systems? The penny, 3-cent piece, 11-cent piece, 37-cent piece, and (1,3,11,38) are tied at 4.10 coins per transaction.
Such a set of coins would be evocative of the monetary system in the Harry Potter books.
The article goes on to discuss systems where the coins are more conveniently priced and which single change in coins would lead to the greatest savings.
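The 4.10 figure can be checked with a short dynamic program. Below is a minimal Python sketch (not DeJarnette's code), assuming change is always made with the fewest possible coins and that every value in [0,99] is equally likely:

 # Average number of coins per transaction for a set of denominations,
 # with change always made using the fewest coins (dynamic programming)
 # and change values uniform on [0, 99].
 def avg_coins(denoms):
     INF = float("inf")
     best = [0] + [INF] * 99          # best[v] = fewest coins summing to v
     for v in range(1, 100):
         for d in denoms:
             if d <= v:
                 best[v] = min(best[v], best[v - d] + 1)
     return sum(best) / 100
 
 print(avg_coins([1, 5, 10, 25]))     # current US denominations
 print(avg_coins([1, 3, 11, 37]))     # one of the tied optimal sets: 4.10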
Submitted by Steve Simon
Questions
1. Minimizing the number of coins received in change is not the only criterion for a set of coin denominations. What other criteria make sense?
2. Is it logical to assume a uniform distribution in this problem?
3. What coin could be added to the current mix of coins to minimize the number of coins given in change?
Failure to disclose
“Data Call Into Question HIV Study Results”
by Gautam Naik and Mark Schoofs, The Wall Street Journal, October 10, 2009
Researchers from the U.S. Army and Thailand failed to disclose that some results of a potential HIV vaccine trial were not statistically significant, although they had this information when they announced the discovery.
"We thought very hard about how to provide the clearest, most honest message," [one researcher] said. "We stand by the fact that this is a vaccine with a modest protective effect." He called the trial results "complex."
The first analysis, a “modified intent to treat” analysis, included “virtually everyone who enrolled in the study, regardless of whether they ended up getting the full course of the vaccine. …. By this measure, the vaccine tested in Thailand reduced by 31% the chance of infection with HIV ….”
New infections occurred in 51 of the 8,197 people who got the vaccine, compared with 74 of the 8,198 volunteers who got placebo shots. Statistical calculations showed there was a 3.9% probability that chance accounted for the difference. In drug and vaccine trials, anything above a 5% probability of a chance result is deemed statistically insignificant.
The second analysis, a “per protocol” analysis, included only the “study participants who got the full regimen of vaccine shots at the right time.” Apparently, for this group, in which 86 people were infected, there is a “16% chance the study results were a fluke.” By this measure, the vaccine reduced the chance of infection with HIV by 26%.
The article’s authors comment:
It isn't clear why the vaccine was seemingly ineffective among participants who followed the guidelines to the letter.
Submitted by Margaret Cibes
More on AIDS Vaccine
“Hardly ever believe what you read” is a maxim that will stand you in good stead. Googling “aids vaccine Thailand” will get 248,000 hits, most of which are misleading. In essence, the URLs say that for the first time an effective vaccine against AIDS has been manufactured. But that was last month. Reality has now set in.
The following chart found in the Wall Street Journal of October 9, 2009 paints a different picture. “New infections occurred in 51 of the 8,197 people who got the vaccine, compared with 74 of the 8,198 volunteers who got placebo shots.” Note that the “125” infections represent “51 + 74.”
The announcement on September 24, 2009 indicated that the p-value is 3.9%. A Minitab run shows that, in fact, the p-value is higher (i.e., worse) as indicated by the Fisher exact test. However, the .048 is still under the mystical .05:
Test and CI for Two Proportions
Sample      X      N  Sample p
1          51   8197  0.006222
2          74   8198  0.009027
Difference = p (1) - p (2)
Estimate for difference: -0.00280480
95% CI for difference: (-0.00546736, -0.000142249)
Test for difference = 0 (vs not = 0): Z = -2.06 P-Value = 0.039
Fisher's exact test: P-Value = 0.048
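For readers without Minitab, the same numbers can be reproduced in Python. A minimal sketch using scipy (a substitute for Minitab, not the tool used above):

 # Two-proportion z-test and Fisher's exact test for the vaccine data.
 from math import sqrt
 from scipy.stats import fisher_exact, norm
 
 x1, n1 = 51, 8197                    # vaccine: infections, participants
 x2, n2 = 74, 8198                    # placebo: infections, participants
 
 # Pooled two-proportion z-test
 p1, p2, pool = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
 z = (p1 - p2) / sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
 print(round(z, 2), round(2 * norm.sf(abs(z)), 3))   # -2.06, 0.039
 
 # Fisher's exact test on the 2x2 table
 odds, p = fisher_exact([[x1, n1 - x1], [x2, n2 - x2]])
 print(round(p, 3))                   # 0.048, as in the Minitab run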
“Efficacy” of 31.2% seems to be determined from (74 - 51)/74 ≈ .311.
In the final column of the chart--“Strictly adheres to trial design”--appears the unreleased “per protocol” version. According to Science Magazine:
- The second analysis is called “per protocol” and adheres strictly to how the trial was designed by only including the study participants who got the full regimen of vaccine shots at the right time. Because it excludes study participants who didn't get the full vaccine regimen, it usually provides corroboration to the looser “intent to treat” findings.
The article doesn’t say what the breakdown of the 86 infections is. Nevertheless, it indicates that the p-value of 16% puts a damper on enthusiasm for the vaccine.
- “The press conference was not a scholarly, rigorously honest presentation,” said one leading HIV/AIDS investigator, who like others asked that his name not be used. “It doesn’t meet the standards that have been set for other trials, and it doesn’t fully present the borderline results. It’s wrong.”
Discussion
1. “Strictly adheres to trial design” has an efficacy of 26.2% and 86 infections. Show that this leads to approximately 36 infections in the vaccine group and 50 in the placebo group.
2. The articles fail to tell us the number of participants in the “per protocol” situation. However, use the 36 and 50 cited above and show via a statistics package such as Minitab (or the Python sketch below) that the Fisher exact test comes up with about 16% for the p-value, regardless of whether the sample sizes are the original ones or 4000 each, 5000 each, etc.
3. The “researchers with the U.S. Army who helped run the study, strongly objected to the assertion that they gave the data a positive spin… The debate over the way the results were presented will have no immediate practical impact because even under the most optimistic assessment, the vaccine offered too little protection to be a serious candidate for widespread use.” If this is so, why was there so much positive publicity in September?
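For question 2, here is a minimal Python sketch in the same spirit (scipy in place of Minitab; the alternative group sizes are hypothetical):

 # Fisher's exact test for the per-protocol breakdown of 36 vs. 50
 # infections, under several assumed group sizes.
 from scipy.stats import fisher_exact
 
 for n in (8197, 5000, 4000):         # hypothetical participants per group
     odds, p = fisher_exact([[36, n - 36],    # vaccine: infected / not
                             [50, n - 50]])   # placebo: infected / not
     print(n, round(p, 2))            # roughly 16% in each case, per the article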
Submitted by Paul Alper
Carrying a gun increases risk of getting shot and killed
New Scientist
October 6, 2009
Ewen Callaway
In this article we read:
People who carry guns are far likelier to get shot – and killed – than those who are unarmed, a study of shooting victims in Philadelphia, Pennsylvania, has found. It would be impractical – not to say unethical – to randomly assign volunteers to carry a gun or not and see what happens. So Charles Branas's team at the University of Pennsylvania analyzed 677 shootings over two-and-a-half years to discover whether victims were carrying at the time, and compared them to other Philly residents of similar age, sex and ethnicity. The team also accounted for other potentially confounding differences, such as the socioeconomic status of their neighborhood.
Their article will appear in the American Journal of Public Health. The current version of the article can be found here, and the most recent abstract can be found here. In the abstract we read:
Objectives. We investigated the possible relationship between being shot in an assault and possession of a gun at the time.
Methods. We enrolled 677 case participants that had been shot in an assault and 684 population-based control participants within Philadelphia, PA, from 2003 to 2006. We adjusted odds ratios for confounding variables.
Results. After adjustment, individuals in possession of a gun were 4.46 (P<.05) times more likely to be shot in an assault than those not in possession. Among gun assaults where the victim had at least some chance to resist, this adjusted odds ratio increased to 5.45 (P<.05).
Conclusions. On average, guns did not protect those who possessed them from being shot in an assault. Although successful defensive gun uses occur each year, the probability of success may be low for civilian gun users in urban areas. Such users should reconsider their possession of guns or, at least, understand that regular possession necessitates careful safety countermeasures.
Discussion
Why do you think the New Scientist and others discussing this study titled their articles "Carrying a gun increases risk of getting shot and killed" rather than using the journal article's own title, "Investigating the Link Between Gun Possession and Gun Assault"?
Of course this is the kind of article that lends itself to interesting comments. For example:
I am definitely going to have to find the complete article. I want to see how they determined which victims of being shot were included in the study and how they determined which civilians would be included in the study. With out that information, this study doesn't really mean anything.
Follow this advice and see if you think the study really means anything.
Sounds to me like a completely ignorant study and weighted to get the result they want. If you check a place like Philidelphia, of course this is the result you would get, because the people carrying guns are more likely to be involved in crimes or living in crime ridden areas. Check Dallas, or Oklahoma City. You wouldn't get that result at all. And that's because dang near everybody has guns, and we have far fewer shootings.
Does this suggest that the study is completely ignorant?
This article was suggested by Gordon Fox
Identifying financial market cycles - or not
“The Secret Cycle”, by Nick Paumgarten, The New Yorker, October 12, 2009
This article focuses on the work of Martin Armstrong, a technical financial analyst, who found that, "on average, there had been a panic every 8.6 years" over the period 1683-1907:
He discerned a recurrence of major turning points in the economy and in world affairs that followed a distinct and unwavering 8.6-year rhythm.
Then he found that the October 1987 crash “took place on the minor halfway point up the first leg of the 8.6-year cycle, at 2.15 years,” noting that "8.6 years was exactly … 3,141 [days], the number pi times a thousand.”
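A quick check of the arithmetic (not in the article): <math>8.6 \times 365.25 \approx 3141.2</math> days, while <math>1000\pi \approx 3141.6</math>.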
Eventually:
The model … failed, among other things, to foresee its developer’s demise. In September, 1999, Armstrong was charged with defrauding Japanese investors of nearly a billion dollars. …. The upshot, though, is that he has now spent more than nine years in jail – a pi cycle and then some.
The article includes discussions of Fibonacci-based market behavior models and the "reasoning" behind them.
Submitted by Margaret Cibes
Learning by the petabyte
Training to Climb an Everest of Digital Data. Ashlee Vance, The New York Times, October 11, 2009.
Some statistics textbooks have been criticized for having small "toy" problems that do not reflect the complexity of data analysis out in the real world. What sort of data sets are out in the real world?
Facebook, for example, uses more than 1 petabyte of storage space to manage its users’ 40 billion photos. It was not long ago that the notion of one company having anything close to 40 billion photos would have seemed tough to fathom. Google, meanwhile, churns through 20 times that amount of information every single day just running data analysis jobs. In short order, DNA sequencing systems too will generate many petabytes of information a year.
Even at the best universities, students are not asked to handle data sets this large. And this is a problem.
For the most part, university students have used rather modest computing systems to support their studies. They are learning to collect and manipulate information on personal computers or what are known as clusters, where computer servers are cabled together to form a larger computer. But even these machines fail to churn through enough data to really challenge and train a young mind meant to ponder the mega-scale problems of tomorrow. "If they imprint on these small systems, that becomes their frame of reference and what they’re always thinking about," said Jim Spohrer, a director at I.B.M.'s Almaden Research Center.
Two companies with lots of experience tackling petabyte-sized data sets want to change this.
Two years ago, I.B.M. and Google set out to change the mindset at universities by giving students broad access to some of the largest computers on the planet. The companies then outfitted the computers with software that Internet companies use to tackle their toughest data analysis jobs. And, rather than building a big computer at each university, the companies created a system that let students and researchers tap into giant computers over the Internet. This year, the National Science Foundation, a federal government agency, issued a vote of confidence for the project by splitting $5 million among 14 universities that want to teach their students how to grapple with big data questions.
Submitted by Steve Simon
Questions
1. What is the size of the largest data set that you have ever analyzed? Did the size of the data set force you to use a different computing system, different software, or a different statistical method?
2. Could a random sample of a few megabytes from a petabyte of data be sufficiently useful to learn on? Note that a megabyte is nine orders of magnitude smaller than a petabyte. Is it possible to have a representative sample with a data set sampled this sparsely?
3. Moore's Law says (more or less) that computing capacity doubles every two years (some sources say 18 months). If Moore's Law applies, calculate how long it will take before we see petabyte-sized hard drives on laptop computers.
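For question 3, a sketch of the arithmetic (the roughly one-terabyte laptop drive of 2009 is an assumed baseline):

 # Years until petabyte laptop drives, assuming capacity doubles
 # every two years starting from a ~1 TB drive.
 from math import log2
 
 current_tb = 1                       # assumed 2009 laptop drive, in TB
 target_tb = 1024                     # 1 petabyte = 1024 TB
 doublings = log2(target_tb / current_tb)     # = 10
 print(doublings * 2, "years")        # about 20 years at 2 years/doubling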
The unluckiest fan
Nats follower may be unluckiest fan
All Things Considered, NPR, 16 October 2009.
The Washington Nationals baseball team posted a dismal won-lost record of 59-103 for the 2009 season. From the link above, you can listen to an interview with season-ticket holder Stephen Krupin, who watched the team lose all 19 games he attended this year. The host speculates that this must be a record for bad luck. In fact, Mr. Krupin reports that his cousin, a PhD economist, calculated the chance that this would happen as 1 in 131,204.
In comments posted on the NPR site, listeners attempt to puzzle out this calculation, but conclude that the event is more likely than reported. It turns out, though, that their analyses are based on the full season record, whereas it comes out in the interview that Mr. Krupin attended only home games. From the Major League Baseball standings we see that the Nationals were 33–48 at home and 26–55 on the road.
The chance that 19 randomly selected home games are all losses is <math>{48 \choose 19}/ {81 \choose 19} </math>, which equals 1 in 131203.8, in agreement with Mr. Krupin's report.
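This figure is easy to verify in Python (a quick check, not part of the NPR piece):

 # Probability that all 19 attended home games fall among the 48 losses.
 from math import comb
 p = comb(48, 19) / comb(81, 19)
 print(1 / p)                         # 131203.8..., i.e., 1 in 131,204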
Submitted by Bill Peterson
Happiness
“The Happiness Gap is back is back is back is back”
by Mark Liberman, Language Log (online blog), September 20, 2009
Liberman updates his 2007 Language Log blog “The ‘Happiness Gap’ and the Rhetoric of Statistics”, which was posted in reaction to David Leonhardt’s 2007 New York Times article “He’s Happier, She’s Less So”, about a study of self-reported happiness by two Penn researchers.
He writes now in reaction to 2008 updated data from the two Penn researchers who started the study and to the spate of 2009 articles on this topic:
(a) NYT’s Ross Douthat in “Liberated and Unhappy”
(b) Huffington Post’s “The Sad, Shocking Truth About How Women Are Feeling”, “What’s Happening To Women’s Happiness?”, etc.
(c) NYT’s Maureen Dowd in “Blue Is the New Black”.
Here are the updated percents:
          Very happy   Pretty happy   Not too happy
1972-74
 Men          31.9          53.0           15.1
 Women        37.0          49.4           13.6
2004-08
 Men          29.8          56.1           14.0
 Women        31.2          54.9           13.9
Liberman provided a more detailed discussion of the statistical issues – sample size, self-reporting, statistical vs. practical significance – in “The ‘Gender Happiness Gap’: Statistical, Practical and Rhetorical Significance”, as well as a long list of references.
Submitted by Margaret Cibes