Chance News 57
An undefined problem has an infinite number of solutions.
The following quotation can be found in an article by Gelman and Weakliem entitled, "Of beauty, sex and power: Statistical challenges in estimating small effects":
This ability of the theory to explain findings in any direction is also pointed out by Freese (2007), who describes this sort of argument as "more 'vampirical' than 'empirical'--unable to be killed by mere evidence."
Gelman and Weakliem are criticizing research which putatively detects an effect merely because statistical significance is obtained on either side of zero or, in the case of the ratio of females to males, on either side of 50%. In particular, they contest the results of studies which claim that “beautiful parents have more daughters, violent men have more sons and other sex-related patterns.” They also analyze so-called Type M (magnitude) errors and Type S (sign) errors.
This is a Type M (magnitude) error: the study is constructed in such a way that any statistically-significant finding will almost certainly be a huge overestimate of the true effect. In addition there will be Type S (sign) errors, in which the estimate will be in the opposite direction as the true effect.
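The logic of Type M and Type S errors can be illustrated with a small simulation. The numbers below are hypothetical, chosen only to show the mechanism: when the true effect is tiny relative to the standard error, the estimates that happen to reach statistical significance are necessarily gross exaggerations, and a substantial fraction point the wrong way.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 1.0   # hypothetical true effect (say, 1 percentage point)
SE = 10.0           # hypothetical standard error, much larger than the effect
N_SIMS = 100_000

# Collect the estimates that would be declared "statistically significant"
significant = []
for _ in range(N_SIMS):
    est = random.gauss(TRUE_EFFECT, SE)
    if abs(est / SE) > 1.96:          # two-sided test at the 5% level
        significant.append(est)

# Type M: among significant results, how exaggerated is the magnitude?
exaggeration = statistics.mean(abs(e) for e in significant) / TRUE_EFFECT

# Type S: among significant results, how often is the sign wrong?
wrong_sign = sum(e < 0 for e in significant) / len(significant)
```

With these assumed values, a significant estimate must exceed 19.6 in absolute value, so it overstates the true effect of 1.0 roughly twenty-fold, and nearly two-fifths of significant results have the wrong sign entirely.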
1. As a long-term research project, determine via literature and art how the notion of “beautiful” has changed through the ages and across cultures.
2. The reported imbalance between baby daughters and baby sons produced by beautiful people somehow grew from the original article’s (not statistically significant) 4.7%, to 8% for the largest comparison (the most beautiful parents on a scale of 1 to 5), to 26%, and finally to 36% via a typo in the New York Times.
3. The authors, based on their analysis, say “There is no compelling evidence that ‘beautiful parents produce more daughters.’” Nevertheless, why did the original paper have so much appeal?
4. As a check, the authors used People magazine’s “list of the fifty most beautiful people” from 1995 to 2000 and tallied their offspring. There were “157 girls out of 329 children, or 47.7% girls (with a standard error 2.8%).” Instead of more females, fewer were produced.
5. The authors note “the structure of scientific publication and media attention seem to have a biasing effect on social science research.” Explain what they mean by a “biasing effect.”
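The figures quoted in question 4 can be reproduced with the usual binomial standard-error formula, SE = √(p(1−p)/n):

```python
import math

girls, children = 157, 329
p = girls / children                      # observed proportion of girls
se = math.sqrt(p * (1 - p) / children)    # binomial standard error

print(f"{p:.1%} girls (standard error {se:.1%})")  # 47.7% girls (standard error 2.8%)
```

Note that 47.7% is less than one standard error below 50%, so the People-magazine sample gives no evidence of a sex imbalance in either direction.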
Submitted by Paul Alper for Halloween.
How anyone can detect election fraud
Why Russians Ignore Ballot Fraud Clifford J. Levy, The New York Times, October 24, 2009.
Russian Election Fraud? Steven D. Levitt, Freakonomics Blog, The New York Times, April 16, 2008.
All it takes is a bit of common sense and a careful review of the data to expose election fraud, at least in Russia.
Soon after polls closed in regional elections this month, a blogger who refers to himself as Uborshizzza huddled away in his Moscow apartment and began dicing up the results on his computer. It took him only a few hours to detect what he saw as a pattern of unabashed ballot-stuffing: how else was it possible that in districts with suspiciously high turnouts in this city, Vladimir V. Putin’s party received heaps of votes?
Here's a specific example.
Overall turnout was 18 percent in one Moscow district, and United Russia garnered 33 percent. In an adjacent district, turnout was 94 percent, and the party got 78 percent.
This was done by a statistician in his spare time, with access only to publicly available records.
Uborshizzza, who by day is a 50-year-old medical statistician named Andrei N. Gerasimov, sketched charts to accompany his conclusions and posted a report on his blog. It spread on the Russian Internet, along with similar findings by a small band of amateur sleuths, numbers junkies and assorted other muckrakers.
A similar study of open election records in 2008 also yielded obvious evidence of fraud.
Analyzing official returns on the Central Elections Committee Web site, blogger Sergei Shpilkin has concluded that a disproportionate number of polling stations nationwide reported round numbers — that is, numbers ending in zero and five — both for voter turnout and for Medvedev’s percentage of the vote.
These weren't just any round numbers, though, but numbers at the high end of the distribution.
In most elections, one would expect turnout and returns to follow a normal, or Gaussian, distribution — meaning that a chart of the number of polling stations reporting a certain turnout or percentage of votes for a candidate would be shaped like a bell curve, with the top of the bell representing the average, median, and most popular value. But according to Shpilkin’s analysis, which he published on his LiveJournal blog, podmoskovnik.livejournal.com, the distribution both for turnout and Medvedev’s percentage looks normal only until it hits 60 percent. After that, it looks like sharks’ teeth. The spikes on multiples of five indicate a much greater number of polling stations reporting a specific turnout than a normal distribution would predict.
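Shpilkin's round-number check can be sketched with a toy simulation. This is not his actual method, and the data below are entirely made up: a synthetic set of honest turnout percentages, plus a "stuffed" version in which some stations report targets that are multiples of five. The statistic is simply the share of stations reporting a multiple of five, which should sit near one in five if last digits carry no special meaning.

```python
import random

random.seed(1)

# Hypothetical reported turnout percentages (integers 0-100). In a clean
# election no last digit is privileged; stuffing ballots toward a round
# target leaves spikes at multiples of 5, as in Shpilkin's charts.
honest = [min(100, max(0, round(random.gauss(60, 12)))) for _ in range(5000)]
stuffed = honest[:4000] + [random.choice([70, 75, 80, 85, 90, 95, 100])
                           for _ in range(1000)]

def round_number_share(turnouts):
    """Fraction of stations reporting a turnout that is a multiple of 5."""
    return sum(t % 5 == 0 for t in turnouts) / len(turnouts)

print(f"honest:  {round_number_share(honest):.1%}")
print(f"stuffed: {round_number_share(stuffed):.1%}")
```

In the honest sample the share hovers around 20% (21 of the 101 possible integer values are multiples of five), while the stuffed sample shows a clear excess, mirroring the sharks'-teeth pattern described above.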
Sadly, though, the reaction of the Russian people has been a collective shrug.
There was none of the sort of outrage on the streets that occurred in Iran in June, when backers of the incumbent president, Mahmoud Ahmadinejad, were accused of rigging the election for him. Nor the international clamor that greeted the voting in Afghanistan, which last week was deemed so tainted that President Hamid Karzai was forced into a runoff. The apparent brazenness of the fraud and the absence of a spirited reaction says a lot about the deep apathy in Russia, where people grew disillusioned with politics under Communism and have seen little reason to alter their view.
This disillusionment is easily demonstrated in public polling.
Opinion polls ... showed that 94 percent of respondents believed that they could not influence events in Russia. According to another, 62 percent did not think that elections reflect the people’s will.
Submitted by Steve Simon
1. Compare the reaction of the Russians to these results to the reactions in the United States to the anomalously high votes for Patrick Buchanan in Palm Beach County during the 2000 election. What explains the difference?
2. What other measures of publicly available election records might be used to detect fraud?