Chance News 71
"Regression, it seems, has a particular ability to reduce otherwise emotionally healthy adults to an infantile state, blubbing hysterically and looking for someone's hand to hold. My guess is that this suits most statisticians just fine--a textbook on regression might look like a bunch of formulas to you; to statisticians like me, it's 450 pages of job security."
(Addison-Wesley, 2009), p. 78
"It is an odd feeling when you love what you do and everyone else seems to hate it. I get to peer into lists of numbers and tease out knowledge that can help people live longer, healthier lives. But if I tell friends I get a kick out of statistics, they inch away as if I have a communicable disease."
Submitted by Paul Alper
When did they start doing factoids?
12%: The percentage higher for searches of the word "guacamole" in Wisconsin than in Pennsylvania.
5%: The percentage higher for "baba ganoush" searches in Pennsylvania than in Wisconsin.
Submitted by Paul Alper
A novelist who might have stopped when she was ahead:
“Statistics aside, Lewis would go down in history as being the economist who’d conceived a mathematical formula for happiness: R/E, or, Reality divided by Expectations. There were two ways to be happy: improve your reality, or lower your expectations. Once, at a neighborhood dinner party, Lacy had asked him what happened if you had no expectations. You couldn’t divide by zero. Did that mean if you just let yourself roll with all of life’s punches, you could never be happy?”
Getting what you pay for in college
Flurry of Data as Rules Near for Commercial Colleges
by Tamar Lewin, The New York Times, February 4, 2011
It costs a lot of money to go to college. If you are able to get a better job as a result, that is money well invested. But that is not always the case, and commercial (for-profit) colleges appear to have particular problems on this score.
On Thursday, the department issued new data showing that many commercial colleges leave large numbers of their graduates unable to pay back their loans. The data — covering all institutions of higher education — found that among students whose loans came due in 2008, 25 percent of those who attended commercial colleges defaulted within three years, compared with 10.8 percent at public institutions and 7.6 percent at private nonprofit colleges and universities.
That's not a fair comparison, according to some.
"Our schools are primarily educating working adults and lower income students, which is not true of traditional higher education," said Harris Miller, president of the Association of Private Sector Colleges and Universities. "My expectation is that if you compared schools with our demographics, they would have similar rates, and I don’t understand why the Department of Education can’t break it down that way."
There will soon be penalties for colleges with poor data on loan repayment performance.
Starting next year, colleges that have default rates greater than 30 percent for three consecutive years will, as of 2014, lose their eligibility for federal student aid.
There are differing opinions on whether this is a good thing.
The commercial colleges say the rule, as proposed, would cut off education opportunities for low-income and minority students with too few educational options. But consumer advocacy groups say that it would eliminate only the programs whose students have the highest loan-default rates, and, in the process, help protect both students and taxpayers from programs that take in millions of dollars of federal aid but leave students mired in debt.
1. Should loan default rates be adjusted for the demographics of the student population?
2. What sort of data, other than loan default rates, could be collected to measure how effective colleges are?
Gladwell on college ranks
The order of things: What college rankings really tell us
by Malcolm Gladwell, New Yorker, 14 February 2011
To be continued...
Submitted by Bill Peterson, based on a suggestion from Priscilla Bremser
Bayesians and Bem's ESP paper
Bayesian statisticians have many criticisms of Bem’s paper. Perhaps the major one is Bem’s reliance on low p-values to show that ESP exists. In the Bayesian world, unlike the frequentist one, the p-value is viewed as a flawed metric for testing hypotheses. The following is a hypothetical example from Freeman:
[Table omitted: number of patients receiving treatments A and B at several increasing sample sizes, with the two-sided p-value for each comparison.]
The p-value is numerically the same regardless of sample size; in particular, the last row is evidence that treatments A and B are equivalent despite the continuing low p-value. To bring things back to Bem’s paper, think of a result for treatment A as an ESP success and a result for B as a failure.
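Freeman's point can be illustrated numerically. The sketch below uses hypothetical counts (not Freeman's actual table): at each sample size the count k is chosen to hold the z-statistic near 2.05, so the two-sided p-value stays around .04 even as the observed split drifts toward 50/50.

```python
import math

def two_sided_p(k, n):
    """Two-sided p-value for H0: theta = 0.5, via the normal approximation."""
    z = (k - n / 2) / math.sqrt(n / 4)
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical trials: k chosen so z stays near 2.05 (p stays near .04),
# yet the observed proportion k/n approaches 0.5 as n grows.
for n in [100, 10_000, 1_000_000]:
    k = round(n / 2 + 2.05 * math.sqrt(n) / 2)
    print(f"n={n:>9,}  k/n={k / n:.4f}  p={two_sided_p(k, n):.4f}")
```

With a fixed p-value, larger samples correspond to effect sizes shrinking toward zero, which is exactly why the same low p-value carries very different evidential weight at different sample sizes.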
Here is another example, known as Lindley’s paradox, which demonstrates that the p-value is a flawed metric; the underlying analysis was first published about 80 years ago. Although this example involves 98,451 births of boys and girls, it could just as well deal with ESP successes and failures.
Let's imagine a certain town where 49,581 boys and 48,870 girls have been born over a certain time period. The observed proportion (x) of male births is thus 49,581/98,451 = 0.5036. We are interested in testing whether the true proportion (θ) is 0.5. That is, our null hypothesis is H0: θ = 0.5 and the alternative is H1: θ ≠ 0.5.
Because the sample size is very large, the normal approximation to the binomial holds; the mean proportion under the null is .5 and the variance is σ² ≈ x(1−x)/n = (.5036)(.4964)/98,451.
Using the normal approximation above, the upper-tail probability gives a one-sided p-value of about .0117. By symmetry, the two-sided p-value is double that, .0234, which indicates statistical significance.
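The frequentist calculation can be checked in a few lines of Python, a sketch following the normal approximation described above:

```python
import math

boys, births = 49_581, 98_451
x = boys / births                      # observed proportion, ~0.5036
sd = math.sqrt(x * (1 - x) / births)   # normal-approximation standard error
z = (x - 0.5) / sd                     # about 2.27 standard errors above 0.5
p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))
p_two_sided = 2 * p_one_sided          # ~.0234, "statistically significant"
print(z, p_one_sided, p_two_sided)
```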
However, suppose we have no reason to believe that the proportion of male births differs from 0.5, and we assign prior probabilities P(θ = 0.5) = 0.5 and P(θ ≠ 0.5) = 0.5, with the latter spread uniformly between 0 and 1. The prior distribution is thus a mixture of a point mass at 0.5 and a uniform distribution U(0,1). This leads to a posterior probability P(θ = 0.5 | data) of approximately 0.95, strong evidence in favor of H0: θ = 0.5. Consequently, despite the low p-value, there is a high probability that the null is correct.
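The Bayesian side of the paradox can also be verified numerically. Under a uniform prior on θ, the marginal likelihood of the data integrates to 1/(n+1), so with equal prior weight on the two hypotheses the posterior probability of the null reduces to a simple ratio. A sketch, using log-gamma to keep the binomial coefficient from overflowing:

```python
import math

boys, births = 49_581, 98_451

# Log of the binomial likelihood P(k | n, theta = 0.5)
log_like_h0 = (math.lgamma(births + 1) - math.lgamma(boys + 1)
               - math.lgamma(births - boys + 1) + births * math.log(0.5))
like_h0 = math.exp(log_like_h0)

# Under H1 with theta ~ U(0,1), the marginal likelihood is
# integral of C(n,k) theta^k (1-theta)^(n-k) dtheta = 1/(n+1)
like_h1 = 1 / (births + 1)

# Equal prior mass on H0 and H1, so the priors cancel in the posterior
posterior_h0 = like_h0 / (like_h0 + like_h1)
print(posterior_h0)   # roughly 0.95
```

So the same data that a frequentist would call significant at the .0234 level leave the null hypothesis with roughly 95% posterior probability.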
If the p-value is so flawed, the natural question is: why is it so ubiquitous? One answer is that the frequentist calculation is much easier to perform; indeed, before the availability of statistics packages, students ignorant of calculus could readily use the standard normal table to carry it out. Further, the pesky (but, to Bayesians, fundamentally important) issue of prior probabilities is sidestepped entirely. For decades the Bayesian triumph has been predicted, but thus far the U.S. remains a frequentist stronghold and p-values galore are published.
But there is another issue regarding Bem’s paper which is outside of the domain of statistics. Why do so many people passionately believe in ESP even though there has never been any credible evidence for it outside of a low p-value? Perhaps the answer lies in a weird perversion of the notion of democratic opinion. If ESP exists then physical laws, the specialty of the scientifically and mathematically educated, no longer hold and everyone has an equal say. Beauty may lie in the eyes of the beholder, but it is incontestable that the speed of light is exactly 299,792,458 meters per second, the harmonic series diverges and the planet on which we reside is considerably older than a few thousand years. Such items are not up for a vote and should not be subject to the ballot box of public estimation.
Submitted by Paul Alper