Chance News 43


Quotation

Statistical and applied probabilistic knowledge is the core of knowledge; statistics is what tells you if something is true, false, or merely anecdotal; it is the "logic of science"; it is the instrument of risk-taking; it is the applied tools of epistemology; you can't be a modern intellectual and not think probabilistically—but... let's not be suckers. The problem is much more complicated than it seems to the casual, mechanistic user who picked it up in graduate school. Statistics can fool you. In fact it is fooling your government right now. It can even bankrupt the system (let's face it: use of probabilistic methods for the estimation of risks did just blow up the banking system)

Nassim Nicholas Taleb
The Fourth Quadrant: A Map of the Limits of Statistics


Events with million-to-one odds happen 295 times a day in America.
Michael Shermer
Why People Believe Weird Things

Forsooth!

The following Forsooth! is from the January 2009 RSS News.

Ageing Britain: Pensioners outnumber under-16's for the first time

Gordon Lishman, of Age Concern, pointed out that not only was the average Briton getting older, but also faced longer periods of ill health in later life.

The Guardian

22 August 2008


[W]e asked 1,300 people 45 and over what they thought about miracles, and the results were striking: fully 80 percent said they believe in them, 41 percent said they happen every day--and 37 percent said they have actually witnessed one. Intriguingly, though, the older you are, the less likely you are to believe in miracles.

Further, "of those who believe in miracles, 84 percent say they happen because of God. About three quarters further identify Jesus and the Holy Spirit as sources of miracles, while lesser numbers attribute them to angels (47 percent), saints (32 percent), deceased relatives or others who have passed on (19 percent) and other spirits (18 percent).

AARP Magazine

Page 52 of the January & February 2009 issue

Second thoughts about test of racial bias

In bias test, shades of gray. John Tierney, The New York Times, November 17, 2008.

A recent study showed racial bias in the way that doctors treat their patients. Or maybe not. At the heart of the study of racial bias is the I.A.T. (Implicit Association Test). This is a computerized test that measures how quickly you associate good words with faces of white subjects and bad words with faces of black subjects. If you do this more rapidly than when you associate good words with faces of black subjects and bad words with faces of white subjects, then you have a racial bias.

The test is widely used in research, and some critics acknowledge that it’s a useful tool for detecting unconscious attitudes and studying cognitive processes. But they say it’s misleading for I.A.T. researchers to give individuals ratings like “slight,” “moderate” or “strong” — and advice on dealing with their bias — when there isn’t even that much consistency in the same person’s scores if the test is taken again.

The researchers who have developed the I.A.T. argue that the test is very useful.

In a new meta-analysis of more than 100 studies, Dr. Greenwald, Dr. Banaji and fellow psychologists conclude that scores on the I.A.T. reliably predict people's behavior and attitudes, and that the test is a better predictor of interracial behavior than self-description.

There have been calls to try to mediate the dispute between researchers who developed the I.A.T. and those who criticize it, but these have not led to anything yet.

After all the mutual invective in the I.A.T. debate, maybe it’s unrealistic to expect the two sides to collaborate. But these social scientists are supposed to be experts in overcoming bias and promoting social harmony. If they can’t figure out how to get along with their own colleagues, how seriously should we take their advice for everyone else?

Submitted by Steve Simon

Questions

1. What is the technical term for "lack of consistency" in results if the test is taken again? Why is this a problem?

2. What is the technical term for the ability of a test to "reliably predict people's behavior and attitudes"? Why is this important?

3. Take the test yourself at implicit.harvard.edu. Do you think it accurately reflects your personal prejudices (or lack thereof)? How else might you measure your personal prejudices?

BBC 4 discusses probability

BBC4 has a series of programs described as follows: "The big ideas which form the intellectual agenda of our age are illuminated by some of the best minds. Melvyn Bragg and three guests investigate the history".

A recent program discussed probability. The guests were Marcus du Sautoy from Oxford University, Colva Roney-Dougal from St. Andrews University, and Ian Stewart from Warwick University. None of the three is a probabilist, but Ian Stewart has discussed probability in several of his very nice books and in his popular lectures.

The discussion is quite good. Of course they have to include the birthday problem and the Monty Hall problem, but towards the end they discuss more serious contributions of probability, such as Boltzmann's work in statistical physics and the role of probability in quantum theory, where of course they include Einstein's famous remark that God does not play dice. It would be nice to have had more time to spend on some of the more interesting topics. However, you, your students, and your Aunt Mary or Uncle George should enjoy this discussion.
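If you or your students want to experiment after listening, here is a minimal simulation sketch of the Monty Hall problem (in Python; it is not anything from the broadcast, just an illustration): switching doors wins about two times in three, staying about one time in three.

```python
import random

def monty_hall_trial(switch):
    """Simulate one game; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and is not the contestant's pick.
    host_opens = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != host_opens)
    return pick == car

trials = 100_000
wins_if_switch = sum(monty_hall_trial(True) for _ in range(trials))
wins_if_stay = sum(monty_hall_trial(False) for _ in range(trials))
print("switching wins:", wins_if_switch / trials)   # about 0.667
print("staying wins:  ", wins_if_stay / trials)     # about 0.333
```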

Discussion

Read the comments about the program here and see what you think of the audience reaction.

Submitted by Laurie Snell

Eternal Life

Heaven for the Godless?
New York Times, December 26, 2008
Op-Ed by Charles M. Blow

http://graphics8.nytimes.com/images/2008/12/27/opinion/27blowlarge.jpg

Gaming the Vote

William Poundstone is a wonderful writer. A previous book of his, Fortune's Formula, was reviewed admiringly twice in Chance News: here and here. His latest book, Gaming the Vote: Why Elections Aren't Fair (And What We Can Do About It) [Hill and Wang, 2008], is almost as good, and that is high praise. The book is that rare beast: amusing, informative, historical and technical. Many statistics textbooks treat the subject of data manipulation, but this one provides an entirely new slant on the exploitation of voting systems and how the procedures have been employed in the United States and elsewhere on the globe.

Kenneth Arrow's Impossibility Theorem here dryly "says that if the decision-making body has at least two members and at least three options to decide among, then it is impossible to design a social welfare function that satisfies all these conditions [Arrow's axioms] at once." In loose, lay terms: "No voting method is fair", "Every ranked voting method is flawed", or "The only voting method that isn't flawed is a dictatorship." Poundstone brings the theorem to life with examples using Condorcet's method and Borda's method, while giving the reader the background of the disdain between the two 18th-century French aristocrats. "Instant runoff" and "approval voting" are also discussed at length and found wanting, but for different reasons. Poundstone devotes later pages to range voting as a democratic, practical way of avoiding Arrow; in fact, range voting is already used when it comes to rating restaurants, athletes, YouTube videos, and a host of other activities, but never thus far in political elections.

Go here to see short videos of him discussing the general ideas of his book.

Discussion

1. Statisticians are quick to despair over the lack of statistical literacy on the part of the lay public but rarely concern themselves with their own lack of historical literacy. On page 65 can be found this surprising table regarding the presidential election of 1860:

Candidate        Popular Vote (%)   Electoral Votes
Lincoln                39.8               180
Breckinridge           18.1                72
Bell                   12.6                39
Douglas                29.5                12

What percentage of Chance News readers and contributors knew that Breckinridge and Bell each received more electoral votes than Douglas? What percentage of Chance News readers and contributors knew that Breckinridge and Bell together received a larger popular vote than Douglas? Does anyone know the first names of Breckinridge and Bell? We read here that "At least five U.S. presidential elections have been won by the second most popular candidate. The reason was a 'spoiler'—a minor candidate who takes enough votes away from the most popular candidate to tip the election to someone else. The spoiler effect is more than a glitch." What were those five elections?

2. Facing off two at a time is known as Condorcet voting because of its 18th-century advocate, Marie Jean Antoine Nicolas de Caritat, Marquis de Condorcet, and it makes perfect sense, as in: if A is taller than B and B is taller than C, then A is taller than C. Unfortunately, while the concept of "taller than" always has this property of transitivity, "preferable to" often lacks transitivity and can thus lead to the so-called Condorcet cycle. Find some common examples in life where transitivity is violated.
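To make the Condorcet cycle concrete, here is a small Python sketch with three hypothetical ballots (an illustrative example, not one from the book): every pairwise contest has a clear majority winner, yet the majority preference is not transitive.

```python
from itertools import combinations

# Each ballot ranks the candidates from most to least preferred.
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def pairwise_winner(x, y):
    """Return the candidate a majority of ballots prefer in the x-vs-y contest."""
    x_votes = sum(b.index(x) < b.index(y) for b in ballots)
    return x if x_votes > len(ballots) - x_votes else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y)}")
# A beats B, B beats C, yet C beats A -- a Condorcet cycle, so "preferable to"
# is not transitive for this electorate.
```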

3. There is also the problem of the irrelevant alternative, an excellent example of which is found on page 50. When faced with the choice of apple pie vs. blueberry pie only, the customer chooses apple. However, moments later the waiter comes back and says cherry pie is also available and the customer switches to blueberry. Why does this sound totally irrational? Poundstone also has a real example from an ice skating competition where the gold and silver medal winners were forced to exchange places due to a subsequent skater finishing sixth. Conjure up a defense of these seemingly unreasonable switches.

4. Borda count is another way of voting. “The voter ranks all the candidates, from most to least preferred. This can be done by putting numbers next to the names on the ballot.” You merely “add up the numerical rankings given each candidate on all the ballots.” It turns out that the Borda count “may be better known to sports fans than to voters. The ’voters’ are sportswriters and the ‘candidates’ are players” and this method is used to determine trophies and standings in university as well as professional sports. Borda counts are, unfortunately, subject to “burying.” His example of burying is the Kennedy/Nixon race if there were a Nazi candidate as well. Democrats would have the following ranking in order to bury Nixon:

1. Kennedy
2. Schicklgruber
3. Nixon

Republicans would have the following ranking in order to bury Kennedy:

1. Nixon
2. Schicklgruber
3. Kennedy

If all the Republicans and Democrats engaged in burying, there is a good chance that Schicklgruber could win. Naturally, not all Democrats and Republicans will vote dishonestly, that is, "bury" the closest opposition, but Poundstone points out that the winner is "likely to be the major candidate whose supporters are less honest." Justify the disheartening conclusion of that last sentence.
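A minimal Borda-count sketch bearing on that last sentence (the 100-voter split and the numbers of burying versus honest ballots are hypothetical, not Poundstone's): when every Republican buries Kennedy but a dozen Democrats rank honestly, Nixon overtakes Kennedy.

```python
from collections import Counter

def borda(ballot_groups, candidates):
    """Sum Borda points: top rank earns len(candidates)-1 points, bottom earns 0."""
    scores = Counter()
    for count, ranking in ballot_groups:
        for position, candidate in enumerate(ranking):
            scores[candidate] += count * (len(candidates) - 1 - position)
    return scores

candidates = ["Kennedy", "Nixon", "Schicklgruber"]

# Hypothetical 100-voter electorate: 52 Democrats, 48 Republicans.
# 40 Democrats bury Nixon, 12 vote honestly; all 48 Republicans bury Kennedy.
ballot_groups = [
    (40, ["Kennedy", "Schicklgruber", "Nixon"]),   # burying Democrats
    (12, ["Kennedy", "Nixon", "Schicklgruber"]),   # honest Democrats
    (48, ["Nixon", "Schicklgruber", "Kennedy"]),   # burying Republicans
]

print(borda(ballot_groups, candidates))
# Counter({'Nixon': 108, 'Kennedy': 104, 'Schicklgruber': 88})
# The major candidate whose supporters buried more consistently (Nixon) wins.
```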

5. If you find these results counterintuitive, indeed weird, note that a century after Condorcet and Borda, Charles Dodgson on his own reinvented each of the procedures because of his intense dislike of Henry George Liddell—the father of the eponymous Alice. To make matters even more bizarre, most of the animosity was due to the choice of the architectural design of a new belfry at Oxford's Christ Church.

6. "Range voting" is not covered by Arrow's theorem because it doesn't deal with ranks and hence, at least in Poundstone's evaluation, is a voting scheme which "captures voter sentiments admirably when everyone is completely honest. The surprising thing is that it also works well when people 'cheat'." Each candidate is given a score, rather as an American instructor grades rather than ranks his students, so that conceivably all can get top marks or all can get failing grades (highly unlikely should the instructor want to keep his job) or anything in between. Another supposed advantage of range voting is that it "can be run on every voting machine presently in use in America." He adds, "Perhaps the most surprising thing about range voting is that no one seems (yet) to have found anything dreadfully wrong with it. The main concern you hear is that it is difficult." Go here for the latest regarding the ongoing, interminable U.S. Senate election recount in Minnesota and the unfair comparison to Florida's 2000 presidential vote. Now, what are your views on the difficulty, for the voter and for the machine, of range voting?
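For concreteness, here is a minimal range-voting tally (the candidates, ballots and 0-10 scale are purely illustrative): each voter scores every candidate, and the candidate with the highest total wins.

```python
# Each ballot assigns every candidate a score, here on a 0-10 scale.
ballots = [
    {"A": 10, "B": 6, "C": 0},
    {"A": 3,  "B": 9, "C": 7},
    {"A": 8,  "B": 8, "C": 2},
]

totals = {name: sum(b[name] for b in ballots) for name in ballots[0]}
winner = max(totals, key=totals.get)
print(totals, "winner:", winner)   # {'A': 21, 'B': 23, 'C': 9} winner: B
```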

7. On page 265, Poundstone quotes someone named Ivan Stang who wrote, "A heretic is someone who shares ALMOST all your beliefs. Kill him." The notion is that the nearer a person's views are to yours, the more you dislike them for leading others astray: for example, socialists/communists, Harvard/Yale, Protestants/Catholics. Range voting advocates and instant runoff supporters are thoroughly at odds with each other. The technical issue is nonmonotonicity, whereby instant runoff can turn a winner into a loser. From here comes the following example:

Suppose a president were being elected by instant runoff. Also suppose there are 3 candidates, and 100 votes cast. The number of votes required to win is therefore 51. Suppose the votes are cast as follows:

Number of votes   1st Preference   2nd Preference
       39             Andrea           Belinda
       35             Belinda          Cynthia
       26             Cynthia          Andrea

Cynthia is eliminated, thus transferring votes to Andrea, who is elected with a majority. She then serves a full term, and does such a good job that she persuades ten of Belinda's supporters to change their votes to her at the next election. This election looks thus:

Number of votes   1st Preference   2nd Preference
       49             Andrea           Belinda
       25             Belinda          Cynthia
       26             Cynthia          Andrea

Because of the votes Belinda loses, she is eliminated first this time, and her second preferences are transferred to Cynthia, who now wins 51 to 49. In this case Andrea's preferential ranking increased between elections (more electors put her first), but this increase in support appears to have caused her to lose. In fact, of course, it was not the increase in support for Andrea that hurt her. Non-monotonic scenarios for IRV are frequently misrepresented along the lines of: "Having more voters support candidate A can cause A to switch from being a winner to being a loser." Note that it is not the fact that A gets more votes that causes A to lose. In fact that, by itself, can never cause a candidate to lose with IRV. The actual cause is the shift of support among other candidates (in the example above, the decline in support for Belinda), which changes which candidate A faces in the final match-up.
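For readers who want to verify the arithmetic, here is a minimal instant-runoff sketch in Python using the ballot profiles from the two tables above; it elects Andrea in the first election and Cynthia in the second.

```python
from collections import Counter

def instant_runoff(ballot_groups):
    """ballot_groups: list of (count, ranked preference list) pairs; return the IRV winner."""
    remaining = {c for _, prefs in ballot_groups for c in prefs}
    while True:
        tally = Counter()
        for count, prefs in ballot_groups:
            top = next((c for c in prefs if c in remaining), None)
            if top is not None:
                tally[top] += count
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        # No majority yet: eliminate the candidate with the fewest first preferences.
        remaining.discard(min(tally, key=tally.get))

first_election = [(39, ["Andrea", "Belinda"]),
                  (35, ["Belinda", "Cynthia"]),
                  (26, ["Cynthia", "Andrea"])]
second_election = [(49, ["Andrea", "Belinda"]),
                   (25, ["Belinda", "Cynthia"]),
                   (26, ["Cynthia", "Andrea"])]

print(instant_runoff(first_election))   # Andrea
print(instant_runoff(second_election))  # Cynthia
```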

Submitted by Paul Alper

Financial Meltdown?

Risk Mismanagement
New York Times Magazine
January 4, 2009
Joe Nocera

This is a great article that anyone will enjoy reading, and it would also be a fine article to discuss in a probability or statistics course. It discusses a method for estimating the risk of investments, called "Value at Risk" (VaR), and asks the question: did it lead to the current financial meltdown? While you should certainly read this article, it would help to do some other reading along with it. For example, the article does not give a very complete explanation of how VaR works. For this we recommend one of the many articles on the Web that discuss VaR; for example, it would help to read parts 1 and 2 of "Introduction to Value at Risk", found here. You will find a description of the three different ways VaR can be calculated, with examples of each. From this we read:

Value at risk is a special type of downside risk measure. Rather than produce a single statistic or express absolute certainty, it makes a probabilistic estimate. With a given confidence level (usually 95% or 99%), it asks, "What is our maximum expected loss over a specified time period?" There are three methods by which VaR can be calculated: the historical simulation, the variance-covariance method and the Monte Carlo simulation.

The variance-covariance method is easiest because you need to estimate only two factors: average return and standard deviation. However, it assumes returns are well-behaved according to the symmetrical normal curve and that historical patterns will repeat into the future. The historical simulation improves on the accuracy of the VaR calculation, but requires more computational data; it also assumes that "past is prologue". The Monte Carlo simulation is complex, but has the advantage of allowing users to tailor ideas about future patterns that depart from historical patterns.
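As a rough illustration (not a computation from the article), here is a Python sketch of the first two methods on a made-up series of daily returns; the returns, the portfolio value and the 95% confidence level are all assumptions chosen for the example.

```python
import statistics

# Hypothetical daily returns of a portfolio (as fractions of its value).
returns = [0.012, -0.008, 0.004, -0.021, 0.009, -0.003, 0.015, -0.017,
           0.006, -0.001, 0.011, -0.026, 0.002, 0.007, -0.005, 0.013]
portfolio_value = 1_000_000
confidence = 0.95

# 1. Variance-covariance method: assume normal returns; only mean and sd are needed.
mu = statistics.mean(returns)
sigma = statistics.stdev(returns)
z = 1.645                       # one-sided 95% normal quantile
var_parametric = portfolio_value * (z * sigma - mu)

# 2. Historical simulation: read the loss off the 5th percentile of past returns.
sorted_returns = sorted(returns)
cutoff = sorted_returns[int((1 - confidence) * len(returns))]
var_historical = portfolio_value * -cutoff

# (A Monte Carlo VaR would instead draw many simulated future returns from a chosen model.)
print(f"one-day 95% VaR (variance-covariance): ${var_parametric:,.0f}")
print(f"one-day 95% VaR (historical):          ${var_historical:,.0f}")
```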

The Times article comments:

Given the calamity that has since occurred, there has been a great deal of talk, even in quant circles (those who use mathematics for financial analysis), that this widespread institutional reliance on VaR was a terrible mistake. At the very least, the risks that VaR measured did not include the biggest risk of all: the possibility of a financial meltdown. “Risk modeling didn’t help as much as it should have,” says Aaron Brown, a former risk manager at Morgan Stanley who now works at AQR, a big quant-oriented hedge fund.

The Times article discusses the criticism of VaR by Nassim Nicholas Taleb, author of two recent best-selling books, "Fooled by Randomness: The Hidden Role of Chance in the Markets and Life" and "The Black Swan: The Impact of the Highly Improbable." Nocera writes:

Taleb says that Wall Street risk models, no matter how mathematically sophisticated, are bogus; indeed, he is the leader of the camp that believes that risk models have done far more harm than good. And the essential reason for this is that the greatest risks are never the ones you can see and measure, but the ones you can’t see and therefore can never measure. The ones that seem so far outside the boundary of normal probability that you can’t imagine they could happen in your lifetime — even though, of course, they do happen, more often than you care to realize. Devastating hurricanes happen. Earthquakes happen. And once in a great while, huge financial catastrophes happen. Catastrophes that risk models somehow always manage to miss.

You can read more about Taleb's opinion about the use of statistics in finance here.

The Times article provides a field day for the blogs. For example, the January 4, 2009 entry of the "naked capitalism" blog has the headline "Woefully Misleading Piece on Value at Risk in New York Times". It starts by remarking, "VaR assumes that the asset prices follow a normal distribution but it is well known that financial assets do not exhibit normal distributions. And NO WHERE, not once, does the article mention this fundamental important fact".

Some of the contributors agree with this, while others defend the Times article; Taleb's criticisms are also debated. By the time the discussion is over, we have a 59-page document!

Discussion:

(1) What is your opinion of the Times article?

(2) Do you think the current Financial Meltdown could be attributed to the use of VaR?

Additional reading:

The Fourth Quadrant: A Map of the Limits of Statistics, by Nassim Nicholas Taleb.

Value at Risk, from Wikipedia

“Perfect Storms” – Beautiful & True Lies In Risk Management

Value at Risk: The New Benchmark for Managing Financial Risk, by Philippe Jorion.
This is available as an eBook.

Submitted by Laurie Snell

Nature vs Nurture and Sexuality

For statisticians, Nature vs. Nurture is the gift that keeps on giving. Back in the 19th century it was craniometry. The 20th century focused on intelligence. Now that society is more liberated and less prudish, the spotlight has moved from criminality and ethnicity to sexual preference. There is also an interesting switch in the politics of nature vs. nurture. In the past, those who supported the status quo regarding intelligence were conservatives who believed that success in life reflected what nature intended; social programs were of no value. Now, the side in the debate claiming that an individual's genetic makeup (nature) determines whether or not that individual is homosexual is the progressives; the conservatives are on the other side (nurture), alleging that an individual's homosexuality is due to the undue influence of Hollywood, television, liberalism and, of course, Hillary Clinton.

From the article here we find

http://nymag.com/news/features/gaydar070625_1_560.jpg

EXAMPLE A: Hair Whorl (Men) Gay men are more likely than straight men to have a counterclockwise whorl.

This article also contains

http://nymag.com/news/features/gaydar070625_2_560.jpg


EXAMPLE B: Thumbprint Density (Male) Gay men and straight women have an increased density of fingerprint ridges on the thumb and pinkie of the left hand.

as well as

http://nymag.com/news/features/gaydar070625_3_560.jpg

EXAMPLE C: Digit Proportions (Female)

The index fingers of most straight men are shorter than their ring fingers, and for most women they are the same length or longer. Gay men and lesbians tend to have reversed ratios.

and,

http://nymag.com/news/features/gaydar070625_4_560.jpg

EXAMPLE D: Hand Dexterity (Male) Gay men and lesbians have a 50 percent greater chance of being left-handed or ambidextrous than their straight counterparts.

Discussion

1. According to this article, Professor Richard Lippa, a psychologist from California State University at Fullerton, attended “the Long Beach Pride Festival” and gathered “survey data from more than 50 short-haired men and photographed their pates (women were excluded because their hairstyles, even at the pride festival, were too long for simple determination; crewcuts are the ideal Rorschach, he explains). About 23 percent had counterclockwise hair whorls. In the general population, that figure is 8 percent.” See if your favorite librarian can find anything in the literature that indicates that a counterclockwise whorl is found in 8 percent of men.

2. Assume that 23 percent of all homosexual males have a counterclockwise whorl and 8 percent of the male population have a counterclockwise whorl. Using Bayes' theorem, if you meet a male with a counterclockwise whorl, what is the probability that he is a homosexual? Assume that homosexual males comprise about 3 percent of the population.
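For readers who want to check their answer, here is a minimal sketch of the Bayes computation under the assumptions stated in the question (23 percent, 8 percent and a 3 percent prevalence):

```python
# Assumed figures from the question.
p_gay = 0.03              # P(gay)
p_whorl_given_gay = 0.23  # P(counterclockwise whorl | gay)
p_whorl = 0.08            # P(counterclockwise whorl) in the general male population

# Bayes' theorem: P(gay | whorl) = P(whorl | gay) * P(gay) / P(whorl)
p_gay_given_whorl = p_whorl_given_gay * p_gay / p_whorl
print(f"P(gay | counterclockwise whorl) = {p_gay_given_whorl:.3f}")   # about 0.086
# Far below 23 percent: the low prevalence dominates the calculation.
```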

3. Left-handedness, unlike hair whorls, is a fascinating subject in itself, and a Google search will turn up many intriguing concepts. Roughly speaking, somewhere around 10 percent of the population is left-handed. The article indicates that homosexuals "have a 50 percent greater chance of being left-handed" than their straight counterparts. Using Bayes' theorem, if you meet a left-handed male, what is the probability that he is a homosexual?
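A companion sketch for this question, again under stated assumptions (about 10 percent of straight men left-handed, a rate 1.5 times higher among gay men, and the same 3 percent prevalence):

```python
# Assumptions: about 10% of straight men are left-handed, gay men are
# 1.5 times as likely to be left-handed, and 3% of men are gay.
p_gay = 0.03
p_left_given_straight = 0.10
p_left_given_gay = 1.5 * p_left_given_straight

# Total probability of meeting a left-handed man, then Bayes' theorem.
p_left = p_gay * p_left_given_gay + (1 - p_gay) * p_left_given_straight
p_gay_given_left = p_gay * p_left_given_gay / p_left
print(f"P(gay | left-handed) = {p_gay_given_left:.3f}")   # about 0.044
```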

4. John T. Manning's book, Digit Ratio: A Pointer to Fertility, Behavior, and Health [Rutgers University Press, 2002] contains over 170 pages devoted to the ratio of the length of the index finger to the length of the ring finger [2D:4D ratio]. He and others believe that the 2D:4D ratio is able to explain such disparate entities as sex and population difference, assertiveness, status, aggression, attractiveness, the wearing of rings, reproductive success, hand preference, verbal fluency, autism, depression, birth weight, breast cancer, sex dependent diseases, mate choice, sporting ability, running speed, spatial perception homosexuality and more. From Professor S.M. Breedlove of Michigan State University: “Animal models have indicated that androgenic steroids acting before birth might influence the sexual orientation of adult humans. Here we examine the androgen-sensitive pattern of finger lengths, and find evidence that homosexual women are exposed to more prenatal androgen than heterosexual women are; also, men with more than one older brother, who are more likely than first-born males to be homosexual in adulthood, are exposed to more prenatal androgen than eldest sons. Prenatal androgens may therefore influence adult human sexual orientation in both sexes, and a mother's body appears to 'remember' previously carried sons, altering the fetal development of subsequent sons and increasing the likelihood of homosexuality in adulthood.” The following data were obtained at gay pride celebrations in the San Francisco Bay Area here and here.


http://www.dartmouth.edu/~chance/forwiki/figure1.gif

Figure 1 Finger-length patterns vary with gender, sexual orientation and birth order.


http://www.dartmouth.edu/~chance/forwiki/figure2.gif


Fig 2. Finger length ratios in self-identified femme and butch lesbians. Means and standard errors of the means are depicted. A smaller 2D:4D is thought to reflect greater exposure to androgen during the perinatal period. Because the sex difference in 2D:4D is greater on the right hand than on the left (see text), the right hand may provide a more sensitive measure of early androgen than does the left.

Data acquisition can be costly and time consuming. The trick is to find data which is inexpensive to obtain and appealing to the general public. Discuss how Lippa, Manning and Breedlove have found an area guaranteed to be successful.


5. In the works of Manning and Breedlove a small p-value is used to claim that a result is striking. Given the large sample sizes, why is a small p-value not necessarily impressive? Given the multiple comparisons, why is a small p-value not necessarily impressive?
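To see the point concretely, here is a small simulation sketch (all numbers are hypothetical, not taken from Manning or Breedlove): with very large samples even a trivial difference in mean 2D:4D ratio yields a tiny p-value, and running many comparisons on pure noise will occasionally produce a "significant" result.

```python
import random
from math import sqrt, erfc

def two_sample_p(sample1, sample2):
    """Approximate two-sided p-value for a difference in means (normal z-test)."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    z = (m1 - m2) / sqrt(v1 / n1 + v2 / n2)
    return erfc(abs(z) / sqrt(2))          # two-sided normal tail probability

random.seed(1)

# Huge samples, negligible true difference (0.961 vs 0.960): p is essentially zero
# even though the effect is scientifically trivial.
group_a = [random.gauss(0.960, 0.03) for _ in range(100_000)]
group_b = [random.gauss(0.961, 0.03) for _ in range(100_000)]
print("huge n, tiny effect, p =", two_sample_p(group_a, group_b))

# Twenty comparisons with NO true difference: expect roughly one "significant" hit.
hits = 0
for _ in range(20):
    x = [random.gauss(0.0, 1.0) for _ in range(100)]
    y = [random.gauss(0.0, 1.0) for _ in range(100)]
    hits += two_sample_p(x, y) < 0.05
print("false positives out of 20 null comparisons:", hits)
```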

6. Why are social conservatives unhappy with the implication that homosexuality is innately determined?

Submitted by Paul Alper