Chance News 29
Contents
Quotations
"There are few things that are so unpardonably neglected in our country as poker. The upper class knows very little about it. Now and then you find ambassadors who have sort of a general knowledge of the game, but the ignorance of the people is fearful. Why, I have known clergymen, good men, kindhearted, liberal, sincere, and all that, who did not know the meaning of a 'flush.' It is enough to make one ashamed of the species."
Forsooth
The following Forsooths are from the September 2007 issue of RSS News.
Heart disease claimed the lives of one in five men and about one in six women last year, figures indicate.
The Times
26 May 2006
See the end of this Chance News for the data that was the basis for this claim.
[Hanson plc is the] largest aggregates producer in the world and 3rd largest in the USA
Daily Telegraph
3 March 2006
This Forsooth was suggested by Jerry Grossman.
In addition, a person's odds of becoming obese increased by 57 percent if he or she had a friend who became obese over a certain time interval. If the two people were mutual friends, the odds increased to 171 percent.
This discussion relates to an article, The Spread of Obesity in a Large Social Network over 32 Years, that appeared in the July 26, 2007 issue of the New England Journal of Medicine and seems to be freely available. Of course, here "increased to 171 percent" should be "increased by 171%."
Jerry remarks "The NEJM article is interesting to those of us interested in the mathematical aspects of the social network."
This forsooth was suggested by Paul Alper
I've done 120 short-term energy outlooks, and I've probably gotten two of them right.
Mark Rodekohr, a veteran Department of Energy (DOE) economist
Minnesota Star Tribune
August 12, 2007
Is Poker predominantly skill or luck?
Harvard ponders just what it takes to excel at poker.
Wall Street Journal, May 3, 2007, A1
Neil King Jr.
The WSJ article reports on a one-day meeting in the Harvard Faculty Club of poker pros, game theorists, statisticians, law students and gambling lobbyists to develop a strategy to show that poker is not predominantly a game of chance.
In the article we read:
The skill debate has been a preoccupation in poker circles since September (2006), when Congress barred the use of credit cards for online wagers. Horse racing and stock trading were exempt, but otherwise the new law hit any game "predominantly subject to chance". Included among such games was poker, which is increasingly played on Internet sites hosting players from all over the world.
This, of course, is not a new issue. For example, it is the subject of Mark Twain's short story "Science vs. Luck", published in the October 1870 issue of The Galaxy. The Galaxy no longer exists, but co-founder Francis Church will always be remembered for his reply to Virginia's letter to the New York Sun: "Yes, Virginia, there is a Santa Claus".
In Mark Twain's story a number of boys were arrested for playing "old sledge" for money. Old sledge was a popular card game in those times and often played for money. In the trial the judge finds that half the experts say that old sledge is a game of science and half that it is a game of chance. The lawyer for the boys suggests:
Impanel a jury of six of each, Luck versus Science: give them candles and a couple of decks of cards, send them into the jury room, and just abide by the result!
The judge agrees to do this, and so four deacons and two dominies (clergymen) were sworn in as the "chance" jurymen, and six inveterate old seven-up professors were chosen to represent the "science" side of the issue. They retired to the jury room. When they came out, the professors had ended up with all the money, so the judge ruled that the boys were innocent.
Today more sophisticated ways to determine if a gambling game is predominantly skill or luck are being studied. Ryne Sherman has written two articles on this topic, A Conceptualization and Quantification of Skill and More on Skill and Individual Differences, in which he proposes a way to estimate luck and skill in poker and other games. These articles appeared in the Internet magazine Two + Two, Vol. 3, Nos. 5 and 6, but are no longer available since the journal keeps its articles for only three months.
To estimate skill and luck percentages Sherman uses a statistical procedure called analysis of variance (ANOVA). To understand Sherman's method of comparing luck and skill we need to understand how ANOVA works, so we will illustrate it with a simple example.
Assume that a clinical trial is carried out to determine if vitamin ME improves memory. In the study, two groups are formed from 12 participants. Six were given a placebo and six were given vitamin ME. The study is carried out for a period of six months. At the end of each month the two groups are given a memory test. Here are the results:
{| class="wikitable" style="text-align:center"
! Month !! Placebo !! Vitamin ME
|-
| 1 || 4 || 7
|-
| 2 || 6 || 5
|-
| 3 || 8 || 8
|-
| 4 || 4 || 9
|-
| 5 || 5 || 7
|-
| 6 || 3 || 9
|-
! Mean !! 5 !! 7.5
|}
The numbers in the second column are the average number of correct answers for the placebo group and those in the third column are the average number of correct answers for the Vitamin ME group. ANOVA can be used to see if there is a significant difference between the groups. Here is Bill Peterson's explanation for how this works. There are two group means:

Mean1 = [math]\frac{(4+6+8+4+5+3)}{6}= \frac{30}{6}=5.0 [/math]

Mean2 = [math]\frac{(7+5+8+9+7+9)}{6}= \frac{45}{6}=7.5 [/math]
Then a grand mean over all observations:

[math]\frac{(30+45)}{12}= \frac{75}{12}=6.25 [/math]
Variance is always a sum of squared deviations divided by degrees of freedom: SS/df. This is also called a mean squared deviation, MS.
ANOVA begins by expressing the deviation of each observation from the grand mean as a sum of two terms: the difference of the observation from its group mean, plus the difference of the group mean from the grand mean. Writing this out explicitly for the example, we have, for the placebo group:
(4 - 6.25) = (4 - 5.0) + (5.0 - 6.25)
...
and for the vitamin ME group:
(7 - 6.25) = (7 - 7.5) + (7.5 - 6.25)
...
The magic (actually the Pythagorean Theorem in an appropriate dimensional space) is that the sums of squares decompose in this way:
[math](4-6.25)^2 +\cdots+(9-6.25)^2 = [(4-5.0)^2+\cdots+(9-7.5)^2] + [6(5.0-6.25)^2+6(7.5-6.25)^2][/math]
Check: 46.25 = 27.5 + 18.75
In the usual abbreviations:

[math]{\rm SST} = {\rm SSE} + {\rm SSG},[/math]
where these three quantities are the total sum of squares, the error sum of squares, and the group sum of squares. In ANOVA, scaled versions of SSE and SSG are compared to determine if there is evidence that there is a significant difference among the different groups.
The SSE is a measure of the variations within each group and so should not tell us much about the effectiveness of the treatments and is often called the nuisance variation. On the other hand the SSG is a measure of the variation between the groups and would be expected to give information about the effectiveness of the treatment.
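The decomposition above is easy to verify numerically. Here is a short Python sketch (ours, not from the article) that recomputes SST, SSE and SSG for the memory data and confirms the check 46.25 = 27.5 + 18.75:

```python
# Verify the ANOVA sum-of-squares decomposition SST = SSE + SSG
# for the vitamin ME memory example.

placebo = [4, 6, 8, 4, 5, 3]
vitamin = [7, 5, 8, 9, 7, 9]

def mean(xs):
    return sum(xs) / len(xs)

grand = mean(placebo + vitamin)            # 6.25
m_p, m_v = mean(placebo), mean(vitamin)    # 5.0 and 7.5

# Total sum of squares: every observation against the grand mean.
sst = sum((x - grand) ** 2 for x in placebo + vitamin)

# Error sum of squares: observations against their own group mean.
sse = sum((x - m_p) ** 2 for x in placebo) + sum((x - m_v) ** 2 for x in vitamin)

# Group sum of squares: group means against the grand mean.
ssg = 6 * (m_p - grand) ** 2 + 6 * (m_v - grand) ** 2

print(sst, sse, ssg)   # 46.25 27.5 18.75
```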
Sherman uses this same kind of decomposition for his measure of skill and chance in a game. We illustrate how he does this using data from five weeks of our low-key Monday night poker games. In the table below, we show how much each player won or lost in five games and their mean winnings.
{| class="wikitable" style="text-align:center"
! !! Sally !! Laurie !! John !! Mary !! Sarge !! Dick !! Glenn
|-
! Game 1
| 6.75 || 10.10 || 5.75 || 10.35 || 9.7 || 4.43 || 1.95
|-
! Game 2
| 4.35 || 4.25 || .40 || .35 || 8.8 || .15 || 5.8
|-
! Game 3
| 6.95 || 4.35 || .18 || 7.75 || 7.65 || 5.9 || 3.9
|-
! Game 4
| 1.23 || 11.55 || 4.35 || 2.9 || 4.85 || 3.9 || 3.25
|-
! Game 5
| 6.35 || 1.5 || .45 || .65 || .25 || 4.9 || 1.42
|-
! Mean
| 1.934 || 6.35 || .254 || .9 || 2.63 || 2.084 || 2.484
|}
To compare the amount of skill and luck in these games, Sherman would have us carry out an analysis of variance in the same way we did for our example. The players are now seen in the role of treatments. Each player has a mean net gain over the set of games. For each outcome in the table we write the difference between this outcome and the overall mean as the sum of two terms: the difference between the outcome and the player's mean, plus the difference between the player's mean and the overall mean. Sherman suggests that the difference between the outcome and the player's mean is due primarily to luck, while the difference between the player's mean and the overall mean is due primarily to skill. This leads him to define the skill % as the ratio of the group sum of squares to the total sum of squares, and the luck % as the ratio of the within-group sum of squares to the total sum of squares.
Sherman assumes that the variation in the amount won within groups is primarily due to luck, calling this the Random Variance, and that the variation between groups is primarily due to skill, calling this the Systematic Variance. He then defines:
<center>[math]{\rm Game's\ Skill\ Percentage} = \frac{\rm Systematic\ Variance}{\rm Systematic\ Variance + Random\ Variance}[/math]</center>
and similarly,
<center>[math]{\rm Game's\ Luck\ Percentage} = \frac{\rm Random\ Variance}{\rm Systematic\ Variance + Random\ Variance}[/math]</center>
So, in our poker games, the Random Variance is 758.499 and the Systematic Variance is 311.477, giving a Skill Percentage of 29.1% and a Luck Percentage of 70.9%.
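Sherman's split can be sketched in a few lines of Python. The players and scores below are invented for illustration (they are not the poker table above); the computation is the between-group versus within-group sum-of-squares split described in the text:

```python
# Sketch of Sherman's ANOVA-style skill/luck split.  Players are the
# "groups"; each list holds one player's net result per game.  The
# numbers are invented for illustration -- NOT the poker table above.

scores = {
    "A": [6.0, -2.0, 4.0, 5.0, 2.0],
    "B": [-3.0, 1.0, -4.0, 0.0, -2.0],
    "C": [-1.0, 2.0, 0.0, -3.0, 1.0],
}

def mean(xs):
    return sum(xs) / len(xs)

grand = mean([x for xs in scores.values() for x in xs])

# Within-player variation: Sherman's "random variance" (luck).
random_ss = sum((x - mean(xs)) ** 2 for xs in scores.values() for x in xs)

# Between-player variation: Sherman's "systematic variance" (skill).
systematic_ss = sum(len(xs) * (mean(xs) - grand) ** 2 for xs in scores.values())

skill_pct = systematic_ss / (systematic_ss + random_ss)
print(f"skill = {skill_pct:.1%}, luck = {1 - skill_pct:.1%}")
# skill = 43.6%, luck = 56.4%
```

With the article's poker figures (Systematic 311.477, Random 758.499) the same formula gives the 29.1% skill share quoted above.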
In his second article, Sherman reports the Skill Percentage he obtained using data from a number of different types of games. For example, using data for Major League Batting, the Skill Percentage for hits was 39% and for home runs was 68%. For NBA Basketball it was 75% for points scored. For poker stars in weekly tournaments it was 35%.
Sherman concludes his articles with the remarks:
If two persons play the same game, why don't both achieve the same results? The purpose of last month's article and this article was to address this question. This article suggests that there are two answers to this question: Skill (or systematic variance) or Luck (or random variance). Using both the correlation approach described last month and the ANOVA approach described in this article, one can estimate the amount of skill involved in any game. Last, and maybe most importantly, Table 4 demonstrated that the skill estimates involved in playing poker (or at least tournament poker) are not very different from other sport outcomes which are widely accepted as skillful.
Discussion questions:
(1) Do you think that Sherman's measure of skill and luck in a game is reasonable? If not, why not?
(2) There is a form of poker modeled after duplicate bridge. Do you think that the congressional decision should apply to this form of gambling?
Second chance lottery drawing
Ask Marilyn
Parade, 5 August 2007
Marilyn vos Savant
A reader poses the following question.
Say that a state runs a lottery with scratch-off tickets and has a second-chance drawing for losing tickets. The latter are sent to a central location, where they are boxed and stored until it’s time for the drawing. An official then chooses one box and draws a ticket from it. All the other boxes are untouched. Is this fair, compared to storing all the tickets in a large container and then drawing a ticket from it?
Marilyn responds that, "The methods are equivalent, and both are perfectly fair: One winner was chosen at random", and suggests that the method is used purely for physical convenience. (In a state lottery, however, we imagine the whole affair would be conducted electronically.)
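A quick simulation (ours) makes the comparison concrete, under the assumption that every box holds the same number of tickets; the box and ticket counts are made up for illustration:

```python
# Simulate the two second-chance procedures, assuming every box holds
# the same number of tickets.  The counts (4 boxes of 1,000 tickets)
# are made up for illustration.
import random

random.seed(1)
boxes = [[(b, t) for t in range(1000)] for b in range(4)]
all_tickets = [ticket for box in boxes for ticket in box]
ours = (0, 0)   # track one particular losing ticket

trials = 100_000
# Method 1: pick a box at random, then a ticket from that box.
wins_a = sum(random.choice(random.choice(boxes)) == ours for _ in range(trials))
# Method 2: pick a ticket from one big container.
wins_b = sum(random.choice(all_tickets) == ours for _ in range(trials))

# Each estimate should be close to 1/4000 = 0.00025.
print(wins_a / trials, wins_b / trials)
```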
DISCUSSION QUESTIONS:
(1) Marilyn's answer is almost correct. What has been implicitly assumed here?
(2) Here is a related problem (from Grinstead & Snell, Introduction to Probability, p. 152, problem 23).
You are given two urns and fifty balls. Half of the balls are white and half are black. You are asked to distribute the balls in the urns with no restriction placed on the number of either type in an urn. How should you distribute the balls in the urns to maximize the probability of obtaining a white ball if an urn is chosen at random and a ball drawn out at random? Justify your answer.
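One way to explore the urn problem is by brute force. The sketch below (ours, taking the problem's 25 white and 25 black balls) tries every way of stocking urn 1 and reports the best:

```python
# Brute-force the urn problem: with 25 white and 25 black balls, put w
# white and b black in urn 1 (the rest go in urn 2), choose an urn at
# random, then a ball at random, and maximize P(white).

def p_white(w, b, W=25, B=25):
    """P(white ball) when urn 1 holds w white and b black balls."""
    def frac(white, black):
        return white / (white + black)
    return 0.5 * frac(w, b) + 0.5 * frac(W - w, B - b)

# Require both urns to be non-empty.
candidates = [(w, b) for w in range(26) for b in range(26) if 0 < w + b < 50]
best = max(candidates, key=lambda wb: p_white(*wb))

print(best, round(p_white(*best), 4))   # (1, 0) 0.7449
```

Putting a single white ball in one urn guarantees a white ball half the time, while the other urn still yields white with probability 24/49.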
Submitted by Bill Peterson
The understanding and misunderstanding of Bayesian statistics
Gambling on tomorrow, The Economist, Aug 16th 2007
Scientists try new ways to predict climate risks, Reuters 12 Aug 2007.
Too late to escape climate disaster?, New Scientist, 18 Aug 2007.
Earth Log  Complex lesson, Daily Telegraph, 17 Aug 2007.
The latest edition of one of the Royal Society's journals, Philosophical Transactions, is devoted to the science of climate modelling:
predictions from different models are pooled to produce estimates of future climate change, together with their associated uncertainties,
the Royal Society said, and it partly focusses on 'the understanding and misunderstanding' of Bayesian statistics. So this Economist article discusses the difference between the frequentist and Bayesian view of statistics, in the context of forecasting the weather.
It starts by claiming that there were just two main influences on the early development of probability theory and statistics, Bayes and Pascal, and that Pascal's ideas are simple and widely understood while Bayes's are not. Pascal adopted a frequentist view, which The Economist characterises as the world of the gambler: each throw of the dice is independent of the previous one. Bayes promoted what we now call Bayesian probability, which The Economist characterises as incorporating the accumulation of experience into a statistical model in the form of prior assumptions:
A good prior assumption about tomorrow's weather, for example, is that it will be similar to today's. Assumptions about the weather the day after tomorrow, though, will be modified by what actually happens tomorrow.
But prior assumptions can influence model outcomes in subtle ways, The Economist warns:
Since the future is uncertain, (weather) forecasts are run thousands of times, with varying parameters, to produce a range of possible outcomes. The outcomes are assumed to cluster around the most probable version of the future. The particular range of values chosen for a parameter is an example of a Bayesian prior assumption, since it may be modified in the light of experience. But the way you pick the individual values to plug into the model can cause trouble. They might, for example, be assumed to be evenly spaced, say 1, 2, 3, 4. But in the example of snow retention, evenly spacing both rate-of-fall and rate-of-residence-in-the-clouds values will give different distributions of results. That is because the second parameter is actually the reciprocal of the first. To make the two match, value for value, you would need, in the second case, to count 1, ½, ⅓, ¼—which is not evenly spaced. If you use evenly spaced values instead, the two models' outcomes will cluster differently.
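The reciprocal-spacing point can be illustrated with a toy model. Everything below is a made-up stand-in (the function f is not from the article): the model output is proportional to a residence time t = 1/r, where r is a rate, and evenly spacing r versus evenly spacing t produces ensembles whose outputs cluster differently:

```python
# Toy illustration of the reciprocal-spacing trap.  The "model" f is a
# made-up stand-in whose output is proportional to a residence time
# t = 1/r, where r is a rate.

def f(rate):
    return 100.0 / rate   # output proportional to 1/rate

even_rates = [1.0, 2.0, 3.0, 4.0]      # evenly spaced rates
# the matching residence times are 1, 1/2, 1/3, 1/4 -- not evenly spaced
outputs_r = [f(r) for r in even_rates]

even_times = [1.0, 0.75, 0.5, 0.25]    # evenly spaced residence times
outputs_t = [f(1.0 / t) for t in even_times]

print(outputs_r)   # bunches toward small values: 100, 50, ~33.3, 25
print(outputs_t)   # evenly spread: 100, ~75, 50, 25
```

Both ensembles run the same model, yet one samples outputs that bunch at the low end while the other spreads them evenly, which is the clustering difference The Economist describes.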
It goes on to claim that those who use statistical models often fail to account for the uncertainty associated with such models:
Psychologically, people tend to be Bayesian—to the extent of often making false connections. And that risk of false connection is why scientists like Pascal's version of the world. It appears to be objective. But when models are built, it is almost impossible to avoid including Bayesian-style prior assumptions in them. By failing to acknowledge that, model builders risk making serious mistakes.
One of the authors of the Philosophical Transactions papers, David Stainforth of Oxford University, says:
The answer is more comprehensive assessments of uncertainty, if we are to provide better information for today's policy makers. Such assessments would help steer the development of climate models and focus observational campaigns. Together this would improve our ability to inform decision makers in the future.
Questions
 What influences on the early development of probability theory and statistics can you think of, other than Pascal and Bayes?
 Is the frequentist view of statistics nothing more than "each throw of the dice is independent of the previous one"? What other characteristics would you associate with this view of statistics? Can you offer a better one-line summary? What about a better description of Bayesian statistics than "incorporating the accumulation of experience into a statistical model in the form of prior assumptions"?
 In one of the Royal Society's papers, the authors, David Stainforth from Oxford University and Leonard Smith from the LSE, advocate making a clearer distinction between the output of model experiments designed for improving the model and those of immediate relevance for decision making. What do you think they meant by that? Can you think of a simple example to illustrate your interpretation?
 The Economist claims that scientists are not easily able to understand Bayes because of their philosophical training in the rigours of Pascal's method. How would you reply to this assertion?
Further reading
 Confidence, uncertainty and decisionsupport relevance in climate predictions, David Stainforth, Oxford University and Leonard Smith, LSE.
 This paper discusses the sources of uncertainty in the interpretation of climate model simulations as projections of the future.
 See also Climateprediction.net.
Submitted by John Gavin.
The Myth, the Math, the Sex
The Myth, the Math, the Sex
The New York Times, August 12, 2007, The Week in Review
Gina Kolata
The Median, the Math and the Sex.
The New York Times, August 19, 2007, The Week in Review
Gina Kolata
In the first article Gina Kolata comments that there have been numerous studies claiming to show that men have more sexual partners than women.
She reports on a recent government study which found that men have had a median of seven female sex partners while women have had a median of four. Kolata writes:
"It is about time for mathematicians to set the record straight," said David Gale, an emeritus mathematics professor at the University of California, Berkeley.
"Surveys and studies to the contrary notwithstanding, the conclusion that men have substantially more sex partners than women is not and cannot be true for purely logical reasons," Dr. Gale said. He even provided a proof, writing in an email message.
By way of dramatization, we change the context slightly and will prove what will be called the High School Prom Theorem. We suppose that on the day after the prom, each girl is asked to give the number of boys she danced with. These numbers are then added up, giving a number G. The same information is then obtained from the boys, giving a number B.
Theorem: G = B
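Gale's argument is just double counting: each dance is a (girl, boy) pair, so summing dances over girls and over boys counts the same set twice. A small simulation (ours, with hypothetical rosters) confirms this:

```python
# Check the High School Prom Theorem by double counting: each dance is a
# (girl, boy) pair, so the girls' total and the boys' total count the
# same set of pairs.  The rosters and dance probability are hypothetical.
import random

random.seed(0)
girls = ["girl%d" % i for i in range(10)]
boys = ["boy%d" % i for i in range(8)]

# a random set of distinct (girl, boy) dance pairs
dances = {(g, b) for g in girls for b in boys if random.random() < 0.3}

G = sum(sum(1 for (g, _) in dances if g == girl) for girl in girls)
B = sum(sum(1 for (_, b) in dances if b == boy) for boy in boys)

print(G == B)   # True: both totals equal the number of dances
```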
Kolata reports further:
Ronald Graham, a professor of mathematics and computer science at the University of California, San Diego, agreed with Dr. Gale. After all, on average, men would have to have three more partners than women, raising the question of where all these extra partners might be.
The second Gina Kolata article deals primarily with the shower of responses pointing out that the study reported medians rather than means, and so Gale's proof is either irrelevant or not true.
Of course the blogs had a field day with this mathematics. One of the best is the blog of Brad DeLong, an economist at the University of California and hence a colleague of David Gale. He blames Gina Kolata, saying that she did not tell Gale that the study reported its results in terms of medians rather than means. However, the comments on this blog are very interesting and show just how hard it is to apply mathematics to the real world. They suggest good discussion questions.
Discussion questions
(1) What explanations can you give for the results of the survey? Are they enough to explain the difference reported in this survey?
(2) Did you dance with more than one person at your high school prom?
(3) Is Gale's theorem true if there are more women than men, or more men than women, in the population sampled?
(4) The article reports:
“I have heard this question before,” said Cheryl D. Fryar, a health statistician at the National Center for Health Statistics and a lead author of the new federal report, “Drug Use and Sexual Behaviors Reported by Adults: United States, 1999–2002,” which found that men had a median of seven partners and women four. But when it comes to an explanation, she added, “I have no idea.”
Do you think that Fryar knows the difference between mean and median?
Data for first forsooth
The Times 26 May 2006 article that was the source for this Forsooth included the following data:
LEADING CAUSES OF DEATH

{| class="wikitable"
! MEN !! Total deaths !! Percentage
|-
| Heart disease || 49,205 || 20.2
|-
| Cerebrovascular diseases || 19,266 || 7.9
|-
| Cancer of trachea, bronchus & lung || 16,775 || 6.9
|-
| Chronic lower respiratory diseases || 13,589 || 5.6
|-
| Influenza and pneumonia || 12,209 || 5.0
|-
| Prostate cancer || 9,018 || 3.7
|-
| Cancer of colon, rectum and anus || 7,570 || 3.1
|-
| Lymphoid cancer || 5,606 || 2.3
|-
| Dementia and Alzheimer's || 5,076 || 2.1
|}

{| class="wikitable"
! WOMEN !! Total deaths !! Percentage
|-
| Heart disease || 38,969 || 16.0
|-
| Influenza and pneumonia || 31,366 || 12.9
|-
| Dementia and Alzheimer's || 19,255 || 7.9
|-
| Chronic lower respiratory diseases || 12,605 || 5.2
|-
| Cancer of trachea, bronchus & lung || 11,895 || 4.9
|-
| Breast cancer || 10,986 || 4.5
|-
| Heart failure & complications, & ill-defined heart disease (not included above) || 7,212 || 3.0
|-
| Cancer of colon, rectum and anus || 6,537 || 2.7
|}