# Chance News 29

## Quotations

"There are few things that are so unpardonably neglected in our country as poker. The upper class knows very little about it. Now and then you find ambassadors who have sort of a general knowledge of the game, but the ignorance of the people is fearful. Why, I have known clergymen, good men, kind-hearted, liberal, sincere, and all that, who did not know the meaning of a "flush." It is enough to make one ashamed of the species".

Mark Twain.

## Forsooth

The following Forsooths are from the September 2007 issue of RSS News.

THE BIGGEST KILLER BY FAR
Heart disease claimed the lives of one in five men
and about one in six women last year, figures indicate.
The Times
26 May 2006

See the end of this Chance News for the data on which this claim was based.

[Hanson plc is the] largest aggregates producer
in the world and 3rd largest in the USA
Daily Telegraph
3 March, 2006

This Forsooth was suggested by Jerry Grossman.

In addition, a person's odds of becoming obese increased by 57 percent if he or she had a friend who became obese over a certain time interval. If the two people were mutual friends, the odds increased to 171 percent.
Revolution Health
July 25, 2007

This discussion relates to the article The Spread of Obesity in a Large Social Network over 32 Years, which appeared in the July 26, 2007 issue of the New England Journal of Medicine and seems to be freely available. Of course, here "increased to 171 percent" should be "increased by 171 percent."

Jerry remarks "The NEJM article is interesting to those of us interested in the mathematical aspects of the social network."

This forsooth was suggested by Paul Alper.

I've done 120 short-term energy outlooks, and I've probably gotten two of them right.
Mark Rodekohr, a veteran Department of Energy (DOE) economist
Minnesota Star Tribune
August 12, 2007

## Is Poker predominantly skill or luck?

Harvard ponders just what it takes to excel at poker.
Wall Street Journal, May 3, 2007, A1
Neil King Jr.

The WSJ article reports on a one-day meeting in the Harvard Faculty Club of poker pros, game theorists, statisticians, law students and gambling lobbyists to develop a strategy to show that poker is not predominantly a game of chance.

The skill debate has been a preoccupation in poker circles since September (2006), when Congress barred the use of credit cards for online wagers. Horse racing and stock trading were exempt, but otherwise the new law hit any game "predominantly subject to chance". Included among such games was poker, which is increasingly played on Internet sites hosting players from all over the world.

This, of course, is not a new issue. For example, it is the subject of Mark Twain's short story "Science vs. Luck", published in the October 1870 issue of The Galaxy. The Galaxy no longer exists, but co-founder Francis Church will always be remembered for his reply to Virginia's letter to the New York Sun: "Yes, Virginia, there is a Santa Claus".

In Mark Twain's story a number of boys were arrested for playing "old sledge" for money. Old sledge was a popular card game in those times and was often played for money. At the trial the judge finds that half the experts say that old sledge is a game of science and half say that it is a game of chance. The lawyer for the boys suggests:

Impanel a jury of six of each, Luck versus Science -- give them candles and a couple of decks of cards, send them into the jury room, and just abide by the result!

The judge agrees to do this, and so four deacons and two dominies (clergymen) were sworn in as the "chance" jurymen, and six inveterate old seven-up professors were chosen to represent the "science" side of the issue. They retired to the jury room. When they came out, the professors had ended up with all the money, and so the judge ruled that the boys were innocent.

Today more sophisticated ways to determine whether a gambling game is predominantly skill or luck are being studied. Ryne Sherman has written two articles on this, including [http://www.dartmouth.edu/~chance/forwiki/Sherman2 A Conceptualization and Quantification of Skill and Luck]. Sherman's measure is based on the analysis of variance (ANOVA), so we first review how ANOVA works using a hypothetical experiment: a placebo group and a "Vitamin ME" group each take a memory test monthly for six months, with the following average numbers of correct answers.

<center>
<table width="70%" border="1">
<tr>
<td><div align="center">Month</div></td>
<td><div align="center">Placebo</div></td>
<td><div align="center">Vitamin ME</div></td>
</tr>
<tr>
<td><div align="center">1</div></td>
<td><div align="center">4</div></td>
<td><div align="center">7</div></td>
</tr>
<tr>
<td><div align="center">2</div></td>
<td><div align="center">6</div></td>
<td><div align="center">5</div></td>
</tr>
<tr>
<td><div align="center">3</div></td>
<td><div align="center">8</div></td>
<td><div align="center">8</div></td>
</tr>
<tr>
<td><div align="center">4</div></td>
<td><div align="center">4</div></td>
<td><div align="center">9</div></td>
</tr>
<tr>
<td><div align="center">5</div></td>
<td><div align="center">5</div></td>
<td><div align="center">7</div></td>
</tr>
<tr>
<td><div align="center">6</div></td>
<td><div align="center">3</div></td>
<td><div align="center">9</div></td>
</tr>
<tr>
<td height="22"><div align="center">Mean</div></td>
<td><div align="center">5</div></td>
<td><div align="center">7.5</div></td>
</tr>
</table>
</center><br>

The numbers in the second column are the average number of correct answers
for the placebo group, and those in the third column are the average number of
correct answers for the Vitamin ME group. ANOVA can be used to see whether there
is a significant difference between the groups. Here is Bill Peterson's
explanation of how this works.
There are two group means:

<center>Mean1 = $\frac{(4+6+8+4+5+3)}{6}=\frac{30}{6}= 5.0$<br><br>

Mean2 = $\frac{(7+5+8+9+7+9)}{6}= \frac{45}{6}=7.5$<br></center>

Then a grand mean over all observations:<br>
<center>Mean = $\frac{(30+45)}{(6+6)} = 6.25$<br></center>

A variance is always a sum of squared deviations divided by its degrees of
freedom: SS/df. This is also called a mean squared deviation, MS.

ANOVA begins by expressing the deviation of each observation from the grand mean as a sum of two terms:  the difference of the observation from its group mean, plus the difference of the group mean from the grand mean.   Writing this out explicitly for the example, we have, for the placebo group:<br><br>
<center>(4 - 6.25) = (4 - 5.0) + (5.0 - 6.25)<br>
(6 - 6.25) = (6 - 5.0) + (5.0 - 6.25)<br>
...<br>
(3 - 6.25) = (3 - 5.0) + (5.0 - 6.25)</center><br>

and for the vitamin ME group:

<center>(7 - 6.25) = (7 - 7.5) + (7.5 - 6.25)<br>

(5 - 6.25) = (5 - 7.5) + (7.5 - 6.25)<br>
...<br>
(9 - 6.25) = (9 - 7.5) + (7.5 - 6.25)<br></center>

The magic (actually the Pythagorean Theorem in an appropriate dimensional space)
is that the sums of squares decompose in this way.<br>

<center>$(4-6.25)^2 +\cdots+(9-6.25)^2 = [(4-5.0)^2+\cdots+(9-7.5)^2] + [(5.0-6.25)^2+\cdots+(7.5-6.25)^2]$</center><br>

<center>Check: 46.25 = 27.5 + 18.75</center><br>

In the usual abbreviations:<br>

<center>SST = SSE + SSG</center><br>

where these three quantities are the total sum of squares, the error sum of
squares, and the group sum of squares. In ANOVA, scaled versions of SSE and SSG
are compared to determine whether there is evidence of a significant difference
among the groups.

The SSE is a measure of the variation within each group, and so should not tell
us much about the effectiveness of the treatments; it is often called the
nuisance variation. The SSG, on the other hand, is a measure of the variation
between the groups, and would be expected to give information about the
effectiveness of the treatment.
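None of this arithmetic is special to the example; the decomposition can be checked numerically. Below is a short sketch in Python (standard library only) that recomputes SST, SSE and SSG for the placebo and Vitamin ME data above:

```python
# Scores (average number of correct answers per month) from the table above.
placebo = [4, 6, 8, 4, 5, 3]
vitamin_me = [7, 5, 8, 9, 7, 9]

groups = [placebo, vitamin_me]
all_obs = placebo + vitamin_me
grand_mean = sum(all_obs) / len(all_obs)  # 6.25

# Total sum of squares: each observation's deviation from the grand mean.
sst = sum((x - grand_mean) ** 2 for x in all_obs)

# Error (within-group) and group (between-group) sums of squares.
sse = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
ssg = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

print(sst, sse, ssg)  # 46.25 27.5 18.75 -- so SST = SSE + SSG
```

Dividing SSE and SSG by their degrees of freedom (here 10 and 1) gives the mean squares whose ratio ANOVA tests with an F statistic.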

Sherman uses this same kind of decomposition for his measure of skill and
chance in a game. We illustrate how he does this using data from
five weeks of our low-key Monday night poker games. In the table below, we show
how much each player won or lost in each of the five games.
<center>
<table width="90%" border="1">
<tr>
<td width="13%"><div align="center"></div></td>
<td width="13%"><div align="center">Sally</div></td>
<td width="12%"><div align="center">Laurie</div></td>
<td width="13%"><div align="center">John</div></td>
<td width="13%"><div align="center">Mary</div></td>
<td width="12%"><div align="center">Sarge</div></td>
<td width="12%"><div align="center">Dick</div></td>
<td width="12%"><div align="center">Glenn</div></td>
</tr>
<tr>
<td><div align="center">Game 1</div></td>
<td><div align="center">-6.75</div></td>
<td><div align="center">-10.10</div></td>
<td><div align="center">-5.75</div></td>
<td><div align="center">10.35</div></td>
<td><div align="center">9.7</div></td>
<td><div align="center">4.43</div></td>
<td><div align="center">-1.95</div></td>
</tr>
<tr>
<td><div align="center">Game 2</div></td>
<td><div align="center">4.35</div></td>
<td><div align="center">-4.25</div></td>
<td><div align="center">.40</div></td>
<td><div align="center">-.35</div></td>
<td><div align="center">-8.8</div></td>
<td><div align="center">-.15</div></td>
<td><div align="center">5.8</div></td>
</tr>
<tr>
<td><div align="center">Game 3</div></td>
<td><div align="center">6.95</div></td>
<td><div align="center">-4.35</div></td>
<td><div align="center">.18</div></td>
<td><div align="center">-7.75</div></td>
<td><div align="center">7.65</div></td>
<td><div align="center">-5.9</div></td>
<td><div align="center">3.9</div></td>
</tr>
<tr>
<td height="24"><div align="center">Game 4</div></td>
<td><div align="center">-1.23</div></td>
<td><div align="center">-11.55</div></td>
<td><div align="center">4.35</div></td>
<td><div align="center">2.9</div></td>
<td><div align="center">4.85</div></td>
<td><div align="center">-3.9</div></td>
<td><div align="center">3.25</div></td>
</tr>
<tr>
<td><div align="center">Game 5</div></td>
<td><div align="center">6.35</div></td>
<td><div align="center">-1.5</div></td>
<td><div align="center">-.45</div></td>
<td><div align="center">-.65</div></td>
<td><div align="center">-.25</div></td>
<td><div align="center">-4.9</div></td>
<td><div align="center">1.42</div></td>
</tr>
</table>
</center>

To compare the amounts of skill and luck in these games, Sherman would have us carry out an analysis of variance in the same way we did for our example. The players are now seen in the role of treatments, and each player has a mean net gain over the set of games. For each outcome in the table, we write the difference between that outcome and the overall mean as the sum of two terms: the difference between the outcome and the player's mean, plus the difference between the player's mean and the overall mean. Sherman suggests that the difference between the outcome and the player's mean is due primarily to luck, while the difference between the player's mean and the overall mean is due primarily to skill. This leads him to define the skill % as the ratio of the group sum of squares to the total sum of squares, and the luck % as the ratio of the within-group sum of squares to the total sum of squares.

Sherman assumes that the variation in the amount won within groups is
primarily due to luck, and calls this the Random Variance; he takes the
variation between groups to be due primarily to skill, and calls this the
Systematic Variance. He then defines:

<center>${\rm Game's\ Skill\ Percentage} = \frac{\rm Systematic\ Variance}{\rm Systematic\ Variance + Random\ Variance}$ </center>

and similarly,
<center>${\rm Game's\ Luck\ Percentage} = \frac{\rm Random\ Variance} {\rm Systematic\ Variance + Random\ Variance}$</center>

So, in our poker game, the Random Variance is 758.499 and the Systematic
Variance is 311.447. The Skill Percentage is therefore 29.1% and the Luck
Percentage is 70.9%.
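These figures can be reproduced from the table with the same ANOVA decomposition. The sketch below, in Python, treats the players as the "groups" (the data are transcribed by hand from the table above):

```python
# Winnings per game for each player, from the poker table above.
results = {
    "Sally":  [-6.75, 4.35, 6.95, -1.23, 6.35],
    "Laurie": [-10.10, -4.25, -4.35, -11.55, -1.5],
    "John":   [-5.75, 0.40, 0.18, 4.35, -0.45],
    "Mary":   [10.35, -0.35, -7.75, 2.9, -0.65],
    "Sarge":  [9.7, -8.8, 7.65, 4.85, -0.25],
    "Dick":   [4.43, -0.15, -5.9, -3.9, -4.9],
    "Glenn":  [-1.95, 5.8, 3.9, 3.25, 1.42],
}

all_obs = [x for g in results.values() for x in g]
grand = sum(all_obs) / len(all_obs)

# Systematic (between-player) and Random (within-player) sums of squares.
ssg = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in results.values())
sse = sum((x - sum(g) / len(g)) ** 2 for g in results.values() for x in g)

skill = ssg / (ssg + sse)
print(round(ssg, 3), round(sse, 3), round(100 * skill, 1))
```

The script prints the between- and within-player sums of squares and the resulting skill percentage of about 29.1%.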

In his second article, Sherman reports  the Skill Percentage  he obtained using
data from a number of different types of games.  For example, using data for
Major League Batting, the Skill Percentage for hits was 39% and for home runs
was 68%. For NBA  Basketball it was 75% for points scored. For poker stars in
weekly tournaments it was 35%.

Sherman concludes his articles with the remarks:

<blockquote> If two persons play the same game, why don't both achieve the
same results? The purpose of last month's article and this article was to
question: Skill (or systematic variance) or Luck (or random variance).  Using
both the correlation approach described last month and the ANOVA approach
described in this article, one can estimate the amount of skill involved in any
game.  Last, and maybe most importantly, Table 4 demonstrated that the skill
estimates involved in playing poker  (or at least tournament poker) are not very
different from other sport outcomes which are widely accepted as
skillful.</blockquote>

Discussion questions:

(1) Do you think that Sherman's measure of skill and luck in a game is
reasonable?  If not, why not?

(2) There is a form of poker modeled after duplicate bridge.  Do you think that
the congressional decision should apply to this form of gambling?

==Second chance lottery drawing==
Marilyn vos Savant

A reader poses the following question.
<blockquote>
Say that a state runs a lottery with scratch-off tickets and has a second-chance drawing for losing tickets. The latter are sent to a central location, where they are boxed and stored until it’s time for the drawing. An official then chooses one box and draws a ticket from it. All the other boxes are untouched. Is this fair, compared to storing all the tickets in a large container and then drawing a ticket from it?
</blockquote>

Marilyn responds that "The methods are equivalent, and both are perfectly fair: One winner was chosen at random," and suggests that the method is used purely for physical convenience. (In a state lottery, however, we imagine the whole affair would be conducted electronically.)

DISCUSSION QUESTIONS:

(1) Marilyn's answer is almost correct.  What has been implicitly assumed here?

(2)  Here is a related problem (from Grinstead & Snell, [http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/book.html Introduction to Probability], p. 152, problem 23).
<blockquote>
You are given two urns and fifty balls. Half of the balls are white and half
are black. You are asked to distribute the balls in the urns with no restriction
placed on the number of either type in an urn. How should you distribute
the balls in the urns to maximize the probability of obtaining a white ball if
an urn is chosen at random and a ball drawn out at random? Justify your answer.
</blockquote>
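A brute-force enumeration settles problem (2), though it gives the answer away, so try it on paper first. A sketch in Python:

```python
# 25 white and 25 black balls split between two urns; an urn is picked at
# random, then a ball from it. Enumerate every split to find the best one.
best_p, best_split = 0.0, None
for w in range(26):        # white balls placed in urn 1
    for b in range(26):    # black balls placed in urn 1
        n1 = w + b
        n2 = (25 - w) + (25 - b)
        if n1 == 0 or n2 == 0:
            continue       # both urns must contain at least one ball
        p = 0.5 * (w / n1) + 0.5 * ((25 - w) / n2)
        if p > best_p:
            best_p, best_split = p, (w, b)

# A lone white ball in one urn wins: p = 1/2 + (1/2)(24/49), about 0.745.
print(best_split, round(best_p, 4))
```

Putting a single white ball alone in one urn guarantees a white draw half the time, while the other urn still contains nearly half white balls.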

Submitted by Bill Peterson

==The understanding and misunderstanding of Bayesian statistics==
[http://www.economist.com/science/PrinterFriendly.cfm?story_id=9645336 <em>Gambling on tomorrow</em>,] The Economist, Aug 16th 2007 <br>
[http://news.yahoo.com/s/nm/20070812/sc_nm/climate_uncertainty_dc <em>Scientists try new ways to predict climate risks</em>,] Reuters 12 Aug 2007.<br>
<em>Too late to escape climate disaster?</em>, New Scientist, 18 Aug 2007.<br>
<em>Earth Log - Complex lesson</em>, Daily Telegraph, 17 Aug 2007.<br>

The latest edition of one of the Royal Society's journals, [http://www.journals.royalsoc.ac.uk/content/102021/ Philosophical Transactions,] is devoted to the science of climate modelling, in which, as the Royal Society puts it,
<blockquote>predictions from different models are pooled to produce estimates of future climate change, together with their associated uncertainties.</blockquote>
The issue partly focuses on 'the understanding and misunderstanding' of Bayesian statistics, and this Economist article discusses the difference between the frequentist and Bayesian views of statistics in the context of forecasting the weather.

It starts by claiming that there were just two main influences on the early development of probability theory and statistics:
[http://en.wikipedia.org/wiki/Thomas_Bayes Bayes] and [http://en.wikipedia.org/wiki/Blaise_Pascal Pascal]. Pascal's ideas are simple and widely understood, while Bayes's are not.
Pascal adopted a [http://en.wikipedia.org/wiki/Frequency_probability frequentist] view; The Economist characterises his world as <em>that of the gambler: each throw of the dice is independent of the previous one</em>.
Bayes promoted what we now call [http://en.wikipedia.org/wiki/Bayesian_probability Bayesian probability,] which The Economist characterises as <em>incorporating the accumulation of experience into a statistical model in the form of prior assumptions</em>:
<blockquote>
A good prior assumption about tomorrow's weather, for example, is that it will be similar to today's.
Assumptions about the weather the day after tomorrow, though, will be modified by what actually happens tomorrow.
</blockquote>

But prior assumptions can influence model outcomes in subtle ways, The Economist warns:
<blockquote>
Since the future is uncertain, (weather) forecasts are run thousands of times, with varying parameters, to produce a range of possible outcomes.
The outcomes are assumed to cluster around the most probable version of the future.
The particular range of values chosen for a parameter is an example of a Bayesian prior assumption, since it may be modified in the light of experience. But the way you pick the individual values to plug into the model can cause trouble.
They might, for example, be assumed to be evenly spaced, say 1,2,3,4.
But in the example of snow retention, evenly spacing both rate-of-fall and rate-of-residence-in-the-clouds values will give different distributions of results.
That is because the second parameter is actually the reciprocal of the first.
To make the two match, value for value, you would need, in the second case, to count 1, ½, ⅓, ¼—which is not evenly spaced.
If you use evenly spaced values instead, the two models' outcomes will cluster differently.
</blockquote>
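The point about reciprocal spacing can be seen with a toy calculation; the sketch below, in Python, is purely illustrative (no actual climate model is involved):

```python
# Evenly spaced values of a rate parameter r, versus evenly spaced values
# of its reciprocal 1/r: the two choices imply different sets of r values.
rates = [1.0, 2.0, 3.0, 4.0]                  # evenly spaced in r
recips = [1 / r for r in rates]               # 1, 1/2, 1/3, 1/4 -- not even

even_recips = [1.0, 0.75, 0.5, 0.25]          # evenly spaced in 1/r instead
implied_rates = [1 / x for x in even_recips]  # 1, 4/3, 2, 4 -- not even

# An ensemble averaged over each choice clusters around a different value:
print(sum(rates) / 4)           # 2.5
print(sum(implied_rates) / 4)   # about 2.083
```

Evenly spacing one parametrisation induces an uneven, differently centred spacing of the other, so ensembles built from the two choices cluster differently.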

It goes on to claim that those who use statistical models often fail to account for the uncertainty associated with such models:
<blockquote>
Psychologically, people tend to be Bayesian—to the extent of often making false connections. And that risk of false connection is why scientists like Pascal's version of the world. It appears to be objective. But when models are built, it is almost impossible to avoid including Bayesian-style prior assumptions in them. By failing to acknowledge that, model builders risk making serious mistakes.
</blockquote>

One of the Philosophical Transactions authors, David Stainforth of Oxford University, says
<blockquote>
The answer is more comprehensive assessments of uncertainty, if we are to provide better information for today's policy makers.
Such assessments would help steer the development of climate models and focus observational campaigns. Together this would improve our ability to inform decision makers in the future.
</blockquote>

===Questions===
* What influences on the early development of  probability theory and statistics can you think of, other than Pascal and Bayes?
* Is the frequentist view of statistics nothing more than <em>each throw of the dice is independent of the previous one</em>? What other characteristics would you associate with this view of statistics? Can you offer a better one-line summary? What about a better description of Bayesian statistics than <em>incorporating the accumulation of experience into a statistical model in the form of prior assumptions</em>?
* In one of the Royal Society's papers, David Stainforth of Oxford University and Leonard Smith of the LSE advocate making a clearer distinction between the output of model experiments designed for improving the model and those of immediate relevance for decision making. What do you think they mean by that? Can you think of a simple example to illustrate your interpretation?
* The Economist claims that scientists are not easily able to understand Bayes because of their philosophical training in the rigours of Pascal's method. How would you reply to this assertion?

* [http://www.lse.ac.uk/collections/pressAndInformationOffice/newsAndEvents/archives/2007/ClimateChangeReport.htm Confidence, uncertainty and decision-support relevance in climate predictions,] [http://www.atm.ox.ac.uk/user/das/ David Stainforth,] Oxford University and Leonard Smith, LSE.
** This [http://www.lse.ac.uk/collections/cats/papersPDFs/75_Stainforth_ConfidenceUncertaintyRelevance_2007.pdf  paper] discusses the sources of uncertainty in the interpretation of climate model simulations as projections of the future.

Submitted by John Gavin.

==The Myth, the Math, the Sex==
[http://www.nytimes.com/2007/08/12/weekinreview/12kolata.html?ex=1188792000&en=4f4f1484b2912d4b&ei=5070 The Myth, the Math, the Sex].<br>
''The New York Times'', August 12, 2007, The Week in Review<br>
Gina Kolata

[http://www.nytimes.com/2007/08/19/weekinreview/19kolata.html?ex=1188792000&en=2b4b9a5b0a9293b4&ei=5070 The Median, the Math and the Sex].<br>
''The New York Times'', August 19, 2007, The Week in Review<br>
Gina Kolata

In the first article Gina Kolata comments that there have been numerous studies claiming to show that men have more sexual partners than women.

She reports on a recent government study, which found that men have had a median of seven female sex partners while women have had a median of four. Kolata writes:

<blockquote> "It is  about time for mathematicians to set the record straight," said David Gale, an emeritus mathematics professor at the University of California, Berkeley.<br><br>

"Surveys and studies to the contrary notwithstanding, the conclusion that men have substantially more sex partners than women is not and cannot be true for purely logical reasons," Dr Gale said. He even provided a proof, writing in an e-mail message.<br><br>

By way of dramatization, we change the context slightly and will prove what will be called the High School Prom Theorem. We suppose that on the day after the prom, each girl is asked to give the number of boys she danced with. These numbers are then added up, giving a number G. The same information is then obtained from the boys, giving a number B.<br><br>

Theorem: G = B <br></blockquote>
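The proof is double counting: every dance pairs one girl with one boy, so summing reported partners over the girls and over the boys counts the same set of pairs. A quick simulation sketch in Python:

```python
import random

# Simulate a prom: each (girl, boy) pair independently danced or not.
random.seed(2007)
n_girls, n_boys = 30, 25
danced = {(g, b): random.random() < 0.2
          for g in range(n_girls) for b in range(n_boys)}

# G: total partners reported by the girls; B: the same total from the boys.
G = sum(sum(danced[(g, b)] for b in range(n_boys)) for g in range(n_girls))
B = sum(sum(danced[(g, b)] for g in range(n_girls)) for b in range(n_boys))

print(G == B)  # True -- both sums count the same set of danced pairs
```

The equality holds for every random draw, not just this seed, because both sums run over exactly the same set of (girl, boy) pairs.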

Kolata reports further:<br>

<blockquote>Ronald Graham, a professor of mathematics and computer science at the University of California, San Diego, agreed with Dr. Gale.  After all, on average, men would have to have three more partners than women, raising the question of where all these extra partners might be.</blockquote>

The second Gina Kolata article deals primarily with the shower of responses pointing out that the study reported medians rather than means, so that Gale's proof is either irrelevant or does not apply.

Of course the blogs had a field day with this mathematics. One of the best is the [http://delong.typepad.com/sdj/2007/08/why-oh-why-ca-2.html blog of Brad DeLong]; DeLong is an economist at the University of California and hence a colleague of David Gale. He blames Gina Kolata, saying that she did not tell Gale that the study reported its results in terms of medians rather than means. However, the comments on this blog are very interesting and show just how hard it is to apply mathematics to the real world. They suggest good discussion questions.

===Discussion questions===

(1) What explanations can you give for the results of the survey? Are they enough to explain the difference reported in this survey?

(2) Did you dance with more than one person at your high school prom?

(3) Is Gale's theorem true if there are more women than men, or more men than women, in the population sampled?

(4) The article reports:
<blockquote>"I have heard this question before," said Cheryl D. Fryar, a health statistician at the National Center for Health Statistics and a lead author of the new federal report, "Drug Use and Sexual Behaviors Reported by Adults: United States, 1999-2002," which found that men had a median of seven partners and women four. But when it comes to an explanation, she added, "I have no idea."</blockquote>

Do you think that Fryar knows the difference between mean and median?

==Data for first forsooth==

The Times 26 May 2006 article that is the source for the Forsooth included the following data:

<table border="1">
<tr><td><b>MEN</b></td><td align="center">Total deaths</td><td align="center">Percentage</td></tr>
<tr><td>Heart disease</td><td align="center">49,205</td><td align="center">20.2</td></tr>
<tr><td>Cerebrovascular diseases</td><td align="center">19,266</td><td align="center">7.9</td></tr>
<tr><td>Cancer of trachea, bronchus & lung</td><td align="center">16,775</td><td align="center">6.9</td></tr>
<tr><td>Chronic lower respiratory diseases</td><td align="center">13,589</td><td align="center">5.6</td></tr>
<tr><td>Influenza and pneumonia</td><td align="center">12,209</td><td align="center">5</td></tr>
<tr><td>Prostate cancer</td><td align="center">9,018</td><td align="center">3.7</td></tr>
<tr><td>Cancer of colon, rectum and anus</td><td align="center">7,570</td><td align="center">3.1</td></tr>
<tr><td>Lymphoid cancer</td><td align="center">5,606</td><td align="center">2.3</td></tr>
<tr><td>Dementia and Alzheimer's</td><td align="center">5,076</td><td align="center">2.1</td></tr>
</table>

<table border="1">
<tr><td><b>WOMEN</b></td><td align="center">Total deaths</td><td align="center">Percentage</td></tr>
<tr><td>Heart disease</td><td align="center">38,969</td><td align="center">16</td></tr>
<tr><td>Influenza and pneumonia</td><td align="center">31,366</td><td align="center">12.9</td></tr>
<tr><td>Dementia and Alzheimer's</td><td align="center">19,255</td><td align="center">7.9</td></tr>
<tr><td>Chronic lower respiratory diseases</td><td align="center">12,605</td><td align="center">5.2</td></tr>
<tr><td>Cancer of trachea, bronchus & lung</td><td align="center">11,895</td><td align="center">4.9</td></tr>
<tr><td>Breast cancer</td><td align="center">10,986</td><td align="center">4.5</td></tr>
<tr><td>Heart failure & complications, & ill-defined heart disease (not included above)</td><td align="center">7,212</td><td align="center">3</td></tr>
<tr><td>Cancer of colon, rectum and anus</td><td align="center">6,537</td><td align="center">2.7</td></tr>
</table>