Chance News 29


Quotations

"There are few things that are so unpardonably neglected in our country as poker. The upper class knows very little about it. Now and then you find ambassadors who have sort of a general knowledge of the game, but the ignorance of the people is fearful. Why, I have known clergymen, good men, kind-hearted, liberal, sincere, and all that, who did not know the meaning of a "flush." It is enough to make one ashamed of the species".

Mark Twain.

Forsooth

The following Forsooths are from the September 2007 issue of RSS News.

THE BIGGEST KILLER BY FAR

Heart disease claimed the lives of one in five men
and about one in six women last year, figures indicate.

The Times
26 May 2006

See the end of this Chance News for the data that was the basis for this claim.


[Hanson plc is the] Largest aggregates producer
in the world and 3rd largest in the USA

Daily Telegraph
3 March, 2006

This Forsooth was suggested by Jerry Grossman.

In addition, a person's odds of becoming obese increased by 57 percent if he or she had a friend who became obese over a certain time interval. If the two people were mutual friends, the odds increased to 171 percent.

Family, Friend May "Spread" Obesity
Revolution Health
July 25, 2007

This discussion relates to an article, The Spread of Obesity in a Large Social Network over 32 Years, that appeared in the July 26, 2007 issue of the New England Journal of Medicine and seems to be freely available. Of course, here "increased to 171 percent" should read "increased by 71 percent".

Jerry remarks "The NEJM article is interesting to those of us interested in the mathematical aspects of the social network."



This Forsooth was suggested by Paul Alper.

I've done 120 short-term energy outlooks, and I've probably gotten two of them right.

Mark Rodekohr, a veteran Department of Energy (DOE) economist
Minnesota Star Tribune
August 12, 2007

Is Poker predominantly skill or luck?

Harvard ponders just what it takes to excel at poker.
Wall Street Journal, May 3, 2007, A1
Neil King Jr.

The WSJ article reports on a one-day meeting in the Harvard Faculty Club of poker pros, game theorists, statisticians, law students and gambling lobbyists to develop a strategy to show that poker is not predominantly a game of chance.

In the article we read:

The skill debate has been a preoccupation in poker circles since September (2006), when Congress barred the use of credit cards for online wagers. Horse racing and stock trading were exempt, but otherwise the new law hit any game "predominantly subject to chance". Included among such games was poker, which is increasingly played on Internet sites hosting players from all over the world.

This, of course, is not a new issue. For example, it is the subject of Mark Twain's short story "Science vs. Luck", published in the October 1870 issue of The Galaxy. The Galaxy no longer exists, but co-founder Francis Church will always be remembered for his reply to Virginia's letter to the New York Sun: "Yes, Virginia, there is a Santa Claus".

In Mark Twain's story a number of boys were arrested for playing "old sledge" for money. Old sledge was a popular card game in those times and often played for money. At the trial the judge finds that half the experts say that old sledge is a game of science and half that it is a game of chance. The lawyer for the boys suggests:

Impanel a jury of six of each, Luck versus Science -- give them candles and a couple of decks of cards, send them into the jury room, and just abide by the result!

The Judge agrees to do this, and so four deacons and two dominies (clergymen) were sworn in as the "chance" jurymen, and six inveterate old seven-up professors were chosen to represent the "science" side of the issue. They retired to the jury room. When they came out, the professors had ended up with all the money. So the Judge ruled that the boys were innocent.

Today more sophisticated ways to determine if a gambling game is predominantly skill or luck are being studied. Ryne Sherman has written two articles on this, "Towards a Skill Ratio" and "More on Skill and Individual Differences" in which he proposes a way to estimate luck and skill in poker and other games.

To estimate skill and luck percentages Sherman uses a statistical procedure called analysis of variance (ANOVA). To understand Sherman's method of comparing luck and skill you need to understand how ANOVA works. For those who do not know this, we show how ANOVA works using a simple example from Variance and the Design of Experiments. This begins with the following hypothetical data.

 

Treatment 1    Treatment 2
     4              7
     6              5
     8              8
     4              9
     5              7
     3              9

Assume that these are the results of a clinical trial to determine if vitamin WR improves memory. In the study one group of 6 participants was given a placebo and a second group of 6 was given vitamin WR for a month. At the end of the month the two groups were given a memory test, and the numbers in the two columns are the numbers of correct answers for the two groups. Then an ANOVA test is made to see if there is a significant difference between the groups. Here is Bill Peterson's explanation of how this works.

There are two group means:

Mean1 = (4+6+8+4+5+3)/6 = 30/6 = 5.0
Mean2 = (7+5+8+9+7+9)/6 = 45/6 = 7.5

Then a grand mean over all observations
Mean = (30+45)/(6+6) = 6.25

Variance is always a sum of squared deviations divided by its degrees of freedom: SS/df. This is also called a mean squared deviation, MS.

ANOVA begins by expressing the deviation of each observation from the grand mean as a sum of two terms: the difference of the observation from its group mean, plus the difference of the group mean from the grand mean.

To make this partition we begin by decomposing the differences from the grand mean as follows:

(4 - 6.25) = (4 - 5.0) + (5.0 - 6.25)
(6 - 6.25) = (6 - 5.0) + (5.0 - 6.25)
...
(3 - 6.25) = (3 - 5.0) + (5.0 - 6.25)

(7 - 6.25) = (7 - 7.5) + (7.5 - 6.25)

(5 - 6.25) = (5 - 7.5) + (7.5 - 6.25)
...
(9 - 6.25) = (9 - 7.5) + (7.5 - 6.25)

The magic (actually the Pythagorean Theorem)
is that the sums of squares decompose in this way.

(4 - 6.25)^2 + ... + (9 - 6.25)^2 =
[(4 - 5.0)^2 + ... + (9 - 7.5)^2] + [(5.0 - 6.25)^2 + ... + (7.5 - 6.25)^2]
Check: 46.25 = 27.5 + 18.75

In the usual abbreviations:

SST = SSE + SSG

(total sum of sqs = error sum of sqs + group sum of sqs)

Fisher's F statistic is F = MSG/MSE. Large values of F are taken as evidence that there is a real treatment effect.
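
For readers who want to check the arithmetic, the following is a minimal Python sketch (ours, not part of the original example and not the SAS program mentioned below) that reproduces the decomposition and the F statistic for the memory data above.

 # A minimal sketch (ours) of the one-way ANOVA decomposition
 # for the memory data above, in plain Python.
 def mean(xs):
     return sum(xs) / len(xs)

 placebo = [4, 6, 8, 4, 5, 3]   # Treatment 1
 vitamin = [7, 5, 8, 9, 7, 9]   # Treatment 2
 groups = [placebo, vitamin]
 all_obs = [x for g in groups for x in g]
 grand_mean = mean(all_obs)                                       # 6.25

 sse = sum((x - mean(g)) ** 2 for g in groups for x in g)         # within groups: 27.5
 ssg = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)  # between groups: 18.75
 sst = sum((x - grand_mean) ** 2 for x in all_obs)                # total: 46.25 = 27.5 + 18.75

 k, n = len(groups), len(all_obs)
 msg = ssg / (k - 1)               # mean square for groups
 mse = sse / (n - k)               # mean square for error
 print(sst, sse, ssg, msg / mse)   # F = MSG/MSE is about 6.82 here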

Now Sherman uses this same kind of decomposition for his measure of skill and of chance for a game. We illustrate how he does this using the following data from five weeks of our low-key Monday night poker games.

        Sally    Laurie   John    Mary    Sarge   Dick    Glenn
Game 1  -6.75   -10.10   -5.75   10.35    9.70    4.43   -1.95
Game 2   4.35    -4.25    0.40   -0.35   -8.80   -0.15    5.80
Game 3   6.95    -4.35    0.18   -7.75    7.65   -5.90    3.90
Game 4  -1.23   -11.55    4.35    2.90    4.85   -3.90    3.25
Game 5   6.35    -1.50   -0.45   -0.65   -0.25   -4.90    1.42


To compare the amount of skill and luck in these games Sherman would have us carry out an analysis of variance in the same way we did for our example, with the players as the groups. The variation between groups is the variation in the players' average winnings across the games. Sherman believes that this variation is due primarily to skill. The variation within groups is the variation in each player's winnings from game to game. Sherman believes that this variation is due primarily to luck.

This leads Sherman to define the skill % as the ratio of the between-group sum of squares to the total sum of squares, and the luck % as the ratio of the within-group sum of squares to the total sum of squares.

Using our poker data and the SAS ANOVA program, we find that the total sum of squares is 1069.95, the between-group sum of squares is 311.447, and the within-group sum of squares is 758.499. Thus, from our poker games we would estimate the skill % to be 311.447/1069.95 = 29.1% and the luck % to be 758.499/1069.95 = 70.9%. Thus, not surprisingly, luck is more important than skill.
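
The same computation is easy to reproduce. Here is a minimal Python sketch (ours, not Sherman's code and not the SAS program); it assumes, as the figures above indicate, that the ANOVA groups are the seven players, so that the between-group sum of squares is the "skill" piece and the within-group sum of squares is the "luck" piece.

 # A minimal sketch (ours) of the skill/luck split, treating each
 # player's five results as one ANOVA group.
 def mean(xs):
     return sum(xs) / len(xs)

 winnings = {
     "Sally":  [-6.75,   4.35,  6.95,  -1.23,  6.35],
     "Laurie": [-10.10, -4.25, -4.35, -11.55, -1.50],
     "John":   [-5.75,   0.40,  0.18,   4.35, -0.45],
     "Mary":   [10.35,  -0.35, -7.75,   2.90, -0.65],
     "Sarge":  [9.70,   -8.80,  7.65,   4.85, -0.25],
     "Dick":   [4.43,   -0.15, -5.90,  -3.90, -4.90],
     "Glenn":  [-1.95,   5.80,  3.90,   3.25,  1.42],
 }

 all_obs = [x for g in winnings.values() for x in g]
 grand_mean = mean(all_obs)

 ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2
                  for g in winnings.values())              # about 311.45 ("skill")
 ss_within = sum((x - mean(g)) ** 2
                 for g in winnings.values() for x in g)    # about 758.50 ("luck")
 ss_total = ss_between + ss_within                         # about 1069.95

 print("skill %:", round(100 * ss_between / ss_total, 1))  # 29.1
 print("luck  %:", round(100 * ss_within / ss_total, 1))   # 70.9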

In his second article, Sherman reports the skill % he obtained using data from a number of different types of games. For example, using data for Major League Baseball hits the skill % was 39% and for home runs it was 68%. For NBA basketball points scored it was 75%, and for poker stars in weekly tournaments it was 35%.

Sherman concludes his articles with the remarks:

If two persons play the same game, why don't both achieve the same results? The purpose of last month's article and this article was to address this question. This article suggests that there are two answers to this question: Skill (or systematic variance) or Luck (or random variance). Using both the correlation approach described last month and the ANOVA approach described in this article, one can estimate the amount of skill involved in any game. Last, and maybe most importantly, Table 4 demonstrated that the skill estimates involved in playing poker (or at least tournament poker) are not very different from other sport outcomes which are widely accepted as skillful.

Discussion questions

(1) Do you think that Sherman's measure of skill and luck in a game is reasonable? If not why not?

(2) There is a form of duplicate poker modeled after duplicate bridge. Do you think that the congressional decision should not apply to this form of gambling?

Submitted by Laurie Snell

Second chance lottery drawing

Ask Marilyn
Parade, 5 August 2007
Marilyn vos Savant

A reader poses the following question.

Say that a state runs a lottery with scratch-off tickets and has a second-chance drawing for losing tickets. The latter are sent to a central location, where they are boxed and stored until it’s time for the drawing. An official then chooses one box and draws a ticket from it. All the other boxes are untouched. Is this fair, compared to storing all the tickets in a large container and then drawing a ticket from it?


Marilyn responds that, "The methods are equivalent, and both are perfectly fair: One winner was chosen at random", and suggests that the method is used purely for physical convenience. (In a state lottery, however, we imagine the whole affair would be conducted electronically.)

DISCUSSION QUESTIONS:

(1) Marilyn's answer is almost correct. What has been implicitly assumed here?

(2) Here is a related problem (from Grinstead & Snell, Introduction to Probability, p. 152, problem 23).

You are given two urns and fifty balls. Half of the balls are white and half are black. You are asked to distribute the balls in the urns with no restriction placed on the number of either type in an urn. How should you distribute the balls in the urns to maximize the probability of obtaining a white ball if an urn is chosen at random and a ball drawn out at random? Justify your answer.
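
For readers who want to experiment with problem (2) before settling on an answer, here is a small Python sketch (ours, not from the column or the textbook; the helper prob_white is our own).

 # A small sketch (ours) for experimenting with problem (2): put w1 white
 # and b1 black balls in urn 1 and the rest in urn 2, pick an urn at
 # random, then draw a ball at random from that urn.
 def prob_white(w1, b1, white=25, black=25):
     """Exact P(white) for a given split; both urns must be nonempty."""
     w2, b2 = white - w1, black - b1
     return 0.5 * (w1 / (w1 + b1)) + 0.5 * (w2 / (w2 + b2))

 # Try a few allocations and compare before settling on an answer.
 for w1, b1 in [(13, 12), (25, 12), (5, 0)]:
     print(w1, b1, round(prob_white(w1, b1), 3))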



Submitted by Bill Peterson

The understanding and misunderstanding of Bayesian statistics

Gambling on tomorrow, The Economist, Aug 16th 2007
Scientists try new ways to predict climate risks, Reuters 12 Aug 2007.
Too late to escape climate disaster?, New Scientist, 18 Aug 2007.
Earth Log - Complex lesson, Daily Telegraph, 17 Aug 2007.

The latest edition of one of the Royal Society's journals, Philosophical Transactions, is devoted to the science of climate modelling:

predictions from different models are pooled to produce estimates of future climate change, together with their associated uncertainties

the Royal Society said, and it partly focusses on 'the understanding and misunderstanding' of Bayesian statistics. So this Economist article discusses the difference between the frequentist and Bayesian views of statistics, in the context of forecasting the weather.

The article starts by claiming that there were just two main influences on the early development of probability theory and statistics: Bayes and Pascal. It claims that Pascal's ideas are simple and widely understood while Bayes's are not. Pascal adopted a frequentist view, which The Economist characterises as the world of the gambler: each throw of the dice is independent of the previous one. Bayes promoted what we now call Bayesian probability, which The Economist characterises as incorporating the accumulation of experience into a statistical model in the form of prior assumptions:

A good prior assumption about tomorrow's weather, for example, is that it will be similar to today's. Assumptions about the weather the day after tomorrow, though, will be modified by what actually happens tomorrow.

But prior assumptions can influence model outcomes in subtle ways, The Economist warns:

Since the future is uncertain, (weather) forecasts are run thousands of times, with varying parameters, to produce a range of possible outcomes. The outcomes are assumed to cluster around the most probable version of the future. The particular range of values chosen for a parameter is an example of a Bayesian prior assumption, since it may be modified in the light of experience. But the way you pick the individual values to plug into the model can cause trouble. They might, for example, be assumed to be evenly spaced, say 1,2,3,4. But in the example of snow retention, evenly spacing both rate-of-fall and rate-of-residence-in-the-clouds values will give different distributions of results. That is because the second parameter is actually the reciprocal of the first. To make the two match, value for value, you would need, in the second case, to count 1, ½, ⅓, ¼—which is not evenly spaced. If you use evenly spaced values instead, the two models' outcomes will cluster differently.
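
The point about reciprocals is easy to see numerically. Here is a toy Python illustration (ours, not The Economist's); the parameter values are made up.

 # A toy illustration (ours) of the reciprocal point: values that are
 # evenly spaced as a rate are not evenly spaced once inverted.
 rates = [1.0, 2.0, 3.0, 4.0]                    # evenly spaced rate values
 times = [1.0 / r for r in rates]                # 1.0, 0.5, 0.333..., 0.25 -- not evenly spaced

 even_times = [0.25, 0.5, 0.75, 1.0]             # evenly spaced on the reciprocal scale instead
 implied_rates = [1.0 / t for t in even_times]   # 4.0, 2.0, 1.33..., 1.0 -- bunched near 1

 # An ensemble built by spacing one parameter evenly therefore samples the other
 # parameter unevenly, so the two "uniform" choices cluster their outcomes differently.
 print(times)
 print(implied_rates)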

It goes on to claim that those who use statistical models often fail to account for the uncertainty associated with such models:

Psychologically, people tend to be Bayesian—to the extent of often making false connections. And that risk of false connection is why scientists like Pascal's version of the world. It appears to be objective. But when models are built, it is almost impossible to avoid including Bayesian-style prior assumptions in them. By failing to acknowledge that, model builders risk making serious mistakes.

One of the Philosophical Transactions authors, David Stainforth of Oxford University, says:

The answer is more comprehensive assessments of uncertainty, if we are to provide better information for today's policy makers. Such assessments would help steer the development of climate models and focus observational campaigns. Together this would improve our ability to inform decision makers in the future.

Questions

  • What influences on the early development of probability theory and statistics can you think of, other than Pascal and Bayes?
  • Is the frequentist view of statistics nothing more than the idea that each throw of the dice is independent of the previous one? What other characteristics would you associate with this view of statistics? Can you offer a better one-line summary? What about a better description of Bayesian statistics than incorporating the accumulation of experience into a statistical model in the form of prior assumptions?
  • In one of the Royal Society's papers, David Stainforth of Oxford University and Leonard Smith of the LSE advocate making a clearer distinction between the output of model experiments designed for improving the model and those of immediate relevance for decision making. What do you think they meant by that? Can you think of a simple example to illustrate your interpretation?
  • The Economist claims that scientists are not easily able to understand Bayes because of their philosophical training in the rigours of Pascal's method. How would you reply to this assertion?


Submitted by John Gavin.

The Myth, the Math, the Sex

The Myth, the Math, the Sex.
The New York Times, August 12, The Week in Review
Gina Kolata

The Median, the Math and the Sex.
The New York Times, August 19, 2007, The Week in Review
Gina Kolata

In the first article Gina Kolata comments that there have been numerous studies claiming to show that men have more sexual partners than women.

She reports on a recent government study which found that men have had a median of seven female sex partners while women have had a median of four. Kolata writes:

"It is about time for mathematicians to set the record straight," said David Gale, an emeritus mathematics professor at the University of California, Berkeley.

"Surveys and studies to the contrary notwithstanding, the conclusion that men have substantially more sex partners than women is not and cannot be true, for purely logical reasons," Dr. Gale said. He even provided a proof, writing in an e-mail message:

By way of dramatization, we change the context slightly and will prove what will be called the High School Prom Theorem. We suppose that on the day after the prom, each girl is asked to give the number of boys she danced with. These numbers are then added up, giving a number G. The same information is then obtained from the boys, giving a number B.

Theorem: G = B
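
The theorem is just double counting: each dance pairs one girl with one boy, so the girls' total and the boys' total both count the same set of girl-boy dance pairs. A tiny Python sketch (ours, not Gale's proof) makes this concrete, and it works even when the numbers of girls and boys differ.

 # A quick sanity check (ours) of the High School Prom Theorem: each girl
 # reports how many boys she danced with, each boy how many girls he danced
 # with; both totals count the same set of girl-boy dance pairs.
 import random

 n_girls, n_boys = 20, 15      # the counts need not be equal
 danced = {(g, b): random.random() < 0.3         # did girl g dance with boy b?
           for g in range(n_girls) for b in range(n_boys)}

 girls_report = [sum(danced[(g, b)] for b in range(n_boys)) for g in range(n_girls)]
 boys_report  = [sum(danced[(g, b)] for g in range(n_girls)) for b in range(n_boys)]

 G, B = sum(girls_report), sum(boys_report)
 assert G == B                 # holds for every random dancing pattern
 print(G, B)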

Kolata reports further:

Ronald Graham, a professor of mathematics and computer science at the University of California, San Diego, agreed with Dr. Gale. After all, on average, men would have to have three more partners than women, raising the question of where all these extra partners might be.

The second article deals primarily with the shower of responses pointing out that the study reported that the medians were different, and so Gale's proof is either irrelevant or not true.

Of course the blogs had a field day with this mathematics. One of the best is the blog of Brad DeLong, an economist at the University of California and hence a colleague of David Gale. He blames Gina Kolata, saying that she did not tell Gale that the study reported its results in terms of medians rather than means. However, the comments on this blog are very interesting and show just how hard it is to apply mathematics to the real world. They suggest good discussion questions.

Discussion questions

(1) What explanations can you give for the results of the survey? Are they enough to explain the difference reported in this survey?

(2) Did you dance with more than one person at your high school prom?

(3) Is Gale's theorem true if there are more women than men or more men than women in the population sampled?

(4) The article reports:

"I have heard this question before," said Cheryl D. Fryar, a health statistician at the National Center for Health Statistics and a lead author of the new federal report, "Drug Use and Sexual Behaviors Reported by Adults: United States, 1999-2002," which found that men had a median of seven partners and women four. But when it comes to an explanation, she added, "I have no idea."

Do you think that she knows the difference between mean and median?

Data for first forsooth

The Times 26 May 2006 article that is the source for the Forsooth included the following data:

LEADING CAUSES OF DEATH
MEN Total deaths Percentage
Heart disease 49,205 20.2
Cerebrovascular diseases 19,266 7.9
Cancer of trachea, bronchus & lung 16,775 6.9
Chronic lower respiratory diseases 13,589 5.6
Influenza and pneumonia 12,209 5
Prostate cancer 9,018 3.7
Cancer of colon, rectum and anus 7,570 3.1
Lymphoid cancer 5,606 2.3
Dementia and Alzheimer's 5,076 2.1

WOMEN Total deaths Percentage
Heart disease 38,969 16
Influenza and pneumonia 31,366 12.9
Dementia and Alzheimer's 19,255 7.9
Chronic lower respiratory diseases 12,605 5.2
Cancer of trachea, bronchus & lung 11,895 4.9
Breast cancer 10,986 4.5
Heart failure & complications, & ill-defined heart disease (not included above) 7,212 3
Cancer of colon, rectum and anus 6,537 2.7