Chance News 29

From ChanceWiki

Quotations

"There are few things that are so unpardonably neglected in our country as poker. The upper class knows very little about it. Now and then you find ambassadors who have a sort of general knowledge of the game, but the ignorance of the people is fearful. Why, I have known clergymen, good men, kind-hearted, liberal, sincere, and all that, who did not know the meaning of a 'flush.' It is enough to make one ashamed of the species."

Mark Twain.

Forsooth

The following Forsooth was suggested by Jerry Grossman.

In addition, a person's odds of becoming obese increased by 57 percent if he or she had a friend who became obese over a certain time interval. If the two people were mutual friends, the odds increased to 171 percent.

Family, Friend May "Spread" Obesity
Revolution Health
July 25, 2007

This discussion relates to the article The Spread of Obesity in a Large Social Network over 32 Years, which appeared in the July 26, 2007 issue of the New England Journal of Medicine and seems to be freely available. Of course, here 'increased to 171 percent' should read 'increased by 71 percent.'
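The confusion comes from conflating an odds ratio with a percentage increase. A minimal sketch of the arithmetic, using a made-up baseline odds purely for illustration:

```python
# Hypothetical numbers for illustration: an odds ratio of 1.71 means the
# odds are multiplied by 1.71, i.e. increased BY 71 percent -- not
# increased "to 171 percent" over the baseline.
baseline_odds = 0.10          # assumed (made-up) baseline odds
odds_ratio = 1.71             # reported odds ratio for mutual friends
new_odds = baseline_odds * odds_ratio
percent_increase = round((odds_ratio - 1) * 100)
print(percent_increase)       # 71 -> "increased by 71 percent"
```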

Jerry remarks "The NEJM article is interesting to those of us interested in the mathematical aspects of the social network."


The following Forsooth was suggested by Paul Alper.

I've done 120 short-term energy outlooks, and I've probably gotten two of them right.

Mark Rodekohr, a veteran Department of Energy (DOE) economist
Minnesota Star Tribune
August 12, 2007

Is Poker predominantly skill or luck?

Harvard ponders just what it takes to excel at poker
Wall Street Journal, May 3, 2007, A1
Neil King Jr.

The WSJ article reports on a one-day meeting at the Harvard Faculty Club of poker pros, game theorists, statisticians, law students, and gambling lobbyists to develop a strategy to show that poker is not predominantly a game of chance.

In the article we read:

The skill debate has been a preoccupation in poker circles since September (2006), when Congress barred the use of credit cards for online wagers. Horse racing and stock trading were exempt, but otherwise the new law hit any 'game predominantly subject to chance.' Included among such games was poker, which is increasingly played on Internet sites hosting players from all over the world.

This, of course, is not a new issue. For example, it is the subject of Mark Twain's short story Science vs. Luck, published in the October 1870 issue of The Galaxy. In the story a number of boys were arrested for playing "old sledge" for money. Old sledge was a popular card game in those times, often played for money. In the trial the judge finds that half the experts say that old sledge is a game of science and half that it is a game of chance. The lawyer for the boys suggests:

Impanel a jury of six of each, Luck versus Science -- give them candles and a couple of decks of cards, send them into the jury room, and just abide by the result!

The Judge agrees that this is a good way to decide whether old sledge is a game of science or luck, and so four deacons and two dominies (clergymen) were sworn in as the "chance" jurymen, and six inveterate old seven-up professors were chosen to represent the "science" side of the issue. They retired to the jury room.

The professors ended up with all the money and the boys were found innocent.

Today there are more sophisticated ways to determine whether a gambling game is predominantly skill or luck. Ryne Sherman has written two articles, "Towards a Skill Ratio" (Article 1 and Article 2), in which he proposes a way to estimate luck and skill in poker and other games.

Sherman writes:

If two poker players play in two equally difficult ring games, why don't both players win the same amount of money? Typically there are two answers to this question. Either one player was more skilled, or one player was luckier. The purpose of this article is to provide a quantitative way to measure both of these differences.

We will illustrate how Sherman determines the skill factor and the luck factor using data from a local low-key poker game that we play once a week.

Game        1        2        3        4        5        6
Sally     -1.23    -6.75     6.35     4.35    -1.20     4.55
Laurie   -11.25   -10.10    -1.50    -4.25   -11.25     1.70
John      -0.45    -5.75    -0.45     0.40     4.35    -6.25
Mary       2.90    10.35    -0.65    -0.35     2.90     0.30
Sarge      4.85     9.70    -0.25    -8.80     4.85     1.30
Dick       3.90     4.45    -4.90   -15.00    -4.00     1.50
Glenn      3.25    -1.95     1.42     5.80     3.25     3.60
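Sherman's exact formulas are in his articles; a minimal sketch, under our reading of the idea (between-player variance of average winnings as the "skill" component, within-player game-to-game variance as the "luck" component), applied to the table above:

```python
# Hedged sketch of a skill/luck variance decomposition in the spirit of
# Sherman's articles (our reading, not necessarily his exact formulas):
# between-player variance of mean winnings ~ skill,
# within-player variance around each player's mean ~ luck.
winnings = {
    "Sally":  [-1.23, -6.75, 6.35, 4.35, -1.20, 4.55],
    "Laurie": [-11.25, -10.10, -1.50, -4.25, -11.25, 1.70],
    "John":   [-0.45, -5.75, -0.45, 0.40, 4.35, -6.25],
    "Mary":   [2.90, 10.35, -0.65, -0.35, 2.90, 0.30],
    "Sarge":  [4.85, 9.70, -0.25, -8.80, 4.85, 1.30],
    "Dick":   [3.90, 4.45, -4.90, -15.00, -4.00, 1.50],
    "Glenn":  [3.25, -1.95, 1.42, 5.80, 3.25, 3.60],
}

means = {p: sum(w) / len(w) for p, w in winnings.items()}
grand_mean = sum(means.values()) / len(means)

# Between-player variance: stable differences in average results ("skill")
between = sum((m - grand_mean) ** 2 for m in means.values()) / len(means)

# Within-player variance: game-to-game swings around each mean ("luck")
within = sum(
    (x - means[p]) ** 2 for p, w in winnings.items() for x in w
) / sum(len(w) for w in winnings.values())

skill_share = between / (between + within)
print(f"skill share: {skill_share:.2f}, luck share: {1 - skill_share:.2f}")
```

The `skill_share` printed here is a ratio between 0 (pure luck) and 1 (pure skill); how closely it matches Sherman's published ratio depends on his precise definitions.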

Second chance lottery drawing

Ask Marilyn
Parade, 5 August 2007
Marilyn vos Savant

A reader poses the following question.

Say that a state runs a lottery with scratch-off tickets and has a second-chance drawing for losing tickets. The latter are sent to a central location, where they are boxed and stored until it’s time for the drawing. An official then chooses one box and draws a ticket from it. All the other boxes are untouched. Is this fair, compared to storing all the tickets in a large container and then drawing a ticket from it?


Marilyn responds that, "The methods are equivalent, and both are perfectly fair: One winner was chosen at random", and suggests that the method is used purely for physical convenience. (In a state lottery, however, we imagine the whole affair would be conducted electronically.)
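A quick simulation shows what the equivalence quietly requires: if one box is chosen uniformly at random, every ticket has the same chance only when every box holds the same number of tickets (or boxes are chosen in proportion to their size). The box sizes below are made up for illustration:

```python
import random

# Sketch of the implicit assumption: pick a box uniformly, then a ticket
# from it. With unequal boxes (10 vs 90 tickets, made-up sizes), tickets
# in the small box are heavily favoured.
random.seed(1)
boxes = [list(range(0, 10)), list(range(10, 100))]  # 10 vs 90 tickets

def draw_two_stage():
    box = random.choice(boxes)   # uniform over boxes, not over tickets
    return random.choice(box)

trials = 100_000
small_box_wins = sum(draw_two_stage() < 10 for _ in range(trials))
# Tickets in the small box win about half the time, not the fair 10%.
print(small_box_wins / trials)
```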

DISCUSSION QUESTIONS:

(1) Marilyn's answer is almost correct. What has been implicitly assumed here?

(2) Here is a related problem (from Grinstead & Snell, Introduction to Probability, p. 152, problem 23).

You are given two urns and fifty balls. Half of the balls are white and half are black. You are asked to distribute the balls in the urns with no restriction placed on the number of either type in an urn. How should you distribute the balls in the urns to maximize the probability of obtaining a white ball if an urn is chosen at random and a ball drawn out at random? Justify your answer.
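The urn problem above is small enough to check exhaustively. A sketch that enumerates every way of putting w white and b black balls in urn 1 (the remainder going in urn 2):

```python
from fractions import Fraction

# Exhaustive check of the two-urn problem: w white and b black balls go
# in urn 1, the rest in urn 2; an urn is chosen at random, then a ball.
candidates = [
    (w, b,
     Fraction(1, 2) * Fraction(w, w + b)
     + Fraction(1, 2) * Fraction(25 - w, 50 - w - b))
    for w in range(26)
    for b in range(26)
    if 0 < w + b < 50          # both urns must be non-empty
]
w, b, prob = max(candidates, key=lambda t: t[2])
print(w, b, prob)  # 1 0 73/98: one white ball alone in urn 1
```

The optimum puts a single white ball in one urn and the other 49 balls in the second, for a winning probability of 1/2 + (1/2)(24/49) = 73/98, about 0.745.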


Submitted by Bill Peterson

The understanding and misunderstanding of Bayesian statistics

Gambling on tomorrow, The Economist, Aug 16th 2007
Scientists try new ways to predict climate risks, Reuters 12 Aug 2007.
Too late to escape climate disaster?, New Scientist, 18 Aug 2007.
Earth Log - Complex lesson, Daily Telegraph, 17 Aug 2007.

The latest edition of one of the Royal Society's journals, Philosophical Transactions is devoted to the science of climate modelling:

predictions from different models are pooled to produce estimates of future climate change, together with their associated uncertainties

the Royal Society said, and it partly focusses on 'the understanding and misunderstanding' of Bayesian statistics. So this Economist article discusses the difference between the frequentist and Bayesian view of statistics, in the context of forecasting the weather.

The article starts by claiming that there were just two main influences on the early development of probability theory and statistics: Bayes and Pascal. It claims that Pascal's ideas are simple and widely understood while Bayes's are not. Pascal adopted a frequentist view, which The Economist characterises as the world of the gambler: each throw of the dice is independent of the previous one. Bayes promoted what we now call Bayesian probability, which The Economist characterises as incorporating the accumulation of experience into a statistical model in the form of prior assumptions:

A good prior assumption about tomorrow's weather, for example, is that it will be similar to today's. Assumptions about the weather the day after tomorrow, though, will be modified by what actually happens tomorrow.

But prior assumptions can influence model outcomes in subtle ways, The Economist warns:

Since the future is uncertain, (weather) forecasts are run thousands of times, with varying parameters, to produce a range of possible outcomes. The outcomes are assumed to cluster around the most probable version of the future. The particular range of values chosen for a parameter is an example of a Bayesian prior assumption, since it may be modified in the light of experience. But the way you pick the individual values to plug into the model can cause trouble. They might, for example, be assumed to be evenly spaced, say 1,2,3,4. But in the example of snow retention, evenly spacing both rate-of-fall and rate-of-residence-in-the-clouds values will give different distributions of results. That is because the second parameter is actually the reciprocal of the first. To make the two match, value for value, you would need, in the second case, to count 1, ½, ⅓, ¼—which is not evenly spaced. If you use evenly spaced values instead, the two models' outcomes will cluster differently.
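The reciprocal point is easy to see numerically. A minimal sketch (the parameter values are made up; the Economist's snow example involves a rate and its reciprocal, a residence time):

```python
# Evenly spacing a rate parameter is NOT the same as evenly spacing its
# reciprocal (a residence time): the same four model runs cover the
# parameter space very differently under the two choices.
rates = [1, 2, 3, 4]                            # evenly spaced rates
times_from_rates = [1 / r for r in rates]       # 1, 1/2, 1/3, 1/4 -- bunched
times_even = [0.25, 0.5, 0.75, 1.0]             # evenly spaced times
rates_from_times = [1 / t for t in times_even]  # 4, 2, 4/3, 1 -- bunched
print(times_from_rates)
print(rates_from_times)
```

Runs spaced evenly in the rate cluster their residence times near the low end, and vice versa, so the two samplings produce differently clustered model outcomes.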

It goes on to claim that those who use statistical models often fail to account for the uncertainty associated with such models:

Psychologically, people tend to be Bayesian—to the extent of often making false connections. And that risk of false connection is why scientists like Pascal's version of the world. It appears to be objective. But when models are built, it is almost impossible to avoid including Bayesian-style prior assumptions in them. By failing to acknowledge that, model builders risk making serious mistakes.

One of the authors of the Philosophical Transactions papers, David Stainforth of Oxford University, says:

The answer is more comprehensive assessments of uncertainty, if we are to provide better information for today's policy makers. Such assessments would help steer the development of climate models and focus observational campaigns. Together this would improve our ability to inform decision makers in the future.

Questions

  • What influences on the early development of probability theory and statistics can you think of, other than Pascal and Bayes?
  • Is the frequentist view of statistics nothing more than 'each throw of the dice is independent of the previous one'? What other characteristics would you associate with this view of statistics? Can you offer a better one-line summary? What about a better description of Bayesian statistics than 'incorporating the accumulation of experience into a statistical model in the form of prior assumptions'?
  • In one of the Royal Society's papers, authors David Stainforth from Oxford University and Leonard Smith from the LSE, advocate making a clearer distinction between the output of model experiments designed for improving the model and those of immediate relevance for decision making. What do you think they meant by that? Can you think of a simple example to illustrate your interpretation?
  • The Economist claims that scientists are not easily able to understand Bayes because of their philosophical training in the rigours of Pascal's method. How would you reply to this assertion?

Further reading

Submitted by John Gavin.