Chance News 48


Revision as of 19:06, 28 May 2009


In other words, the population of a city is [according to Zipf's law], to a good approximation, inversely proportional to its rank [within its country]. Why this should be true, no one knows.

Steven Strogatz, Guest Column, The New York Times, May 19, 2009

Submitted by Paul Alper

Note: Strogatz is a good writer for the public and your students might enjoy reading this.

Laurie Snell


None yet

Winning system for dice game?

"Accused Cheater's Trial Includes Lesson In Craps"[1]
by Karen Florin, The Day (New London, CT), May 20, 2009

A Tennessee man has been charged with cheating at the craps table of the Foxwoods Casino in Ledyard, CT. During the trial he claimed that he was a "professional gambler who had a winning system that did not involve bribing dealers to pay him for late or illegal bets." (Closing arguments were scheduled for May 21, 2009.)

Jurors watched him provide an in-court demonstration of why his $3,000 or $5,000 bets are successful:

The makeshift craps table was “hot” for a while and [the defendant], who has spent the last eight months in prison, appeared happy to be reunited with the dice. He cradled them in his hand and shook them as [his] defense attorney ... walked him through an explanation of his strategy, which involves combinations of numbers that “go with” other numbers. If the shooter rolls a four, for example, [the defendant] recommends betting on 4, 6, 9, 10, 2, 3 or 11 to come up next.

The defendant described his success:

"The strategy works if you manage your money right and cash out when you're winning,” he said. He said his best run occurred in 2002, when he won $73,000 at Foxwoods. Other times, he won big and then lost big, he said.


According to Wikipedia [2], in the game of craps, the shooter's initial ("come-out") roll either wins immediately (7 or 11), loses immediately (2, 3, or 12), or establishes a "point" of 4, 5, 6, 8, 9, or 10. Once a point is established, the shooter continues rolling the dice until either the point is rolled again (a win) or a 7 is rolled (a loss).

1. If the shooter rolls a 4, what is the probability of a 2, 3, 4, 6, 9, 10, or 11 coming up next on a pair of fair dice?

2. If the shooter rolls any other "point," what is the probability of a 2, 3, 4, 6, 9, 10, or 11 coming up next on a pair of fair dice?

3. Can you think of a reason why the defendant might have chosen this set of outcomes?

4. Can you identify any other set(s) of outcomes that have the same probability of coming up on any roll of a pair of fair dice?

5. It appears that bets are placed on an ultimate win or loss, not on the outcome of a particular roll. However, even if the defendant were correct in his reasoning and one wanted to bet on a particular outcome immediately following the roll of a 4, do you think that the defendant's strategy would be helpful in choosing an outcome to bet on?
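Questions 1, 2, and 4 can be checked by direct enumeration of the 36 equally likely outcomes for a pair of fair dice. The sketch below is our own illustration, not part of the article:

```python
from fractions import Fraction
from itertools import product

# Count the ways each total 2..12 can occur on two fair dice.
ways = {}
for a, b in product(range(1, 7), repeat=2):
    ways[a + b] = ways.get(a + b, 0) + 1

def prob(totals):
    """Probability that the next roll lands in the given set of totals."""
    return Fraction(sum(ways[t] for t in totals), 36)

bet_set = {2, 3, 4, 6, 9, 10, 11}   # the defendant's "go with 4" set
print(prob(bet_set))                # 5/9, about 0.556
print(prob({5, 7, 8, 12}))          # the complement: 4/9
```

Note that the answer is the same no matter what the shooter just rolled, since the dice have no memory; any set of totals whose ways-to-roll counts sum to 20 (for example, swapping the 4 for a 10) has the same 5/9 probability.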

Submitted by Margaret Cibes

Financial decision-making advice?

"Many Bought Shares High, Sold Low" [3] by Mary Pilon, The Wall Street Journal, May 18, 2009

The article describes several people who took large losses by selling their stocks around the time of the "March 9 12-year low," and before the more recent 26% "rebound."

One financial advisor told an investor about his "two-Ambien" test for decision-making. (An Ambien is a sleeping pill.)

If two Ambien can allow you to sleep, ... then it still might make sense to stay invested.

Many advisors tell clients not to divest during "downturns," because "buying high and selling low is a formula for awful returns." One advisor says that:

They know that in hindsight, it wasn't the best thing to do.... But it was what they had to do emotionally. Math and the mind don't always add up.

Submitted by Margaret Cibes

Infuse and Kuklo

J. Scott, editor of the Journal of Bone and Joint Surgery writes about a paper, “Recombinant human morphogenetic protein-2 for type grade III open segmental tibial fractures from combat injuries in Iraq” by Timothy Kuklo, et al, which appeared in the JBJS in August, 2008:

“The paper described the management of 138 Gustillo Type IIIB and C tibial fractures in soldiers injured in Iraq. It was a retrospective study with some randomization of these patients into two groups, one of which received bone morphogenetic protein-2 (rhBMP-2) as part of the management and the other did not. The authors reported a significantly higher union rate in the group treated with rhBMP-2 (92% vs 76%). There was also a higher rate of further surgery required in the patients who did not receive BMP.”

For the non-specialist, rhBMP-2 is sold commercially as “Infuse” and is marketed by Medtronic. And, the paper “clearly seemed to represent a major contribution to the treatment of these severe complicated fractures which are difficult to manage and usually require several surgical procedures, careful wound management and extensive rehabilitation.”

However, you won’t be able to read the Kuklo paper to find out what is meant by “a significantly higher union rate in the group treated with rhBMP-2 (92% vs 76%)” because the paper has been withdrawn by Scott and the JBJS. The reasons may be found in Wilson and Meier’s reporting in The New York Times. The headline is “Doctor Falsified Study on Injured G.I.’s, Army says.” Not only did he “not obtain the Army’s required permission to conduct the study” but also “Army investigators found that Dr. Kuklo forged the signatures of four Walter Reed doctors on the article before submitting it last year to a British medical journal, falsely claiming them as co-authors.” In addition, “the total number of patients Dr. Kuklo reported as having been treated for extensive lower leg wounds at Walter Reed during the study period—138 soldiers—was greater than the number for which the hospital could find records.”

Further, according to Scott, the forgery came to light because "Shortly after the paper was published we received correspondence from one of the persons identified as a co-author indicating that the alleged co-authors had not seen the manuscript prior to publication and they had not signed the letter of transmittal. It was further disclosed that much of the paper was essentially false."


1. The NYT article states “During the six-month period ending last October, sales of Medtronic’s bioengineered products, principally Infuse, reached $419 million, according to a company filing.” A Medtronic spokesperson “confirmed that Dr. Kuklo was a paid consultant to the company and that the company financially supported some of his research at Walter Reed.”

A subsequent NYT article reveals that “Army doctors can accept money to consult for medical product companies if they are given approval. Military officials said Wednesday that they had not found records that Dr. Kuklo sought or received such permission.” A “consultants list shows that Medtronic paid about $943,000 from 2003 to 2008 to 22 doctors for consulting specifically about Infuse.” Kuklo’s name is not on that list “because he had a general consulting contract with Medtronic, rather than one specific to Infuse.”

2. Kuklo is now an associate professor at Washington University. Here we have this commentary from someone at St. Xavier University whose father “served on the [Washington University] medical school faculty for over 30 years”: “Dr Dan Riew, the Mildred B. Simon Distinguished Professor and Chief of Cervical Spine Surgery in the Department of Orthopedic Surgery, alleged that his colleague’s forging signatures of four phantom co-authors may have been subsequent to oral authorisation. Astonishingly, Dr Riew claimed that when a researcher is without a fax machine or is abroad, the forging of signatures may be the only alternative.” From the St. Louis Post-Dispatch is the headline, “WU colleagues say surgeon accused of fraud is honest, hardworking.” The university’s chief of orthopedic spine surgery writes, “The claims are largely false. Dr. Kuklo is a very honest investigator.”

3. Forging co-authors, although bizarre, is not unique to this instance. Another example appeared in Chance News a short time ago. An interesting book to read is Why Smart People Do Dumb Things by Feinberg and Tarrant. The section on Cyril Burt focuses on his putative co-authors, sometimes known as "the ladies," Ms. Howard and Ms. Conway. Then use Google to find assertions that Howard and Conway did exist and were not inventions of Burt.

4. Speculate on what motivates highly respected medical and scientific researchers to commit fraud.

Submitted by Paul Alper

Financial engineering as pseudo-science

"The Death of Kings," [4] by Nick Paumgarten, The New Yorker, May 18, 2009
Note: Readers may only be able to access the abstract online, without subscription.

In this 16-page article, author Paumgarten discusses the current economic crisis and presents some possible causes that were identified by several financial analysts he interviewed.

According to bond salesman Colin Negrych, "What Wall Street offers is the continual rationalization that ever-increasing indebtedness is sustainable .... It concocts believable, defensible arguments for the prices that they think things ought to be. Financial engineering fills the gap between people's desires and their wherewithal. So what you have is optimism buttressed by pseudoscience and statistical legerdemain."

Financial writer Paumgarten reports:

By that time, modern finance theory – the notion, borne of some elegant mid-century mathematics, that one could use models to value contingencies – had taken root in the world of financial practice. It gradually obscured "the sheer brute fact that the results of human activity cannot be anticipated," as the economist Frank Knight wrote in 1921. Yet anticipate it people did, or tried to, on trading desks and conference calls, amid what [David] Beim called "a rise in complexity." Mathematicians and physicists, cut loose by the decline of the space program, gravitated to Wall Street and began devising ways to measure, price, and package risk. It was a kind of decentralized Manhattan Project. ....

"Financial engineering tapped into a strain in the investor's mind by replacing uncertainty with the appearance of certainty," [Simon] Mikhailovich said. Certainty came in a guise of inscrutability; the products designed to reassure also happened to befuddle. Many of the people responsible for evaluating the engineering considered their mystification to be further proof of its brilliance. They were, like Bernie Madoff's investors, comforted by their own ignorance. ....

Negrych quoted a line from a friend: "Wall Street takes your money and their experience and turns it into their money and your experience."

Submitted by Margaret Cibes

More on SAT coaching

"SAT Coaching Found to Boost Scores – Barely" [5] by John Hechinger, The Wall Street Journal, May 20, 2009

In a May 2009 report, authored by Derek Briggs, chairman of the Research and Methodology Department at the University of Colorado at Boulder, the National Association for College Admission Counseling [6]

criticizes common test-prep-industry marketing practices, including promises of big score gains with no hard data to back up such claims. The report also finds fault with the frequent use of mock SAT tests because they can be devised to inflate score gains when students take the actual SAT.

(The Association's analysis was based on prep courses for the pre-Writing-Section version of the SAT.)

Several students allege that their prep company's practice tests were more difficult than the actual test, which could account for score gains from practice test to actual test. The company responded that those students were "outliers," and that "surveys of students at [their high school] generally show high satisfaction with the test-coaching company's results."

The report concludes that "on average, prep courses yield only a modest benefit, 'contrary to claims made by many test-preparation providers.'" It claims that "SAT coaching resulted in about 30 points in score improvement on the SAT, out of a possible 1600, and less than one point out of a possible 36 on the ACT ...."

According to the article's author, the College Board is "critical of colleges that select applicants based on small score differences that aren't statistically significant."

The article includes a table of claims/guarantees provided by seven test-preparation companies.[7]
In a blog [8], Matthew Fraser, of Education Unlimited, responds to the author of the report.


1. Did you realize that some test-prep companies may be claiming student gains in SAT scores on the basis of comparing actual SAT scores to practice test scores, not to previous actual SAT scores? Do you think that companies should divulge (or be required to divulge) more information about their claims of improved scores?

2. While a 30-point difference (out of 1600 points) in two students' SAT scores might be highly significant to a college admissions officer, under what condition(s) might a 30-point difference be statistically significant?

Submitted by Margaret Cibes

Multiple-choice test aims to screen for potential gang members

"A New Approach to Gang Violence Includes a Multiple-Choice Test" [9]
by Nicholas Casey, The Wall Street Journal, May 20, 2009
Having experienced nine burglaries at his home and studied gang activity for four decades, retired social psychologist Malcolm Klein has joined with USC colleagues to design a multiple-choice test [10] that "they hope will empirically identify which children are headed toward a life on the street." The City of Los Angeles plans to use the 70-question test to screen 10-15 year olds for signs of potential gang membership. The children will not be told the purpose of the test.
According to the author of the article,

In Los Angeles, Dr. Klein's theories are appealing to policy makers eager to stretch limited resources. This year, the test is being given to children for the first time, and officials say they will use the results to determine whether some of the city's $24 million annual budget for gang prevention is being spent on children who aren't at high risk.

The emphasis on data is part of what policy makers have been calling an "epidemiological" strategy, drawing analogies between the spread of crime and disease. The focus is shifted from treating "symptoms" of gang activity -- violent crime, for example -- to prevention efforts that will stem proliferation.

So far, 958 children who live in active gang areas have taken the test; of that group, about one-third have been identified as potential future gang members and will be enrolled in prevention programs. But city officials won't know for several years whether the test failed to pick out children who went on to join a gang.

An LA-detective blogger [11] expresses his concern about the potential for test results to stigmatize young people.


The test [12] is actually a "Youth Services Eligibility Interview," with more than 70 questions. It is in a multiple-choice format in the sense that an interviewee chooses an answer from a rating scale of "1" to "5" or from the pair "Yes" or "No." Some questions are open-ended.
1. Students are told at the beginning of the interview:

The reason for this survey is to find youth who might want to participate in a new city program. The program was designed to help young people develop successfully and keep them out of gangs. This survey will let us know if our free program will be helpful for you. This is not a test, and there are no right or wrong answers. All you have to do is answer honestly. The answers you give will stay private.

Do you agree that the children will not be told the purpose of the test?
2. What would a "false positive test" mean for a student? How about a "false negative test"? Can you suggest some repercussions of a false test, for better or worse?
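Question 2 can be made concrete with a standard screening-test calculation. Every number below is invented for illustration — the article reports only that about one-third of the 958 children tested were flagged — but the arithmetic shows how a low base rate inflates false positives:

```python
# Hypothetical screening figures (assumptions, not from the article):
prevalence = 0.10     # fraction of tested youths who would actually join a gang
sensitivity = 0.80    # P(flagged | future gang member)
specificity = 0.75    # P(not flagged | not a future gang member)

true_pos = prevalence * sensitivity                # ~0.08 of all tested
false_pos = (1 - prevalence) * (1 - specificity)  # ~0.225 of all tested
flag_rate = true_pos + false_pos                  # ~0.305: near the reported one-third
ppv = true_pos / flag_rate                        # positive predictive value

print(round(flag_rate, 3), round(ppv, 3))
```

Under these fairly generous accuracy assumptions, the flag rate comes out near the reported one-third, yet roughly three of every four flagged children would be false positives — enrolled in a prevention program (and possibly stigmatized) despite never being headed for a gang.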

Submitted by Margaret Cibes

Predicting winners and losers in Supreme Court cases

"When the Justices Ask Questions, Be Prepared to Lose the Case" by Adam Liptak, The New York Times, May 25, 2009

A second year law student at Georgetown unlocked a secret to predicting who would prevail in court cases argued before the U.S. Supreme Court. Just look at what happens during oral arguments for the case.

"'The bottom line, as simple as it sounds,' said the student, Sarah Levien Shullman, who is now a litigation associate at a law firm in Florida, 'is that the party that gets the most questions is likely to lose.'"

This was a very small study (just 10 cases), but it inspired a replication by a future chief justice.

"Chief Justice Roberts heard about Ms. Shullman’s study while he was a federal appeals court judge, and he decided to test its conclusion for himself. So he picked 14 cases each from the terms that started in October 1980 and October 2003, and he started counting. 'The most-asked-question "rule" predicted the winner — or more accurately, the loser — in 24 of those 28 cases, an 86 percent prediction rate,' he told the Supreme Court Historical Society in 2004."

These small studies were replicated in a comprehensive study that looked at 2,000 oral arguments.

"If the two sides receive the same number of questions, the likelihood of reversal is 64 percent, which is in line with the usual probabilities; the court reverses more often than it affirms. But if the side seeking reversal gets 50 more questions than its adversary, the likelihood of a victory drops to 39 percent. And if that side manages to get the maximum number of extra questions in the study, which was 94, the likelihood of winning drops to 18 percent."

The article continues with a discussion of the predictive power of particular words. Pleasant words like "approve," "confidence," and "guidance" do not seem to influence the prediction, but unpleasant words like "abusing," "failed," and "hostile" that are directed at a particular party will decrease the chances that party will prevail.

Submitted by Steve Simon

Stereotype threat may affect senior citizens

"How Stereotypes Defeat the Stereotyped"[,8599,1897009,00.html ]
by John Cloud, TIME, online version of May 9, 2009

A study in Experimental Aging Research shows that old people may be subject to the effects of "stereotype threat," which refers to a situation in which "some members of stigmatized groups, when faced with stressful situations, expect themselves to do worse – a prophecy that fulfills itself."
The study reportedly describes an experiment in which psychologists at North Carolina State University in Raleigh recruited 103 volunteers, ages 60 to 82, to perform simple arithmetic and recall tests. The researchers told about half of the participants that the purpose of the tests was "to examine aging effects on memory" and asked them to write down their ages before beginning the tests. A control group was told that the tests had been constructed to correct for age-related bias; these participants were not asked to write down their ages. According to the article's author, members of the "treated" group performed significantly worse on the memory tests than the control group, and they even performed worse than they had on a pre-experiment screening test.
In the print version of this article (June 1, 2009, issue), the author cites Baruch College psychologist Catherine Good, who advises preparers of standardized tests to move questions about personal demographics to the ends of tests, in order to mitigate the effect of "stereotype threat" on student test takers. Professor Good, along with Steve Stroessner and Lauren Webster, runs the website Reducing Stereotype Threat [13]. The author also credits Stanford University social psychologist Claude Steele [14] with the original use of the term "stereotype threat."
Submitted by Margaret Cibes