Chance News 60
"As a Usenet discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one."
Godwin's Law, as quoted at Wikipedia.
Submitted by Steve Simon
“Statistophobia: When Economic Indicators Aren’t Worth That Much”
by Justin Fox, TIME, February 1, 2010
Chances are, the disparity between the [Commerce Department’s quarterly GDP report and the Labor Department’s monthly unemployment report] was mostly statistical noise. Those who read great meaning into either were deceiving themselves. It's a classic case of information overload making it harder to see the trends and patterns that matter. In other words, we might be better off paying less (or at least less frequent) attention to data. … Most of us aren't professional forecasters. What should we make of the cacophony of monthly and weekly data? The obvious advice is to focus on trends and ignore the noise. But the most important economic moments come when trends reverse — when what appears to be noise is really a sign that the world has changed. Which is why, in these uncertain times, we jump whenever a new economic number comes out. Even one that will be revised in a month.
Submitted by Margaret Cibes
Does corporate support really subvert data analysis?
Corporate Backing for Research? Get Over It. John Tierney, The New York Times, January 25, 2010.
We've been warned many times to beware of corporate influences on research, and many research journals are now demanding more, in terms of disclosure and independent review, from researchers who have a conflict of interest. But John Tierney has argued that this effort has gone too far.
Conflict-of-interest accusations have become the simplest strategy for avoiding a substantive debate. The growing obsession with following the money too often leads to nothing but cheap ad hominem attacks.
Mr. Tierney argues that this emphasis on money prevents a thoughtful examination of all the other motives associated with the presentation of results.
It is simpler to note a corporate connection than to analyze all the other factors that can bias researchers’ work: their background and ideology, their yearnings for publicity and prestige and power, the politics of their profession, the agendas of the public agencies and foundations and grant committees that finance so much scientific work.
Another emotion is at work as well: snobbery.
Many scientists, journal editors and journalists see themselves as a sort of priestly class untainted by commerce, even when they work at institutions that regularly collect money from corporations in the form of research grants and advertising. We trust our judgments to be uncorrupted by lucre — and we would be appalled if, say, a national commission to study the publishing industry were composed only of people who had never made any money in the business. (How dare those amateurs tell us how to run our profession!) But we insist that others avoid even “the appearance of impropriety.”
Mr. Tierney cites a controversial requirement imposed by the Journal of the American Medical Association in 2005.
Citing “concerns about misleading reporting of industry-sponsored research,” the journal refused to publish such work unless there was at least one author with no ties to the industry who would formally vouch for the data.
This policy has been criticized by other journals.
That policy was called “manifestly unfair” by BMJ (formerly The British Medical Journal), which criticized JAMA for creating a “hierarchy of purity among authors.”
Submitted by Steve Simon.
1. Do you side with JAMA or BMJ on the policy of an independent author who can formally vouch for the data?
2. Should conflict of interest requirements be different for research articles involving subjective opinions, such as editorials, than for research involving objective approaches like clinical trials?
“Climatology of Snow-to-Liquid Ratio for the Contiguous United States”
by Martin A. Baxter, Charles E. Graves, and James T. Moore, Weather and Forecasting, October 2005
In this paper, two Saint Louis University professors and a co-author report the results of a 30-year climatological study of the snow-to-liquid ratio for National Weather Service forecasting, which concludes that the mean ratio for much of the country is 13, not the “often-assumed value of 10.” The study found “considerable spatial variation in the mean,” illustrated in numerous maps, tables, and histograms.
[A quantitative precipitation forecast (QPF)] represents the liquid equivalent expected to precipitate from [a weather] system. To convert this liquid equivalent to a snowfall amount, a snow-to-liquid-equivalent ratio (SLR) must be determined. An SLR value of 10 is often assumed as a mean value; however, this value may not be accurate for many locations and meteorological situations. Even if the forecaster has correctly forecasted the QPF, an error in the predicted SLR value may cause significant errors in forecasted snowfall amount.
The SLR of 10:1, as a rough approximation, dates from 1875. Subsequent similar estimates did “not account for geographic location or in-cloud microphysical processes.”
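The conversion the paper describes is a simple multiplication, but the choice of SLR value matters. A minimal sketch (the function name is mine, not from the paper) shows how the often-assumed ratio of 10 and the study's mean of 13 lead to different snowfall forecasts from the same liquid-equivalent QPF:

```python
# Hypothetical helper (not from the paper): convert a liquid-equivalent
# quantitative precipitation forecast (QPF) to a snowfall amount.

def forecast_snowfall(qpf_liquid, slr):
    """Snowfall = liquid equivalent times the snow-to-liquid ratio (SLR)."""
    return qpf_liquid * slr

# 0.5 in of liquid: the often-assumed SLR of 10 vs. the study's mean of 13
print(forecast_snowfall(0.5, 10))  # 5.0 in of snow
print(forecast_snowfall(0.5, 13))  # 6.5 in of snow
```

The same half inch of liquid yields forecasts differing by an inch and a half of snow, which is the forecast error the authors warn about even when the QPF itself is correct.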
The goals of this paper are to present the climatological values of SLR for the contiguous United States and examine the typical variability using histograms of SLR for various NWS county warning areas (CWAs). [Sections of the paper describe] the datasets and methodology used to perform this research; [present] the 30-yr climatology of SLR for the contiguous United States; [detail] the frequency of observed SLR values through the use of histograms for selected NWS CWAs; [include] a brief discussion on how the climatology of SLR may be used operationally; and [summarize] the results and [present] suggestions for future research.
Submitted by Margaret Cibes at the suggestion of Jim Greenwood
An interesting problem
This hasn't hit the news yet, but Bob Drake wrote us about an interesting problem. He wrote:
Here is an example of e turning up unexpectedly. Select a random number between 0 and 1. Now select another and add it to the first, piling on random numbers. How many random numbers, on average, do you need to make the total greater than 1?
This appears in the notes of Derbyshire's book "Prime Obsession", p. 366. A proof can be found [http://www.olimu.com/riemann/FAQs.htm here].
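The claim is easy to check by simulation. A minimal Monte Carlo sketch (the function name and seed are my own choices, not from the source): keep adding Uniform(0, 1) draws until the total exceeds 1, and average the number of draws over many trials.

```python
import random

def draws_to_exceed_one(rng):
    """Count Uniform(0, 1) draws until the running total exceeds 1."""
    total, n = 0.0, 0
    while total <= 1.0:
        total += rng.random()
        n += 1
    return n

rng = random.Random(60)  # fixed seed so the run is reproducible
trials = 200_000
avg = sum(draws_to_exceed_one(rng) for _ in range(trials)) / trials
print(round(avg, 3))  # close to e, approximately 2.718
```

With 200,000 trials the sample mean lands within about 0.01 of e, as the proof linked above predicts.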
We mentioned this to Charles Grinstead who wrote:
It appears in Feller, vol. 2. But more interesting than that problem is the following generalization: Pick a positive real number M, and play the same game as before, i.e. stop when the sum first equals or exceeds M. Let f(M) denote the average number of summands in this process (so the game that he was looking at corresponds to M = 1, and he saw that it is known that f(1) = e). Clearly, since the average size of the summands is 1/2, f(M) should be about 2M, or perhaps slightly greater than 2M. For example, when M = 1, f(M) is slightly greater than 2. It can be shown that as M goes to infinity, f(M) is asymptotic to 2M + 2/3.
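Grinstead's generalization can be checked the same way. The sketch below (names and trial counts are my own) stops when the running sum first equals or exceeds M, averages the number of summands over many trials, and prints the result next to the claimed asymptote 2M + 2/3:

```python
import random

def average_summands(M, trials=100_000, seed=60):
    """Estimate f(M): the mean number of Uniform(0, 1) summands needed
    for the running sum to first equal or exceed M."""
    rng = random.Random(seed)
    total_draws = 0
    for _ in range(trials):
        s, n = 0.0, 0
        while s < M:
            s += rng.random()
            n += 1
        total_draws += n
    return total_draws / trials

for M in (1, 2, 5, 10):
    print(M, round(average_summands(M), 3), round(2 * M + 2 / 3, 3))
```

The M = 1 row recovers f(1) = e ≈ 2.718, slightly above 2M + 2/3 = 2.667, and the gap between the two columns shrinks rapidly as M grows, consistent with the asymptotic result.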
Submitted by Laurie Snell