Revision as of 18:02, 19 August 2015

More on the hot hand

In Chance News 105, the last item was titled Does selection bias explain the hot hand?. It described how, in their July 6 article, Miller and Sanjurjo assert that one way to determine the probability of a head following a head in a fixed-length sequence is to calculate the proportion of times a head is followed by a head in each possible sequence, and then compute the average of these proportions, giving each sequence equal weight on the grounds that each possible sequence is equally likely to occur. I agree that each possible sequence is equally likely to occur. But I assert that it is illegitimate to weight each sequence equally, because some sequences offer more chances for a head to be followed by another head than others.

Let us assume, as Miller and Sanjurjo do, that we are considering the 14 possible sequences of four flips containing at least one head in the first three flips. A head is followed by another head in only one of the six sequences (see below) that contain only one head that could be followed by another, making the probability of a head being followed by another 1/6 for this set of six sequences.

TTHT Heads follows heads 0 times
THTT Heads follows heads 0 times
HTTT Heads follows heads 0 times
TTHH Heads follows heads 1 time
THTH Heads follows heads 0 times
HTTH Heads follows heads 0 times

A head is followed by another head six times in the six sequences (see below) that contain two heads that could be followed by another head, making the probability of a head being followed by another 6/12 = 1/2 for this set of six sequences.

THHT Heads follows heads 1 time
HTHT Heads follows heads 0 times
HHTT Heads follows heads 1 time
THHH Heads follows heads 2 times
HTHH Heads follows heads 1 time
HHTH Heads follows heads 1 time

A head is followed by another head five times in the two sequences (see below) that contain three heads that could be followed by another head, making the probability of a head being followed by another 5/6 for this set of two sequences.

HHHT Heads follows heads 2 times
HHHH Heads follows heads 3 times

An unweighted average of the 14 sequences gives

[(6 × 1/6) + (6 × 1/2) + (2 × 5/6)] / 14 = [17/3] / 14 = 17/42 ≈ 0.405,

which is what Miller and Sanjurjo report. A weighted average of the 14 sequences gives

[(1)(6 × 1/6) + (2)(6 × 1/2) + (3)(2 × 5/6)] / [(1×6) + (2 × 6) + (3 × 2)]
= [1 + 6 + 5] / [6 + 12 + 6] = 12/24 = 0.50.
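The two averages above can be verified by direct enumeration. Here is a minimal sketch in Python (the variable names and bookkeeping are mine, not Miller and Sanjurjo's):

```python
from itertools import product
from fractions import Fraction

props = []                # per-sequence proportion of H followed by H
hh_total = opp_total = 0  # pooled counts across all sequences

for seq in product("HT", repeat=4):
    # Opportunities: heads in the first 3 tosses (each could be followed by H)
    opps = sum(seq[i] == "H" for i in range(3))
    if opps == 0:
        continue  # skip TTTT and TTTH, leaving the 14 admissible sequences
    hh = sum(seq[i] == "H" and seq[i + 1] == "H" for i in range(3))
    props.append(Fraction(hh, opps))
    hh_total += hh
    opp_total += opps

unweighted = sum(props) / len(props)        # each sequence weighted equally
weighted = Fraction(hh_total, opp_total)    # pooled over all opportunities
print(unweighted, float(unweighted))        # 17/42 ≈ 0.405
print(weighted)                             # 1/2
```

The unweighted mean reproduces Miller and Sanjurjo's 0.405, while pooling the counts (weighting each sequence by its number of opportunities) recovers 1/2.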

Using an unweighted average instead of a weighted average is the pattern of reasoning underlying the statistical artifact known as Simpson’s paradox. And as is the case with Simpson’s paradox, it leads to faulty conclusions about how the world works.

Submitted by Jeff Eiseman, University of Massachusetts

Comment

{| class="wikitable"
! Sequence<br>of tosses !! Number of H<br>in first 3 tosses !! Number of H<br>followed by H !! Number of HH<br>in first 3 tosses !! Number of HH<br>followed by H
|-
| TTTT || 0 || 0 || 0 || 0
|-
| TTTH || 0 || 0 || 0 || 0
|-
| TTHT || 1 || 0 || 0 || 0
|-
| THTT || 1 || 0 || 0 || 0
|-
| HTTT || 1 || 0 || 0 || 0
|-
| TTHH || 1 || 1 || 0 || 0
|-
| THTH || 1 || 0 || 0 || 0
|-
| THHT || 2 || 1 || 1 || 0
|-
| HTTH || 1 || 0 || 0 || 0
|-
| HTHT || 2 || 0 || 0 || 0
|-
| HHTT || 2 || 1 || 1 || 0
|-
| THHH || 2 || 2 || 1 || 1
|-
| HTHH || 2 || 1 || 0 || 0
|-
| HHTH || 2 || 1 || 1 || 0
|-
| HHHT || 3 || 2 || 2 || 1
|-
| HHHH || 3 || 3 || 2 || 2
|-
! Total !! 24 !! 12 !! 8 !! 4
|}
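As a check, the column totals can be reproduced by enumerating all 16 sequences of four tosses. A quick sketch (the counting code is mine, not part of the original comment):

```python
from itertools import product

h3 = hh = hh3 = hhh = 0
for seq in product("HT", repeat=4):
    h3  += sum(seq[i] == "H" for i in range(3))                  # H in first 3 tosses
    hh  += sum(seq[i] == seq[i+1] == "H" for i in range(3))      # H followed by H
    hh3 += sum(seq[i] == seq[i+1] == "H" for i in range(2))      # HH in first 3 tosses
    hhh += sum(seq[i] == seq[i+1] == seq[i+2] == "H"             # HH followed by H
               for i in range(2))
print(h3, hh, hh3, hhh)  # 24 12 8 4
```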

Evaluating health studies

[http://www.nature.com/news/scientific-method-statistical-errors-1.14700 Scientific method: Statistical errors]
by Regina Nuzzo, Nature News, 12 February 2014

The subtitle is "P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume." We read:

<blockquote>
P values have always had critics. In their almost nine decades of existence, they have been likened to mosquitoes (annoying and impossible to swat away), the emperor's new clothes (fraught with obvious problems that everyone ignores) and the tool of a “sterile intellectual rake” who ravishes science but leaves it with no progeny3. One researcher suggested rechristening the methodology “statistical hypothesis inference testing”3, presumably for the acronym it would yield.
</blockquote>

Later on we have

<blockquote>
But while the rivals feuded — Neyman called some of Fisher's work mathematically “worse than useless”; Fisher called Neyman's approach “childish” and “horrifying [for] intellectual freedom in the west” — other researchers lost patience and began to write statistics manuals for working scientists. And because many of the authors were non-statisticians without a thorough understanding of either approach, they created a hybrid system that crammed Fisher's easy-to-calculate P value into Neyman and Pearson's reassuringly rigorous rule-based system. This is when a P value of 0.05 became enshrined as 'statistically significant', for example. “The P value was never meant to be used the way it's used today,” says [Steven] Goodman.
</blockquote>

I have always bemoaned the conflation of exploratory and confirmatory:

<blockquote>
Such practices have the effect of turning discoveries from exploratory studies — which should be treated with scepticism — into what look like sound confirmations but vanish on replication.
</blockquote>

Submitted by Paul Alper

Some math doodles

<math>P \left({A_1 \cup A_2}\right) = P\left({A_1}\right) + P\left({A_2}\right) -P \left({A_1 \cap A_2}\right)</math>
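The inclusion-exclusion identity above can be sanity-checked numerically on a toy sample space; here one roll of a fair die, with two events of my own choosing:

```python
from fractions import Fraction

omega = range(1, 7)                      # one roll of a fair die
A1 = {n for n in omega if n % 2 == 0}    # even: {2, 4, 6}
A2 = {n for n in omega if n > 3}         # greater than 3: {4, 5, 6}

def P(event):
    # Uniform probability: |event| / |sample space|
    return Fraction(len(event), len(omega))

# P(A1 ∪ A2) = P(A1) + P(A2) - P(A1 ∩ A2)
lhs = P(A1 | A2)
rhs = P(A1) + P(A2) - P(A1 & A2)
print(lhs, rhs)  # 2/3 2/3
```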

<math>\hat{p}(H|H)</math>


<math>\hat{p}(H|HH)</math>

Accidental insights

My collective understanding of Power Laws would fit beneath the shallow end of the long tail. Curiosity, however, easily fills the fat end. I long have been intrigued by the concept and the surprisingly common appearance of power laws in varied natural, social and organizational dynamics. But, am I just seeing a statistical novelty or is there meaning and utility in Power Law relationships? Here’s a case in point.

While carrying a pair of 10 lb. hand weights one, by chance, slipped from my grasp and fell onto a piece of ceramic tile I had left on the carpeted floor. The fractured tile was inconsequential, meant for the trash.

[[File:BrokenTile.jpg]]

As I stared, slightly annoyed, at the mess, a favorite maxim of the Greek philosopher, Epictetus, came to mind: “On the occasion of every accident that befalls you, turn to yourself and ask what power you have to put it to use.” Could this array of large and small polygons form a Power Law? With curiosity piqued, I collected all the fragments and measured the area of each piece.

{| class="wikitable"
! Piece !! Sq. inches !! % of total
|-
| 1 || 43.25 || 31.9%
|-
| 2 || 35.25 || 26.0%
|-
| 3 || 23.25 || 17.2%
|-
| 4 || 14.10 || 10.4%
|-
| 5 || 7.10 || 5.2%
|-
| 6 || 4.70 || 3.5%
|-
| 7 || 3.60 || 2.7%
|-
| 8 || 3.03 || 2.2%
|-
| 9 || 0.66 || 0.5%
|-
| 10 || 0.61 || 0.5%
|}
[[File:Montante plot1.png]]

The data and plot look like a power-law distribution. The first plot is an exponential fit of the percent of total area. The second plot shows the same data on a log-normal format. Clue: OK, the data fit a straight line. I found myself again in the shallow end of the knowledge curve. Do the data reflect a power law or something else, and if they do, what does it mean? What insights can I gain from this accident? Favorite maxims of Epictetus and Pasteur echoed in my head: “On the occasion of every accident that befalls you, remember to turn to yourself and inquire what power you have to turn it to use” and “Chance favors only the prepared mind.”

[[File:Montante plot2.png]]
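As a rough check of the straight-line claim, one can regress log(area) on log(rank) by ordinary least squares. The sketch below uses the fragment areas from the table above; the rank-size framing and the estimated exponent are my own assumptions, not the submitter's analysis:

```python
import math

# Fragment areas in sq. inches, largest to smallest, from the table above
areas = [43.25, 35.25, 23.25, 14.10, 7.10, 4.70, 3.60, 3.03, 0.66, 0.61]
xs = [math.log(rank) for rank in range(1, len(areas) + 1)]
ys = [math.log(a) for a in areas]

# Least-squares slope of log(area) vs. log(rank); a straight line here
# (slope = -alpha) is the signature of a power law: area ~ C * rank^(-alpha)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
alpha = -slope
print(f"estimated exponent alpha = {alpha:.2f}")
```

A clearly negative slope is at least consistent with a power-law tail, though the maximum-likelihood procedure in the CRAN vignette mentioned below is a sounder way to fit the exponent than this log-log regression.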

My “prepared” mind searched for answers, leading me down varied learning paths. Tapping the power of networks, I dropped a note to Chance News editor Bill Peterson. His quick web search surfaced a story from Nature News on research by Hans Herrmann et al., “Shattered eggs reveal secrets of explosions”. As described there, researchers have found power-law relationships for the fragments produced by shattering a pane of glass or breaking a solid object, such as a stone. It seems there is a science underpinning how things break and explode, potentially useful in forensic reconstructions. Bill also provided a link to a vignette from CRAN describing a maximum-likelihood procedure for fitting a power-law relationship. I am now learning my way through that.

Submitted by William Montante


The p-value ban

[http://www.statslife.org.uk/opinion/2114-journal-s-ban-on-null-hypothesis-significance-testing-reactions-from-the-statistical-arena Journal's ban on null hypothesis significance testing: Reactions from the statistical arena]