==Mean vs. median: The case of the ox==
Paul Alper sent a link to the following:


:[http://io9.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800 I fooled millions into thinking chocolate helps weight loss. Here's how.]<br>
:by John Bohannon, io9.com, 27 May 2015


Publishing under the pseudonym Johannes Bohannon at his own respectably named [http://instituteofdiet.com Institute of Diet and Health], Bohannon announced the results of a deliberately faulty study designed to show that eating chocolate promotes weight loss.  These findings should have sounded too good to be true, but that didn't stop a host of media outlets from uncritically reporting the story.
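
The io9 piece explains the trick: the trial was real but tiny, and it measured many different outcomes, so that something was almost bound to come out statistically significant. The sketch below is a generic illustration of that multiple-comparisons effect, with a hypothetical count of 18 outcomes rather than a reconstruction of Bohannon's actual design; it simulates how often at least one of many null effects clears the 5% bar by chance.

<syntaxhighlight lang="python">
import numpy as np

# Rough illustration, not a reconstruction of the actual study: if a small trial
# tracks many outcomes and tests each at the 5% level, a "significant" result
# somewhere is likely even when nothing is going on.
rng = np.random.default_rng(1)
n_outcomes = 18          # hypothetical number of measured outcomes
n_trials = 100_000       # number of simulated studies

# Under the null hypothesis, each p-value is uniform on (0, 1).
p_values = rng.uniform(size=(n_trials, n_outcomes))
frac_with_a_hit = (p_values.min(axis=1) < 0.05).mean()

print(frac_with_a_hit)            # simulated share of studies with at least one p < 0.05
print(1 - 0.95**n_outcomes)       # exact value, about 0.60
</syntaxhighlight>
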
==Forsooth==


==Quotations==
“We know that people tend to overestimate the frequency of well-publicized, spectacular
events compared with more commonplace ones; this is a well-understood phenomenon in
the literature of risk assessment and leads to the truism that when statistics plays folklore,
folklore always wins in a rout.”
<div align=right>-- Donald Kennedy (former president of Stanford University), ''Academic Duty'', Harvard University Press, 1997, p.17</div>
 
----
 
"Using scientific language and measurement doesn’t prevent a researcher from conducting flawed experiments and drawing wrong conclusions — especially when they confirm preconceptions."
 
<div align=right>-- Blaise Agüera y Arcas, Margaret Mitchell and Alexander Todorov, quoted in: The racist history behind facial recognition, ''New York Times'', 10 July 2019</div>
 
==In progress==
[https://www.nytimes.com/2018/11/07/magazine/placebo-effect-medicine.html What if the Placebo Effect Isn’t a Trick?]<br>
by Gary Greenberg, ''New York Times Magazine'', 7 November 2018
 
[https://www.nytimes.com/2019/07/17/opinion/pretrial-ai.html The Problems With Risk Assessment Tools]<br>
by Chelsea Barabas, Karthik Dinakar and Colin Doyle, ''New York Times'', 17 July 2019
 
==Hurricane Maria deaths==
Laura Kapitula sent the following to the Isolated Statisticians e-mail list:
 
:[Why counting casualties after a hurricane is so hard]<br>
:by Jo Craven McGinty, Wall Street Journal, 7 September 2018
 
The article is subtitled, “Indirect deaths—such as those caused by gaps in medication—can occur months after a storm, complicating tallies.”
Laura noted that
:[https://www.washingtonpost.com/news/fact-checker/wp/2018/06/02/did-4645-people-die-in-hurricane-maria-nope/?utm_term=.0a5e6e48bf11 Did 4,645 people die in Hurricane Maria? Nope.]<br>
:by Glenn Kessler, ''Washington Post'', 1 June 2018
 
The source of the 4,645 figure is a [https://www.nejm.org/doi/full/10.1056/NEJMsa1803972 NEJM article].  That figure is the point estimate; the 95% confidence interval ran from 793 to 8,498.
 
President Trump has asserted that the actual number is
[https://twitter.com/realDonaldTrump/status/1040217897703026689 6 to 18].
The ''Post'' article notes that Puerto Rican officials had asked researchers at George Washington University to estimate the death toll.  That work is not complete:
[https://prstudy.publichealth.gwu.edu/ George Washington University study]
 
:[https://fivethirtyeight.com/features/we-still-dont-know-how-many-people-died-because-of-katrina/?ex_cid=538twitter We still don’t know how many people died because of Katrina]<br>
:by Carl Bialik, FiveThirtyEight, 26 August 2015


----
[https://www.nytimes.com/2018/09/11/climate/hurricane-evacuation-path-forecasts.html These 3 Hurricane Misconceptions Can Be Dangerous. Scientists Want to Clear Them Up.]<br>
[https://journals.ametsoc.org/doi/abs/10.1175/BAMS-88-5-651 Misinterpretations of the “Cone of Uncertainty” in Florida during the 2004 Hurricane Season]<br>
[https://www.nhc.noaa.gov/aboutcone.shtml Definition of the NHC Track Forecast Cone]
----
[https://www.popsci.com/moderate-drinking-benefits-risks Remember when a glass of wine a day was good for you? Here's why that changed.]
''Popular Science'', 10 September 2018
----
[https://www.economist.com/united-states/2018/08/30/googling-the-news Googling the news]<br>
''Economist'', 1 September 2018


----
[https://www.cnbc.com/2018/09/17/google-tests-changes-to-its-search-algorithm-how-search-works.html We sat in on an internal Google meeting where they talked about changing the search algorithm — here's what we learned]
----
[http://www.wyso.org/post/stats-stories-reading-writing-and-risk-literacy Reading, Writing and Risk Literacy]<br>
[http://www.riskliteracy.org/]
----
[https://twitter.com/i/moments/1025000711539572737?cn=ZmxleGlibGVfcmVjc18y&refsrc=email Today is the deadliest day of the year for car wrecks in the U.S.]

==Parenting time==
[http://www.nytimes.com/2015/04/02/upshot/yes-your-time-as-a-parent-does-make-a-difference.html?abt=0002&abg=1 Yes, your time as a parent does make a difference]<br>
by Justin Wolfers, "Upshot" blog, ''New York Times'', 1 April 2015

[http://www.nytimes.com/2015/04/03/upshot/why-a-claim-about-the-irrelevance-of-parenting-time-doesnt-add-up.html?rref=upshot&module=Ribbon&version=context&region=Header&action=click&contentCollection=The%20Upshot&pgtype=article&abt=0002&abg=1 Why a claim about the irrelevance of parenting time doesn’t add up]<br>
by Justin Wolfers, "Upshot" blog, ''New York Times'', 2 April 2015


In this pair of articles, Wolfers seeks to debunk a study on parenting time that was widely reported in the media (he cites articles from [http://www.washingtonpost.com/local/making-time-for-kids-study-says-quality-trumps-quantity/2015/03/28/10813192-d378-11e4-8fce-3941fc548f1c_story.html The Washington Post], [http://www.theguardian.com/commentisfree/2015/apr/01/dont-stress-out-our-kids-are-just-fine-when-their-mothers-work-late The Guardian], and [http://www.today.com/parents/quality-over-quantity-new-study-brings-time-squeezed-parents-relief-t11746 NBC News], among others).  The study in question failed to find a significant correlation between parental time spent with children and outcomes later in life, such as scores on standardized tests.  The common theme of all the media reports was that "quality beats quantity" in parenting time.


Wolfers's first article pointed out that the study in question was based on a survey that asked parents about two specific days, one during the week and one on the weekend.  He compares this with trying to predict your income by looking at a particular day: the result would vary wildly based on whether the day in question happened to be a payday.  Similarly, he quotes developmental psychologists saying, “What you did yesterday should not be taken as representative of what you did last year.”


The second article responds to some readers' objections, and gives a particularly careful discussion of statistical issues related to "errors in variables."  Wolfers acknowledges that randomness in the sample can reasonably be expected to make the average parenting time balance out correctly in the measure, in that some parents will respond about more time-intensive days, and others about lighter days.  The real problem, he explains, comes in correlating this measure with another.  To illustrate, he constructs a set of three scatterplots: the first shows a positive correlation between the time you spent with your children today and the time you typically spend; the second shows a positive correlation between test scores and the time you typically spend; but the third shows near-zero correlation between test scores and the time you spent with your children today.
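
Wolfers's three scatterplots describe attenuation from measurement error: an outcome can track typical parenting time reasonably well and yet show only a weak correlation with the time recorded on a single, noisy survey day. Below is a minimal simulation of that idea; all parameter values are invented for illustration and are not taken from the study.

<syntaxhighlight lang="python">
import numpy as np

# All parameter values are invented for illustration; nothing here comes from the study.
rng = np.random.default_rng(0)
n = 10_000

typical = rng.normal(2.0, 0.5, n)                 # each parent's typical daily hours with children
today = typical + rng.normal(0.0, 2.0, n)         # the one surveyed day: typical time plus large day-to-day noise
score = 50 + 5 * typical + rng.normal(0, 5, n)    # an outcome loosely tied to typical time

print(np.corrcoef(today, typical)[0, 1])   # positive: today's time does track typical time
print(np.corrcoef(score, typical)[0, 1])   # clearly positive: outcome vs. typical time
print(np.corrcoef(score, today)[0, 1])     # much weaker: outcome vs. the single measured day
</syntaxhighlight>
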


Submitted by Bill Peterson




----
==The p-value ban==
[http://www.statslife.org.uk/opinion/2114-journal-s-ban-on-null-hypothesis-significance-testing-reactions-from-the-statistical-arena Journal's ban on null hypothesis significance testing: Reactions from the statistical arena]

==Some math doodles==

<math>P \left({A_1 \cup A_2}\right) = P\left({A_1}\right) + P\left({A_2}\right) -P \left({A_1 \cap A_2}\right)</math>

<math>P(E) = {n \choose k} p^k (1-p)^{ n-k}</math>

<math>\hat{p}(H|H)</math>

<math>\hat{p}(H|HH)</math>
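
As a small numerical companion to the doodles above, the sketch below checks the inclusion-exclusion identity on a made-up die example and evaluates the binomial formula for one arbitrary choice of n, k and p.

<syntaxhighlight lang="python">
import math
from fractions import Fraction

# Inclusion-exclusion for two events on a fair die: A1 = "even", A2 = "greater than 3".
p_a1 = Fraction(3, 6)        # {2, 4, 6}
p_a2 = Fraction(3, 6)        # {4, 5, 6}
p_both = Fraction(2, 6)      # {4, 6}
print(p_a1 + p_a2 - p_both)  # P(A1 or A2) = 2/3, matching the four outcomes {2, 4, 5, 6}

# Binomial probability of exactly k successes in n trials with success probability p.
def binom_pmf(n: int, k: int, p: float) -> float:
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_pmf(10, 3, 0.5))  # exactly 3 heads in 10 fair flips, about 0.117
</syntaxhighlight>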

==Accidental insights==

My collective understanding of power laws would fit beneath the shallow end of the long tail. Curiosity, however, easily fills the fat end. I have long been intrigued by the concept and by the surprisingly common appearance of power laws in varied natural, social and organizational dynamics. But am I just seeing a statistical novelty, or is there meaning and utility in power law relationships? Here’s a case in point.

While I was carrying a pair of 10 lb. hand weights, one of them, by chance, slipped from my grasp and fell onto a piece of ceramic tile I had left on the carpeted floor. The fractured tile was inconsequential, meant for the trash.

[[File:BrokenTile.jpg]]

As I stared, slightly annoyed, at the mess, a favorite maxim of the Greek philosopher, Epictetus, came to mind: “On the occasion of every accident that befalls you, turn to yourself and ask what power you have to put it to use.” Could this array of large and small polygons follow a power law? With curiosity piqued, I collected all the fragments and measured the area of each piece.

{| class="wikitable"
! Piece !! Sq. inches !! % of total
|-
| 1 || 43.25 || 31.9%
|-
| 2 || 35.25 || 26.0%
|-
| 3 || 23.25 || 17.2%
|-
| 4 || 14.10 || 10.4%
|-
| 5 || 7.10 || 5.2%
|-
| 6 || 4.70 || 3.5%
|-
| 7 || 3.60 || 2.7%
|-
| 8 || 3.03 || 2.2%
|-
| 9 || 0.66 || 0.5%
|-
| 10 || 0.61 || 0.5%
|}
[[File:Montante plot1.png]]

The data and plot look like a power law distribution. The first plot is an exponential fit of percent of total area; the second plot shows the same data in a log format. Clue: OK, the data fits a straight line. I found myself again in the shallow end of the knowledge curve. Does the data reflect a power law or something else, and if it does, what does it reflect? What insights can I gain from this accident? Favorite maxims of Epictetus and Pasteur echoed in my head: “On the occasion of every accident that befalls you, remember to turn to yourself and inquire what power you have to turn it to use” and “Chance favors only the prepared mind.”

[[File:Montante plot2.png]]

My “prepared” mind searched for answers, leading me down varied learning paths. Tapping the power of networks, I dropped a note to Chance News editor Bill Peterson. His quick web search surfaced a story from ''Nature News'' on research by Hans Herrmann et al., “Shattered eggs reveal secrets of explosions.” As described there, researchers have found power-law relationships for the fragments produced by shattering a pane of glass or breaking a solid object, such as a stone. It seems there is a science underpinning how things break and explode, potentially useful in forensic reconstructions. Bill also provided a link to a vignette from CRAN describing a maximum likelihood procedure for fitting a power law relationship. I am now learning my way through that.
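
For readers who want to try a fit themselves, here is a minimal sketch assuming the standard continuous maximum likelihood estimator for a power-law exponent (the Clauset, Shalizi and Newman approach that power-law fitting vignettes on CRAN typically describe), applied to the ten fragment areas tabulated above. With only ten fragments the estimate is extremely rough, and this is an illustration rather than the author's analysis.

<syntaxhighlight lang="python">
import math

# Areas (sq. inches) of the ten tile fragments from the table above.
areas = [43.25, 35.25, 23.25, 14.10, 7.10, 4.70, 3.60, 3.03, 0.66, 0.61]

# Continuous power-law MLE: alpha_hat = 1 + n / sum(ln(x_i / x_min)),
# here taking x_min to be the smallest fragment.
x_min = min(areas)
n = len(areas)
alpha_hat = 1 + n / sum(math.log(x / x_min) for x in areas)

print(round(alpha_hat, 2))  # comes out near 1.4 for these data
</syntaxhighlight>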

Submitted by William Montante