The *New York Times* did not learn from its earlier error (or else physicist Kyle Cranmer misspoke). The *Times* stated that

> When all the statistical effects are taken into consideration, Dr. Cranmer said, the bump in the Atlas data had about a 1-in-93 chance of being a fluke — far stronger than the 1-in-3.5-million odds of mere chance, known as five-sigma, considered the gold standard for a discovery. That might not be enough to bother presenting in a talk except for the fact that the competing CERN team, named C.M.S., found a bump in the same place.

One perceptive science writer, Faye Flam, spotted the transposition error in the *Times* article and promptly called attention to it in her science column for *Bloomberg View*. However, she then argued that 1/93 is too high a significance level to use in particle physics because of “a problem of statistics called the prosecutor’s fallacy” (referring to UCLA physicist Robert Cousins as the source of this argument).

That argument is wrong, or at least incomplete. The “prosecutor’s fallacy” is just a special case of the ubiquitous practice of transposing the terms in a conditional probability and assuming that the value stays the same. The usual name for this in statistics is the transposition fallacy. It is a fallacy because P(A | B) ≠ P(B | A) unless P(A) = P(B). In other words, P(A | B) actually could be higher or lower than P(B | A).
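The point can be sketched numerically. The joint probabilities below are hypothetical, chosen only for illustration; since both conditionals share the same joint probability P(A and B), whichever event has the smaller marginal probability gets the larger conditional.

```python
# Illustration of the transposition fallacy: P(A | B) and P(B | A)
# generally differ, and either one can be the larger.
# All probabilities here are hypothetical, for illustration only.

def conditional(p_joint, p_marginal):
    """P(X | Y) = P(X and Y) / P(Y)."""
    return p_joint / p_marginal

p_ab = 0.005                      # assumed joint probability P(A and B)

# Case 1: P(A) < P(B), so P(A | B) < P(B | A).
p_a, p_b = 0.01, 0.50
print(conditional(p_ab, p_b))     # P(A | B) = 0.01
print(conditional(p_ab, p_a))     # P(B | A) = 0.5

# Case 2: reverse the marginals and the inequality flips.
p_a, p_b = 0.50, 0.01
print(conditional(p_ab, p_b))     # P(A | B) = 0.5
print(conditional(p_ab, p_a))     # P(B | A) = 0.01
```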

This inequality does not reveal why particle physicists normally choose a much more demanding significance level than 1/93. Flam uses two examples with low prior probabilities P(A) to give the impression that P(A | B) must be greater than P(B | A), so that a higher value for P(B | A) is necessary to achieve some desired value for P(A | B). The argument works only in situations in which the prior probability of a new particle is small. That is a fair claim here, because the well-entrenched Standard Theory does not predict the super-massive particle. However, it is not clear why that always would be the case. Thus, the question of why particle physicists used the stringent 5-sigma rule for the discovery of the Higgs boson remains.
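The force of a low prior can be made concrete with Bayes’ rule. The numbers below are hypothetical assumptions (a 1-in-1,000 prior for a genuine new particle, and a bump guaranteed to appear if the particle is real), not figures from the CERN analyses; they show only that a 1/93 “fluke probability” is compatible with the bump being far more likely fluke than real.

```python
# Hedged sketch: with a low prior for a genuine new particle, the posterior
# probability that a "significant" bump is real can be modest even when the
# p-value looks impressive. All inputs are hypothetical assumptions.

def posterior_real(prior, p_bump_given_real, p_bump_given_fluke):
    """Bayes' rule: P(real | bump)."""
    num = p_bump_given_real * prior
    den = num + p_bump_given_fluke * (1 - prior)
    return num / den

prior = 1e-3                       # assumed prior: 1 in 1,000 for a real particle
p_real = posterior_real(prior, 1.0, 1 / 93)
print(round(p_real, 3))            # ~0.085: the bump is still probably a fluke
```

With a prior of 1/2 instead, the same 1/93 evidence makes the particle overwhelmingly probable, which is why the low-prior assumption is doing the work in Flam’s examples.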

One standard argument for setting an especially demanding significance level is that the cost of a false positive is far greater than the cost of a false negative (which can be tolerated while one waits for more data). Flam and Cousins mention error costs in this way, but this consideration provides a very different motivation than does correcting the errors that might arise from computing a probability by naive transposition.

Another common argument is that even if the costs of errors are not so disparate, and a moderate significance level such as 1/93 is acceptable, multiple opportunities to find significance make it too easy to find a “significant” result. Such data mining goes on a bit in hunting for new particles. See *More on Statistical Reasoning and the Higgs Boson*, July 11, 2012. Again, however, this valid reason for being more demanding with regard to a significance level does not flow from the transposition fallacy, the prosecutor’s fallacy, or whatever else it might be called. It applies regardless of the significance level that the experiment is seeking. It is a correction designed to make the declared significance level applicable to the mined data. The corrected level is still subject to the transposition fallacy.
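The multiple-opportunities point can be sketched with a back-of-the-envelope computation. The figure of 100 independent mass bins is a hypothetical assumption, not a description of any actual search; it shows only how quickly flukes accumulate at the 1/93 level and how a Bonferroni-style correction restores the intended overall level.

```python
# Sketch of the multiple-comparisons problem: testing many independent mass
# bins at the 1/93 level makes at least one "significant" fluke quite likely.
# The number of bins (100) is a hypothetical assumption.

alpha = 1 / 93
n_bins = 100

# Probability of at least one fluke somewhere among the bins.
p_any_fluke = 1 - (1 - alpha) ** n_bins
print(round(p_any_fluke, 3))       # ~0.661: a fluke somewhere is more likely than not

# A Bonferroni-style correction restores the intended overall level.
alpha_corrected = alpha / n_bins
p_any_corrected = 1 - (1 - alpha_corrected) ** n_bins
print(round(p_any_corrected, 3))   # ~0.011, back near 1/93
```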

**References**

- Faye Flam, Lies, Damned Lies and Physics, Bloomberg View, Dec. 30, 2015, http://www.bloombergview.com/articles/2015-12-30/lies-damned-lies-and-physics
- Dennis Overbye, Physicists in Europe Find Tantalizing Hints of a Mysterious New Particle, N.Y. Times, Dec. 16, 2015, at A18, http://www.nytimes.com/2015/12/16/science/physicists-in-europe-find-tantalizing-hints-of-a-mysterious-new-particle.html?_r=0
