In *Numerical Recipes*, Bill Press describes statistics as "that gray area which is as surely not a branch of mathematics as it is neither a branch of science." Statistics is all about using data to draw conclusions, but there is no single "right" way to do this. So the world of statistics resembles Europe during the Reformation, divided into various factions and sects, one of which is the Cult of the Bayesians. The key idea of Bayesian statistics is that any modeling of data must incorporate prior assumptions about reality.

Here's an example. Suppose that Alfred flips a coin 20 times, and he gets 20 heads in a row. This is very unlikely: for a fair coin, the probability of 20 heads in a row is 1/2^20, less than one in a million.
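For concreteness, here is the exact arithmetic (nothing here depends on anything beyond a fair coin):

```python
# Probability of 20 heads in a row from a fair coin.
p_20 = 0.5 ** 20
print(p_20)     # 9.5367431640625e-07
print(2 ** 20)  # 1048576 -- so the odds are 1 in 1,048,576
```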

Now Alfred flips the coin one more time. What is the probability that this coin flip will come up heads? Is it

(A) Less than 1/2? Alfred has used up all the heads.

(B) Exactly 1/2? Past performance tells you nothing about future returns.

(C) Greater than 1/2? Alfred is on a roll!

A classical statistician would have to answer (B). In fact, any other answer would be considered a prototypical fallacy about probabilities. But what about a Bayesian statistician? I think a Bayesian would have to answer (C). Why? Because if Alfred gets 20 heads in a row, there's a good chance he's cheating and using a 2-headed coin! Of course, I am grossly oversimplifying here, but that's what the internet is for.
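The Bayesian reasoning can be made quantitative with a toy two-hypothesis model. A minimal sketch: the 1-in-1000 prior probability that Alfred carries a two-headed coin is my own invented number, purely for illustration.

```python
# Hypothetical prior: a 1-in-1000 chance Alfred is using a two-headed coin.
p_cheat = 1e-3
p_fair = 1 - p_cheat

# Likelihood of observing 20 heads under each hypothesis.
like_cheat = 1.0       # a two-headed coin always shows heads
like_fair = 0.5 ** 20  # about 1 in 1,048,576

# Bayes' rule: posterior probability that Alfred is cheating.
post_cheat = (p_cheat * like_cheat) / (p_cheat * like_cheat + p_fair * like_fair)

# Predictive probability that flip 21 comes up heads.
p_next_head = post_cheat * 1.0 + (1 - post_cheat) * 0.5
print(post_cheat)    # roughly 0.999
print(p_next_head)   # well above 1/2 -- answer (C)
```

Even with a tiny prior probability of cheating, 20 straight heads is so much more likely under the cheating hypothesis that the posterior swings almost entirely to it, and the predicted probability of another head is nearly 1.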

## 2 comments:

Even if Alfred isn't cheating, there is a good chance that the coin is not exactly 50/50. In fact, it is hard to build anything with precise odds (even a quantum-mechanical Stern-Gerlach apparatus has to be calibrated by measurement, and we must assume both outcomes are detected with equal efficiency). So if the true probability of heads has some distribution around 0.5 with, say, 0.1% uncertainty, then after 20 heads we would conclude that we are on the higher side.
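The commenter's point can be sketched numerically. Assuming (purely for illustration) a Gaussian prior on the heads-probability centered at 0.5 with standard deviation 0.001, a simple grid computation shows the posterior mean drifting above 0.5 after 20 heads:

```python
import math

# Hypothetical prior: heads-probability p is roughly Gaussian around 0.5
# with standard deviation 0.001 (the ~0.1% uncertainty from the comment).
grid = [0.495 + i * 1e-5 for i in range(1001)]  # p from 0.495 to 0.505
prior = [math.exp(-0.5 * ((p - 0.5) / 0.001) ** 2) for p in grid]
like = [p ** 20 for p in grid]  # likelihood of observing 20 heads

# Unnormalized posterior and its mean.
post = [pr * lk for pr, lk in zip(prior, like)]
z = sum(post)
post_mean = sum(p * w for p, w in zip(grid, post)) / z
print(post_mean)  # slightly above 0.5
```

With such a tight prior the shift is tiny (the posterior mean lands around 0.50004), but it is strictly above 0.5, which is all the argument needs.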

I think the moral is that you should not engage in any games of chance with Alfred under these conditions.
