this isn’t the worst paper in the history of philosophy

Notwithstanding that, Robert Northcott’s paper “A Dilemma for the Doomsday Argument” is (and by a comfortable margin) the worst paper I have reviewed (i.e. trashed) on this blog. In the paper, the following Doom scenario is presented:

“Imagine that a large asteroid…is…heading…towards Earth…astronomers calculate that it has a 0.5 probability of colliding with us, the uncertainty being due to measurement imprecision regarding its exact path…any collision would be certain to destroy all human life…what is now the rational expectation of humanity’s future duration…a 0.5 probability that humanity will last just the few days…(‘Doom-Now’), and a 0.5 probability that it will last however long it would have otherwise. What does DA (Doomsday Argument) say? Either it is deemed to modify these empirical probabilities; or it is not.”

Doomsday practitioners of course would revise the probability of Doom-Now upward significantly. For convenience, let’s say that humanity would otherwise last another million years, with a total of 1 quadrillion people ever living. If we take ourselves to have been sampled uniformly at random from the total population, past, present, and future, then our birth rank of 60 billion or so looks very, very unlikely conditioned on Doom-Later, and much more plausible conditioned on Doom-Now. So DA would revise P(Doom-Now) upwards, close to 1 (a toy calculation below makes this concrete). That’s how the Doomsday argument works. But Northcott writes:

“…according to DA, a priori considerations show that the expected duration for humanity is much greater than just a few days. The probability of Doom-Now should accordingly be modified downwards.”

First of all, the expected duration just is half a million years (0.5 × a few days plus 0.5 × a million years), which is already much greater than a few days. So the first sentence makes no sense. Second, DA argues in favor of Doom-Now, not against it. (That’s why it’s called the Doomsday argument.) A footnote here sheds no light whatsoever:

“True, DA reasoning implies that the single most likely total number of humans is the current number, i.e. 60 billion. But although the mode of the distribution is thus 60 billion, the mean is much greater. Thus, Doom-Now is not favoured.”

Obviously Doom-Now is favored by DA. I mean, of course the expected number of humans is around half a quadrillion; that’s not DA reasoning, that’s just what the empirical probabilities say. But it isn’t relevant, because DA assumes that the world is sampled by objective chance and that you are then sampled uniformly from the observers in that world. The fact that some unlikely worlds have enormous populations doesn’t dilute the worlds that have smallish populations. That’s the whole point of DA. Nor is the gaffe a typo: Northcott goes on thinking that DA revises the probability of Doom-Now downward for the rest of the paper. Part of the paper involves miracles. I won’t go into that, but the publication of this paper in Ratio is proof that they do happen.
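Here is the toy calculation promised above: a minimal sketch in Python, assuming a birth rank of about 60 billion, a total of 1 quadrillion people under Doom-Later, and the standard self-sampling likelihood of 1/N for one’s birth rank when N people ever live. The specific figures are just for illustration, but they show both that the empirically expected total population is enormous and that this does nothing to block the update toward Doom-Now.

```python
# Toy DA update. Assumed numbers: 60 billion people so far, 1 quadrillion total
# if Doom-Later, birth-rank likelihood of 1/N given a total population of N.

prior = {"Doom-Now": 0.5, "Doom-Later": 0.5}        # the astronomers' probabilities
total_people = {"Doom-Now": 60e9, "Doom-Later": 1e15}

# The empirically expected total population is indeed enormous...
expected_total = sum(prior[h] * total_people[h] for h in prior)
print(expected_total)            # ~5e14, i.e. about half a quadrillion

# ...but the posterior on Doom-Now, conditioned on our birth rank, is still
# pushed toward 1, not toward 0. A large mean does not dilute the update.
unnorm = {h: prior[h] / total_people[h] for h in prior}
posterior = {h: unnorm[h] / sum(unnorm.values()) for h in prior}
print(posterior["Doom-Now"])     # ~0.99994
```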

The next passage is not to be believed:

“…an unbiased combined estimate (of the mean) can be achieved via inverse-variance weighting. Roughly, the higher an estimate’s variance, the more uncertain that estimate is, and so the less weight we should put on it. In the DA case, how we balance competing DA and empirical estimates of a probability turns – and must turn – on exactly this issue….Some toy numbers will illustrate. By assumption, the empirical estimate of the asteroid collision’s probability, and thus of Doom-Now’s, is very certain. Suppose that the density function of that estimate is a normal distribution with a mean of 0.5 and, representing the scientists’ high degree of certainty, a small standard deviation of 0.001. Next, suppose initially that for DA the equivalent figures are a mean of 0.001 and the same small standard deviation of 0.001. In this case, because the two variances are the same, so an unbiased estimate of the mean would be midway between the two component estimates of it, i.e. midway between 0.5 and 0.001, i.e. approximately 0.25.”

This is so wrongheaded in so many different ways that I don’t really know where to start, so I will start with what is worst. Yes, there is a method in statistical meta-analysis of taking a weighted average of two estimators to get a third, unbiased estimator of minimum variance among all weighted averages, and yes, it works by taking the inverse variances as weights. But, first, the estimators you start with have to be unbiased themselves. The two “estimators” considered here can’t both be unbiased estimators of the same parameter, because they have different means, and what it means for an estimator to be unbiased is for it to have mean equal to the true value of the parameter being estimated.

Perhaps more troubling, however, is that it’s not at all clear what they could be estimating. The only thing around to estimate is the objective chance that the asteroid hits the earth, which is either zero or one, unless it’s something like “the credence an ideal rational agent would have in this epistemic position”. Surely, though, that would have to be at least mentioned. The next thing that is wrongheaded is just what was mentioned before: the latter mean shouldn’t be .001 but rather something like .999, since DA wants to raise the probability of Doom-Now.

Finally, assuming that p is in fact the credence of an ideal rational agent in the current epistemic situation, and assuming that both the scientists and DA are trying to estimate p, does it not strike the author as odd that the scientists are so damn certain that p is very close to 1/2 while DA is so damn certain that p is very close to .001? The suggested compromise is, perhaps it bears mentioning, about 250 standard deviations away from each party’s own estimate. Normally this would be the moment where the meta-statistician says “hmm…maybe these estimators aren’t estimating the same thing…”.
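For concreteness, here is the arithmetic the quoted passage describes, with its own toy numbers: standard fixed-effect inverse-variance pooling, sketched in Python purely to show what the numbers do, not to endorse treating the two figures as unbiased estimates of a common parameter.

```python
# Inverse-variance weighting: pooled = sum(w_i * x_i) / sum(w_i), with
# w_i = 1 / var_i. Toy numbers from the quoted passage: an "empirical"
# estimate of 0.5 and a "DA" estimate of 0.001, each with sd 0.001.

estimates = [0.5, 0.001]
sds = [0.001, 0.001]

weights = [1.0 / sd ** 2 for sd in sds]
pooled = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
print(pooled)                    # 0.2505, i.e. "midway", since the variances are equal

# The absurdity flagged above: the pooled value sits roughly 250 of each
# party's own standard deviations away from that party's estimate.
for x, sd in zip(estimates, sds):
    print(abs(pooled - x) / sd)  # about 249.5 in both cases
```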

Not that it matters much at this point, but the most amusing passage is this:

“we can calculate a second scenario, with new toy numbers…this time, suppose that for DA the equivalent figures are still a mean of 0.001 but now, say, a standard deviation of 0.1….”

Sorry, but it’s not possible for a credence estimator (which must take on values in [0,1]) to have a mean of .001 and a standard deviation of .1.
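The reason is elementary and worth spelling out: for any random variable X confined to [0,1] with mean mu, X squared is at most X pointwise, so the variance is at most mu(1 - mu). With the paper’s mean of 0.001, that caps the standard deviation at about 0.032. A two-line check:

```python
import math

# For X in [0,1] with mean mu: Var(X) = E[X**2] - mu**2 <= mu - mu**2 = mu * (1 - mu).
mu = 0.001
max_sd = math.sqrt(mu * (1 - mu))
print(max_sd)    # ~0.0316, so a standard deviation of 0.1 is impossible here
```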

Don’t get me wrong. My beef is not with the author, who I assume is not perpetrating a hoax but just sincerely trying to say something important. The question, for me, is not how did this mess come to be written, but how did this mess come to be published in a respectable journal? Ratio isn’t some obscure, fly-by-night outfit. Ernest Sosa, for example, is on the editorial board. In fact, it seems there is a list of 49 “Most popular journals” used by PhilPapers to identify when someone is a “professional author” in philosophy, and Ratio is on it!

http://philpapers.org/journals?listId=1

So perhaps congratulations are in order…by slipping this awful manuscript past the editors of this journal, this author is (if he wasn’t before) now and forevermore a “professional author” of philosophy, meaning that PhilPapers editors shall be obliged to archive every stupid thing he ever writes, even if no one on earth or in heaven will touch it.

Okay…but back to my question. How did this ridiculous manuscript come to be published? The question is intended for the editors of Ratio…I’m inviting a reply. What was the process? Of course, any public reply would just be “we sent it off for double-blind refereeing, got a positive report, and it was voted in by the editors…blah, blah, blah”. So, well…never mind, I guess.

But come on, guys…my job isn’t supposed to be this easy.
