Since the last post was about the Doomsday Argument, I thought it would be a good idea to remind everyone that the Doomsday Argument is quite dead. The correct resolution…SIA (the self-indication assumption)…has been around for a while, but a thorny objection to SIA, explicated in a certain type of thought experiment having several extant versions, including Kierland and Monton’s replicating worlds and Nick Bostrom’s “Presumptuous Philosopher” gedanken, has prevented this solution from gaining universal acceptance. Bostrom’s thought experiment (which I will concentrate on, as it seems to be one of the more clearly elaborated) is formidable and had not been correctly explained away as of about a year ago, when I uploaded the final version of my SB survey to the PhilSci archive:
The relevant refutation is hinted at in footnote 4. It hadn’t to my knowledge appeared previously in the literature, though it does bear resemblance to a 2005 suggestion of Cian Dorr (again appearing in a footnote to an unpublished Sleeping Beauty paper, A Challenge for Halfers). I’ve been writing about it off and on, on this blog, but will give a slightly more developed treatment here. Philosophers have ignored this solution, but it isn’t, as you shall see, especially susceptible to counter-attack. About all one could hope to do would be to undermine the understanding of modality on which it is arguably based. You don’t need to buy into that understanding to buy into the refutation, but the understanding is itself so robust and so immune to paradox that I’m going to couch the refutation inside of it anyway. Indeed, I have for the past year been intending to write at some length about it. I’m apparently lazy, though, and it’s not clear when this will happen, so before too much time passes I will at least say a bit.
Here is how Nick Bostrom introduces the Doomsday Argument (he has a setup to this, but I am skipping to the meat, as it’s pretty clear what’s going on):
“Now we modify the thought experiment a bit. We still have the hundred cubicles but this time they are not painted blue or red. Instead they are numbered from 1 to 100. The numbers are painted on the outside. Then a fair coin is tossed (by God perhaps). If the coin falls heads, one person is created in each cubicle. If the coin falls tails, then persons are only created in cubicles 1 through 10.
You find yourself in one of the cubicles and are asked to guess whether there are ten or one hundred people? Since the number was determined by the flip of a fair coin, and since you haven’t seen how the coin fell and you don’t have any other relevant information, it seems you should believe with 50% probability that it fell heads (and thus that there are a hundred people).
Moreover, you can use the self-sampling assumption to assess the conditional probability of a number between 1 and 10 being painted on your cubicle given how the coin fell. For example, conditional on heads, the probability that the number on your cubicle is between 1 and 10 is 1/10, since one out of ten people will then find themselves there. Conditional on tails, the probability that you are in number 1 through 10 is one; for you then know that everybody is in one of those cubicles.
Suppose that you open the door and discover that you are in cubicle number 7. Again you are asked, how did the coin fall? But now the probability is greater than 50% that it fell tails. For what you are observing is given a higher probability on that hypothesis than on the hypothesis that it fell heads. The precise new probability of tails can be calculated using Bayes’ theorem. It is approximately 91%. So after finding that you are in cubicle number 7, you should think that with 91% probability there are only ten people.”
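Bostrom’s “approximately 91%” checks out. Here is a quick sanity check using Python’s exact fractions; the priors and likelihoods are just the ones from the quoted setup:

```python
from fractions import Fraction

# Prior on the coin, and likelihoods of the observation "I'm in cubicle 7":
# heads -> 100 occupied cubicles, so you're in #7 with chance 1/100;
# tails -> 10 occupied cubicles, so you're in #7 with chance 1/10.
prior = Fraction(1, 2)
p_seven_given_heads = Fraction(1, 100)
p_seven_given_tails = Fraction(1, 10)

# Bayes' theorem:
posterior_tails = (prior * p_seven_given_tails) / (
    prior * p_seven_given_tails + prior * p_seven_given_heads
)
print(posterior_tails, float(posterior_tails))  # 10/11, about 0.909
```

So the exact value is 10/11, which rounds to Bostrom’s 91%.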
It’s the second paragraph that’s troublesome. More naturally you would assign probability 100/110 to 100 cubicles. Not everyone is as likely to be in a cubicle at all if there are only ten of them! There has to be potential for at least 100 people, so it’s easier to think about this if God creates 100 people (including you) either way and simply doesn’t awaken ninety of them in case of tails. Now you might find yourself asleep (in which case, technically, you don’t really “find” yourself at all), but if you find yourself in a cubicle instead, you simply condition on that fact.
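A minimal sketch of that conditioning, under the sleep variant just described (100 people created either way, 90 left asleep on tails):

```python
from fractions import Fraction

# Likelihoods of the observation "I find myself awake in a cubicle":
# heads -> all 100 people are awakened; tails -> only 10 of 100 are.
p_awake_given_heads = Fraction(100, 100)
p_awake_given_tails = Fraction(10, 100)

posterior_heads = (Fraction(1, 2) * p_awake_given_heads) / (
    Fraction(1, 2) * p_awake_given_heads + Fraction(1, 2) * p_awake_given_tails
)
print(posterior_heads)  # 10/11, i.e. 100/110
```

Conditioning on being awake at all delivers exactly the 100/110 credence in heads.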
If you don’t like the sleep angle, here’s another perspective. Presumably, God can have only finitely many templates for persons. For convenience, let’s say he has 1000, and let’s say he tattoos your template number on your foot. So you find yourself in a cubicle. You look at your foot and see “457” tattooed there. What is your information? It’s that there’s a 457 in one of the cubicles, and that confirms 100…for conditional on there being a 457 in one of the cubicles, 100 cubicles is about ten times likelier than 10 cubicles. Or let’s say God doesn’t tattoo the number…that obviously makes no difference. Now the evidence is just that there’s someone just like me in at least one of the cubicles and, conditional on that, 100 cubicles is again about ten times likelier than 10 cubicles.
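The “about ten times likelier” claim in the tattoo version can be checked directly, assuming (a simplification of mine) that each occupant’s template is drawn independently from the 1000:

```python
# Chance that at least one occupant carries template 457,
# with independent uniform draws from 1000 templates.
def p_457_present(n_people, n_templates=1000):
    return 1 - (1 - 1 / n_templates) ** n_people

# Likelihood ratio of the evidence under 100 cubicles vs. 10 cubicles:
ratio = p_457_present(100) / p_457_present(10)
print(round(ratio, 2))  # about 9.56 -- "about ten times likelier"
```

The ratio isn’t exactly ten because a 457 might appear more than once, but it’s close enough to sustain the argument.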
Or, just imagine that the experiment is repeated over and over again. Most of the time you find yourself in a cubicle, it will be with 99 others and not with just 9 others. For if we got everybody together after doing this a huge number of times, and everybody at the party was in a cubicle just as often during a tails run as during a heads run, we’d have a bit of a problem! (Problem being that it’s not possible.)
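The long-run frequency claim is easy to simulate; the repetition count and seed below are arbitrary choices of mine:

```python
import random

random.seed(0)
heads_occupancies = tails_occupancies = 0
for _ in range(100_000):
    if random.random() < 0.5:    # heads: 100 cubicles filled
        heads_occupancies += 100
    else:                        # tails: only 10 cubicles filled
        tails_occupancies += 10

# Of all the times anyone finds themselves in a cubicle, what fraction
# happened during a heads run?
frac = heads_occupancies / (heads_occupancies + tails_occupancies)
print(frac)  # close to 100/110, about 0.909
```

Roughly ten out of every eleven cubicle-occupancy episodes occur in hundred-verses, matching the 100/110 credence.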
Bostrom cautions would-be refuters: “After hearing about (the Doomsday Argument), many people think they know what is wrong with it. But these objections tend to be mutually incompatible, and often they hinge on some simple misunderstanding. Be sure to read the literature before feeling too confident that you have a refutation.”
Okay…so there is one sort of thing in the literature that comes up. First, Bostrom calls the sort of reasoning I was doing just now invocation of a “self-indication assumption”:
(SIA) Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.
Bostrom does not find this assumption compelling:
“SIA may seem quite dubious as a methodological prescription or a purported principle of rationality. Why should reflecting on the fact that you exist rationally compel you to redistribute your credence in favor of hypotheses that say that there are many observers at the expense of those that claim that there are few?… our view is that SIA is no less implausible ultima facie. Probably the most positive thing that can be said on its behalf is that it is one way of getting rid of the counterintuitive effects of the Doomsday argument…”
Since I view SIA as a commonplace, about all I can suggest is that one ignore premature reports of its dubiousness and work through the grounds for it oneself…or, barring that, read carefully the paragraphs I wrote above. SIA is not invented there as an expedient; rather, it’s just the case that as one reasons through some natural assumptions and their obvious consequences, SIA comes out in the wash.
At any rate, the fact that Bostrom doesn’t understand what’s so natural about SIA appears to have made the mind of this “Top one hundred Global Thinker” an environment in which the following tricky thought experiment, purportedly telling against it, could arise:
“It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite and there are a trillion trillion trillion observers. The super-duper symmetry considerations are indifferent as between these two theories. Physicists are preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1!” (whereupon the philosopher runs the argument that appeals to SIA).”
I’m cutting and pasting this stuff, by the way, from a reply to Olum, who attempted to refute Doomsday with SIA. A good idea, but apparently Olum’s grasp of SIA wasn’t so sharp either, for he “bit the bullet” and agreed that SIA commits one to the presumptuous philosopher’s counter-intuitive leap in the above thought experiment.
That the thought experiment is importantly different from the cubicles case that led us to SIA in the first place, however, is clear. In the cubicles case, whether there was a hundred-verse or a ten-verse was contingent. If we repeat the experiment many times, the coin God tosses will land heads roughly half of the time and tails roughly half of the time, so that most of the people who wind up in cubicles will be in hundred-verses rather than ten-verses. In Bostrom’s gedanken, meanwhile, presumption intuitions require that T1 be either necessarily true or necessarily false.
Why do presumption intuitions require necessity? Well, if the matter of T1 vs. T2 were taken to be contingent, the thought experiment would collapse into something analogous to the cubicles thought experiment…half of any world instances in an infinite-repetition multiverse would be T1 worlds or trillion^2-verses, and half would be T2 worlds or trillion^3-verses. Therefore if we were good self-indicators and took ourselves to have been selected uniformly at random from the sequence of all observers in the multiverse of which our world was a part, we would consider it a trillion times likelier that T2 was the case and that we inhabited a trillion^3-verse. Say what you might about us so-called “presumptuous” philosophers–our recommendations would, in such a case, be vindicated in proportion to our numbers.
On a more natural hearing, however, the choice between T1 and T2 is a matter of necessity, and our credence of 1/2 in each merely epistemic. Therefore, assuming that our world is part of an infinite-repetition multiverse, we take it that there is a 1/2 chance that every world in the multiverse is a T1 world (in which case every observer from the multiverse inhabits a trillion^2-verse), and a 1/2 chance that every world in the multiverse is a T2 world (in which case every observer from the multiverse inhabits a trillion^3-verse). So, if we are good self-indicators and take ourselves to have been sampled uniformly at random from the stream of observers in the multiverse of which our world is a part, we will take it that there is a 1/2 chance that the multiverse is a T1 multiverse, in which case T1 is true for us, and a 1/2 chance that the multiverse is a T2 multiverse, in which case T2 is true for us–thus defusing Bostrom’s thought experiment.
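To put numbers on the two readings (the observer counts come from Bostrom’s setup; the variable names and exact powers of ten are mine):

```python
from fractions import Fraction

N1 = 10**24  # observers per T1 world (a trillion trillion)
N2 = 10**36  # observers per T2 world (a trillion trillion trillion)

# Contingent reading: half the worlds in the multiverse are T1, half T2.
# A uniformly sampled observer's odds of sitting in a T2 world:
odds_T2 = Fraction(N2, N1)
print(odds_T2 == 10**12)  # True: T2 comes out a trillion times likelier

# Necessary reading: the whole multiverse is T1 or else the whole
# multiverse is T2, with epistemic probability 1/2 each. Sampling
# observers from within the multiverse can't budge that credence:
credence_T2 = Fraction(1, 2)
print(credence_T2)
```

Only on the contingent reading does self-indication yield the trillion-to-one presumption, and on that reading the presumption is vindicated in proportion to our numbers.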
One might complain that I have cheated by turning the world, stipulated by Bostrom to be finite, into an infinite multiverse. On this reading, “world” means something like “everything that ever was or ever will be”. I can’t speak here for Bostrom of course, but the idea that “everything that ever was or ever will be” could be finite could derive only from a bankrupt (i.e. inadequate to the demands of philosophy) sense of modality. As David Lewis understood so well, philosophy requires some form of modal realism. On the other hand, “there is only one world…the real (i.e. actual) world”–as Bertrand Russell understood so well. These pincers constrain modal thinking rather a lot.
Fortunately, it’s perfectly possible to satisfy both Lewis and Russell. What most philosophers call “the actual world”, the thing that Bostrom invites us to view as finite in this thought experiment, is a small portion of “everything that ever was or ever will be”. (The alternative seems to be that something arose from nothing and will one day go away again, this time, oddly, for good.) It refers, more or less, to a local environment of some sort…perhaps circumscribed by an information-destroying event horizon (Big Bang, Big Crunch…no particular cosmology is implicit here). What philosophers call “counterfactual worlds” meanwhile are very real entities (they existed or will exist or exist now, somewhere). What we use them for mostly, I would claim, is to explicate “objective chance”. When I say that the objective chance that P is one half, what I mean, roughly, is that half of the nearby (in some similarity respect) counterfactual worlds are P worlds and half of them aren’t. (Lewis’s notion that at small similarity distance either all of the nearby worlds might be P worlds or all might be not P worlds…the basis for his treatment of counterfactuals…hasn’t aged particularly well in a scientific environment more conscious of sensitivity to initial conditions. More about this at some point…not now.)
Now, some philosophers may be troubled by the idea of sampling uniformly at random from an infinite sequence…this may smack of trying to take “half of an infinite set”. They should not be worried. These worlds are not, like Lewis’s worlds, completely separated from ours in time and space. Probably you could get from one to the other, albeit in pieces (Big Bangs leave you worse for wear, generally, and squeezing through those tiny string dimensions may leave you a lot thinner), but that doesn’t mean they wouldn’t nevertheless be juxtaposed in time and space, juxtaposition that would allow one to speak of orders, densities (i.e. frequencies) and so, therefore, fractional parts.
Another possible complaint is that I have employed just-so metaphysics to “conveniently” allow myself to apply SIA when and only when it doesn’t offend my intuitions to do so. The charge is, I think, not serious…I’ll concede that I have let my intuitions guide me here, and that, in the end, whatever metaphysics I adopt should be independently plausible. In this case it is; it is natural, in retrospect, to take oneself to have been sampled uniformly at random from a sequence of “real” observers…i.e. observers who are part of “everything that ever was or ever will be”, and not merely “epistemically possible”. After all, we want our counterparts to find vindication for their credences in proportion to their numbers. (The sentence I just typed may be the most important sentence in the whole of philosophy of probability.)
Indeed, this is what got Doomsday going in the first place, as most philosophers have identified “everything that ever was or ever will be” with “the actual” and everything else as “counterfactual”. The more robust modal realism I have proposed identifies “the actual” with something like “everything I might observe with a good telescope, and good microscope, a good space ship and maybe a good time machine (as traditionally conceived in antiquated science fiction, at any rate)”, identifies “the metaphysically possible” with “everything that ever was or will be (actually, was and will be…infinitely many times over)”, and exiles that which is amenable to coherent description but which is never (presumably because it cannot be, with the available stuff) instantiated at all…i.e. that which is merely logically possible.
What makes this preferable is that, now, there is a modal distinction between real but not actual (i.e. counterfactual) observers and unreal observers. What our intuitions rail against in Bostrom’s thought experiment is that T2 might be an utter fiction. We can live with assigning low credence to an event that occurs when we know that our luck was merely bad…that if things had turned out differently we would have fared better and, indeed, for most agents in the same epistemic situation, things do turn out differently.
As a curiosity, I’ll just mention that the modal views I have proposed dissolve Pascal’s wager, McGee’s Airtight Dutch Book, etc. For we don’t think it’s the case that God and heaven are real for some tiny proportion of counterfactual agents. (If we did think that, it would be hard to deny that we should adopt an orthodox lifestyle.) Rather, we think that there is a tiny chance that God and heaven are real for everyone. Moreover, any Godly sacrifices we might have been comfortable with only on account of their finitude get multiplied to infinity after all, given that everything that ever was or will be was and will be infinitely many times over.
What are the prospects for philosophers paying any attention to the above? Recent history suggests “not so good”. True, Bostrom has in the past taken time to engage with upstart Doomsday naysayers even when they’ve had little or nothing compelling to say, but only after they managed to get their papers into good journals. George F. Sowers, for example, wrote in Mind:
“Consider a situation where you are confronted with two large urns. You are informed that one urn holds 10 balls…and the other holds 1,000,000 balls…. You are equipped with a stopwatch and a marker. You first choose one of the urns as your subject. It doesn’t matter which urn is chosen. You start the stopwatch. Each minute you reach into the urn and withdraw a ball. The first ball withdrawn you mark with the number one and set aside. The second ball you mark with the number two. In general, the nth ball withdrawn you mark with the number n. After an arbitrary amount of time has elapsed, you stop the watch and the experiment…. Will there be a probability shift? … If the number drawn exceeds 10, then we can conclude that (the urn has 1,000,000 balls)…. So long as the number drawn is less than 10, however, there is no probability shift….”
While that’s all obviously true, Sowers neglected to say what would happen if the last number drawn was equal to 10. Odd, because he just got done saying “If this thing happens, there’s a shift in a certain direction. If another thing happens, there’s no shift.” You can’t have a unidirectional shift, so obviously something is missing. This isn’t rocket science…it isn’t even earth science, actually. It’s just trichotomy. Numbers can be greater than 10, they can be less than 10…or they can be equal to 10. If the number drawn is equal to 10, you get a shift the other way, because if the last number is 10, that’s probably because you ran out of balls at 10. Sowers wants to set up an analogy between his urn scheme and the Doomsday thought experiment, but there is no Doom scenario analogous to observing a 10 in his urn scheme. The analogy might be more apt if one got put to sleep after 10 in the small-urn case, but since in that case no observation would be recorded much of the time, there would still be work to do.
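To see which way the equal-to-10 case shifts, here is a toy model. The uniform stopping-time distribution is my stipulation, not Sowers’, but any distribution that puts positive weight past minute 10 shifts the same way:

```python
from fractions import Fraction

# Toy model: each urn chosen with probability 1/2; the stopping time T
# is uniform on minutes 1..20 (an assumption for illustration).
# Small urn (10 balls): the last number drawn is 10 whenever T >= 10,
# because the urn runs dry at ball 10.
# Big urn: the last number drawn is 10 only when T is exactly 10.
p_last_10_given_small = Fraction(11, 20)  # T in {10, ..., 20}
p_last_10_given_big = Fraction(1, 20)     # T = 10

posterior_small = (Fraction(1, 2) * p_last_10_given_small) / (
    Fraction(1, 2) * p_last_10_given_small
    + Fraction(1, 2) * p_last_10_given_big
)
print(posterior_small)  # 11/12: a shift *toward* the small urn
```

So observing exactly 10 shifts credence heavily toward the 10-ball urn…the missing third branch of Sowers’ trichotomy.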
According to Bostrom (see his reply at http://www.anthropic-principle.com/preprints/sowers/beyond.pdf), Sowers makes some even crazier moves later. So far as I can tell, Bostrom’s reply is very apt…I’m convinced that Sowers has no refutation of Doomsday to speak of. Be warned, however, that Bostrom’s paper degenerates in its final section to his favored selection-effects (later double-halfer) stuff…stuff that leads to gratuitous violations of reflection (as shown by Cian Dorr and some others), endorses the Monty Hall fallacy, etc. (So much the worse for it.)
At any rate, to recapitulate (and perhaps elaborate on) the above: my position is that self-indication is a commonplace, and that its employment by rational agents is a truism. This post is not an attempt to make a knockdown argument in favor of self-indication, but to answer Bostrom’s Presumptuous Philosopher objection to self-indication. It is true that I can’t relate to Bostrom’s attitudes about self-indication…I find self-indication neither surprising nor counter-intuitive, and I certainly don’t believe it’s motivated solely by a desire to cancel the Doomsday Argument. But, although I have discussed why I think it “comes out in the wash” independently, I don’t know how to convince people like Bostrom that self-indication is proper for rational agents. I don’t have an original argument for it–such sketches of arguments as I alluded to have been developed more fully by others–and therefore I don’t have an original argument against Doomsday. What I do have is an original refutation of Bostrom’s preferred take on the Presumptuous Philosopher example. That’s what’s here.
I should perhaps also mention (even though I claim not to write about Sleeping Beauty anymore) that my primary disagreement with the philosophical community about SB concerned its failure to notice that Lewis is a self-indicator, and is therefore immune to any argument for thirding that establishes only SIA. This renders most extant arguments for thirding useless against Lewis. Indeed, if you go back up and look at the three informal “arguments” I gave in favor of self-indication after quoting Bostrom’s presentation of DA, you will see that they are, in essence, Horgan’s thirding argument, Dorr’s argument (the one with the skylight) against Roger White’s and other double-halfer schemes, and the frequency argument for thirding, respectively. Lewis self-indicates but employs a (fairly standard) sample-weight bias-correction technique. Halfer schemes that respect SIA (Lewis’s and that of Patrick Hawley) are the proper target for thirders, but thirders have ignored them completely, which is tantamount to begging the question.
But, I should probably not get started about SB.