Is the Presumptuous Philosopher really overconfident?
How convincing is the best argument against SIA?
There are two main theories of anthropic reasoning, the practice of taking your own existence into account when evaluating probabilities. The first theory is the self-sampling assumption (SSA), which says that you should reason as if you are randomly selected from the group of all actually existing observers in your reference class. It’s unclear what a reference class is actually supposed to be - the only principled answers seem to be to include all observers, period, or only the observers subjectively indistinguishable from you, but the latter would imply that your reference class changes over time, which leads to big problems.

As an example of the self-sampling assumption, suppose you learn that before the universe was created, God flipped a fair coin. If it landed heads, he created only one inhabited planet. If it landed tails, he created two. In each case, the planets would have the same population. SSA says that you should conclude that there is a 1/2 chance that the coin landed heads, since in either case, you would be randomly selected from the set of all existing observers, so your existence provides no evidence either way.
On the other hand, the self-indication assumption (SIA) says that you should reason as if you are randomly selected from the set of all possible observers (weighted by the prior likelihood of each possible observer existing). More intuitively, it says that you should update your probabilities based on your own existence so that outcomes where more people like you exist are rendered more likely. Unlike SSA, SIA does not depend on the reference class. In the coin flip example above, SIA says there’s a 1/3 chance that the coin landed heads, since you were twice as likely to exist if it landed tails.
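The coin-flip verdicts of the two principles come out of an ordinary Bayesian update; the only difference is the likelihood each assigns to “I exist.” Here is a minimal sketch of that calculation (my own illustration, with observer counts normalized to one per planet):

```python
# Sanity check of the coin-flip example: heads -> 1 planet, tails -> 2,
# equal populations. Exact fractions avoid floating-point noise.
from fractions import Fraction

prior = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
n_observers = {"heads": 1, "tails": 2}  # one observer per planet, normalized

# SSA: you are drawn from actually existing observers, so "I exist" has
# likelihood 1 under either outcome -- no update.
ssa_unnorm = {h: prior[h] * 1 for h in prior}
ssa_total = sum(ssa_unnorm.values())
ssa = {h: p / ssa_total for h, p in ssa_unnorm.items()}

# SIA: the likelihood of your existence scales with the number of observers.
sia_unnorm = {h: prior[h] * n_observers[h] for h in prior}
sia_total = sum(sia_unnorm.values())
sia = {h: p / sia_total for h, p in sia_unnorm.items()}

print(ssa["heads"])  # 1/2
print(sia["heads"])  # 1/3
```

The same mechanics drive every example below: SIA multiplies each hypothesis’s prior by its observer count and renormalizes, while SSA leaves the prior untouched.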
On purely theoretical grounds, SIA seems to me like a way better option than SSA. It doesn’t make the probability dependent on an arbitrarily defined reference class. And it seems completely obvious that your own existence would be more likely on a hypothesis where many people exist, all else being equal. SSA treats your existence as 100% certain as long as at least one person exists, since you are randomly selected to be one of those people no matter what, while SIA treats the probability of your existence as linear in the number of people who exist and updates on that in the normal Bayesian way. SSA also arbitrarily treats the boundary between zero people existing and one person existing as different from the boundary between N and N+1 people for any other N: According to SSA, you can use your existence to update against hypotheses that predict that no one at all should exist, but not against those that predict that fewer people should exist. SIA denies this asymmetry.
There are also plenty of specific-case-based arguments for SIA. My favorite is the Lazy Adam thought experiment. But proponents of SSA do have one argument against SIA: It seems to imply an irrationally high certainty in hypotheses that predict a large number of people existing. The classic example is the Presumptuous Philosopher: Imagine that scientists have somehow narrowed down the theory of everything to two options. Theory A predicts that only one inhabited planet exists, while Theory B predicts a trillion (all with the same population in either case). Neither says anything more specific about what the inhabited planet(s) will be like, and both theories are equally simple and equally well-confirmed on experimental grounds. They’re also equal on whatever other theoretical virtues you think should affect their prior probabilities. SSA proponents say you should give each theory a 50% probability, but SIA says that Theory B has a-trillion-to-one odds of being correct. SSA proponents think that a philosopher who believes in SIA would be crazy to hold this credence. It would force him to, for example, assume that an experimental error must have occurred if scientists come up with a result showing that Theory A is true.
Even worse is the case where Theory B predicts that infinitely many inhabited planets exist, instead of just a trillion. Now the Presumptuous Philosopher must put infinity-to-one odds on Theory B - in other words, he believes Theory B with probability 1.[1] This infinite presumptuousness seems beyond the pale. Doesn’t it violate Cromwell’s Rule?
I will admit that the Presumptuous Philosopher result disconcerts me, especially the Infinitely Presumptuous Philosopher. But I can’t help but wonder if this scenario is pumping our intuitions in the wrong direction. When discussing the problem, we consider it from an outside perspective. Based on the information available to us, there is a 50% chance of either theory being correct regardless of which anthropic principle is true, because we don’t live in the presumptuous philosopher’s world, so we don’t make the anthropic update on our own existence. And this perhaps makes us fail to take into account the anthropic evidence that the philosopher should get from his own existence, given that he has actually observed himself to exist in the real world.
The problem also implicitly asks us to imagine that there would be a presumptuous philosopher regardless of which theory is true - otherwise, the intuition that both theories are equally likely doesn’t make sense. But presumably a presumptuous philosopher is more likely to exist according to Theory B, since there are more total people who might end up being a presumptuous philosopher. So it should at least seem plausible that Theory B is more likely than Theory A, given the fact of a presumptuous philosopher’s existence. From the presumptuous philosopher’s own perspective, then, is it really so weird that he would take the indexical fact of his existence, specifically (rather than just the existence of a presumptuous philosopher), to be even stronger evidence for Theory B? It should, after all, be much less likely that any given person ends up being him, specifically, than that they end up being just any presumptuous philosopher.
Now, the thought experiment can be explicitly rephrased so that a presumptuous philosopher is guaranteed to exist no matter what, without changing SIA’s verdict - for example, by positing that the planet that Theory A predicts and all the trillion planets Theory B predicts are identical. But even this phrasing exposes a flaw in the thought experiment’s framing. In either case, the thought experiment focuses our attention on a single presumptuous philosopher even though, if Theory B is true, there are a trillion presumptuous philosophers. But who is this presumptuous philosopher that we’re focusing on? Well, since we’re not told anything more about him, he must be one randomly selected from the set of all presumptuous philosophers. But, hold on, that means the thought experiment’s framing is directing our focus in a very peculiar way. Our external perspective is viewing the situation in the exact same way that SSA (with the reference class of “all presumptuous philosophers”) says that the perspective of an actual observer works. But that seems like rigging the thought experiment in SSA’s favor! Our attention has been manipulated to focus on the only presumptuous philosopher in the Theory A world and on only one of the trillion presumptuous philosophers in the Theory B world, in order to make it seem like SSA’s verdict is correct.
To get around this, proponents of SSA might want to redesign the thought experiment so that only one presumptuous philosopher exists in the whole universe, regardless of which theory is true. But this won’t work, because SIA says that the probability of each theory is 1/2 in that case. It is no longer presumptuous. The presumptuous philosopher result requires that a Theory B world has a trillion times as many presumptuous philosophers as a Theory A world, in expectation.
But it gets even worse. Because in this modified scenario, it’s SSA that becomes presumptuous! According to SSA, the presumptuous philosopher, upon learning that both theories predict that only one philosopher like him exists, but that Theory B predicts that a trillion times as many people would exist as Theory A, should reason as follows: “According to Theory B, I’m a trillion times less likely to be the presumptuous philosopher than I am according to Theory A, since I’m randomly selected from a group a trillion times larger that still contains only one such philosopher. Therefore, Theory A is a trillion times more likely.”[2] It’s not clear, then, why the Presumptuous Philosopher is supposed to be an argument against SIA and in favor of SSA when SSA also implies presumptuousness!
There are other cases where SSA implies presumptuousness too. Imagine that Theory A predicts that only one inhabited planet exists, and it has carbon-based life. Theory B predicts that the same carbon-based-life planet exists, but that a planet with silicon-based life also exists and has a population a trillion times larger.[3] SIA is indifferent between the two theories, but SSA says that carbon-based life forms should put trillion-to-one odds on Theory A. And both this and the previous version can be extended to infinite versions where the SSAer would hold Theory A to be true with probability 1. There’s also an example involving terrifying human-lobster hybrids, in which an SSAer must have irrational certainty about whether certain beings fit the criteria for being inside their reference class (and this argument has to be right, since it involves terrifying human-lobster hybrids).
Maybe the SSAer will bite the bullet and say that these cases of presumptuousness are intuitive to them, while the SIA ones are not. But it’s not clear what makes these cases any better and why the SIAer can’t just say the opposite. So if Presumptuous Philosopher arguments really are good arguments against anthropic principles, then perhaps we should conclude that both theories are wrong and the correct method of anthropic reasoning is some as-yet undiscovered third option. I think this is probably more likely to be correct than straight-up SSA, given all the other problems SSA has.
The Confidence in SIA Objection
There’s another problem with this thought experiment, which is that, well, an SIAer shouldn’t actually have a-trillion-to-one odds on Theory B. That would only be true for someone who was absolutely sure about SIA, or at least, someone who thinks that SIA has far less than a 1 in a trillion chance of being wrong. I think SIA is the best theory of anthropic reasoning, but I’m not that confident in it - that would be excessive. I’m not even trillion-to-one confident that there is a correct theory of anthropic reasoning at all, let alone that there is one and that it’s SIA. But what happens if we modify the Presumptuous Philosopher experiment to account for this logical uncertainty? Say the philosopher is 99% confident in SIA and 1% confident in SSA. Then, even in the infinite case, the philosopher will only be 99.5% certain of Theory B. That’s still a very high credence, but a 1 in 200 chance can easily be turned into >50% by scientific evidence, so the charge that the philosopher would irrationally refuse to change his mind in light of new evidence is false.
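The 99.5% figure is just the law of total probability over which anthropic theory is correct. A quick sketch (my own illustration, using the 99%/1% split and the infinite case, where SIA assigns Theory B probability 1 and SSA assigns it 1/2):

```python
# Credence in Theory B under uncertainty about which anthropic theory is right.
from fractions import Fraction

p_sia = Fraction(99, 100)      # credence in SIA
p_ssa = Fraction(1, 100)       # credence in SSA
p_b_given_sia = Fraction(1)    # infinite case: SIA is certain of Theory B
p_b_given_ssa = Fraction(1, 2) # SSA makes no anthropic update

# Law of total probability over the two anthropic theories.
p_b = p_sia * p_b_given_sia + p_ssa * p_b_given_ssa
print(p_b)  # 199/200, i.e. 99.5%
```

So even infinite in-theory confidence gets capped by the philosopher’s residual doubt about SIA itself.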
I think this is a big problem, because it’s not clear that our intuition of the philosopher’s irrationality isn’t just coming from the fact that he’s irrationally certain of SIA. After all, the philosopher is right that, if SIA is true, the probability of Theory A is 1/1,000,000,000,001, so what other source could his irrationality possibly have? He can’t be misjudging the probability that SIA gives, so the only way he could be misjudging the probability of Theory A is by misjudging the rational credence he should give to SIA itself. Opponents of SIA think he’s misjudging it by believing in SIA at all, but SIA proponents can just say that the most reasonable credence to have in SIA is >50% but lower than 100%. After all, we should acknowledge the fallibility in our own reasoning, and thus, the epistemic possibility that we could be wrong about SIA. The presumptuous philosopher is misjudging the reliability of his own reasoning by assigning SIA 100% credence.
If a real SIA proponent were faced with the presumptuous philosopher’s situation, but then scientific evidence for Theory A started pouring in, they would probably just abandon their belief in SIA. After all, the scientific evidence for Theory A is really unlikely if SIA is true, so it’s reasonable to perform a Bayesian update away from SIA upon observing it. But the mere fact that we would change our minds if faced with overwhelming evidence is no reason to change them now, when we have no such evidence. I can believe that confidence in SIA would be unwarranted in some situations while still believing that SIA is true. After all, if SIA is true, those situations where it would be unwarranted are very unlikely.
The Multiverse Objection
Suppose that instead of a single universe where either Theory A or Theory B is true, there are multiple universes created by some mechanism that has a 50% chance of making Theory A or Theory B true in each universe. Scientists have discovered this but have not yet verified whether Theory A or Theory B is true in their own universe. A presumptuous philosopher says there’s no reason to do the experiment: According to SIA, there’s a 1,000,000,000,000/1,000,000,000,001 chance that they’re in a Theory B world. The scientists balk at this. Surely this guy can’t be that confident without any physical evidence - this SIA principle must be BS. Then another philosopher comes along. He agrees that SIA is BS - he’s an SSAer. And according to SSA, there’s a 1,000,000,000,000/1,000,000,000,001 chance that they’re in a Theory B world. He doesn’t think there’s any need for an experiment either.
It’s easy to see why this is the correct verdict under SSA. After all, in this multiverse example, as long as there are enough universes, there will be a trillion times as many observers in Theory B universes as in Theory A ones (in expectation). So SSA says you’re a trillion times more likely to be in a Theory B universe. But this is really strange, isn’t it? All we did was duplicate the original Presumptuous Philosopher scenario a bunch of times by stipulating that there’s a multiverse, and that the scenario plays out in a bunch of different universes, and somehow that changed the probability that SSA gives.
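The SSA bookkeeping can be made explicit with a quick sketch (my own illustration; the universe count and per-universe population are arbitrary normalizations, and the universe count cancels out of the answer):

```python
# SSA in the multiverse version: sample a random observer from all universes.
from fractions import Fraction

T = 10**12          # observers in a Theory B universe per Theory A observer
n_universes = 1000  # arbitrary even number; half A, half B in expectation
pop_a = 1           # normalized population of a Theory A universe

observers_a = (n_universes // 2) * pop_a      # observers in Theory A worlds
observers_b = (n_universes // 2) * pop_a * T  # observers in Theory B worlds

# Probability a randomly selected actual observer is in a Theory B universe.
p_b = Fraction(observers_b, observers_a + observers_b)
print(p_b)  # 1000000000000/1000000000001
```

Note that `n_universes` divides out entirely, which is the oddity flagged above: duplicating the scenario shouldn’t change anything, yet it moves SSA from 1/2 to trillion-to-one.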
If we agree with the trillion-to-one verdict in this case, and it seems that we must, then we should also agree with it in the original case. It doesn’t make a difference to the probability whether the same situation is playing out in other universes. Thus, the presumptuous philosopher is right to say that the odds of Theory B are a trillion to one. Of course, he can still have a lower credence than that based on uncertainty as to whether this argument works, but his reasoning appears to be correct, and thus, not a point against SIA.
Conclusion
The Presumptuous Philosopher argument is probably the best argument against SIA.[4] But there are four powerful objections to it:
1. The argument pumps our intuitions in the wrong direction by having us view the situation from an external perspective that has selection bias over which presumptuous philosophers we think about. This selection bias reflects the kind of observer selection effect that SSA thinks exists, so the framing is biased in favor of SSA. This can be considered a debunking explanation of the intuition that the philosopher is overconfident.
2. The thought experiment doesn’t actually favor SSA over SIA. Thus, if it proves anything, it proves that some third principle is right.
3. The argument assumes that the philosopher is 100% confident in SIA. Real people are not 100% confident in their views on anthropic reasoning, so this may be the source of the perceived overconfidence, rather than SIA itself.
4. The multiverse argument seems to demonstrate that the presumptuous philosopher’s reasoning is correct, so he isn’t presumptuous after all.
There’s a broader point here, which relates to the third objection: I think part of the intuition that the philosopher is irrational just comes from the fact that we don’t like attributing extreme odds to anything. I have to wonder whether, when probability theory was first developed, someone who had thought of this argument might have objected to it by imagining a “presumptuous statistician” who says that he doesn’t even need to check what the dice rolled because he’s so certain that you didn’t roll twelve tens in a row on a d10 - a one-in-a-trillion event. It seems like we just don’t like the idea of being super confident, on the basis of argument alone, in something we could check against physical evidence.
Given these problems, I don’t think the Presumptuous Philosopher is a decisive objection against SIA, and it certainly doesn’t prove SSA correct. SSA proponents are thus unwarranted in using it to dismiss the alternative view.
[1] Not absolute certainty, but almost-certainty.
[2] This assumes that all the people predicted to exist on either theory are in the presumptuous philosopher’s reference class. But unless the reference class is defined very narrowly, we can just specify that both theories predict that all the existing people will be in the same reference class as him, without changing the thought experiment.
[3] If you think silicon-based life would be outside of your reference class, just replace carbon- vs. silicon-based life with some other difference that you would consider to be within your reference class.
[4] Although I have another argument against SIA that I don’t think anyone has made before, which I plan to publish in a future post. Hopefully, I’ll get some comments on that to see how successful it is.