Thursday, September 22, 2016

Observer selection effects from scratch

Suppose that I have only three theories T0, T1, T2, describing three possible worlds W0, W1, and W2. Now, suppose that I observe X, and suppose that the following is true:
  • In W0, there are no observers of X.
  • In W1, there is one observer of X.
  • In W2, there are 100 observers of X.
What should I now believe about my theories? Should my beliefs be sensitive to how many observers of X there are in each world?

It seems pretty clear to me that I shouldn't believe T0, since it's not compatible with my observation of X; that's a minimal level at which my beliefs should be sensitive to the number of observers of X. A way of justifying this is to cash out "I believe in Tn" to mean "I believe I am in Wn", or "I believe that my future observations will be consistent with Tn". Then "I observe X" and "In W0, there are no observers of X" come together to imply "It's not possible that I'm in W0" and hence "I don't believe T0".
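
To make that step concrete, here's a minimal Bayesian sketch (the toy priors and names are mine): start with equal priors on the three theories and update on the bare fact that X is observed by someone. W0 contains no observers of X, so its likelihood is zero and it drops out:

    # A minimal sketch, assuming equal priors and updating on the bare
    # fact "someone observes X" (probability 0 in W0, 1 in W1 and W2).
    priors = {"T0": 1/3, "T1": 1/3, "T2": 1/3}
    likelihood = {"T0": 0.0, "T1": 1.0, "T2": 1.0}

    unnormalized = {t: priors[t] * likelihood[t] for t in priors}
    total = sum(unnormalized.values())
    posterior = {t: p / total for t, p in unnormalized.items()}
    print(posterior)  # {'T0': 0.0, 'T1': 0.5, 'T2': 0.5}

Note that this update alone leaves T1 and T2 tied, which is exactly the next question.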

What should I think about T1 and T2, though? It's still possible that I'm in either one of their worlds, so I'll believe both of them to some extent. Should I believe one of them more than the other? (Let's assume that T1 and T2 were equally plausible to me before this whole thing started.)

Pretty solid ground so far; now things get shaky.

Let's think about the 101 possible observers distributed between W1 and W2. I think it's meaningful to ask which of them I believe I am; after all, which one I am could imply differences in my future observations.

Nothing about my observation X favors any of these observers over any other, so I don't see how I can believe I'm more likely to be one of them than another; i.e., I should have equal credence that I'm any one of those observers.

This implies that I should think it's 100 times more likely that I'm in W2 than in W1, since 100 equally likely observers-of-X live in W2 and only one observer-of-X lives in W1. I should think T2 is much more likely than T1. This answers the original question of this blog post.
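
Spelled out as arithmetic (a sketch of the counting above, not an established result): with equal priors, giving equal credence to each of the 101 candidate observers makes each theory's posterior proportional to its number of observers of X:

    # A sketch of the observer-counting step: equal priors on T1 and T2,
    # weighted by how many observers of X each world contains.
    prior = {"T1": 0.5, "T2": 0.5}
    observers = {"T1": 1, "T2": 100}

    unnormalized = {t: prior[t] * observers[t] for t in observers}
    total = sum(unnormalized.values())
    posterior = {t: w / total for t, w in unnormalized.items()}
    print(posterior)  # {'T1': 0.0099..., 'T2': 0.9900...}, i.e. 100:1 for T2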

However, that means that if I'm considering two cosmological theories, and one of them predicts that there are billions of copies of me having the experience I'm having now, I should believe that it's very likely that that theory is true (all else equal). It's weird that I can have that kind of belief about a scientific theory while I'm just sitting in my armchair. (Nick Bostrom calls this "The Presumptuous Philosopher", and thinks you shouldn't reason this way.)

So, it seems like we have to pick one of these weird things:
  1. It's nonsensical to have beliefs about which possible observer I am (even if which observer I am implies differences in my future observations).
  2. Something besides my observations and my prior belief in theories of the world should affect my beliefs (in theories of the world, or in which observer I am).
  3. Just by sitting in my armchair and thinking, I can come to strong, justified beliefs about cosmological theories based solely on how many people-thinking-in-armchairs they contain.
  4. I've made some other mistake in my reasoning; like, my account of theories and worlds is wrong, or I'm not thinking carefully enough about what it means to be an observer, or I'm not thinking clearly about normative principles around beliefs, or something else. (Actually, me making a mistake wouldn't be so weird.)
?!

I tend to lean toward 3 (well, if I assume 4 isn't true), but smart people disagree with me, and it's kind of a crazy thing to believe. It could also mean that we're Boltzmann brains, though I'm not sure. See also this paper.

---

Addendum: consider this similarly plausible-sounding reasoning:
  1. "I observe X" just means "there exists an observer of X".
  2. "There exists an observer of X" rules out T0, but not T1 or T2.
  3. "There exists an observer of X" doesn't favor T1 or T2.
  4. All else equal, I should have equal belief in T1 and T2.
I think this reasoning is too weak, and leaves out some implications. "I observe X" implies "there exists an observer of X", but I'd argue that it implies some additional things: it has implications about what I should believe I'll observe in the future (not just what some existing observer will observe), what theories I should believe are true (not just some observer), and which observers I should believe I could possibly be (ditto). Maybe I should redo my earlier reasoning in terms of expected observations and see what happens? 
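
For contrast, here's the addendum's weaker update next to the earlier observer-counting update, on the same toy numbers (an illustration of the tension, not a resolution of it):

    # Two candidate update rules on the same toy numbers.
    prior = {"T1": 0.5, "T2": 0.5}
    observers = {"T1": 1, "T2": 100}

    # Rule 1 (the addendum): "I observe X" just means "someone observes X",
    # so the likelihood is 1 for any world with at least one observer of X.
    exists = {t: 1.0 if observers[t] >= 1 else 0.0 for t in observers}

    # Rule 2 (the earlier reasoning): weight each world by its number of
    # observers of X, i.e. equal credence over all 101 candidates.
    counts = {t: float(observers[t]) for t in observers}

    for name, lik in [("someone-observes", exists), ("observer-counting", counts)]:
        unnorm = {t: prior[t] * lik[t] for t in observers}
        total = sum(unnorm.values())
        print(name, {t: round(w / total, 4) for t, w in unnorm.items()})
    # someone-observes {'T1': 0.5, 'T2': 0.5}
    # observer-counting {'T1': 0.0099, 'T2': 0.9901}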

3 comments:

  1. this feels like stats and monty hall (especially the Sleeping Beauty problem) and therefore should be solved. (... continues reading Bostrom's paper...) oh gosh it's totally not.

    hold on wait sleeping beauty: the 1/3 view is totally right.
    ... the 1/2 view - then 2/3 if she's told it's monday? what the hell?
    ... but then the 1/3 view leads to exactly what you're saying above, right? And to foolish extremes. Like, I can just armchair say "well imagine a trillion billion worlds where superintelligent aliens have formed the Dan Truman Show, and I'm more likely to be in one of them, so... good morning, and if I don't see you, good afternoon, good evening, and good night."
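
    (quick toy simulation of the frequency version of the 1/3 view, if it helps: count awakenings rather than coin flips)

        # Toy Sleeping Beauty: heads = one awakening (Monday),
        # tails = two awakenings (Monday and Tuesday). What fraction
        # of awakenings happen after heads?
        import random

        heads_awakenings = 0
        total_awakenings = 0
        for _ in range(100_000):
            if random.random() < 0.5:   # heads
                heads_awakenings += 1
                total_awakenings += 1
            else:                       # tails
                total_awakenings += 2
        print(heads_awakenings / total_awakenings)  # ~ 1/3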

    ... I have so many questions about Sleeping Beauty and since you're coming down here in a couple weeks I will ask you them then (if you don't terribly mind getting me up to speed; I am getting there but there are still some knots in my brain and I should probably do other work now)

  2. Hey! This was interesting—thanks!

    This might not make sense, or might not be applicable, but one quick thought:

    I’m not sure it makes sense to assume that nothing about your observation X favors any of these observers over any other (out of the 101 possible observers distributed among W1 and W2). In the abstract, having not defined X at all, X does not have any relevant features, but if you were considering a *particular* observation X, I think that it’s at least somewhat plausible that the *particular* features of that observation could come into play in a way that might make a difference to which world you should believe you’re in.

    For instance, let’s say observation A is internally focused (e.g., that you are experiencing a particular flavor of happy right now) and observation B is externally focused (e.g., that apples appear to fall downward).

    For observation B, it seems that some relevant factors would be how available you think the information is to make that observation (e.g., are there apple trees near other people), how likely it is that other people are paying attention, etc. If it seems reasonable that others could be observers too, it seems like that should tilt you more towards believing that you are in W2. I’m not sure, but it seems plausible to me that you should think it’s more than 100 times more likely that you’re in W2 than W1.

    I think observation A gets a bit trickier, but I have this inkling that how the particulars of observation A should tilt you will involve separate prior beliefs, e.g. how likely you already think it is that there are a bunch of copies of yourself having the same experience.
