
Sunday, September 25, 2016

Bostrom's observation equation

Nick Bostrom's book "Anthropic Bias" seems to be the most thorough examination of observation selection effects around. However, I don't really understand the reasoning method he proposes at the end of the book (Chapter 10) and in this paper. So, let's work through it.

(Note: I started this post not understanding the observation equation, but now I feel like I do. +1 for writing to understand!)

First, Bostrom states the Strong Self-Sampling Assumption (SSSA), an informal rule for reasoning that he thinks is correct:
(SSSA) Each observer-moment should reason as if it were randomly selected from the set of all observer-moments in its reference class.
Sounds pretty good to me, but the devil's in the details -- in particular, what is a reference class?

Bostrom offers an "observation equation" formalizing SSSA. Suppose an observer-moment \(m\) has evidence \(e\) and is considering hypothesis \(h\). Bostrom proposes this rule for \(m\)'s belief:
\[P(h|e) = \frac{1}{\gamma}\sum_{o\in O_h\cap O_e}{\frac{P(w_o)}{|O_o\cap O(w_o)|}}\]
Okay, what does this mean? Ignore \(\gamma\) for now; it's a normalizing constant, depending only on \(e\), that makes sure the probabilities add up to 1, I think. \(O_h\) is the set of observer-moments that are consistent with hypothesis \(h\), and \(O_e\) is the set of observer-moments that have evidence \(e\). So, what we're doing is looking at each observer-moment \(o\) with evidence \(e\) where hypothesis \(h\) is actually true, and adding up the probabilities of the worlds those \(o\) live in, each divided by the number of observer-moments in that world that belong to \(o\)'s "reference class" \(O_o\) -- which we still haven't defined.

Now let's look at the normalization constant:
\[\gamma = \sum_{o\in O_e}{\frac{P(w_o)}{|O_o \cap O(w_o)|}}\]
This is pretty similar to the above, but it sums over all observer-moments that have evidence \(e\). Notice that the inside of the sum is the same function of \(o\) as in the first equation, so I think we can sensibly pull it out into its own function, which semantically is something like the prior probability of "being" each observer-moment:
\[P(o) = \frac{P(w_o)}{|O_o\cap O(w_o)|}\]
For each observer-moment, the prior probability of "being" it is the prior probability of its world, split equally among all observer-moments in that world that are in the same "reference class". This in turn lets us rewrite the observation equation as:
\[P(h|e) = \frac{\sum\limits_{o\in O_h\cap O_e}{P(o)}} {\sum\limits_{o\in O_e}{P(o)}}\]
This is useful, because it makes it clear that this is basically the formula for conditional probability!
\[P(h|e) = \frac{P(\text{observe }e\text{ and }h\text{ is true})}{P(\text{observe }e)}\]
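To check my understanding, here's a minimal sketch of the observation equation on a made-up toy model. The worlds, priors, observer-moments, and reference-class labels are all invented for illustration; nothing below comes from Bostrom beyond the formula itself.

# Toy model of Bostrom's observation equation (all numbers invented).

# Prior probability of each possible world.
P_world = {"w1": 0.5, "w2": 0.5}

# Observer-moments: (name, world, has_evidence_e, reference_class_label).
observer_moments = [
    ("a", "w1", True,  "human"),
    ("b", "w2", True,  "human"),
    ("c", "w2", True,  "human"),
    ("d", "w2", False, "human"),
]

def prior(o):
    # P(o) = P(w_o) / |O_o ∩ O(w_o)|: the world's prior, split equally among
    # the observer-moments in that world that share o's reference class.
    peers = [m for m in observer_moments if m[1] == o[1] and m[3] == o[3]]
    return P_world[o[1]] / len(peers)

def posterior(hypothesis_worlds):
    # P(h|e): sum of P(o) over observer-moments with evidence e that live in
    # worlds where h is true, normalized over all observer-moments with e.
    with_e = [o for o in observer_moments if o[2]]
    numerator = sum(prior(o) for o in with_e if o[1] in hypothesis_worlds)
    return numerator / sum(prior(o) for o in with_e)

print(posterior({"w2"}))  # ≈ 0.4: w2 has two of the three e-observer-moments,
                          # but each carries less weight (0.5/3 vs 0.5).

The only real design choice here is to treat reference classes as literal labels attached to observer-moments, which is exactly the part Bostrom leaves open.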
So, now I feel like I understand how Bostrom's observation equation works. I expect that I'll mostly be arguing in the future about whether \(P(o)\) is defined correctly, and I still need to come back to what exactly an observer's "reference class" is. Spoiler: Bostrom doesn't pin down reference classes precisely, and he thinks there are a variety of choices.

Thursday, September 22, 2016

Observer selection effects from scratch

Suppose that I have only three theories T0, T1, T2, describing three possible worlds W0, W1, and W2. Now, suppose that I observe X, and suppose that the following is true:
  • In W0, there are no observers of X.
  • In W1, there is one observer of X.
  • In W2, there are 100 observers of X.
What should I now believe about my theories? Should my beliefs be sensitive to how many observers of X there are in each world?

It seems pretty clear to me that I shouldn't believe T0, since it's not compatible with my observation of X; that's a minimal level at which my beliefs should be sensitive to the number of observers of X. A way of justifying this is to cash out "I believe in Tn" to mean "I believe I am in Wn", or "I believe that my future observations will be consistent with Tn". Then "I observe X" and "In W0, there are no observers of X" come together to imply "It's not possible that I'm in W0" and hence "I don't believe T0".

What should I think about T1 and T2, though? It's still possible that I'm in either one of their worlds, so I'll believe both of them to some extent. Should I believe one of them more than the other? (Let's assume that T1 and T2 were equally plausible to me before this whole thing started.)

Pretty solid ground so far; now things get shaky.

Let's think about the 101 possible observers distributed among W1 and W2. I think it's meaningful to ask which of those I believe I am; after all, which one I am could imply differences in my future observations.

Nothing about my observation X favors any of these observers over any other, so I don't see how I can believe I'm more likely to be one of them than another one, i.e. I should have equal credence that I'm any one of those observers.

This implies that I should think it's 100 times more likely that I'm in W2 than in W1, since 100 equally likely observers-of-X live in W2 and only one observer-of-X lives in W1. I should think T2 is much more likely than T1. This answers the original question of this blog post.
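Spelled out as a quick calculation (assuming T1 and T2 each started with prior probability 1/2, and spreading my credence evenly over the 101 observers-of-X weighted by those priors):
\[\frac{P(W_2\mid X)}{P(W_1\mid X)} = \frac{\tfrac{1}{2}\cdot 100}{\tfrac{1}{2}\cdot 1} = 100, \qquad P(W_2\mid X) = \frac{100}{101} \approx 0.99\]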

However, that means that if I'm considering two cosmological theories, and one of them predicts that there are billions of copies of me having the experience I'm having now, I should believe that it's very likely that that theory is true (all else equal). It's weird that I can have that kind of belief about a scientific theory while I'm just sitting in my armchair. (Nick Bostrom calls this "The Presumptuous Philosopher", and thinks you shouldn't reason this way.)

So, it seems like we have to pick one of these weird things:
  1. It's nonsensical to have beliefs about which possible observer I am (even if being different observers implies different future observations).
  2. Something besides my observations and my prior belief in theories of the world should affect my beliefs (in theories of the world, or in which observer I am).
  3. Just by sitting in my armchair and thinking, I can come to strong, justified beliefs about cosmological theories based solely on how many people-thinking-in-armchairs they contain.
  4. I've made some other mistake in my reasoning; like, my account of theories and worlds is wrong, or I'm not thinking carefully enough about what it means to be an observer, or I'm not thinking clearly about normative principles around beliefs, or something else. (Actually, me making a mistake wouldn't be so weird.)
?!

I tend to lean toward 3 (well, if I assume 4 isn't true), but smart people disagree with me, and it's kind of a crazy thing to believe. It could also mean that we're Boltzmann brains, though I'm not sure. See also this paper.

---

Addendum: consider this similarly plausible-sounding reasoning:
  1. "I observe X" just means "there exists an observer of X".
  2. "There exists an observer of X" rules out T0, but not T1 or T2.
  3. "There exists an observer of X" doesn't favor T1 or T2.
  4. All else equal, I should have equal belief in T1 and T2.
I think this reasoning is too weak, and leaves out some implications. "I observe X" implies "there exists an observer of X", but I'd argue that it implies some additional things: it has implications about what I should believe I'll observe in the future (not just what some existing observer will observe), which theories I should believe are true (not just which theories some observer should believe), and which observers I should believe I could possibly be (ditto). Maybe I should redo my earlier reasoning in terms of expected observations and see what happens?

Wednesday, September 7, 2016

Boltzmann brains

What if almost everything we thought we knew about our position in the universe was wrong? What if we actually were not members of a species that arose around 200,000 years ago, among life forms that started evolving around 4 billion years ago on a planet that formed 4.5 billion years ago, in a universe that began with a Big Bang around 13.8 billion years ago? A single cosmological discovery that changed all of that would be an amazingly big deal (at least in terms of scientific knowledge -- it might not change what we actually do with our lives).

That's roughly what's at stake with the question of Boltzmann brains -- whether, instead of the picture above, it's more likely that we came into existence a short time ago via random (quantum or thermal) fluctuations during an extremely long quiet period in one of the last ages of the universe. And it's not only our ideas about our position in the universe that are at stake: it's also possible that only my brain arose this way, perhaps a few minutes or seconds ago, meaning that much of what I think is real (other people, places beyond my immediate reach, all of human history, etc.) is not actually physically real.

Now, this sounds suspiciously similar to many radically skeptical arguments, like the brain-in-a-vat thought experiment -- how do you know you're not just a brain in a vat? These arguments are great for an intro-to-philosophy class, but once the shine wears off, they seem a little thin -- what does it really buy us to say "well, you might be a brain in a vat, there might be a deceiving demon, etc.", and what more can we say about these arguments? They might be useful thought experiments for epistemologists who need corner cases to test their ideas of what "knowledge" really is and what we can really know, but they don't feel productive as a way to think about the world. I think the typical arc is to be surprised by these arguments, live with them for a while, and then forget about them, and I think that's fine.

However, I think the Boltzmann brains (BB) argument is importantly different. The BB argument isn't "how do you know you're not a BB", it's "according to some cosmological theories, many BBs will exist, and using some kinds of anthropic reasoning, it's likely that you're one of those BBs." It's as if scientists pointed their telescopes at the sky and saw vast arrays of brains-in-vats; we have (as far as I know) real reasons to take the BB scenario seriously.

I haven't been able to find a comprehensive survey of argumentation around BBs, or even a very rigorous paper that attempts to thoroughly examine the question; it's usually treated as an example or interesting implication in cosmology or philosophy books and papers, as far as I can tell. It looks like it's only been seriously considered for about 20 years, like so many of the ideas that I think are most important.

To be totally honest, I expect the BB argument to fail. I also don't think it's likely to be importantly action-informing; how would I really make decisions differently if I were a BB? However, it's one of a few really big questions about what the world is actually like that I'd really love to see answered. In fact, I think I'll post again to talk more about those big questions -- stay tuned.

I'm playing with the idea of writing and thinking more about BB -- it's an appealing hobby project. If I do, you'll see it here first!