(I told Amanda I'd post some philosophy stuff, but I've spent most of my posts this week on AI because I'm certifiably obsessed. So, here's a philosophy thing, and I'll leave out the application to AI for variety.)
I'm a little perturbed about using Occam's razor as a foundation of epistemology, especially in its computational forms. Here's the kind of reasoning I'm concerned about:
Physics test: If Michael Jordan has a vertical leap of 1.29 m, then what is his takeoff speed and his hang time (total time to move upwards to the peak and then return to the ground)?
Student: According to the simplest explanation, Michael Jordan has formed randomly from thermal or quantum fluctuations, and the small bubble of order he inhabits will collapse back into background heat long before he touches the ground.

You would probably not get extra credit for rigor or consistency in answering this question!
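(For contrast, the answer the teacher presumably wants is just standard projectile kinematics. A quick sketch, assuming g ≈ 9.8 m/s² and ignoring air resistance:)

```python
import math

g = 9.8   # gravitational acceleration, m/s^2
h = 1.29  # peak height of the leap, m

# At the peak, all takeoff kinetic energy has become potential energy,
# so v = sqrt(2 * g * h).
v = math.sqrt(2 * g * h)

# Time rising equals time falling, so hang time is twice the rise time v/g.
t = 2 * v / g

print(f"takeoff speed: {v:.2f} m/s")  # about 5.03 m/s
print(f"hang time: {t:.2f} s")        # about 1.03 s
```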
My basic worry is that the simplest explanation for a set of observations may not fit with any of my normal beliefs about my situation. This is because simple explanations can expand into vast universes, and in these universes there could be many instances of my circumstances (or of something observer-independent, like the Michael Jordan problem above) that are nothing like what I believe my current situation to be: they could be in simulations, part of programs enumerating all possible computations in order, fluctuations of some very long-lasting, near-equilibrium cosmological state, or something stranger.
(I don't think the problem goes away when you consider the set of all explanations compatible with observations, weighted by their simplicity, but I might be wrong.)
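To make the worry concrete, here's a toy sketch of the simplicity-weighted reasoning in question (the hypotheses and their description lengths are invented for illustration): give each hypothesis a prior proportional to 2^(-description length), zero out the ones incompatible with observations, and renormalize. A short, fluctuation-style hypothesis can dominate simply by being more compressible than the "normal" story:

```python
# Toy model of simplicity-weighted inference. All hypotheses and
# description lengths below are made up for illustration.
hypotheses = {
    # name: (description length in bits, compatible with observations?)
    "stable universe with my remembered history": (120, True),
    "brief thermal fluctuation containing this moment": (90, True),
    "universe where my observations never occur": (80, False),
}

# Prior proportional to 2^(-description length); incompatible
# hypotheses get weight zero.
weights = {
    name: 2.0 ** -length if compatible else 0.0
    for name, (length, compatible) in hypotheses.items()
}

# Renormalize over the surviving hypotheses.
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

for name, p in posterior.items():
    print(f"{name}: {p:.6f}")
```

With these made-up numbers the fluctuation hypothesis, being 30 bits shorter, soaks up essentially all of the posterior, which is exactly the kind of conclusion the post is uneasy about.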
Of course, people could just as well have had my complaint when physics was just being discovered; our view of what the universe is and our place in it would probably appear extremely weird to them, violating many of their normal beliefs about their situation. Heck, the implications of quantum physics are weird enough to me now. So maybe I'm just being stubborn, and I should bite Occam's bullet and think that most of my normal beliefs about my situation are wrong.
So why don't I think we should use this kind of reasoning? I could have epistemic or instrumental reasons, I could actually be asking a different question from "what is the most likely explanation", or I could use some kind of anthropic reasoning.
- Epistemic: I don't feel like I really believe that the most likely explanation is that I'm a Boltzmann brain; I feel like I have evidence that says otherwise. However, that evidence could be fabricated, which is a big problem -- I may just have an unjustified belief that I'm not a Boltzmann brain! Should I bite Occam's bullet?
- Instrumental: if I am a Boltzmann brain, the things I do matter only over very small timescales (until the bubble collapses).
- Different question: maybe instead I want to know something like "conditioning on some other assumptions (like that most of my evidence is 'real', whatever that means), what is the most likely explanation?" This actually doesn't seem so bad; it's the most appealing answer to me at the moment.
- Anthropic reasoning: I'm not particularly satisfied with this, because I'd like questions about situations without observers -- e.g. physics problems like the Michael Jordan problem above (well, versions without MJ the observer!) -- to have "reasonable" answers, instead of silly ones. In fact, that might be the most interesting part of this post -- that these problems seem like they can't be answered fully by anthropics, if we want to answer observer-free questions "sensibly".
I do like the idea of re-framing the basic epistemic question ("what is the best explanation for x, and what does this imply we should expect in x's future"), but I'm not sure where to go from there. Perhaps in future posts!