Obama suggested that some Adversary wants people to believe they can't affect politics enough to make efforts worthwhile; that's good for the Adversary because it's already in power, and this belief helps it keep power. I hadn't thought of this, and it's a problem for me, because I don't think I can affect politics enough to make efforts beyond informed voting / possibly donating worthwhile. (I think Obama was mostly telling people "the Adversary wants to deceive you into thinking voting is not worthwhile", but once you entertain the Adversary hypothesis, it's not clear it should be limited to voting.)
This is a game-theoretic problem I haven't seen much analysis of: when an Adversary can partially manipulate your perception of the world, how do you adjust your epistemics to make a good decision? This looks like an intermediate case between Descartes' Demon and a plane dropping leaflets that say "Surrender!" -- I know more about the Adversary than Descartes did, but less than a soldier who's seen the leaflets drop.
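To make the "adjust your epistemics" question concrete, here's a minimal Python sketch of one possible adjustment: if you suspect a discouraging signal might have been planted by the Adversary rather than honestly observed, you can fold that suspicion into a Bayesian update. All the probabilities here are invented for illustration.

```python
# Toy Bayesian model of belief adjustment under possible manipulation.
# Hypothesis H: "my political effort would pay off."
# Observed signal: "effort looks futile." With probability q the signal was
# planted by the Adversary (who always says "futile"); otherwise it's an
# honest but noisy observation. All numbers are invented.

def posterior(prior_h, p_futile_given_h, p_futile_given_not_h, q):
    """P(H | saw 'futile'), allowing the channel to be adversarial."""
    like_h = q + (1 - q) * p_futile_given_h          # P(signal | H)
    like_not_h = q + (1 - q) * p_futile_given_not_h  # P(signal | not H)
    evidence = prior_h * like_h + (1 - prior_h) * like_not_h
    return prior_h * like_h / evidence

for q in (0.0, 0.5, 0.9):
    p = posterior(prior_h=0.5, p_futile_given_h=0.2,
                  p_futile_given_not_h=0.8, q=q)
    print(f"q={q}: P(effort pays | 'futile' signal) = {p:.3f}")
# q=0.0 -> 0.200, q=0.5 -> 0.400, q=0.9 -> 0.484: the more adversarial I
# believe the channel is, the less a discouraging signal should move me
# off my prior.
```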
A few scattered thoughts on how to deal with this situation:
Cui bono? To identify a hidden actor from some actions / outcomes, you can ask "who benefits?" and guess that the beneficiary may be the hidden actor. In the adversarial-epistemics case, my beliefs (and hence my actions) are partially determined by the Adversary, so I can look at my own actions and beliefs and ask "who benefits?" (Obama is partly using cui bono to support the adversarial hypothesis in the first place: "Don't think voting will work? Who benefits from that?")
Attack model: We can try to model the Adversary's capabilities and motives, and use this to figure out what our defense options are. For example, how specifically can the Adversary target me? Presumably they are playing against some classes of people that I belong to, not directly against me, and presumably they spend more effort attacking the classes that yield more benefit per dollar. Maybe this gives me a way to treat some information as "cleaner" than other information? (A toy version of this weighting is sketched after these notes.)
Epistemics vs morale: I'd naively guess that it's easier to attack morale than epistemics. If there's an Adversary, that's more reason to trust explicit, spreadsheet-style models based on historical data ("I should help in way X because its track record / expected value is good") over gut-level models ("I should help because it feels intuitively likely to work"). (A toy spreadsheet-style comparison is also sketched below.)
Distraction: It's plausibly easier to discourage people by directing their attention toward an intractable issue than by making a tractable issue look less tractable; e.g., I think I'm particularly powerless over Supreme Court confirmations, so directing my attention to confirmations makes me feel more powerless. This suggests I should look away from the spotlight for better opportunities.
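Here's the toy version of the attack-model weighting mentioned above, assuming (and this is purely an assumption) that the Adversary allocates effort where audience size times benefit-per-person-swayed is highest, and that my trust in a channel should fall off with that incentive. Every channel name and number is an invented placeholder.

```python
# Toy attack model: estimate the Adversary's incentive to corrupt each
# information channel, then treat low-incentive channels as "cleaner".
# All channels and numbers are invented placeholders.

channels = {
    # name: (audience reached, Adversary's benefit per person swayed)
    "national cable news":       (100_000_000, 0.5),
    "niche historical datasets": (100_000, 0.05),
    "local town meeting":        (1_000, 0.1),
}

SCALE = 1_000_000  # arbitrary constant: incentive level where trust halves

for name, (audience, benefit) in channels.items():
    incentive = audience * benefit        # expected payoff of attacking this channel
    trust = 1 / (1 + incentive / SCALE)   # heuristic: more incentive, less trust
    print(f"{name}: attack incentive {incentive:,.0f}, trust weight {trust:.2f}")
```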
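And here's the toy spreadsheet-style model promised under "Epistemics vs morale": rank ways to help by rough expected value per unit cost, instead of by gut feel. Every number is a made-up placeholder, not a real estimate.

```python
# Toy spreadsheet-style model: rank interventions by expected value per
# unit cost. All numbers are made-up placeholders.

interventions = {
    # name: (probability the effort matters, value if it does, cost)
    "informed voting":       (0.9, 10, 1),
    "donating to campaigns": (0.3, 50, 20),
    "volunteering":          (0.2, 100, 40),
}

def ev_per_cost(p, value, cost):
    return p * value / cost

for name, params in sorted(interventions.items(),
                           key=lambda kv: ev_per_cost(*kv[1]), reverse=True):
    print(f"{name}: {ev_per_cost(*params):.2f} per unit cost")
```

The point isn't the particular numbers; it's that a model like this, however crude, can be audited against history in a way a gut feeling can't.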
Based on this, I might reconsider donating to political causes; I'm not sure which ones, and they'd probably be justified in terms of Citizenship rather than Effective Altruism.
Realistically, I doubt I have the time / energy for political action beyond voting :(
Hmm - So you looked into this at first because you were like "my thoughts match what Obama said the Adversary would want me to think... am I getting played by the Adversary?" (There is a possibility of being mugged/DOSed by Obama here, but if that's happening, I guess all you lose is a little time thinking about it.)
Distraction seems like a big issue here. (Recent semi-tangential thought: maybe this is why anyone plays Culture Wars? Like, if you're a rich man funding the Adversary, you're going to make a big deal about Serena Williams arguing with a ref and getting a penalty (and racism and sexism and stuff!), so people pay more attention to that than to your tax cuts for the rich?)
Calculations > feelings: yeah, but that's what this whole exercise is about, right?
How do you know when you should be looking out for adversaries? If the answer is "all the time", this feels like a difficult way to think! Like trying to do 2-d geometry on a plane, but assuming that, by the way, every so often the plane bends in weird ways.
These are half-assed thoughts too; I am too tired to have actual good ones. Same lack of time / energy, I guess :-/
Whoops, my reply is below. Also, trigger warning: this post is about how you're being Secretly Swindled, which may be a mugging for Dan ;)
Yeah! I definitely agree about distraction, though I'd like to know if that's like 90% of it or if there's some other effect I'm not wary enough of yet. (And if Obama is DOSing me then I'm not nearly paranoid enough for this world, RIP me)
Calculation vs feelings is a good example: if the Adversary is not only distracting / discouraging me but also sowing disinformation that throws off my *calculations* about what's worthwhile to do, that's a much worse situation than I'd previously thought. (I guess in this case they're mostly back-of-the-envelope calculations, but I might still hope they're more robust against adversaries than my gut is.)
Knowing when to look for adversaries: strongly agree that it'll be too costly to do it all the time. Cf. our conversation in Alaska about not getting screwed during transactions -- maybe it's good to "forget what you know about the world", look from first principles, identify what your largest-impact decisions might be, and then look for cases where they apparently turn out not to matter? It seems like I might say "a priori, my citizenship in a top-10-most-powerful country should give me a lot of power; a posteriori, it looks like I don't have much power; I wonder if an adversary is misinforming me here".
IDK -- I'm frankly just as interested in the abstract game theory version of this, where you have to ask what the Nash equilibrium of deception strategies and anti-deception reactions is.
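For what it's worth, here's about the smallest instance of that abstract version I can write down: a 2x2 zero-sum game (all payoffs invented) where the Adversary picks honest/deceive and I pick trust/verify. There's no pure-strategy equilibrium, so both sides mix, and the sketch below just solves the indifference conditions.

```python
# Toy zero-sum deception game. Payoffs (to me) are invented:
#                 honest  deceive
#   trust            2      -2
#   verify           0       1
# No saddle point, so the Nash equilibrium is in mixed strategies.

A = [[2, -2],   # my payoff when I trust
     [0,  1]]   # my payoff when I verify

D = A[0][0] - A[0][1] - A[1][0] + A[1][1]

# Adversary's P(honest) that makes me indifferent between trust and verify:
p_honest = (A[1][1] - A[0][1]) / D
# My P(trust) that makes the Adversary indifferent between honest and deceive:
p_trust = (A[1][1] - A[1][0]) / D
# Game value (my expected payoff at equilibrium):
value = A[0][0] * p_honest + A[0][1] * (1 - p_honest)

print(f"Adversary honest {p_honest:.0%}, I trust {p_trust:.0%}, value {value:.2f}")
# -> Adversary honest 60%, I trust 20%, value 0.40: even at equilibrium I
#    mostly verify, and the mere possibility of deception caps my payoff.
```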