He suggested that some Adversary wants people to believe they can't affect politics enough to make effort worthwhile; that belief is good for the Adversary because it's already in power, and widespread resignation helps it keep power. I hadn't thought of this, and it's a problem for me, because I don't think I can affect politics enough to make efforts beyond informed voting / possibly donation worthwhile. (I think Obama was mostly telling people "the Adversary wants to deceive you into thinking voting is not worthwhile", but once you grant the Adversarial hypothesis, it's not clear why it should stop at voting.)
This is a game-theoretic problem I haven't seen much analysis of: when an Adversary can partially manipulate your perception of the world, how do you adjust your epistemics to make a good decision? This looks like an intermediate case between Descartes' Demon and a plane dropping leaflets that say "Surrender!" -- I know more about the Adversary than Descartes did, but less than a soldier who's seen the leaflets drop.
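To make that intermediate case concrete, here's a toy Bayesian sketch (the setup and all numbers are mine, purely illustrative): once you suspect an Adversary amplifies "effort is futile" messaging whether or not it's true, encountering that messaging stops being much evidence that effort actually is futile.

```python
# Toy model, not from any source. F = "individual political effort is futile";
# M = "I keep encountering messaging that effort is futile."

def posterior_futile(prior_f, p_msg_given_f, p_msg_given_not_f):
    """Bayes' rule: P(F | M)."""
    num = p_msg_given_f * prior_f
    return num / (num + p_msg_given_not_f * (1 - prior_f))

prior = 0.5

# No Adversary: the messaging tracks reality, so observing it is strong evidence.
print(posterior_futile(prior, p_msg_given_f=0.8, p_msg_given_not_f=0.2))  # 0.80

# Adversary amplifies the message regardless of truth: the message is now nearly
# as likely either way, so it carries almost no information about F.
print(posterior_futile(prior, p_msg_given_f=0.9, p_msg_given_not_f=0.8))  # ~0.53
```

The general move: evidence the Adversary can cheaply manufacture should update you less.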
A few scattered thoughts on how to deal with this situation:
Cui bono? To identify a hidden actor from some actions / outcomes, you can ask "who benefits?" and guess that the beneficiary may be the hidden actor. In the adversarial epistemics case, my actions are partially determined by the Adversary, so I can look at my own actions and beliefs and ask "who benefits?" (Obama is partially using cui bono to support the adversarial hypothesis in the first place: "Don't think voting will work? Who benefits from that?")
Attack model: We can try to model the Adversary's capabilities and motives, and use this to figure out what our defense options are. For example, how specifically can the Adversary target me? Presumably it is playing against broad classes of people that I'm part of, not against me personally, and presumably it spends more effort attacking the classes where the payoff per dollar is highest. Maybe this gives me a way to trust some info to be "cleaner" than other info?
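As a toy version of this (the channel names and every number are invented for illustration): if the Adversary is roughly optimizing payoff per dollar, I can rank information channels by how attractive they are to attack, and lean harder on the unattractive ones.

```python
# Toy attack model: the Adversary allocates effort where
# (audience reached x value per person swayed) / cost-to-manipulate is highest.
channels = {
    # name: (audience_size, value_to_adversary_per_person, cost_to_manipulate)
    "viral social media":       (10_000_000, 1.0, 1.0),
    "national TV punditry":     (2_000_000, 1.0, 5.0),
    "primary-source documents": (50_000, 0.5, 50.0),
    "in-person conversation":   (100, 0.3, 100.0),
}

def attack_priority(audience, value, cost):
    """Adversary's expected payoff per unit of manipulation effort."""
    return audience * value / cost

for name, params in sorted(channels.items(),
                           key=lambda kv: attack_priority(*kv[1]),
                           reverse=True):
    print(f"{name:26s} priority = {attack_priority(*params):>12,.1f}")
# Lower priority => less expected Adversary effort => treat as "cleaner".
```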
Epistemics vs morale: I'd naively guess that it's easier to attack morale than to attack epistemics. If there's an Adversary, that's a reason to trust explicit, spreadsheet-style models built on historical data ("I should help in way X because its track record / expected value is good") over gut-level impressions ("I should help because it feels intuitively likely to work").
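A minimal sketch of what I mean by a spreadsheet-style model; the options and figures are made-up placeholders, not real cost-effectiveness data.

```python
# Placeholder "spreadsheet": each row is (option, cost in USD,
# impact if it works, historical success rate). All figures invented.
interventions = [
    ("donate to voter-registration org", 1_000, 40, 0.6),
    ("volunteer phone-banking, 20 hrs",      0, 15, 0.3),
    ("donate to issue-advocacy group",   1_000, 25, 0.4),
]

for name, cost, impact, p_success in interventions:
    expected = impact * p_success
    print(f"{name:35s} cost=${cost:>5,} expected impact = {expected:4.1f}")
# Explicit inputs written down in advance are harder for a morale attack
# to quietly corrupt than a diffuse gut feeling of hopelessness.
```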
Distraction: It's plausibly easier to discourage people by directing their attention toward an intractable issue than it is to make a tractable issue look less tractable; e.g., I think I'm particularly powerless over Supreme Court confirmations, so directing my attention to confirmations makes me feel more powerless. This suggests I should look away from the spotlight for better opportunities.
Based on this, I might reconsider donating to political causes; I'm not sure which ones, and the donations would probably be justified in terms of Citizenship rather than Effective Altruism.
Realistically, I doubt I have the time / energy for political action beyond voting :(