Saturday, August 27, 2016

Acts, omissions, friends, and enemies

The act-omission distinction plays a role in some ethical theories. It doesn't seem relevant to me, because I'm much more concerned with the things that happen to people than with whether a particular actor's behavior meets some criterion. (Of course, if a consequence were more beneficial or harmful depending on whether it was caused by an act or an omission, I'd care about that.)

Supererogatory acts, and the broader concept that some acts are required / forbidden while others are not, also play a role in lots of ethical theories, but don't seem relevant to me. To me, ethics is entirely about figuring out which acts or consequences are better or worse, and this doesn't give an obvious opening for making some acts required or forbidden.

I recently had an idea about why these concepts appear in some ethical theories: acts/omissions and supererogatory acts seem useful for identifying allies and enemies. This is roughly because acts tend to be costly (in terms of attention and other resources), and supererogatory acts tend to be expensive as well. There's not a lot more to say:

  • Allies will pay the cost to help you through their acts; supererogatory acts are especially good indicators of allies.
  • Enemies will pay the cost to hurt you through their acts.
  • Neutral parties may hurt or help you through omissions, but since these aren't costly, they don't carry much information about whether that party is an ally or enemy; they don't seem to be thinking about you much.

From my perspective, this is a tentative debunking of these concepts' role in ethics, since allies and enemies don't belong in ethics as far as I can tell. For others, allies and enemies might be important ethical concepts, and maybe this could help them explain these concepts' role in those ethical theories.

Final note: I remember hearing about supererogatory acts' evil twins, i.e. acts that are not forbidden but are morally blameworthy: "suberogatory acts" (search for the term on this page). These might be useful for identifying allies, who will avoid suberogatory acts, but they don't seem to play much of a role in any ethical theory.

5 comments:

  1. For other readers: Supererogation is the technical term for the class of actions that go “beyond the call of duty.” Roughly speaking, supererogatory acts are morally good although not (strictly) required.
    (From the Stanford Encyclopedia of Philosophy)

    (I assume you blog mostly for yourself, like I do, so no worries for not defining it yourself; I just post it here in case anyone else, like me, wants to know.)

  2. Anyway I think I agree; it seems like the ideas of supererogatory acts and obligations are a way to figure out if someone is a good person, not if an act is good. And I kind of feel like worrying about whether you're a good person is a waste of time.

  3. Yeah, this makes sense - I was thinking recently about why people often have this intuition that altruism requires self-sacrifice, which seems related. If what's actually important is the consequences of my actions for others/the world, it shouldn't matter whether I'm sacrificing something myself to benefit others. But if we're trying to assess who we should be allies with, who *actually* cares about our interests rather than just doing things for their own benefit, those who seem to incur some cost to themselves are a better bet. I guess another way of putting this is that incurring some cost to yourself is a way of signalling that you care about the interests of others.

    I also think this is relevant for EA, and potentially part of why some people have a negative reaction to/distrust of EAs. EAs often actively discourage the idea that we need to be self-sacrificing to be altruistic (both explicitly, by talking about how donating makes you happy, for example, and implicitly, by suggesting that high-status or interesting careers might be the highest impact). But this can lead others to be sceptical of EAs' motives: since they don't seem to be incurring many costs to themselves, perhaps they don't really care, and are just in it for the status or self-validation or something. I'm not endorsing this position, but I do think there's a risk here - not clear what to do about it though, as I don't think the solution would be to encourage EAs to sacrifice more...

    Replies
    1. That's interesting! Missing out on the signaling benefits of self-sacrifice does seem like it could cause some problems, especially for the "opportunity framing" or "excited altruism" versions of EA (e.g. http://blog.givewell.org/2013/08/20/excited-altruism/). I'm sympathetic to people who are turned off by EA's non-sacrificial qualities, because it's a pretty reasonable kind of debunking -- you're helping other people, and it *just happens* to make you really happy as well?
