Sunday, September 25, 2016

Bostrom's observation equation

Nick Bostrom's book "Anthropic Bias" seems to be the most thorough examination of observation selection effects around. However, I don't really understand the reasoning method he proposes at the end of the book (Chapter 10) and in this paper. So, let's work through it.

(Note: I started this post not understanding the observation equation, but now I feel like I do. +1 for writing to understand!)

First, Bostrom states the Strong Self-Sampling Assumption (SSSA), an informal rule for reasoning that he thinks is correct:
(SSSA) Each observer-moment should reason as if it were randomly selected from the set of all observer-moments in its reference class.
Sounds pretty good to me, but the devil's in the details -- in particular, what is a reference class?

Bostrom offers an "observation equation" formalizing SSSA. Suppose an observer-moment \(m\) has evidence \(e\) and is considering hypothesis \(h\). Bostrom proposes this rule for \(m\)'s belief:
\[P(h|e) = \frac{1}{\gamma}\sum_{o\in O_h\cap O_e}{\frac{P(w_o)}{|O_o\cap O(w_o)|}}\]
Okay, what does this mean? Ignore \(\gamma\) for now; it's a normalizing constant, depending only on \(e\), that makes sure the probabilities add up to 1, I think. \(O_h\) is the set of observer-moments that are consistent with hypothesis \(h\), and \(O_e\) is the set of observer-moments that have evidence \(e\). (Here \(w_o\) is the world that \(o\) lives in, \(O(w_o)\) is the set of observer-moments in that world, and \(O_o\) is \(o\)'s reference class.) So, what we're doing is looking at each observer-moment \(o\) that has evidence \(e\) and for which hypothesis \(h\) is actually true, and adding up the probabilities of the worlds those \(o\) live in, with each world's probability divided by the number of observer-moments in that world that are members of \(o\)'s "reference class", which we still haven't defined.

Now let's look at the normalization constant:
\[\gamma = \sum_{o\in O_e}{\frac{P(w_o)}{|O_o \cap O(w_o)|}}\]
This is pretty similar to the above, but it sums over all observer-moments that have evidence \(e\). In fact, the inside of the sum is the same function of \(o\) as in the first equation, so I think we can sensibly pull it out into its own function, which semantically is something like the prior probability of "being" each observer-moment:
\[P(o) = \frac{P(w_o)}{|O_o\cap O(w_o)|}\]
For each observer-moment, the prior probability of "being" it is the prior probability of its world, split equally among all the observer-moments in that world that are in the same "reference class". This in turn lets us rewrite the observation equation as:
\[P(h|e) = \frac{\sum\limits_{o\in O_h\cap O_e}{P(o)}} {\sum\limits_{o\in O_e}{P(o)}}\]
This is useful, because it makes it clear that this is basically the formula for conditional probability!
\[P(h|e) = \frac{P(\text{observe }e\text{ and }h\text{ is true})}{P(\text{observe }e)}\]
So, now I feel like I understand how Bostrom's observation equation works. I expect that I'll mostly be arguing in the future about whether \(P(o)\) is defined correctly, and I still need to come back to what exactly an observer's "reference class" is. Spoiler: Bostrom doesn't pin down reference classes precisely, and he thinks there are a variety of choices.
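To check my understanding, here's a toy computation of the observation equation in Python. The worlds, observer-moments, and the single reference class here are all made up for illustration; the point is just that the rewritten form really does behave like ordinary conditioning with \(P(o)\) as the weights.

world_prior = {"W1": 0.5, "W2": 0.5}  # P(w): prior probability of each world

# Each observer-moment: the world it lives in, its reference class, and
# whether it has evidence e.
observer_moments = [
    {"id": "a", "world": "W1", "ref_class": "R", "has_e": True},
    {"id": "b", "world": "W2", "ref_class": "R", "has_e": True},
    {"id": "c", "world": "W2", "ref_class": "R", "has_e": False},
]

# Hypothesis h: "my world is W2" -- true for an observer-moment iff it lives in W2.
def h_true(o):
    return o["world"] == "W2"

def prior_of_being(o):
    """P(o) = P(w_o) / |O_o ∩ O(w_o)|: the world's prior probability, split
    equally among the observer-moments in that world in o's reference class."""
    peers = [p for p in observer_moments
             if p["world"] == o["world"] and p["ref_class"] == o["ref_class"]]
    return world_prior[o["world"]] / len(peers)

numerator = sum(prior_of_being(o) for o in observer_moments
                if o["has_e"] and h_true(o))                            # sum over O_h ∩ O_e
gamma = sum(prior_of_being(o) for o in observer_moments if o["has_e"])  # sum over O_e

print(numerator / gamma)  # P(h|e); prints 0.333... for this toy setup

In this toy case \(P(h|e)\) comes out to 1/3: W2's prior gets split between two reference-class observer-moments, only one of which has evidence \(e\), so conditioning on \(e\) counts against W2 even though its prior was 1/2.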

Thursday, September 22, 2016

Observer selection effects from scratch

Suppose that I have only three theories T0, T1, T2, describing three possible worlds W0, W1, and W2. Now, suppose that I observe X, and suppose that the following is true:
  • In W0, there are no observers of X.
  • In W1, there is one observer of X.
  • In W2, there are 100 observers of X.
What should I now believe about my theories? Should my beliefs be sensitive to how many observers of X there are in each world?

It seems pretty clear to me that I shouldn't believe T0, since it's not compatible with my observation of X; that's a minimal level at which my beliefs should be sensitive to the number of observers of X. A way of justifying this is to cash out "I believe in Tn" to mean "I believe I am in Wn", or "I believe that my future observations will be consistent with Tn". Then "I observe X" and "In W0, there are no observers of X" come together to imply "It's not possible that I'm in W0" and hence "I don't believe T0".

What should I think about T1 and T2, though? It's still possible that I'm in either one of their worlds, so I'll believe both of them to some extent. Should I believe one of them more than the other? (Let's assume that T1 and T2 were equally plausible to me before this whole thing started.)

Pretty solid ground so far; now things get shaky.

Let's think about the 101 possible observers distributed between W1 and W2. I think it's meaningful to ask which of those I believe I am; after all, which one I am could imply differences in my future observations.

Nothing about my observation X favors any of these observers over any other, so I don't see how I can believe I'm more likely to be one of them than another one, i.e. I should have equal credence that I'm any one of those observers.

This implies that I should think it's 100 times more likely that I'm in W2 than in W1, since 100 equally likely observers-of-X live in W2 and only one observer-of-X lives in W1. I should think T2 is much more likely than T1. This answers the original question of this blog post.
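Here's the same counting argument written out as a tiny Bayes-style update in Python (just a sketch of the reasoning above, with the equal priors on T1 and T2 made explicit; T0 drops out because it assigns zero probability to anyone observing X):

# Prior over the surviving theories (T0 is ruled out by observing X).
prior = {"T1": 0.5, "T2": 0.5}

# Number of observers-of-X in each theory's world.
observers_of_x = {"T1": 1, "T2": 100}

# If I'm equally likely to be any of the 101 observers-of-X, each theory's
# weight is its prior times its observer count.
weights = {t: prior[t] * observers_of_x[t] for t in prior}
total = sum(weights.values())
posterior = {t: w / total for t, w in weights.items()}

print(posterior)  # {'T1': 0.0099..., 'T2': 0.9900...} -- about 100:1 for T2

Because the priors were equal, the posterior odds are exactly the observer-count ratio, 100:1.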

However, that means that if I'm considering two cosmological theories, and one of them predicts that there are billions of copies of me having the experience I'm having now, I should believe that it's very likely that that theory is true (all else equal). It's weird that I can have that kind of belief about a scientific theory while I'm just sitting in my armchair. (Nick Bostrom calls this "The Presumptuous Philosopher", and thinks you shouldn't reason this way.)

So, it seems like we have to pick one of these weird things:
  1. It's nonsensical to have beliefs about which possible observer I am (even if being different observers implies different future observations).
  2. Something besides my observations and my prior belief in theories of the world should affect my beliefs (in theories of the world, or in which observer I am).
  3. Just by sitting in my armchair and thinking, I can come to strong, justified beliefs about cosmological theories based solely on how many people-thinking-in-armchairs they contain.
  4. I've made some other mistake in my reasoning; like, my account of theories and worlds is wrong, or I'm not thinking carefully enough about what it means to be an observer, or I'm not thinking clearly about normative principles around beliefs, or something else. (Actually, me making a mistake wouldn't be so weird.)
?!

I tend to lean toward 3 (well, if I assume 4 isn't true), but smart people disagree with me, and it's kind of a crazy thing to believe. It could also mean that we're Boltzmann brains, though I'm not sure. See also this paper.

---

Addendum: consider this similarly plausible-sounding reasoning:
  1. "I observe X" just means "there exists an observer of X".
  2. "There exists an observer of X" rules out T0, but not T1 or T2.
  3. "There exists an observer of X" doesn't favor T1 or T2.
  4. All else equal, I should have equal belief in T1 and T2.
I think this reasoning is too weak, and leaves out some implications. "I observe X" implies "there exists an observer of X", but I'd argue that it implies some additional things: it has implications about what I should believe I'll observe in the future (not just what some existing observer will observe), what theories I should believe are true (not just what some observer should believe), and which observers I should believe I could possibly be (ditto). Maybe I should redo my earlier reasoning in terms of expected observations and see what happens?

Saturday, September 17, 2016

Response post #1

I've got a couple of great commenters, so I wanted to do a quick response post to encourage them to keep commenting (if they'd like to!). Aspirationally titled "Response post #1" -- we'll see if there's a #2.

Post: "Animal Rights"
Comment: here
I think I agree with all the premises of this article. And I think I agree on your categorization of rights as pragmatic. I don't think I agree that we should give animals rights. Buuut, I might have misunderstood, and I might be too stuck in my ways.
First, my understanding: if I'm reading it right, "animal rights" means, among other things, legally enforced veganism. Like, if we can't have animals as property, we probably definitely can't kill them for food, and it's hard to imagine how we could get a cow's agreement to give us milk. (correct me if I'm wrong.)
 Yeah, I agree that animal rights almost certainly mean we can't have them as property. I'm undecided on whether this actually means that you couldn't have an arrangement with a cow where you could get milk from them; we get labor from humans, after all, so there might be a legal arrangement where a cow could be "paid" (in nice living conditions, luxuries, etc?) to give milk. Obviously it's super-hard to do this, and might not be feasible (or oversight might be too expensive), but it's not obviously impossible to me.

Imagine that there was a service that could be performed only by people who weren't able to express themselves linguistically or understand language. I can imagine that an appointed guardian might be able to set up an employment situation that would work for everyone. However, this would be a lot more expensive than just, you know, owning the cows!

Breeding is another huge challenge -- it just doesn't seem likely to be acceptable.
(btw I realize "legally enforced veganism" would totally derail this conversation on many blogs; I'm hoping yours doesn't have the kind of readership that would do that)
Yeah :) Also, jumping suddenly to legally enforced veganism (or something very close to it) wouldn't work -- one of the big policy challenges would be avoiding a Prohibition-like reaction where production is pushed underground and the policy is reversed later anyway.
This seems a bit extreme if we actually just want to improve animal welfare. You can have laws around how you treat property, while it's still property. (I think?) Like, I can buy a car but I can't just drive it around wherever, or let my kid drive it, or ghost ride it. You've got to be responsible with cars (where "responsible" means a basket of things that not everyone agrees on); why not just work on defining the basket of responsibilities with animals?
Why not go all the way and do this with people as well? Welfare is what I really care about, so why not be libertarian and allow people to own other people (maybe only if it's mutually agreed upon), and define a basket of responsibilities for people-owners?

My answer, I think, is something like this:
  • "defining a basket of responsibilities" is a building-up process -- we have to manually add pretty much everything we care about.
  • "rights" are a way of dramatically limiting acceptable behaviors in one shot, and with lots of room for interpretation or refinement later by judges. We sort of know what a "right to freedom" means, or a "right to dignity", and we can later judge whether those things are violated through legal interpretation.
It seems to me like rights are more appropriate for people. It's very lucrative to own a person, and (unlike cars) the space of things you should or shouldn't be able to do with a person is very big, messy, and ill-defined. Also, we've societally decided that it's better to err on the side of restricting what people can do with other people too much rather than too little. I think these arguments apply to animals as well.

The argument that "we can't legally protect animals effectively while they are property" is an old animal rights argument (see e.g. this, which refers to attempts to protect slaves' welfare without giving them the right to not be property), and I haven't done my homework on the debate over it, but it seems like a plausible argument to me.
On to your proposal of animal rights: How do you "represent" an animal in a decision? How can you represent a critter you can't ever communicate with? I guess you could get, say, a cow expert to tell you about all the things that make a cow's life better or worse. But again, we're in vegan world, so there probably wouldn't even be cow experts anymore. And we're far enough away from our current world that my instincts are probably not great moral judgment tools anymore. So, maybe I'm just too stuck in my ways.
I think this is a great point, and it might turn out that this is a thing we can't figure out how to do. I'm guessing there'll still be animal experts or cow experts, but that's totally a guess. If it turns out we can't get milk, my official answer is ¯\_(ツ)_/¯

---

Post: Peak Experiences
Comment: here
Re. your concern about being too navel-gazey/introspective: it didn't feel that way at all for me, I really liked this post! I have a similar concern as a lot of what I write is very introspective, but I generally just try to make it clear I'm not suggesting my experiences necessarily generalise to others - just that they may be sufficiently similar to be interesting to others. Or I try to frame it more as "here's an experience of mine that made me think about a more general problem/thing, here are my thoughts on the more general thing." Regardless, this post made me think about my own peak experiences in a really useful way (and in a way I hadn't done before), so I don't think you're too much at risk of being overly navel-gazey yet! :) 
Thanks! That's reassuring, and a good suggestion. I think I'll replace my self-deprecating "me" tag with an "examined life" tag to counteract my bashfulness about introspection :)

Monday, September 12, 2016

Animal rights?

I was intrigued by this article; I think it correctly points out that caring about animal welfare is pretty different from caring about animal rights, but I was hoping for more of an argument in favor of animals having rights. So, I decided to think about this a bit myself.

At first, I thought that I wouldn't tend to be in favor of animal rights. I generally think about ethics in terms of welfare instead of in terms of rights or obligations, so why would I think animals should have rights? However, after thinking about it more, I've come down in favor of animal rights, and I feel like I have a better understanding of why human rights matter.

So, let's talk about human rights. I haven't seen a metaphysical argument that humans "naturally" have rights, and I'm not sure I'd be convinced by that kind of argument anyway. However, there are a couple of reasons that I think it makes sense to assign humans rights:
  • Humans have a strong preference for self-determination (which is partially final and partially instrumental)
  • Rule-consequentialist / rule-utilitarian: "human rights" are a good policy to agree on, because that policy helps us maximize welfare
"Should we decide to give animals rights?" is the natural question for me; we could decide on rights pragmatically, and then follow them dogmatically, even when we don't see why they're useful in a particular case. I generally don't think the first reason above applies to animals, but I think the second does. So, I think we should give animals rights.

(A note on why I think it makes sense to consider rights as pragmatic things we decide on: rights are pretty complicated, sometimes seem inconsistent or made-up, and they're constantly up for debate. For example, what's the deal with children's rights? What are all the trade-offs and edge cases around the right to free speech, or the right to refuse service? I think it used to be considered a "right" of soldiers to defend themselves and their fellows on a battlefield, and to be exempt from moral blame when they follow orders, but revisionist just-war theorists are working on overturning that, IIRC. These things are clearly cultural constructs, and we should choose them as we see fit.)

Some rights that I think probably make sense for animals:
  • They should be represented in decisions (e.g. political decisions)
  • Guardianship should be a matter of political debate (and animals should have representatives in these decisions)
  • If rights that work for kids don't work for animals, then animals should have more rights than kids do (kids have natural protections that animals lack: people usually care about kids intrinsically, and don't benefit financially from having them)
On this last point: imagine a world where having children could be very lucrative. In this world, we'd probably have to restrict the right to have children, and give children rights that prevent parents from exploiting them for financial gain. I think we probably will need to extend those kinds of rights to animals.

So, animal rights: yes! I'm just not sure which ones, or how we get there.

Friday, September 9, 2016

No blogs this weekend

Not blogging this weekend because I have lots of socializing to do, and my next few topics are ones I want to spend enough time on to do well! See you Monday.

What I'd like to see studied

Sometimes I daydream about having an organization in charge of making sure the world goes well. There are some things that I'd want them to be studying (not an exhaustive list, since I'm tilted toward listing weird / counterintuitive things):

Cosmologically big questions:
Pressing moral questions:
  • How exactly are animals morally relevant?
Practical questions:
  • We keep having kids and adding them to the world. What are their lives going to be like? We're in charge of their lives and educations for around the first 18 years of their lives. How do we set them up for good lives, and what do good lives look like? What do we tell them, and what cultural goals do we set for them? (Maybe I'll post about this.)
  • How do communities, cities, and countries work? What are their goals, and how are their policies doing at achieving those goals?
Beyond studying these questions, I'd also want this organization to have a Global Status Monitor. It'd give statistics on the whole world, so that you can take in at a glance what the state of play is; ideally, it'd also have the ability to scrub back through history to see how things have changed. (The monitor doesn't need to be instantaneous; I'd be happy if it updated once a year, for instance.)
  • What is everyone on Earth doing for work?
  • How does everyone's labor fit together?
  • What do they eat, how do they find shelter?
  • What are people's overall financial pictures?
  • How is everyone's social life going?
  • How is everyone's mental and physical health?
  • What political things are happening?
  • What social change / civil rights things are happening?
  • What are the cutting edges of science?
  • What are the cutting edges of art?


Wednesday, September 7, 2016

Boltzmann brains

What if almost everything we thought we knew about our position in the universe was wrong? What if we actually were not members of a species that arose around 200,000 years ago, among life forms that started evolving around 4 billion years ago on a planet that formed 4.5 billion years ago, in a universe that began with a Big Bang around 13.8 billion years ago? A single cosmological discovery that changed all of that would be an amazingly big deal (at least in terms of scientific knowledge -- it might not change what we actually do with our lives).

That's roughly what's at stake with the question of Boltzmann brains -- whether instead of the picture above, it's more likely that we came into existence a short time ago via random (quantum or thermal) fluctuations during an extremely long quiet period in one of the last ages of the universe. It's not only our ideas about our position in the universe that are at stake; it's also possible that only my brain arose this way, perhaps a few minutes or seconds ago, meaning that much of what I think is real (other people, places beyond my immediate reach, all of human history, etc.) is not actually physically real.

Now, this sounds suspiciously similar to many radically skeptical arguments, like the brain in a vat thought experiment -- how do you know you're not just a brain in a vat? These arguments are great for an intro-to-Philosophy class, but once the shine wears off, they seem a little thin -- what does it really offer to say "well, you might be a brain in a vat, there might be a deceiving demon, etc.", and what more can we say about these arguments? They might be useful thought experiments for epistemologists who need corner-cases to test their ideas of what "knowledge" really is and what we can really know, but they don't feel productive as a way to think about the world. I think the typical arc is to be surprised by these arguments, live with them for a while, and then forget about them, and I think that's fine.

However, I think the Boltzmann brains (BB) argument is importantly different. The BB argument isn't "how do you know you're not a BB", it's "according to some cosmological theories, many BBs will exist, and using some kinds of anthropic reasoning, it's likely that you're one of those BBs." It's as if scientists pointed their telescopes at the sky and saw vast arrays of brains-in-vats; we have (as far as I know) real reasons to take the BB scenario seriously.

I haven't been able to find a comprehensive survey of argumentation around BBs, or even a very rigorous paper that attempts to thoroughly examine the question; it's usually treated as an example or interesting implication in cosmology or philosophy books and papers, as far as I can tell. It looks like it's only been seriously considered for about 20 years, like so many of the ideas that I think are most important.

To be totally honest, I expect the BB argument to fail. I also don't think it's likely to be importantly action-informing; how would I really make decisions differently if I were a BB? However, it's one of a few really big questions about what the world is actually like that I'd really love to see answered. In fact, I think I'll post again to talk more about those big questions -- stay tuned.

I'm playing with the idea of writing and thinking more about BB -- it's an appealing hobby project. If I do, you'll see it here first!

Tuesday, September 6, 2016

Writing about not writing

Well, I stayed up late hoping that I'd think of something to blog about, and no dice. This was probably a bad call. My current criteria, which may be too strict:
  • Not work: I started blogging at all because I found myself only talking about work, and was afraid I was getting boring.
  • Don't use willpower: nothing that I feel like I'm forcing myself to write.
  • Not always introspective: doing too much of this leaves a bad taste in my mouth.
Strategies for coming up with posts in the future:
  • Write about the first thing I can think of: in this case, I had another introspection topic that I could have written on, but I didn't feel like doing another introspective post right this minute.
  • Write about work: I'd prefer to avoid this, but whatever I'd blog about would be different from my day-to-day work, and that might be nice.
  • Write about something I thought about a while ago: I have a few technical topics lying around that I could convert into blog posts quickly. The main blocker is that that doesn't sound very fun; if I've already done the thinking, then I'm just left with the hard part of expressing it.
  • Write about my past: for example, I used to be pretty into Buddhism; I could write about what that was like and what changed. Many other memoirish things could work.
  • Journal: I could write about things I've done recently, or about how I think life is going. Not super-appealing, but maybe I could figure out what's fun about it. (I was really bad at journaling when I was a kid, and I'm probably not great now, not having done it since.)
  • Media reviews: books, movies, TV, etc.
  • Micro-posts: collect a few topics where I only have a few sentences to say.
Activities that might help generate ideas I'm actually excited about:
  • Watch TV or read a book.
  • Read a technical paper or go down the rabbit hole on some topic of interest.
  • Take a walk? (Feels unpromising, but based on experience this could work.)
  • Talk to K about something, then write it down.
Particular topics that might work for tomorrow, though they don't sound that appealing to me right now:

Work:
  • Intelligence explosion
  • A concrete setting for MIRI's Löb problem
  • Different threads in AI safety
  • EA and x-risk / AI safety
Philosophical / technical / "intellectual":
  • Boltzmann brains
  • Natural selection or abiogenesis in Conway's game of life
  • Patient-centric ethics
  • Some philosophy of mind prompt from Luke
Other:
  • My recent vacation
  • Consequentialist vs expressive modes of being
  • Segments of a Dungeons and Dragons campaign
  • Music I've liked recently

Saturday, September 3, 2016

Peak Experiences

Lately, I've been using the phrase "peak experiences" to refer to really unusually good experiences. Maslow defined peak experiences as "rare, exciting, oceanic, deeply moving, exhilarating, elevating experiences that generate an advanced form of perceiving reality, and are even mystic and magical in their effect upon the experimenter", and associated them with "self-actualizing individuals". I'm not sure I'm describing the same thing, and I'm skeptical that the experiences I'm talking about have to do with "self-actualization". For now, just beware that I'm using "peak experience" to mean "really really good experience".

From talking to friends, I get the impression that some people have fairly infrequent peak experiences that are much better than their average experiences, and some people don't. I'm the former sort of person; operationally, my peak experiences are good enough that I think I'd be willing to trade between a day and two weeks of average experiences for a single peak experience (lasting maybe 30 seconds to a few minutes), assuming all other effects of this trade are neutral. I'm not sure what my rate actually is because I haven't figured out how to make that kind of trade, especially in a way that makes other effects of these trades neutral.
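For a rough sense of what that exchange rate implies (a back-of-the-envelope calculation using the endpoints above, not a claim about my actual values): trading one day, about \(86{,}400\) seconds, of average experience for a one-minute peak experience means valuing peak time at roughly \(86{,}400 / 60 \approx 1{,}400\) times average time per second; at the other extreme, two weeks for 30 seconds is about \(1{,}209{,}600 / 30 \approx 40{,}000\) times.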

How frequent are my peak experiences, and what are they like? I think I need a more explicit picture of what peak experiences look like in order to get a feeling for frequency; if I don't really know what they look like, it's hard for me to retrospectively say how often they happen (though I could just keep a counter in the future to get better data). I also think it'd be nice to have better vocabulary for peak experiences because then maybe I'd notice and appreciate them more, the same way having a better vocabulary for experiences of food, music, or movies helps me appreciate them more.

Here's an attempt at listing categories of peak experience that I've had, with made-up names that are supposed to get the gist across. I've left out a couple of things that aren't polite to blog about, and are probably obvious :) I'm sure I'm missing some categories or cutting things the wrong way in some cases, and it'd be interesting to find out whether other folks share these categories.

Frisson: actual frisson, a physical tingle or shiver along with an intense emotional response to an idea, a piece of music, or both. Often includes tearing up. It's a little embarrassing to give examples of what causes this in me, and I'm not sure why; it feels closely linked to the deepest and scariest-to-expose parts of my personal experience.
- Duration: up to a minute?
- Frequency: several times a week, probably?

On Fire: a feeling of excitement and cockiness about my work and the work of my community more generally. A feeling that my particular skills and idiosyncrasies line up really well with what I'm doing with my life. Sometimes related to reflecting on recent achievements. Kind of manic, but in a pleasant way. Check out how this guy dances; confident! (The end of that video is great, by the way.)
- Duration: a few minutes to 15 minutes?
- Frequency: once every week or two?

Deep Connection: hard to describe; a feeling of the removal of barriers between me and a person or people I'm with, or between me and the rest of the world. It feels like a lot of machinery that normally goes into charting my individual course through the world is turned off, and I'm not calculating risks and benefits anymore. I haven't had this happen without alcohol and a very comfortable social setting (that I'm in or have just left). If I'm alone, music helps a lot.
- Duration: a minute?
- Frequency: not sure; no more than a few times per year.

Pinch Me: this is a feeling I get when I realize I'm in a situation that's way outside what I expected in my gut (in a good way). For example, standing in the back of a water taxi speeding between islands in Venice at sunset (humblebrag!). It's sort of like a really nice form of incredulity, a feeling that a long-shot bet taken on a whim has paid off, or a feeling that I don't have any reasonable right to be experiencing this, but not in a way that pulls me out of the moment or makes me feel guilty about it.
- Duration: a few minutes to a half-hour?
- Frequency: maybe once or twice a year. By their nature, it seems like these can't happen very often.


A few notes from making this list:

  • Three of these categories are basically solitary, and the fourth (deep connection) I think I experience in a basically solitary way, despite it being about removing barriers between me and others / the world. This feels like a property of my intense experiences generally (good and bad). Is this correct, or typical?
  • Music is a big factor!
  • Higher-order awareness: it's tempting to think that these feelings are naturally higher-order ones, where being aware of the experience is part of the experience. I'm not sure this is right; certainly the only peak experiences I can report are ones that I took note of (e.g. by noticing the distinctive physical sensations of frisson), but I think these experiences would still be good if I wasn't taking note of them. Maybe there are whole categories that I wasn't very aware of, and so can't report!
  • Flow is missing; I don't think I've experienced it. Ditto for anything involving raising kids.
This post felt pretty navel-gazey, and I'm generally skeptical that introspection of this kind translates well to writing -- everyone's experiences are different, and the insights that I have about myself might not be very applicable to others. I think I'll try to limit posts that are just about me and my experiences, but this is one I've been meaning to write for a while, so I'm glad I got it out there. (It also resulted in me changing my list of categories, which is useful for me.)

Friday, September 2, 2016

Like a Boss

(This is not the blog post I meant to write, but I kinda got into it! Fun!)

Eventually, I might want or need to be a boss.

I don't think this is a natural role for me, and I think in general it's fairly hard to do well. I've been impressed with my current boss (good thing this blog is anonymous), so I wanted to take some notes. If I ever do become a boss, I'll probably do a couple interviews with good bosses to find out more.

So, here's my current simplistic model of how to be a boss (in fields like mine, where people aren't necessarily filling pre-defined roles):

0. Hire people you feel really good about; a good rule of thumb might be being confident that they'll be great at at least one mission-relevant thing.

1. Ask the person to do something that is mission-relevant and that they're good at. You probably hired them for their ability at some task, or with a suspicion that they'd be good at something; start them there. If you didn't hire them, maybe ask the person who did. If you strike out here, move to step 2.

My most likely mistakes on this step:
  • Trying to get this person to do whatever is most mission-critical, regardless of whether they're good at it.
  • Not asking them to do anything because what they're good at doesn't seem mission-critical enough.
  • Not wanting to give people tasks because I think I'd be better at them. (This is a mistake because you have to push through this in order to get gains from having employees.)
  • Asking a person what they want to do; this just pushes the boss' job onto them.
2. Figure out what (else) they're good at, among mission-relevant things, and ask them to do those things. Most people are good at a variety of things, and some of those will be mission-relevant. Ways to do this:
  • Ask them what they think they were best at at their previous job.
  • If they had a good boss or good co-workers at their last job, ask those people what they were best at.
  • Guess what they might be good at, and ask them "How would you feel about doing x? I'm wondering if you'd be good at general class of things X, and this would be a good way to get some information about it."
  • Ask them if they've seen anyone else at your org doing a kind of task they think they'd be good at. This is a little risky, since it kind of pushes the boss' job onto them.
My most likely mistakes on this step: again, probably being too exacting about what is mission-relevant.

There are some other things that seem good, but less critical, to me:
  • Gauge autonomy; different people need different kinds and amounts of guidance.
  • Give performance feedback.
  • Actually care about their life, and show it.
  • Get data about how their work is going, and demonstrate that you won't abuse that data (and that you'll use it to help them). I suspect "what an employee is doing" is very mysterious, and any data you can get will help, but giving data (like a timesheet of what they did and how long it took) is a vulnerable place for an employee, so you have to acknowledge their fears of being judged and demonstrate that you won't abuse this privilege.
Overall, the thing that a boss seems to need to do, in addition to being cool to people, is hold a picture in their head of what tasks are mission-relevant and what capabilities employees have, and then pair people with tasks in a sensible way. I'm sure there's a lot more to it, but I didn't have this basic model before, so it feels like progress to me.

Thursday, September 1, 2016

Hair-trigger mood and imposter syndrome

I can definitely relate to this post. I have a somewhat different take; maybe this means that I'm experiencing a different psychological situation with similar symptoms, or maybe it's a different view on the same basic phenomenon.

When things start to go wrong -- either mundane things like persistent messes or less-than-optimal personal interactions, or bigger things like an entire day where I don't get my work goals done -- I tend to feel much worse than I think the situation warrants. How to explain this?

I think it feels like my brain is in danger mode -- like my brain thinks my life is on the edge of total collapse, and one thing going poorly could push things over the edge. This could be in terms of my own happiness (that my personal life satisfaction could be pushed from positive to negative because of one mishap), or in terms of success at my various projects (that my work or relationships could fall apart because of one mistake). This is strange, because I get a lot of evidence regularly that things are going fine, and positive feedback from bosses, co-workers, and loved ones that I'm doing well.

More specifically, it feels to me like I'm on the edge of failing to fulfill some roles, like the role of Good Boyfriend, Good Employee, or Competent Adult. This focus on roles, along with the positive feedback I'm obviously getting, suggests to me that what's actually going on is something like imposter syndrome.

Imposter syndrome fits: I get a lot of positive feedback because I can keep people fooled, but just a small number of slips could let someone see through the façade. I didn't think about imposter syndrome before, because I usually associate it with doubting abilities, and I'm pretty confident in my abilities; it's my ability to fit these roles, or maybe something like my virtues (diligence, conscientiousness, general competence at being an adult), that's at play here.

So what do you do about imposter syndrome? A few obvious options:
  1. Get better calibrated, and come to appreciate that I'm actually fulfilling these roles pretty well (my performance is higher than I thought).
  2. Decide not to try to play these roles.
  3. Find out that I have the wrong idea about these roles and what they imply (the requirements are lower than I thought; the thing I'm an imposter of is imagined).
I think I've actually had the most success with option 3. The main thing that's happened lately is that I've found out that hardly anyone fulfills these roles the way I have in mind -- nobody (or almost nobody) is the Competent Adult or Good Employee that I'm thinking of, and I'm actually well within the typical distribution of performance. This has just come from talking to people more openly about my difficulties and hearing that they have similar difficulties. My high standards were mostly imaginary.

Another thing that's helped is finding out that when people see my problems, they don't conclude that I'm an imposter. A big example is when I recently had a really bad work week. I logged like 10 hours of actual work, and dreaded having my weekly check-in with my boss. Instead of firing or (more likely) lecturing me, he said "eh, that happens to me sometimes -- I'm a pretty high-variance worker", and asked whether anything was going on in my personal or work life that was affecting my ability to get work done. That was awesome!

I do worry that getting over this will lower my standards and result in worse performance, but I actually think it's more likely to improve my performance -- I waste enough energy worrying about this stuff, and being frozen / de-motivated as a result, that I think avoiding this kind of situation will more than offset any drop in performance from exorcising these imagined roles (and I haven't actually seen that kind of effect in practice at all yet, so I'm not sure it'll really happen).

Final note: this expansion of the applicability of imposter syndrome also provides a nice explanation for my difficulties with vacationing successfully -- I'm worried enough about being a Good Vacationer that I'm not paying attention to what I'd like to do in the moment. This suggests that option 2 -- abandoning the idea of playing roles -- might be the best solution here. I'm excited to give it a try!