
There are a lot of things about this community that I really love, but possibly my favourite is a thing people often do when they're trying to make a difficult and/or important decision:

  1. Write out your current thinking in a google doc.
  2. Share it with some people you think might have useful input, asking for comments.
  3. ???
  4. Profit.

I like this process for lots of reasons: 

Writing out your reasoning is often helpful

My job involves helping people through difficult decisions, and I often find that a lot of the value I provide comes from asking people questions which make considerations and tradeoffs salient to them. Trying to write out how you're weighing the various factors that are going into your decision is a good way of working out which ones actually matter to you, and how much. You may even get some big wins for free, for example realising that two options might not be mutually exclusive, or that one of the things you're trying to achieve stems from a preference that you don't, on reflection, endorse.

People often ask good questions

Even when you're doing the above well, other people trying to understand your reasoning will ask clarifying questions. Responding to these will often cause you to better understand your own thought process, and might identify blindspots in your current thinking.

People often give good advice

To some extent this is the obvious reason to go through this process. I'm listing it here mostly to highlight that this clearly is a big source of value, though it's not clear that it's bigger than the previous two.

It's fun

I find it really interesting, and fairly easy, to comment on decision documents for people I know well, and I know many people feel the same. Also, they often say thank you, or that you helped, and that's nice too!


What does doing this well look like?

Use the method at all!

If you're facing a decision and haven't done this, I would much rather you just went and followed the steps at the start before reading further. Don't let perfect be the enemy of good.

Be concise, but complete

People are more likely to read shorter documents, and it will take them less time to do so, but leaving out a consideration or piece of information that is an important factor to you will cost people more time and/or make their advice worse in the long run. I think a reasonable method to try first is brain-dumping everything into the document, then editing for clarity before you share it.

I've had a few people share Excel models with me. In one case I ended up finding a fairly severe mistake in their model, which was helpful, but overall I think this is a bad strategy. Unless you put a ton of detail in comments on different cells (which then makes the document a nightmare to read), you're probably missing a lot of reasoning/detail if this is the format you go with.

Let people know what you're hoping to get from them

Often it can be difficult to know how honest to be when giving feedback to a friend, especially if you're not super close and/or haven't already established norms for how much honesty/criticism to expect. It might be the case that you don't have a clear view of what you're uncertain about, and roughly just want an overall 'sense check', but it also might be that there's a particular part of the decision you're hoping for feedback on, and everything else is just context which seems relevant but is already fixed. Consider putting clear instructions for commenters early in the document to help with this.

Put some thought into who to ask for comments

'Smart, kind people I know' is a perfectly reasonable start, but after that it might help to ask yourself what specifically you expect people to help with. There can often be pretty sharply diminishing returns to sharing with too many people, and having a clear idea in mind for what people are adding can help prevent this. Here are a few ideas on who you might want to ask and why they'd be particularly helpful. The list is neither mutually exclusive nor collectively exhaustive.

  • People who know you well. They can often give a good overall take, bring up considerations you might be missing but that do matter to you, and call you out on motivated reasoning you might not have noticed.
  • People with specific expertise in the decision. In this case it can be good to ask them a specific question, or for a take on a specific aspect of the decision, and make it clear that just answering that is fine, though they're welcome to comment on the rest.
  • People who have a different perspective to you. This can (but doesn't have to) include non-EAs. This community is great, but it certainly isn't the only source of good advice and guidance that exists, and sharing a google doc and asking for comments isn't that weird a favour to ask a friend for.
  • People whose reasoning you particularly trust, and/or who you know won't mince their words. You can give them express permission to be pessimistic, or skeptical.
  • People who like you and will be supportive. Encouragement actually really matters for some people! I'm one of them!

Should you go and make a document right now?

Stop reading and do it...


Thanks to Aaron, whose comment on a document of the form described above prompted this piece, and Luisa, for some incredibly valuable advice about how to interpret that comment. Thanks also to Emma and Chana for helpful comments on a draft of this post.


Comments

Quick way to create a Google Doc: browse to doc.new in your browser's address bar.


I've found that having a quick way to create new docs makes me more likely to do so.

(To set your typing focus to the browser address bar, press CMD+L or CTRL+L)

To make things even faster: create a bookmark for "doc.new" and give it the name "nd". Then you can just type "nd" and press "enter".

It's funny, I've done this so many times (including commenting on others' docs of this sort) that I sort-of forgot that not everyone does this regularly.

Yes, effective altruism has many unusual norms and practices like this, which ultimately derive from our focus on impact. The benefits of receiving advice often outweigh the costs of giving it, so it makes sense for an impact-focused community to have this kind of norm.

It's also true that it's easy to forget that these norms are unusual because you're so used to them.

re: writing it out:
I've long been a proponent of what I'm temporarily calling the CoILS (Counterfactuality, Implementation, Linkage, Significance) framework for breaking down pros and cons into smaller analytical pieces, primarily because:

  1. At the heuristic level:
    1. Breaking down complex questions into smaller pieces is generally helpful, provided the process does not leave out any considerations and does not involve significant duplication (and I believe that the four considerations in the framework are indeed collectively exhaustive and mostly mutually exclusive for any conceivable advantage/disadvantage and its associated decision, as I explain in more detail in the post)
    2. The EA community has glommed onto the ITN heuristic as useful (despite its flaws), and the ITN heuristic bears a lot of resemblance to this framework (as I explain in more detail in the post, including how CoILS does not share some of the main flaws of the ITN heuristic).
  2. At the specific-effect level:
    1. It seems helpful for checking some of your key assumptions, especially when you're already biased in favor of believing some argument;
    2. It standardizes/labelizes certain concepts (which seems helpful for various reasons).

Applying it is not too difficult (although one can definitely get better with practice): for any given advantage/disadvantage for a decision (e.g., "this plan leads to X which is good/bad"), one asks questions such as:

  1. Would X occur (to a similar extent) without the plan? (counterfactuality)
  2. What would the plan actually involve doing/what can actually be implemented? (implementation)
  3. Would X occur if the plan is implemented in a given way? (linkage)
  4. How morally significant is it that X occurs? (significance)
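As a toy illustration (not from the original post), the four CoILS questions can be thought of as independent filters on a claimed advantage: scoring each on a 0 to 1 scale and multiplying shows how a single weak link deflates the whole consideration. The `Consideration` class and the example numbers below are hypothetical, purely to make the structure concrete.

```python
from dataclasses import dataclass

@dataclass
class Consideration:
    """One claimed advantage/disadvantage of a plan ('this plan leads to X,
    which is good/bad'), scored 0-1 on each CoILS question."""
    description: str
    counterfactuality: float  # would X fail to occur without the plan?
    implementation: float     # how fully can the plan actually be carried out?
    linkage: float            # if implemented, how likely is X to follow?
    significance: float       # how morally significant is X?

    def weight(self) -> float:
        # Multiplying reflects that each question can independently deflate
        # the consideration: a near-zero answer on any axis zeroes it out.
        return (self.counterfactuality * self.implementation
                * self.linkage * self.significance)

# A consideration that looks strong on three axes but weak on linkage:
pro = Consideration("plan leads to X", 0.8, 0.9, 0.5, 1.0)
print(round(pro.weight(), 3))  # 0.36
```

Note that the weight is halved by the single 0.5 linkage score, which is the point of walking through all four questions rather than stopping at "X is good."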

For what it's worth, I really liked the chunk at the bottom of this comment (starting at "Applying it is not..."), and it made it feel like a system I'd want to use, but when I clicked on your link to the original piece I bounced off of it because of the length and details. Might just be an unvirtuous thing about me, and possibly the subtleties are really important to doing this well, but I could imagine this having more reach if it was simplified and shortened.

Well, it was worth a shot, but it doesn't seem to have gotten any more traction in a simplified/shortened post, unfortunately.

Thanks for the reply/feedback! I've realized that the length of the article is probably a problem, despite my efforts to also include a short, standalone summary up front. I just thought it would be important to include a lot of content in the article, especially since I feel like it makes some perhaps-ambitious claims (e.g., about the four components being collectively exhaustive, about the framework being useful for decision analysis). More generally, I was seeking to lay out a framework for decision analysis that could compete with/replace the ITN heuristic (at least with regard to specific decision analysis vs. broad "cause area prioritization")...

But yeah, it has one of the highest bounce rates of all my posts, so I figure I probably should have done it differently.

And it was also my second attempt at writing a post on that concept (i.e., my first attempt at improving on the original post), and it did even worse than my first attempt in Karma terms, so my motivation to try again has been pretty low (especially since only one person ever even engaged with the idea, and it definitely felt like it was out of pity).  

That being said, I suppose I could try again to just write a simple (<750 words) summary that largely resembles my comment above, albeit with the order flipped (explanation first, justification second).

Thank you so much for this!! Incredibly helpful, and it inspired This Post - feedback appreciated!!

Here's a framework I use for A or B decisions. There are 3 scenarios:

  1. One is clearly better than the other.
  2. They are both about the same.
  3. I'm not sure; more data is needed.

1 & 2 are easy. In the first case, choose the better one. In the second, choose the one that in your gut you like better (or use the "flip a coin" trick: if you notice any resistance to the "winner", that's a great reason to go with the "loser").

It's the third case that's hard. It requires more research or more analysis. But here's the thing: this work has costs. You have to decide whether the opportunity cost of delving in is worth the improved odds of making the better choice.

My experience shows that—especially for people who lean heavily on logic and rationality like myself 😁—we tend to overweight "getting it right" at the expense of making a decision and moving on. Switching costs are often lower than you think, and failing fast is actually a great outcome. Unless you are sending a rover to Mars where there is literally no opportunity to "fix it in post-", I suggest you do a nominal amount of research and analysis, then make a decision and move on to other things in your life. Revisit as needed.
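The tradeoff described above can be sketched as a crude value-of-information check: research is only worth doing if the expected gain from improved odds of picking the better option exceeds the cost of the research itself. The function and the numbers below are illustrative assumptions, not anything from the comment.

```python
def research_is_worth_it(stakes: float, p_right_now: float,
                         p_right_after: float, research_cost: float) -> bool:
    """Crude value-of-information check.

    stakes: value gap between the better and worse option
    p_right_now: chance of picking the better option with no further research
    p_right_after: chance of picking it after doing the research
    research_cost: cost (in the same units as stakes) of doing the research
    """
    expected_gain = (p_right_after - p_right_now) * stakes
    return expected_gain > research_cost

# Low stakes: improving your odds from 60% to 80% doesn't justify the cost.
print(research_is_worth_it(stakes=100, p_right_now=0.6,
                           p_right_after=0.8, research_cost=40))  # False
# High stakes: the same improvement easily justifies it.
print(research_is_worth_it(stakes=1000, p_right_now=0.6,
                           p_right_after=0.8, research_cost=40))  # True
```

This also makes the "low switching costs" point quantitative: if you can cheaply reverse the decision later, the effective stakes shrink, and research becomes worth it less often.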

Note that A or B decisions are often false dichotomies, and you may be overlooking alternative options that combine the advantages. So narrowing in on given options too soon may sometimes be a mistake, and it can be useful to try to come up with more alternatives.

Also, in my experience many of the decisions I get stuck with fall somewhere between 2 and 3: I know their implications and have most of the information, but the results differ on various dimensions. E.g. option 1 is safe and somewhat impactful, while option 2 is potentially higher impact but much riskier and comes at the cost of disappointing somebody you care about. I'm not sure to what degree a decision doc is suitable for these types of problems in particular - but I've at least had a few cases where friends came up with some helpful way to reframe the situation that led to a valuable insight.

(But I should mention I definitely see your point that many EAs may be overthinking some of their decisions - though even then, in cases of value conflict, I personally wouldn't feel comfortable just flipping a coin. But in many other cases I agree that getting to any decision quickly rather than getting stuck in decision paralysis is a good approach.)

If you are planning to work on a project with multiple people (this could be something like running an intro fellowship, or starting an x-risk initiative at your university), you should probably spend at least 5% of your time on finding your top-level goal and thinking about any risks associated with your project, i.e., writing a meta strategy document. I did this when I founded CERI, and it's been one of our most useful documents; we still use it for strategy meetings and the like.
