
There are a lot of things about this community that I really love, but possibly my favourite is a thing people often do when they're trying to make a difficult and/or important decision:

  1. Write out your current thinking in a google doc.
  2. Share it with some people you think might have useful input, asking for comments.
  3. ???
  4. Profit.
     

I like this process for lots of reasons: 

Writing out your reasoning is often helpful

My job involves helping people through difficult decisions, and I often find that a lot of the value I provide comes from asking people questions which make considerations and tradeoffs salient to them. Trying to write out how you're weighing the various factors that are going into your decision is a good way of working out which ones actually matter to you, and how much. You may even get some big wins for free, for example realising that two options might not be mutually exclusive, or that one of the things you're trying to achieve stems from a preference that you don't, on reflection, endorse.

People often ask good questions.

Even when you're doing the above well, other people trying to understand your reasoning will ask clarifying questions. Responding to these will often cause you to better understand your own thought process, and might identify blindspots in your current thinking.

People often give good advice.

To some extent this is the obvious reason to go through this process. I'm listing it here mostly to highlight that this clearly is a big source of value, though it's not clear that it's bigger than the previous two.

It's fun.

I find it really interesting, and fairly easy, to comment on decision documents for people I know well, and I know many people feel the same. Also, they often say thank you, or that you helped, and that's nice too!

 

What does doing this well look like?

Use the method at all!

If you're facing a decision and haven't done this, I would much rather you just went and followed the steps at the start before reading further. Don't let perfect be the enemy of good.

Be concise, but complete.

People are more likely to read shorter documents, and it will take them less time to do so, but leaving out a consideration or piece of information that is an important factor to you will cost people more time and/or make their advice worse in the long run. I think a reasonable method to try first is brain-dumping everything into the document, then editing for clarity before you share it.

I've had a few people share Excel models with me. In one case I ended up finding a fairly severe mistake in their model, which was helpful, but overall I think this is a bad strategy. Unless you put a ton of detail in comments on different cells (which then makes the document a nightmare to read), you're probably missing a lot of reasoning/detail if this is the format you go with.

Let people know what you're hoping to get from them

Often it can be difficult to know how honest to be when giving feedback to a friend, especially if you're not super close and/or haven't already established norms for how much honesty/criticism to expect. It might be the case that you don't have a clear view of what you're uncertain about, and roughly just want an overall 'sense check', but it also might be that there's a particular part of the decision you're hoping for feedback on, and everything else is just context which seems relevant but is already fixed. Consider putting clear instructions for commenters early in the document to help with this.

Put some thought into who to ask for comments.

'Smart, kind people I know' is a perfectly reasonable start, but after that it might help to ask yourself what specifically you expect people to help with. There can often be pretty sharply diminishing returns to sharing with too many people, and having a clear idea in mind for what people are adding can help prevent this. Here are a few ideas on who you might want to ask and why they'd be particularly helpful. The list is neither mutually exclusive nor collectively exhaustive.

  • People who know you well. They can often give a good overall take, bring up considerations you might be missing but that do matter to you, and call you out on bullshit motivated reasoning you might not have noticed.
  • People with specific expertise in the decision. In this case it can be good to ask them a specific question, or for a take on a specific aspect of the decision, and make it clear that just answering that is fine, though they might be welcome to comment on the rest.
  • People who have a different perspective to you. This can (but doesn't have to) include non-EAs. This community is great, but it certainly isn't the only source of good advice and guidance that exists, and sharing a google doc and asking for comments isn't that weird a favour to ask a friend for.
  • People whose reasoning you particularly trust, and/or who you know won't mince their words. You can give them express permission to be pessimistic, or skeptical.
  • People who like you and will be supportive. Encouragement actually really matters for some people! I'm one of them!

Should you go and make a document right now?

Stop reading and do it...


Appreciation.

Thanks to Aaron, whose comment on a document of the form described above prompted this piece, and Luisa, for some incredibly valuable advice about how to interpret that comment. Thanks also to Emma and Chana for helpful comments on a draft of this post.


 

Comments (13)



Thanks for this post @alex lawsen. I continually revisit it as inspiration and to remember the usefulness of this process when I am making hard decisions, especially for my career.

From your blog, I know you are a big user of LLMs. I was wondering if you had successfully used them to replace, or complement, this process? When I feed one my Google Doc, I find the output is too scattergun or vague to be useful, compared to sharing the same Doc with friends.

If you've had success using LLMs, would you please share the prompts and models that have worked well for you?

Quick way to create a Google Doc—browse to this web address:

doc.new

I've found that having a quick way to create new docs makes me more likely to do so.

(To set your typing focus to the browser address bar, press CMD+L or CTRL+L)

To make things even faster: create a bookmark for "doc.new" and give it the name "nd". Then you can just type "nd" and press "enter".

It's funny, I've done this so many times (including commenting on others' docs of this sort) that I sort-of forgot that not everyone does this regularly.

Yes, effective altruism has many unusual norms and practices like this, which ultimately derive from our focus on impact. The benefits of receiving advice often outweigh the costs of giving it, so it makes sense for an impact-focused community to have this kind of norm.

It's also true that it's easy to forget that these norms are unusual because you're so used to them.

re: writing it out:
I've long been a proponent of what I'm temporarily calling the CoILS (Counterfactuality, Implementation, Linkage, Significance) framework for breaking down pros and cons into smaller analytical pieces, primarily because:

  1. At the heuristic level:
    1. It seems that breaking down complex questions into smaller pieces is generally helpful if the process does not leave out any considerations and does not involve significant duplication (and I believe that the four considerations in the framework are indeed collectively exhaustive and mostly mutually exclusive for any conceivable advantage/disadvantage and its associated decision, as I explain in more detail in the post)
    2. The EA community has glommed onto the ITN heuristic as useful (despite its flaws), and the ITN heuristic bears a lot of resemblance to this framework (as I explain in more detail in the post, including how CoILS does not share some of the main flaws in the ITN heuristic).
  2. At the specific-effect level:
    1. It seems helpful for checking some of your key assumptions, especially when you're already biased in favor of believing some argument;
    2. It standardizes/labelizes certain concepts (which seems helpful for various reasons).

Applying it is not too difficult (although one can definitely get better with practice): for any given advantage/disadvantage for a decision (e.g., "this plan leads to X which is good/bad"), one asks questions such as:

  1. Would X occur (to a similar extent) without the plan? (counterfactuality)
  2. What would the plan actually involve doing/what can actually be implemented? (implementation)
  3. Would X occur if the plan is implemented in a given way? (linkage)
  4. How morally significant is it that X occurs? (significance)

For what it's worth, I really liked the chunk at the bottom of this comment (starting at "Applying it is not..."), and it made it feel like a system I'd want to use, but when I clicked on your link to the original piece I bounced off of it because of the length and details. Might just be an unvirtuous thing about me, and possibly the subtleties are really important to doing this well, but I could imagine this having more reach if it were simplified and shortened.

Well, it was worth a shot, but it doesn't seem to have gotten any more traction in a simplified/shortened post, unfortunately.

Thanks for the reply/feedback! I've realized that the length of the article is probably a problem, despite my efforts to also include a short, standalone summary up front. I just thought it would be important to include a lot of content in the article, especially since I feel like it makes some perhaps-ambitious claims (e.g., about the four components being collectively exhaustive, about the framework being useful for decision analysis). More generally, I was seeking to lay out a framework for decision analysis that could compete with/replace the ITN heuristic (at least with regard to specific decision analysis vs. broad "cause area prioritization")...

But yeah, it has one of the highest bounce rates of all my posts, so I figure I probably should have done it differently.

And it was also my second attempt at writing a post on that concept (i.e., my first attempt at improving on the original post), and it did even worse than my first attempt in Karma terms, so my motivation to try again has been pretty low (especially since only one person ever even engaged with the idea, and it definitely felt like it was out of pity).  

That being said, I suppose I could try again to just write a simple (<750 words) summary that largely resembles my comment above, albeit with the order flipped (explanation first, justification second).

Thank you so much for this!! Incredibly helpful and inspired This Post - feedback appreciated!!

Here's a framework I use for A or B decisions. There are 3 scenarios:

  1. One is clearly better than the other.
  2. They are both about the same.
  3. I'm not sure; more data is needed.

1 & 2 are easy. In the first case, choose the better one. In the second, choose the one that your gut prefers (or use the "flip a coin" trick: if you notice any resistance to the "winner", that's a great reason to go with the "loser").

It's the third case that's hard. It requires more research or more analysis. But here's the thing: there are costs to doing this work. You have to decide whether the opportunity cost of delving in is worth the increased odds of making the better choice.

My experience shows that people who lean heavily on logic and rationality (like myself 😁) tend to overweight "getting it right" at the expense of making a decision and moving on. Switching costs are often lower than you think, and failing fast is actually a great outcome. Unless you are sending a rover to Mars, where there is literally no opportunity to "fix it in post-", I suggest you do a nominal amount of research and analysis, then make a decision and move on to other things in your life. Revisit as needed.

Note that A or B decisions are often false dichotomies, and you may be overlooking alternative options that combine the advantages. So narrowing in on given options too soon may sometimes be a mistake, and it can be useful to try to come up with more alternatives.

Also, in my experience many of the decisions I get stuck with fall somewhere between 2 and 3: I know their implications and have most of the information, but the results differ on various dimensions. E.g. option 1 is safe and somewhat impactful, while option 2 is potentially higher impact but much riskier and comes at the cost of disappointing somebody you care about. I'm not sure to what degree a decision doc is suitable for these types of problems in particular - but I've at least had a few cases where friends came up with some helpful way to reframe the situation that led to a valuable insight.

(But I should mention I definitely see your point that many EAs may be overthinking some of their decisions - though even then I personally wouldn't feel comfortable just flipping a coin in cases of value conflict. But in many other cases I agree that getting to any decision quickly rather than getting stuck in decision paralysis is a good approach.)

If you are planning to work on a project with multiple people (this could be something like running an intro fellowship, or starting an x-risk initiative at your university), you should probably spend at least 5% of your time on finding your top-level goal and thinking about any risks associated with your project, i.e., writing a meta strategy document. I did this when I founded CERI, and it's been one of the most useful documents we have; we still use it for strategy meetings etc.
