
Making good decisions under significant uncertainty is a central skill for anyone trying to maximise the positive impact of their work or donations. It’s something we look for at AIM, both in our Charity Entrepreneurship program and when hiring new staff, such as for the four roles we are currently advertising.

The good news is that decision-making is a skill you can improve. One framework I’ve found useful for improving my own decision-making is thinking about decisions as a weighted factor model (WFM).

I think about a lot of decisions in terms of WFMs: I ask myself what would make a great model, versus a bad one, for thinking about this issue. A good WFM, and thus a good decision, often comes down to two major factors in which I see a lot of variance:

  • Number of meaningfully different heuristics considered (the criteria in the columns of a WFM)
  • Number of meaningfully different solutions considered (the options to be evaluated in the rows of a WFM)

Example weighted factor model for selecting a country in which to launch a pilot, from one of our charities
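To make the structure concrete, here is a minimal sketch of a WFM in Python. All criteria names, weights, and scores are illustrative placeholders, not taken from a real AIM model:

```python
# A minimal weighted factor model: options are rows, criteria are columns.
# All criteria names, weights, and scores are illustrative placeholders.

# Importance weight for each criterion (summing to 1.0 for readability).
weights = {
    "direct_value": 0.4,
    "time_cost": 0.3,      # higher score = lower cost
    "flow_through": 0.2,
    "counterfactual": 0.1,
}

# Each option's score on each criterion, on a 1-5 scale (higher is better).
options = {
    "attend_in_person": {"direct_value": 5, "time_cost": 1, "flow_through": 4, "counterfactual": 2},
    "send_colleague":   {"direct_value": 3, "time_cost": 4, "flow_through": 3, "counterfactual": 4},
    "remote_calls":     {"direct_value": 2, "time_cost": 5, "flow_through": 2, "counterfactual": 5},
}

def weighted_score(scores):
    """Weighted sum of one option's criterion scores."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank options by total weighted score, best first.
for name in sorted(options, key=lambda o: weighted_score(options[o]), reverse=True):
    print(f"{name}: {weighted_score(options[name]):.2f}")
```

When totals come out this close together, the choice of criteria and weights matters far more than the arithmetic, which is exactly why the two factors above carry so much of the variance.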

Considering many heuristics

One of the skills I value most is multi-heuristic decision-making: making a decision using multiple different rules or heuristics rather than a single framework. In a weighted factor model, this shows up directly as the number of columns, and thereby distinct criteria, included. Modelled more simply, it is the difference between a short pros and cons list and a more complex one. Let’s look at an example.

Example decision: Should Joey go to the (entirely fictional) Impactful Philanthropy Conference (IPC)?

Weak model: few heuristics

Pros
  • It’s a fairly important event
  • I get significant value from events like this
  • Few people with similar views get invited, so I could have a lot of leverage

Cons
  • It will cost significant time
  • It will cost significant money
  • Maybe someone else could cover the same ground

Strong model: several heuristics

Pros

Direct
  • It’s a fairly important event
  • I get significant value from events like this
  • Historically, these types of events have been worth the use of time for AIM

Flow-through effects for AIM
  • It likely makes AIM seem more cooperative
  • Probably positive for our relationships with key actors

Flow-through effects for IPC
  • Few people with similar views get invited, so I could have a lot of leverage
  • If we don’t go, even fewer people who share similar priorities may get invited in the future - how do we expect this to change year to year?

Counterfactuals
  • More people from our network got invited, which could increase the marginal value?
  • How many people going would it be 1) useful to chat to, and 2) difficult to set up a remote call with instead?

Cons

Direct
  • It will cost significant time
  • It will cost significant money
  • Last year, it seemed like there were a lot of promising leads, but there was a high flake rate

Flow-through effects for AIM
  • It may associate AIM with views we don’t support

Flow-through effects for IPC
  • This area is a less important space for us now
  • AIM reputation concerns

Counterfactuals
  • More people from our network got invited, so the marginal value is lower?
  • Maybe X person could cover the same ground
  • Y person is going regardless, maybe covering the same ground

Key factors / Cruxes
  1. Can X person cover enough of the same ground?
  2. Do we generally want to move towards or away from this sphere of actors?
  3. How many people going would it be 1) useful to chat to, and 2) difficult to set up a remote call with instead?

Of course, added depth in decision-making has trade-offs. The second table takes more time to put together, but it makes a good decision much more likely: we’ve identified meaningful alternative avenues for achieving key goals and considered which aspects of an event’s ‘importance’ might matter most.

Even for fairly trivial decisions, I typically try to think about them from 3-5 angles instead of just 1-2. With practice, this becomes quick enough that weighing a larger number of considerations takes very little extra time.

For bigger decisions, it can be a good idea to assign clear weightings of importance, perhaps putting the most important factors into a WFM spreadsheet.
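One benefit of making weightings explicit is that you can check how sensitive the final ranking is to them. Here is a hedged sketch of such a check, with purely illustrative criteria and numbers:

```python
# Hypothetical sensitivity check for a WFM: does the top-ranked option change
# when one criterion's weight shifts slightly? All numbers are illustrative.

weights = {"impact": 0.5, "cost": 0.3, "feasibility": 0.2}
options = {
    "option_a": {"impact": 4, "cost": 2, "feasibility": 5},
    "option_b": {"impact": 5, "cost": 3, "feasibility": 2},
}

def top_option(w):
    """Name of the highest-scoring option under weights w."""
    return max(options, key=lambda o: sum(w[c] * s for c, s in options[o].items()))

baseline = top_option(weights)
print(f"Baseline top option: {baseline}")
for criterion in weights:
    for delta in (-0.1, 0.1):
        perturbed = {**weights, criterion: max(0.0, weights[criterion] + delta)}
        if top_option(perturbed) != baseline:
            print(f"Flips to {top_option(perturbed)} if {criterion} shifts {delta:+.1f}")
```

If a small shift in one weight flips the top option, that weighting is itself a crux worth investigating before committing to a decision.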

Generating more options: numbers and divergence

The other area in which I see large differences between stronger and weaker decision-makers is how many solutions are brainstormed before one is chosen. It is easy to anchor hard on the first idea, or to come up with only a couple of solutions. When I prompt people to come up with 10 solutions instead, they often generate a few that are better than the first 1-2 that came to mind.

Crucially, what matters here is producing ideas for meaningfully different solutions. Thinking of 10 solutions is a good bar to aim for, but producing 3 very different, divergent ways of looking at the problem might actually be more valuable.

Example solution brainstorm: Should Joey go to the Impactful Philanthropy Conference?

A low number of ideas + convergent solutions

  1. Yes
  2. No

High number of ideas + convergent solutions

  1. Joey goes
    1. Yes
    2. For part of it
    3. Calls in remotely
    4. No
  2. Someone else goes
    1. Person X
    2. Person Y
    3. Person Z
    4. Other

High number of ideas + divergent solutions

  1. Joey goes
    1. Yes
    2. For part of it
    3. Calls in
    4. Go but without sharing publicly
    5. No
  2. Someone else goes
    1. X
    2. Y
    3. A (already going?)
    4. B (already in the area?)
    5. Other
  3. See if there are other worthwhile events happening around the same time to make the travel and time investment more worthwhile
    1. Animal welfare events
    2. For-profit founder events
    3. Global health events
    4. Something else in the same city/part of the world (sync with a personal visit?)
  4. Find ways to affect the key ideas discussed at the Impactful Philanthropy Conference without going to the event
    1. Set up remote calls with people attending in advance of the conference
    2. Write a few strategy blog posts to send to key members just before they go so they talk about connected topics
    3. Have a preemptive call with a couple of people, including those who might go representing AIM
  5. Find alternatives
    1. Set up a competing event?
    2. See if there is a similar event in London
    3. Invite some key people to London to have similar conversations

Combining a high number of ideas and high divergence in approach generates solutions that are far more likely to be useful. In this case, sending someone else from the team for part of the event plus setting up a few calls with key attendees both before and after the conference will likely lead to most of the same value with reduced costs.

Bringing it all together

3x3 decision-making

Here’s a simple, practical model. Before settling on a decision, check that you have brainstormed at least three different angles of attack, each with three divergent variations. Then choose the top three options of those nine and compare them on at least nine of their pros and cons.

It’s worth being clear that this still won’t guarantee you make the best possible decision. I’d give about 66% odds that the solution at the end of this process will be the best possible option in hindsight. However, this is great compared to the 20% odds I’d put on you picking the best option with a more typical, less divergent approach to decision-making.

With practice, this whole process should take ~30 minutes or less. It can help to do this with someone else a couple of times to get better at coming up with ideas and divergent solutions.

3x3 decision-making template (30-60 minute version)

You can also make a copy of a spreadsheet version of this framework here.

Brainstorm
  • Angle of attack 1
    • Variation 1
    • Variation 2
    • Variation 3
  • Angle of attack 2
    • Variation 1
    • Variation 2
    • Variation 3
  • Angle of attack 3
    • Variation 1
    • Variation 2
    • Variation 3
Top option 1
  • Pros
    • 1
    • 2
    • 3
    • 4
  • Cons
    • 1
    • 2
    • 3
    • 4
    • 5

Top option 2
  • Pros
    • 1
    • 2
    • 3
    • 4
    • 5
  • Cons
    • 1
    • 2
    • 3
    • 4

Top option 3
  • Pros
    • 1
    • 2
    • 3
    • 4
  • Cons
    • 1
    • 2
    • 3
    • 4
    • 5
Key factors / Cruxes
  1. Key crux 1
  2. Key crux 2
  3. Key crux 3
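For anyone who prefers a plain text file to a spreadsheet, the same template can be kept as a simple data structure. This is just a hypothetical rendering of the template above; every placeholder string would be replaced when working through a real decision:

```python
# A hypothetical plain-code rendering of the 3x3 template above. Every
# placeholder string below would be replaced for a real decision.
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    pros: list[str] = field(default_factory=list)
    cons: list[str] = field(default_factory=list)

# Brainstorm: three angles of attack, three variations each.
brainstorm = {
    "Angle of attack 1": ["Variation 1", "Variation 2", "Variation 3"],
    "Angle of attack 2": ["Variation 1", "Variation 2", "Variation 3"],
    "Angle of attack 3": ["Variation 1", "Variation 2", "Variation 3"],
}

top_options = [Option("Top option 1"), Option("Top option 2"), Option("Top option 3")]
cruxes = ["Key crux 1", "Key crux 2", "Key crux 3"]

# Sanity checks mirroring the 3x3 rule: three angles, three variations each,
# and at least nine pros/cons in total across the top three options.
assert len(brainstorm) >= 3 and all(len(v) >= 3 for v in brainstorm.values())
total = sum(len(o.pros) + len(o.cons) for o in top_options)
print(f"Pros/cons listed: {total} (aim for at least 9); cruxes: {len(cruxes)}")
```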

Why does this matter?

Trust and buy-in are largely earned by demonstrating great decision-making. As a manager, when I see a team member demonstrate great decision-making, I’m likely to give them more autonomy and trust in their decisions. As the leader of an organisation, it has been crucial for maintaining and extending buy-in from the team at AIM. Similar lessons apply in other domains: grantmakers tend to give small grants until they see how an organisation makes decisions about spending and prioritisation with that money, and hiring managers will likely probe your decision-making approach and skills as a core part of assessing your fit for roles you apply to.

I encourage you to pick an important decision you might have in the background, or perhaps be procrastinating on, and test this framework for yourself.

Comments



Thanks, Joey and Ben.

  • Number of meaningfully different heuristics considered (the criteria in the columns of a WFM)
  • Number of meaningfully different solutions considered (the options to be evaluated in the rows of a WFM)

For cost-effectiveness analyses (CEAs) and a WFM covering the same variables and options, would the CEAs be preferable? I think so. For example, I like that GiveWell uses cost-effectiveness analyses of the most promising countries instead of a WFM.

I think it depends quite a bit on the quality of the CEA. I would take a sub-5-hour WFM as more useful than a sub-5-hour CEA every time. At 50 hours, it becomes a lot less clear. CEAs are much more error-prone than WFMs and more punishing of those errors, hence the risk of weaker CEAs. We have more writing on WFMs and CEAs that goes into depth about their comparative strengths and weaknesses.

I also think the assessment of GW as CEA-focused is a bit misleading. They have four criteria, two of which they do not explicitly model in their CEA, and many blog posts express their skepticism about taking CEAs literally (my favorite of these, though old).

Thanks for clarifying, Joey!

I think it depends quite a bit on the quality of the CEA. I would take a sub-5-hour WFM as more useful than a sub-5-hour CEA every time. At 50 hours, it becomes a lot less clear. CEAs are much more error-prone than WFMs and more punishing of those errors, hence the risk of weaker CEAs.

I agree the value of CEAs relative to a WFM increases with time invested.

I also think the assessment of GW as CEA-focused is a bit misleading. They have four criteria, two of which they do not explicitly model in their CEA, and many blog posts express their skepticism about taking CEAs literally (my favorite of these, though old).

Elie Hassenfeld (GiveWell's co-founder and CEO) mentioned on the Clearer Thinking podcast that (emphasis mine):

GiveWell cost-effectiveness estimates are not the only input into our decisions to fund malaria programs and deworming programs, there are some other factors, but they're certainly 80% plus of the case.

Isabel Arjmand (GiveWell's special projects officer at the time) also said (Isabel's emphasis):

The numerical cost-effectiveness estimate in the spreadsheet is nearly always the most important factor in our recommendations, but not the only factor. That is, we don’t solely rely on our spreadsheet-based analysis of cost-effectiveness when making grants.

Executive summary: Using a weighted factor model (WFM) approach to decision-making, which considers multiple heuristics and generates diverse solution options, can significantly improve the quality of decisions under uncertainty.

Key points:

  1. Good decision-making often involves considering many different heuristics (criteria) and generating multiple diverse solution options.
  2. The "3x3 decision making" model recommends brainstorming at least 3 angles of attack and 3 divergent solutions, then comparing the top 3 options on at least 9 pros and cons.
  3. This approach increases the odds of selecting the best option from about 20% to 66%, though it's not guaranteed to always find the optimal solution.
  4. With practice, this process can be completed in about 30 minutes for most decisions.
  5. Demonstrating strong decision-making skills is crucial for earning trust and autonomy in professional settings, including leadership roles and grant applications.
  6. The post provides a template and spreadsheet for implementing this decision-making framework.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

You raise an excellent point about the importance of multi-heuristic decision-making, especially in uncertain situations. The Weighted Factor Model (WFM) you described really showcases how depth in our analysis can lead to better outcomes. It’s intriguing how expanding our criteria and solutions can help mitigate the risk of anchoring on initial ideas.

I appreciate your emphasis on the trade-offs involved in decision-making depth. Finding that balance between thoroughness and efficiency is crucial, especially when time is limited. Your suggestion to brainstorm a high number of divergent solutions is a great strategy to ensure we don’t overlook valuable options. I’d love to hear more about how you’ve seen teams implement this in practice—what specific techniques have been most effective in encouraging that kind of expansive thinking?
