I'm curious to hear your thoughts on the rough model described below. The text is taken from our one-pager on organising the EAGx Netherlands conference in late June. 

EDIT: By implying below that, for example, a social entrepreneur should learn about population ethics from an Oxford professor to increase impact (and that the professor can learn more about organisational processes and personal effectiveness), I don't mean to say that they should both become generalists. Rather, I mean to convey that the EA network enables people here to divide labour at particular decision levels and then pass on tasks and learned information to each other through collaborations, reciprocal favours and payments.
In a similar vein, I think it makes sense for CEA's Community Team to specialise in engaging existing community members on high-level EA concepts at weekend events, and for the Local Effective Altruism Network (LEAN) to help local groups get active and provide them with ICT support. However, I can think of 6 past instances where it seems that either CEA or LEAN could have avoided a mistake by incorporating the thinking of the other party at the decision levels where it was stronger.


"EA Netherlands’ focus in 2018 is on building up a tight-knit and active core group of individuals that are exceptionally capable at doing good. We assume that ‘capacity to do good’ is roughly log-normal distributed, and that an individual's position on this curve results from effective traits acting as multipliers. We’ve found this ‘values-to-actions chain’ useful for decomposing it:

                      capacity ≈ values × epistemology × causes × strategies × systems × actions

That is, capacity to do good increases with the rigour of chained decisions – from higher meta-levels (e.g. on moral uncertainty and crucial considerations) to getting things done on the ground.
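
(An illustrative aside, not part of the one-pager: a minimal Python sketch, under assumptions of my own choosing, of why independent multiplicative traits give a right-skewed, roughly log-normal distribution of capacity.)

```python
# Minimal simulation (not from the one-pager): products of independent
# multiplicative traits are right-skewed and roughly log-normal, because the
# log of a product is a sum of independent terms.
import numpy as np

rng = np.random.default_rng(0)
N_LEVELS = 6  # values, epistemology, causes, strategies, systems, actions

# Hypothetical assumption: each level contributes an independent multiplier
# between 0.5x and 2.0x.
multipliers = rng.uniform(0.5, 2.0, size=(100_000, N_LEVELS))
capacity = multipliers.prod(axis=1)

print(f"median capacity:   {np.median(capacity):.2f}")
print(f"mean capacity:     {capacity.mean():.2f}   # mean > median => right skew")
print(f"top 1% threshold:  {np.quantile(capacity, 0.99):.2f}")
```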


However, the capabilities of individuals in the Dutch EA community – and the social networks they are embedded in – tend to be unevenly distributed across these levels. On one end, people at LessWrong meetups seem relatively strong at deliberating about which cause areas are promising to them (AI alignment, mental health, etc.), but this often results in intellectual banter rather than concrete next actions. The academics and student EA group leaders we’re in touch with face similar problems. On the other end, some of the young professionals in our community (graduates from university colleges, social entrepreneurs, etc.), as well as philanthropists and business leaders (through Effective Giving), have impressive track records in scaling organisations, but haven’t yet deliberated much on which domain to focus their entrepreneurial efforts on.


Our tentative opinion is that the individuals who build their capacity to do good the fastest are those most capable of rationally correcting and propagating their decisions through a broad set of levels (since a person’s corrigibility at a given level of abstraction determines how fast they update their beliefs there with feedback)."
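
(Another illustrative aside, not part of the quoted text: a toy sketch of the corrigibility claim, with made-up numbers. Because capacity is a product of per-level quality, someone who updates a little at every decision level can overtake someone who updates strongly at only two levels.)

```python
# Toy sketch (assumptions mine, not the post's): 'corrigibility' at a level is
# the fraction of remaining error corrected per round of feedback; capacity is
# the product of per-level decision quality, as in the values-to-actions chain.
import numpy as np

N_LEVELS = 6  # values, epistemology, causes, strategies, systems, actions

def capacity_after(rounds: int, corrigibility: np.ndarray,
                   start_quality: float = 0.3) -> float:
    """Each round, quality at every level moves toward 1.0 at its corrigibility rate."""
    quality = np.full(N_LEVELS, start_quality)
    for _ in range(rounds):
        quality += corrigibility * (1.0 - quality)
    return float(quality.prod())

broad = np.full(N_LEVELS, 0.2)                     # updates a little everywhere
narrow = np.array([0.6, 0.6, 0.0, 0.0, 0.0, 0.0])  # updates a lot, but only at two levels

for rounds in (0, 5, 20):
    print(f"after {rounds:>2} rounds: broad={capacity_after(rounds, broad):.3f}, "
          f"narrow={capacity_after(rounds, narrow):.3f}")
```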

Comments (5)



Could you be a little more specific about the levels/traits you name? I'm interpreting them roughly as follows:

  • Values: "how close are they to the moral truth or our current understanding of it" (replace moral truth with whatever you want values to approximate).
  • Epistemology: how well do people respond to new and relevant information?
  • Causes: how effective are the causes in comparison to other causes?
  • Strategies: how well are strategies chosen within those causes?
  • Systems: how well are the actors embedded in a supportive and complementary system?
  • Actions: how well are the strategies executed?

I think a rough categorisation of these 6 traits would be Prioritisation (Values, Epistemology, Causes) & Execution (Strategies, Systems, Actions), and I suppose you'd expect a stronger correlation within these two branches than between?
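
(Illustrative aside, not from the thread: one way to make the within-branch vs. between-branch correlation hypothesis concrete is to simulate trait scores that share a branch-level factor; the structure and numbers below are assumptions, not data.)

```python
# Toy simulation (numbers and structure are my assumptions): give the three
# Prioritisation traits a shared factor and the three Execution traits another,
# then check that within-branch correlations exceed between-branch correlations.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
prioritisation_factor = rng.normal(size=n)
execution_factor = rng.normal(size=n)

def trait(factor: np.ndarray, loading: float = 0.7) -> np.ndarray:
    """A trait score = shared branch factor plus independent noise."""
    return loading * factor + np.sqrt(1.0 - loading**2) * rng.normal(size=n)

# columns 0-2: values, epistemology, causes; columns 3-5: strategies, systems, actions
traits = np.column_stack([trait(prioritisation_factor) for _ in range(3)] +
                         [trait(execution_factor) for _ in range(3)])
corr = np.corrcoef(traits, rowvar=False)

within = np.mean([corr[i, j] for i in range(3) for j in range(i + 1, 3)] +
                 [corr[i, j] for i in range(3, 6) for j in range(i + 1, 6)])
between = np.mean([corr[i, j] for i in range(3) for j in range(3, 6)])
print(f"mean within-branch correlation:  {within:.2f}")   # ~0.5 under these assumptions
print(f"mean between-branch correlation: {between:.2f}")  # ~0.0
```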

Yeah, I more or less agree with your interpretations.

The number (as well as the scope) of decision levels is somewhat arbitrary, because they can be split further. For example:

  • Values: meta-ethics, normative ethics
  • Epistemology: defining knowledge, approaches to acquiring it (Bayes, Occam's razor...), applications (scientific method, crucial considerations...)
  • Causes: the domains can be made as narrow or wide as seems useful for prioritising
  • Strategies: career path, business plan, theory of change...
  • Systems: organisational structure, workflow, to-do list...
  • Actions: execute intention ("talk with Jane"), actuate ("twitch vocal cords")

(Also, there are weird interdependencies here. E.g. if you change the cause area you work on, the career skills acquired before might not be as effective there. Therefore, the multiplier changes. I'm assuming that they tend to be fungible enough for the model still to be useful.)
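
(Illustrative aside: a toy sketch of the interdependency caveat, with made-up transfer rates. Switching cause areas rescales the 'strategies' multiplier, so a higher cause multiplier can be partly offset by reduced skill transfer.)

```python
# Toy sketch (all numbers hypothetical): the 'strategies' multiplier was built
# for one cause area, so switching causes applies a skill-transfer discount.
SKILL_TRANSFER = {
    ("global health", "global health"): 1.0,   # skills used where they were built
    ("global health", "AI alignment"): 0.6,    # partial transfer to a new domain
}

def capacity(values, epistemology, cause, strategies, systems, actions,
             built_for, working_on):
    transfer = SKILL_TRANSFER.get((built_for, working_on), 0.5)
    return values * epistemology * cause * (strategies * transfer) * systems * actions

# Same person, same skills: even if the new cause has a higher multiplier (3.0 vs 2.0),
# the reduced transfer of career-specific strategies can offset the gain.
print(capacity(1.2, 1.1, 2.0, 1.5, 1.0, 1.0, "global health", "global health"))  # ~3.96
print(capacity(1.2, 1.1, 3.0, 1.5, 1.0, 1.0, "global health", "AI alignment"))   # ~3.56
```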

Your two categories of Prioritisation and Execution seem fitting. Perhaps some people lean more towards wanting to see concrete results, and others more towards wanting to know what results they want to get?

Does anyone disagree with the hypothesis that individuals – especially newcomers – in the international EA community tend to lean one way or the other in terms of attention spent and the rigour with which they make decisions?

To clarify: by implying that, for example, a social entrepreneur should learn about population ethics from an Oxford professor to increase impact (and that the professor can learn more about organisational processes and personal effectiveness), I don't mean to say that they should both become generalists.

Rather, I mean to convey that the EA network enables people here to divide labour at particular decision levels and then pass on tasks and learned information to each other through collaborations, reciprocal favours and payments.

In a similar vein, I think it makes sense for CEA's Community Team to specialise in engaging existing community members on high-level EA concepts at weekend events, and for the Local Effective Altruism Network (LEAN) to help local groups get active and provide them with ICT support.

However, I can think of 6 past instances where it seems that either CEA or LEAN could have avoided a mistake by incorporating the thinking of the other party at the decision levels where it was stronger.

I think it would be better to include this in the OP.

Will do!
