
Why do we focus so heavily on cause prioritization within the EA movement?

Because all that’s standing between us and the grey goo is an altruist with a spreadsheet! It’s a bird, it’s a plane, it’s -- Project Man(agement).

And that’s what EA boils down to, in the end. EA is a push for better altruistic project management. Institutions and people fill different roles in the movement, and everybody does a little bit of everything. But in theory, the institutional flow of influence is something like this:

1. Foundational vision: Establishing a mission statement and movement culture (e.g. Giving What We Can and Toby Ord coining "Effective Altruism")

2. Intellectual underpinnings: Making the vision more nuanced and grounding it in academic theory (e.g. Global Priorities Institute)

3. Choosing a cause: Using theory to establish broad, applied themes, such as "AI safety" or "good governance" (e.g. Future of Humanity Institute)

4. Meta-level project management: Comparing object-level strategies and institutions working on a given cause (e.g. OpenPhil, GiveWell, 80,000 Hours)

5. Object-level project management: Planning, evaluating, and coordinating direct work with tangible outcomes after committing to a strategy (e.g. OpenAI)

6. Direct work: Executing concrete projects, including EA movement advocacy (e.g. Paul Christiano)

7. Personal development: Education, advice, enhancing wellbeing (e.g. Center for Applied Rationality)

I think this is a useful framework for thinking about the roles of institutions in the movement. And many EAs seem to have identified with this perspective. To them, success at EA means getting a job at an EA institution, or maybe founding one. And that’s great. Nobody’s a genius in all ways. Heck, I’ve managed to get a fair amount done without being a genius at all! Getting a job at an EA org means you don’t have to figure it all out for yourself.

The problem is that there just aren’t enough EA jobs. So EAs are stuck with earning to give, or doing the really hard work of writing EA forum articles.

So really, why all the cause prioritization talk?

Especially among those of us working outside of EA institutions?

For most people associated with the EA movement, the most important, tractable, and neglected strategy at the personal level isn't "getting an EA job." It's skill-building. They need to shape themselves into the kind of person capable of either getting hired for an EA job, or creating one for themselves (and maybe for others).

80,000 Hours provides advice on this, from what sorts of degrees to earn, to how to build career capital, to recommendations that people focus on acquiring transferable and quantitative skills.

Is this what we’re talking about online? Let’s have a look at the last 20 posts each on the Forum, Reddit sub, and Facebook group. My classifications are intuitive, rough, and based only on a glance at each post. However, it looks like only about 5% of posts can be seen as primarily about individual skill building. The vast majority are about engaging with the EA foundational vision on an institutional level.
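
For the curious, the arithmetic behind that figure is simple enough to sketch in a few lines of Python. To be clear, the per-source counts below are illustrative placeholders consistent with the ~5% figure, not my raw tallies:

```python
# Hypothetical sketch of the tally behind the ~5% figure above.
# The per-source counts are illustrative placeholders, not raw data.
sampled = {
    "EA Forum": {"skill_building": 1, "other": 19},
    "Reddit sub": {"skill_building": 1, "other": 19},
    "Facebook group": {"skill_building": 1, "other": 19},
}

total = sum(sum(counts.values()) for counts in sampled.values())
skill = sum(counts["skill_building"] for counts in sampled.values())

print(f"{skill}/{total} posts ({skill / total:.0%}) primarily about skill building")
# -> 3/60 posts (5%) primarily about skill building
```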

Of course, the intellectual work of writing and reading these posts is a form of personal development. And I can think of a few reasons why overt personal development writing isn’t well-represented in this sample:

  • We talk about this in other community-linked forums that aren’t represented in my sample
  • People have plenty of external resources and motivation for personal development; EA is filling a different niche
  • Personal development relies on too many personal factors to make for a useful collective dialog topic
  • Personal development is difficult to make progress on in general
  • The EA movement selects for highly effective people whose main problem is making sure they’re applying their energies to the right problem

There’s probably some truth in each of these, and I might be missing some. Yet Tyler Cowen routinely calls for researching and teaching the habits of highly effective people. The business world publishes lots of books on how to be a more competent worker. Maybe we need to be focusing more on these questions, as a community. I tentatively think that this is a neglected topic in our movement.

EA is having considerable success in motivating people to see their career as a way to help the world. It’s reaching people who may have been altruistically oriented all their lives, but who hadn’t seen just how important their career could be in making a difference. They discover EA, suddenly feel as though they should have been working a lot harder for a lot longer on the right projects, and then find that they’re not cut out for any of the highly competitive EA jobs and feel stuck.

So why aren’t they generating a large volume of public dialog about how to develop their skills? My guess is that they’re mirroring the conversation they see.

Why we should talk more about personal development

There are a few reasons for us to discuss this topic more, even though there is already plenty of guidance on it outside of EA:

  • The primary skills necessary for a competent EA worker might be different from what makes a person effective in the world of business
  • The people we have in the EA movement might have a different distribution of strengths and weaknesses
  • More robust dialog on personal development within EA might be a more effective use of grassroots energy than institutional development writing
  • Focusing more on personal development might address the common critique of EA being elitist or disappointing
  • Our altruistic motivations and commitment to intellectual rigor might make us unusually good at this

I’ll conclude by listing some fundamental efforts that we should explore in this area.

Next steps

  1. Review personal development literature through an EA/rationalist lens
  2. Link to and discuss academic work on personal development
  3. Catalog pre-existing efforts and resources for personal development, both internal and external to the EA community
  4. Classify personal development skills and the problems they address
  5. Create assessment tools to help individuals determine which skills they most need to work on (see the sketch after this list)
  6. Interview highly effective EAs with a track record of success to understand their approach, what’s made them successful, and how they’ve overcome barriers
  7. Interview struggling EAs to understand their problems and connect them with resources for personal development
  8. Organize workshops and other institutions dedicated to personal development within EA
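
To make item 5 a bit more concrete, here is a minimal sketch of what a self-assessment tool might look like, assuming a simple importance-vs-ability gap model. The skill names and ratings are invented for illustration:

```python
# Minimal sketch of a self-assessment tool (item 5 above), assuming a
# simple importance-vs-ability gap model. All names/ratings are invented.
from dataclasses import dataclass

@dataclass
class SkillRating:
    skill: str
    importance: int  # 1-5: how much this skill matters for your goals
    ability: int     # 1-5: honest self-rating of current ability

def priorities(ratings: list[SkillRating], top_n: int = 3) -> list[str]:
    """Return the skills with the largest importance-ability gap."""
    ranked = sorted(ratings, key=lambda r: r.importance - r.ability, reverse=True)
    return [r.skill for r in ranked[:top_n]]

ratings = [
    SkillRating("writing", importance=5, ability=3),
    SkillRating("quantitative modelling", importance=4, ability=2),
    SkillRating("management", importance=3, ability=3),
    SkillRating("public speaking", importance=2, ability=4),
]

print(priorities(ratings))
# -> ['writing', 'quantitative modelling', 'management']
```

Even a sketch this crude forces the useful question of which gaps actually matter for the work you want to do.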
Comments (3)



I suspect that part of the reason why this has happened is that the EA community is closely associated with the rationality community, so it's often easiest just to have the personal development discussion online over there. Plus another reason people mightn't feel a need for it online is that lots of discussion of personal development occurs informally at meetups.

I agree with this. But I also want to add -- I think a lot of EAs are put off by the rationalist community for various reasons (e.g. seemingly less-than-altruistic motivations, inaccessible language, discussions about things that don't always feel practically relevant, etc.)

A personal anecdote: I'd had an eye on LessWrong and other rationalist spaces for some time, but never thought it was my territory, for some of the reasons mentioned above. It wasn't until I went to a CFAR workshop that I finally felt I knew enough about rationality to actually contribute to rationality discussions.

I see a lot of work being done to make EA more accessible to people who don't have personal ties to the EA community, but not as much effort from the rationalist community on this. I feel like similar effort there could meaningfully contribute to the personal development of EAs.

I think Training for Good is in this niche.
