
As part of 'strategy fortnight' (and in part inspired by this post) I decided to write this short post clarifying the relationship, as I see it,[1] between 80,000 Hours and the EA community. I chose these questions because I thought there might be some people who care about the answers and who would want to know what (someone on the leadership team at) 80,000 Hours would say.

Is 80,000 Hours' mission to build the EA community?

No — our aim is to help people have careers with a lot of social impact. If the EA community didn't exist, we could still pursue our mission.

However, we count ourselves as part of the EA community in part because we think it's pretty great. It has flaws, and we don't make a blanket recommendation to our readers to get involved (a recommendation we feel better about making widely is to get involved in some kind of community that shares your aims). But we think the EA community does do a lot to help people (including us) figure out how to have a big positive impact, think through options carefully, and work together to make projects happen.

For that reason, we do say we think learning about and getting involved in the EA community could be a great idea for many of our readers.

And we think building the EA community can be a way to have a high-impact career, so we list articles focused on it high up on our problem profiles page and among our career reviews.

Both of these are ways in which we do contribute substantially to building the effective altruism community.

We think this is one of the ways we've had a positive impact over the years, so we do continue to put energy into this route to value (more on this below). But doing so is ultimately about helping the individuals we are writing for increase their ability to have a positive impact by getting involved, rather than about benefiting the community per se.

In other words, helping grow the EA community is part of our strategy for pursuing our mission of helping people have high-impact careers.[2]

Does 80,000 Hours seek to provide "career advice for effective altruists"?

Somewhat, but not mostly, and it would feel misleading to put it that way.

80,000 Hours focuses on helping a group much larger than the (current) EA community have higher-impact careers. For example, we estimate the size of the group we are trying to reach with the website to be ~100k people — which is around 10x larger than the EA community. (For context, we currently get in the range of 4M visitors to the website a year, and have 300k newsletter subscribers.)

Some of the people in our audience are part of the EA community already, but they're a minority.

One reason we focus so broadly is that we are trying to optimise the marginal counterfactual impact of our efforts. This often translates into focusing on people who aren't already heavily involved and so don't have other EA resources to draw on. For someone who hasn't heard of EA, or who has heard of it but doesn't know much about it, there is much lower-hanging fruit for counterfactually helping them improve the impact of their career. For example, we can introduce them to ideas that are well known within EA, like the ITN framework and cause selection, to particularly pressing issues like AI safety and biosecurity, and to the EA community itself. Once someone is involved in EA, they are also more likely and able to take advantage of resources that are less optimised for newer people.

This is not an absolute rule, and it varies by programme – for example, the website tends to focus more (though not exclusively) on 'introductory' materials than the podcast, which aims to go more in-depth, and our one-on-one advisors tailor their discussions to the needs of whoever they're talking to. This also could change with time.

We hope people in the EA community can benefit from some of our advice and programmes, and we welcome their engagement with and feedback on our ideas. But overall, we are not focused on career advice for members of the effective altruism community in particular.[3]

Does 80,000 Hours seek to reflect the views of the EA community?

No – we want our advice to be as true and useful for having a greater impact as it can be, and we use what people in the EA community think as a source of evidence about that (among others). As above, we consider the EA community to be a rich source of advice, thinking, research, and wisdom from people who largely share our values. For example, we tend to be pretty avid readers of the EA forum and other EA resources, and we often seek advice from experts in our network, who often count themselves as part of the EA community.[4]

But to the extent that our best guesses diverge from what the EA community thinks on a topic, we go with our best guesses.

To be clear, these guesses are all-things-considered judgements, attempting to account for empirical and moral uncertainty and the views of subject-matter experts. But, for example, going with our best guesses leads us to prioritise working on existential risk reduction and generally have a more longtermist picture of cause prioritisation than we would have if we were instead trying to reflect the views of the community as a group.[5]

So what is the relationship between 80,000 Hours and the EA community?

We consider ourselves part of the EA community, in the same way other orgs and individuals who post to this forum are. We share the primary goal of doing good as effectively as we can, and we think it's helpful and important to use evidence and careful reasoning to guide us in doing that. We try to put these principles of effective altruism into practice.

The history of 80,000 Hours is also very connected to the history of the broader EA movement. E.g. 80,000 Hours helped popularise career choice as a priority for people in the EA community, and our founders, staff, and advisors have had a variety of roles in other EA-affiliated organisations.

As above, we also often draw on collected research, wisdom, and resources that others in this community generously share – either through channels like the EA forum, or privately. And also as above, we often recommend EA to our readers — as a community to learn from or help build — because we think it's useful and that its animating values are really important.

Moreover, we do think much of our impact is mediated by other members of the EA community. They are a huge source of ongoing support, connections, and dynamic information for helping our readers, listeners, and advisees continue their (80,000-hour-long!) journeys doing good with their careers beyond what we can provide. Introducing people to the community and helping them get more involved might be among the main ways we've had an impact over the years.

We are institutionally embedded. 80,000 Hours, like the Centre for Effective Altruism, Giving What We Can, and others, is a project of the Effective Ventures group, and our biggest funder is Open Philanthropy, through their Effective Altruism Community Growth (Longtermism) programme. Our other donors are also often involved in EA.

Overall, we regard the EA community as full of great collaborators and partners in our shared goal of doing good effectively. We are grateful to be part of the exchange of important ideas, research, infrastructure, feedback, learning experiences, and financial support from other community members.

 

  1. ^

    This post was reviewed by 80,000 Hours CEO Brenton Mayer and the other programme directors Michelle Hutchinson and Niel Bowerman, as well as 80,000 Hours writer Cody Fenwick, before publishing.

  2. ^

    Related: "cause first" and "member first" approaches to community building — this post suggests that 80,000 Hours 'leans cause first' – focusing on building the EA community as a way of getting more people contributing to solving (or prioritising, or otherwise building capacity to solve) pressing global problems. I don't agree with everything in that post with regards to the shape of the distinction (e.g. I think a 'cause first' approach defined in the way I just did should often centre principles and encourage people to think for themselves), but I agree with the basic classification of 80,000 Hours there.

  3. ^

    This feels like a good place to note that we are supportive of other career-advice-focused organisations cropping up (like Probably Good and Successif), and it also seems great for individuals to post their takes on career advice (like Holden's career advice for longtermists and Richard Ngo's AI safety career advice). Not only does this produce content more aimed at audiences that 80,000 Hours' content doesn't currently focus as much on (like current members of the EA community); there is also just generally room for lots of voices in this space (and if there isn't, competition seems healthy).

  4. ^

    Also, like other members of the community, we are informally influenced in all kinds of ways by thinking in EA – we come across books and thinkers more often if they are recommended by community members; we are socially connected to people who share ideas, etc. These are ways in which we might be too influenced by what the EA community thinks — it's harder to find and keep salient ideas from outside the community, so we have to put more work into that.

  5. ^

    A lot of these questions are really hard, and we're very far from certain we have the right answers — others in or outside the EA community could be more right than we are on cause prioritisation or other questions. Also, a lot of our advice, like our career guide and career planning materials, is designed to be useful to people regardless of which issues are most pressing or which careers are highest impact.

Comments (5)



We hope people in the EA community can benefit from some of our advice and programmes, and we welcome their engagement with and feedback on our ideas. But overall, we are not focused on career advice for members of the effective altruism community in particular.

 

This seems like it could mean different things:

  1. "The 80k advice is meant to be great for a broad audience, which includes, among others, EAs. If we'd focus specifically on EAs it would be even better, but EAs are our target audience like anyone else is", or
  2. "The 80k advice is targeted at non-EAs. EAs might get some above-zero value from it, or they might give useful comments, and we don't want to tell EAs not to read 80k, but we know it is often probably bad-fit advice for EAs. For example, we talk a lot about things EAs already know, and we only mention in brief things that EAs should consider in length."
    1. Or even, ".. and we push people towards direction X while most EAs should probably be pushed towards NOT-X. For example, most non-EAs should think about how they could be having more impact, but most EAs should stop worrying about that so much because it's breaking them and they're already having a huge impact"

Could you clarify what you mean?

This feels fairly tricky to me actually -- I think between the two options presented I'd go with (1) (except I'm not sure what you mean by "If we'd focus specifically on EAs it would be even better" -- I do overall endorse our current choice of not focusing specifically on EAs).

However, some aspects of (2) seem right too. For example, I do think that we talk about a lot of things EAs already know about in much of our content (though not all of it). And I think some of the "here's why it makes sense to focus on impact"-type content does fall into that category (though I don't think it's harmful for EAs to consume that, just not particularly useful).

The way I'd explain it:

Our audience does include EAs. But there are a lot of different sub-audiences within the audience. Some of our content won't be good for some of those sub-audiences. We also often prioritise the non-EA sub-audiences over the EA sub-audience when thinking about what to write. I'd say that the website currently does this the majority of the time, but sometimes we do the reverse.

We try to produce different content that is aimed primarily at different sub-audiences, but which we hope will still be accessible to the rest of the target audience. So for example, our career guide is mostly aimed at people who aren't currently EAs, but we want it to be at-all useful for EAs. Conversely, some of our content -- like this post on whether or not to take capabilities-enhancing roles if you want to help with AI safety (https://80000hours.org/articles/ai-capabilities/), and to a lesser extent our career reviews -- is "further down our funnel" and so might be a better fit for EAs; but we also want those to be accessible to non-EAs and put work into making that the case.

This trickiness is a downside of having a broad target audience that includes different sub-audiences.

I guess if the question is "do I think EAs should ever read any of our content" I'd say yes. If the question is "do I think all of our content is a good fit for EAs" I'd say no. If the question is "do I think any of our content is harmful for EAs to read" I'd say "overall no" though there are some cases of people (EAs and non-EAs) being negatively affected by our content (e.g. finding it demoralising).

Thanks

I was specifically thinking about career guides (and I'm most interested in software, personally).

 

(I'm embarrassed to say I forgot 80k has lots of other material too, especially since I keep sharing that other-material with my friends and referencing it as a trusted source. For example, you're my go-to source about climate. So totally oops for forgetting all that, and +1 for writing it and having it relevant for me too)

Have the answers to these questions changed over the years? E.g. how might you have answered them in 2017 or 2015?

I don't know the answer to this, because I've only been working at 80k since 2019 - but my impression is this isn't radically different from what might have been written in those years.
