
This is not a well-developed thought and I am certainly missing something, but it has been coming back to me for a while, so I want to share it. This is also not a very new topic, and it is clearly an “EA should” post. I am sorry I am not quoting every post I am drawing inspiration from, but I have read a lot of people on this forum with similar ideas to these. I don’t have much access to the internet where I am, so I can’t give proper credit or back up my statements with links; apologies for that.

I think this idea is particularly powerful for people who so far have had mediocre, average, or slightly above average performance in their education or workplace. If you are one of them, please read twice.

I am not criticizing any institution; if anything, I am appealing to those individuals who are unhappy with the current state of affairs to start valuable things now. I am not so much criticizing EA as a possible misunderstanding of some EA ideas. Most of what I am saying is even “EA mainstream”. 80,000 Hours, for example, actively advises people against dropping good careers doing good in “normal” organizations in favor of organizations with the EA label.

 

Main point:

I think the focus on having an EA career (employment in EA, founding something EA) might be the wrong advice for most people. The other two major options usually presented are earning to give, which is no longer prioritized, and raising awareness. So, simplifying, it comes down to having an EA career or convincing others to have one. I think we have much better options that are practical and don’t require you to change pathways.

I think if we get millions of people to 1) be more rational and 2) be better to the world, we will have many positive outcomes that we cannot even imagine now. If we stay in our bubble we can’t get there, no matter how many people EA-labelled organizations can hire. It is safe to stay within a closed community, but it is not the most impactful thing to do.

I think having 1% of humanity lightly engaged in EA-related activities is more valuable than having 0.0001% deeply engaged.
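To make the comparison concrete (a rough back-of-envelope of my own, assuming a world population of about 8 billion):

$$1\% \times 8\times10^9 = 80{,}000{,}000 \text{ lightly engaged} \qquad 0.0001\% \times 8\times10^9 = 8{,}000 \text{ deeply engaged}$$

On these numbers, the lightly engaged group comes out ahead whenever an average lightly engaged person creates more than 1/10,000 of the impact of a deeply engaged one.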

So, what could we be doing?

  • We need activism, especially to change the way governments engage with existential risks and EA causes in general.
  • We need much more earning to give.
  • We need people, wherever they are, to make their immediate context (their team, institution, or company) more charitable and more rational.
  • We need volunteer-run projects.

Activism:

Extending some beneficial ideas so that they are accepted by more people and by almost all political groups in a state, country, or town. I think many of the key areas EA is prioritizing (existential risk being the most prominent) need change within governments and public opinion, and it might be difficult for small groups of employed EAs to attain it. This change will come about much faster if these full-time dedicated researchers, donors, project managers, and service providers are supported by communities of millions of activists aligned with their agenda. They can’t employ two million people, but they can surely be supported by them.

A group of two million loosely engaged people, able to vote in one direction, sign petitions, convince others to write petitions, share social media posts, contact their local representatives, and participate in peaceful public acts, has a great chance of success even if they only dedicate 2 hours of their lives a month to a cause.

One of the great things the EA movement has achieved was done by people who did not change careers: increasing the allocation for global development aid. EA Zurich more than doubled the city of Zurich’s international aid budget with a ballot initiative in 2018.

Earning to give:

We need more resources: thousands of times more. The idea that “we have so much more funding now” has been going around for a while. I think there might be a mistaken logical chain behind it:

  1. Longtermism is the most important EA cause.
  2. Longtermism has more funds than ever and might not need more funding right now.
  3. Therefore, EA causes don’t need more funding.

We might have enough funding for longtermism for a while, and definitely for meta activities, but global health and poverty is not finished. We have not dewormed the world, we have not given a cash transfer to even 1% of the extremely poor, and we have not stopped malaria deaths. The funding needed to make those things happen is so large that we have evolved intellectually as a community and grown out of that focus before we have even started on the job.

We could say similar things about animal welfare and other causes, maybe even about some existential risks. EA giving is still a tiny part of all giving; there are many great projects not getting the funding they need, and many communities not getting the great projects they need.

Promotion of EA mindset:

Consider two careers:

James is a software engineer and an EA who applies for AI safety jobs. He is fairly smart, but for some reason the interviews are not going well, and he has spent the last 2 years applying without success. He spends his time working on his CV and trying to be present in the EA space: joining fellowships and global conferences, networking, and participating in the forum. He eventually lands a job at an EA organization helping them have a better website; he is not working on AI safety problems but on meta EA. He had to fill out 600 applications, and he has gotten much better at applying for jobs and passing interviews.

John has the same capacity as James. He applies for one EA job, gets rejected, and applies for whatever… an entry-level computer engineering job at a food processing company. He spends two years working there and building experience, donating a portion of his salary to GiveDirectly every month; in these 2 years he has helped 5 families receive a potentially life-changing cash transfer. In the next few years, because of his rational and charitable mindset, he will bring in a few improvements that change how his company interacts with the environment, animal welfare, and charitable giving.

The issue is not just that James might be worse off than John on every metric that matters to EA (career capital, net impact, mental health), but that while you can have a million Johns, you can’t have a million Jameses.

Second comparison:

Gustav works for a development organization implementing Child Protection projects in East Africa. He reads about EA and realizes he has been wasting his time, so he starts applying for jobs and competing with people like James. He submits 200 applications and eventually lands an opportunity as an operations assistant at an AI safety organization.

Karl works for the same organization and, together with Gustav, gets to know about EA. He eventually evolves his thinking and tweaks their Child Protection organization to be more impactful and rational: he leads the process of renewing the impact evaluation system in his organization and pushes for a re-focus on evidence-backed interventions. This leads to slightly improved well-being outcomes for thousands of children.

One point salient in my examples and the previous lines is that people trying to get EA jobs might sacrifice A) their existing career capital and networks and B) their potential career capital and networks. Karl and John have better positions and influence in their jobs than Gustav and James; that might not be the case 10 years from now, but it is now. Job hunting is a very inefficient use of time, and while you are dedicated to job hunting you are not doing other things with other people. If you dedicate 200 hours to job hunting and don’t get a job, the only good thing you gain is whatever you learned about answering interview questions and preparing your CV, and there is a limit to what you can learn from a process with a heavy dose of luck involved. Those are 200 hours that you could have spent improving outcomes in your current workplace, or getting a more attainable job where you are more quickly placed in value creation chains.

Another reason I love this type of thinking is the counterfactual:

You might be the only EA in your workplace, so you have a unique opportunity to change things. Meanwhile, when you are applying for EA jobs, you are competing with someone very similar to you. So, if you get the EA job you are doing good, but someone else could have been doing it anyway. If you are the person changing things for good wherever you are and you leave, nobody else is going to do it. This advice is not only for people working in big NGOs or government social services, and not even only for people working on positive things: if you could be the key person mitigating some of the horrible things that your horrible company is doing, you are a unique hero. This is a high-value possibility for literally everyone except those who already work in EA-labelled organizations.

Volunteer-run direct impact projects:

If we accept that EA needs more people, we should be aware that we can’t currently accommodate them all. There are not enough things for us to do, which makes the entry point unnecessarily steep. The only accessible thing is to discuss EA.

I think EA groups and individual EAs should think about how EA ideas apply to their country, their immediate environment, and their community; try local prioritization; and find actionable things for people to do.

These are some examples; there are other good ideas on this forum:

  • Local charity evaluation (some people will only give locally: what is the best option in my town?); publish lists, give awards.
  • Group analysis and coaching on the previous point (EA mindset in the workplace)
  • Look at the broad lists of important causes that EA has considered and see which ones are most actionable for each person, group, town, or country, then translate them into the quickest path to action and into volunteer-based projects. They are all very impactful, and if you have personal fit or proximity reasons, you could pick any of them as the most impactful for you.
  • Coordinate earning to give (eg. Setting group marks, encouraging each other, giving advice on negotiation and financial management).
  • Fundraising campaigns for highly impactful organizations.
  • Independent evaluation of grants within the EA environment (not necessarily local, but better done in a team) or of grants by key donors such as the Global Fund, Humanitarian Response Funds, the European Commission, or USAID.
  • Rationality awareness (some interesting existing EA-aligned groups focus on this first)

I understand that volunteers and activists are not experts, and doing some of these “projects” can look unprofessional. However, this might be one of the best ways to start (test something, iterate, evolve) before founding an organization.

In a few sentences, my advice is: 

If you care about doing good more efficiently, find out the most impactful thing you can do and take the first steps within a week. Just two limitations: it can’t be changing your career and it can’t be communicating about EA. Even if you think those are the most impactful things, pick the next one and act.

Put more aggressively: please, if you are the type of person who sometimes binges the 80,000 Hours job board (or this forum) and starts applying to things randomly, just don’t. You don’t need to be employed by anyone to be a great source of good in the world. The career move might not be worth it, and the time spent fantasizing and applying is also costing you a small piece of your life.

Comments (7)



Thanks for this! I agree that the case for working at an EA org seems less clear if you have already established career capital in a field or organization.

Regardless, the most important crux here is this belief:

"I think having 1% of humanity lightly engaged in EA-related activities is more valuable than having 0.0001% deeply engaged."

The necessity of EA alignment/engagement is an enduring question within movement-building. Perhaps the most relevant version of it right now is around AI safety: I know several group organizers who believe that a) AI is one of the most important causes and b) EA alignment is crucial to being able to do good alignment work, which means that it's more important to get the right % of humanity deeply engaged in EA activities.

Another way of framing this is that impact might be heavy-tailed: that is, ~most of the impact might come from people at the very tail-end of the population (e.g., people who are deeply engaged in EA). If that were true, then that might mean that it's still more impactful to deeply engage a few people than to shallowly engage many people.

I guess that the people who are likeliest to believe that impact is heavy-tailed would also prioritize x-risk reduction (esp. from AI) the most, which would also reduce their perception of the impact of earning-to-give (because of longtermism's funding situation, as you note). I'm not sure that those kinds of group organizers would agree that they should prioritize activities that promote 'shallow' EA engagement (e.g., local volunteering) or high-absorbency paths (e.g. earning-to-give), because it's plausible that the marginal impact of deeper engagement outweighs the increased exposure.

But none of this contravenes your overall point that for some individuals, the most marginally impactful thing they could do may not be to work at an EA org. 

edit: used "shallowly" twice, incorrectly

"I think having 1% of humanity lightly engaged in EA-related activities is more valuable than having 0.0001% deeply engaged."

I agree that this is the crux, but I don't think it's an either-or scenario. I guess the question may be how to prioritize recruiting for high-priority EA jobs while also promoting higher-absorbency roles to those who can't work in the high-priority ones.

Being "mediocre, average, or slightly above average" is not always going to be a permanent state. People develop career capital and skills, and someone who isn't a good fit for a priority role out of university (or at any particular moment), may become one over time. Some of Marc's suggestions could be thought of as stepping stones (he mentioned this in a few places, but it seems worth calling out).

Related to that, the EA jobs landscape is going to change a lot in the next few years as funding pours in and projects get started and need to staff up. It seems worthwhile to keep the "collateral damage" engaged and feeling like a part of the EA community, so that they can potentially help fill the new roles that are created.

This is a really good point, thank you for adding important nuance! I think coordination within the EA community is important for ensuring that we engage + sustain the entire spectrum of talent. I'd be keen for people with good fits* to work on engaging people who are less likely to be in the 'heavy-tail' of impact.

*e.g., have a strong comparative advantage, are already embedded in communities that may find it harder to pivot

I also have a strong reaction to Marc's "collateral damage" phrase. I feel sad that this may be a perception people hold, and I do very much want people to feel like they can contribute impactfully beyond mainstream priority paths. I think this could be partly a communication issue, where there's conflation between (1) what the [meta-]EA community should prioritize next, (2) what the [cause-specific, e.g. x-risk] community should prioritize next,** and (3) what this specific individual could do to have the most impact. My original comment was intended to get at (1) and (2), but acknowledge that (3) can look very different - more like what Marc is suggesting.

**And that's ignoring that there aren't clear distinctions between (1) and (2). Usually there's significant overlap!

I find the claim that people could upskill into significantly more impactful paths really interesting. This seems ~related to my belief that far more people than we currently expect can become extremely impactful, provided we identify their specific comparative advantages. I'd be excited for someone to think about potential mechanisms for (a) supporting later-stage professionals in identifying + pivoting to higher-impact opportunities and (b) constructing paths for early-career individuals to upskill specifically with a higher-impact path in mind.

I am thinking along similar lines, Miranda, and I may have some of that comparative advantage too :)

I don't like to talk about plans too much before actually getting down to doing, but I am working on a project to find ways to support people coming to EA mid-career/mid-life (as I did). I expect to write a top level post about this in the next few weeks.

The goals are crystallizing a bit:

1. helping to keep people engaged and feeling like a part of the community even if they can't (aren't a good fit for, or aren't yet ready to) consider a high impact career change
2. helping people figure out how to have the most impact in the immediate term, within current constraints
3. helping people work towards higher impact, even if it's in the longer term 

Some ideas for how to do it: 

1. compiling and organizing resources that are specifically relevant for the demographic
2. interfacing with EA orgs (80k, local groups, EA Anywhere, Virtual Programs, etc.) in appropriate, mutually beneficial ways 
3. peer-based support (because situations mid-career/life vary widely) - i.e., probably taking the form of a group to start, and then hopefully figuring out what kind of 1-on-1 stuff could work too (mentorship, buddy system, etc.)

That sounds very exciting. I'll be keeping my eyes peeled for your post (though I'd be grateful if you could ping me when you post it, too)!

Will do!

Thank you for explaining it so well.

I guess EA is interested in getting the best, and that justifies giving hope to many people who are between OK and almost the best. But that process has some collateral damage. This post is maybe about options to deal with the collateral.
