In thinking about what it means to lead a good life, people often struggle with the question of how much is enough: how much does our morality demand of us? People have given a wide range of answers to this question, but effective altruism has historically used "giving 10%". Yes, it's better if you donate a larger fraction, switch to a job where you can earn more, or put your career to use directly, but if you're giving 10% to effective charity you're doing your share, you've met the bar to consider yourself an EA, and we're happy to have you on board.

I say "historically", because it feels like this is changing; I think EAs would generally still agree with my paragraph above, but while in 2014 it would have been uncontroversial now I think some would disagree and others would have to think for a while.

EA started out as a funding-constrained movement. Whether you looked at global poverty, existential risk, animal advocacy, or movement building, many excellent people were working as volunteers or well below what they could earn because there just wasn't the money to offer competitive pay. Every year GiveWell's total room for more funding was a multiple of their money moved. In this environment, the importance of donations was clear.

EA has been pretty successful in raising money, however, and the primary constraint has shifted from money to people. In 2015, 80k made a strong case for focusing on what people can do directly, not mediated by donations, and this case is even stronger today. Personally, I've found this pretty convincing, though in 2017 I decided to return to earning to give because it still seemed like the best fit for me.

What this means, however, is that we are now trying to build a different sort of movement than we were ten years ago. While people who've dedicated their careers toward the most critical things have made up the core of the movement all along, the ratio of impact has changed.

Imagine you have a group of people donating 10% to the typical mix of EA causes. You are given the option to convince one of them to start working on one of 80k's priority areas, but in doing so N others will get discouraged and stop donating. This is a bit of a false dilemma, since ideally these would not be in conflict, but let's stick with this for a bit because I think it is illustrative. In 2012 I would have put a pretty low number for N, perhaps ~3, partly because we were low on money, but also because we were starting a movement. In 2015 I would have put N at ~30: a factor of 6 because of the difference between 10% and the most that people in typical earning to give roles can generally donate (~60%) and a factor of 5 because of the considerations in Why you should focus more on talent gaps, not funding gaps. With the large recent increases in EA-influenced spending I'd roughly put N at ~300 [1], though I'd be interested in better estimates.

Unfortunately, a norm of "10% and you're doing your part" combines very poorly with a reality in which dedicating 100% of someone's career has ~300x the impact of donating 10%. This makes EA feel much more demanding than it used to: instead of saying "look at the impact you can have by donating 10%", we're now generally saying "look at the impact you can have by building your entire career around work on an important problem."

(This has not applied evenly. People who were already planning to make EA central to their career are generally experiencing EA as less demanding: pay in EA organizations has gone up, there is less stress around fundraising, and there is less of a focus on frugality or other forms of personal sacrifice. In some cases these changes mean that if someone does decide to shift their career it is less of a sacrifice than it would've been, though that does depend on how the field you enter is funded.)

While not everyone is motivated by a sense that they should be doing their part (see: excited vs. obligatory altruism), I do think this is a major motivation for many people. Figuring out how to encourage people who would thrive in an EA-motivated career to go in that direction, without discouraging and losing people for whom that would be too large a sacrifice, seems really important, and I don't see how to solve it.

Inspired by conversations with Alex Gordon-Brown, Denise Melchin, and others.


[1] I expect people working in EA movement building have estimates of (a) the value of a GWWC pledge and (b) the value of a similar person going into an 80k priority area; N is essentially the ratio of these. I did a small amount of looking, however, and didn't see public estimates. I guessed ~$10k/y for (a) and ~$3M/y for (b), giving N=~300. Part of why I put (b) this high is that I think it's now difficult to turn additional money into good work on the most important areas. If you would give a much higher number for (b), my guess is that you are imagining someone much stronger than the typical person donating 10%.
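As a rough illustration, here is the arithmetic behind these N estimates in one place; all dollar figures and multipliers are the guesses stated above, not published numbers:

```python
# Rough arithmetic behind the N estimates; all inputs are guesses, not published figures.

# 2015-style estimate: donation-level ratio times a talent-gap multiplier.
typical_pledge_fraction = 0.10  # "giving 10%"
max_etg_fraction = 0.60         # roughly the most typical earning-to-give roles can donate
talent_gap_multiplier = 5       # from the talent-gaps-vs-funding-gaps considerations
n_2015 = (max_etg_fraction / typical_pledge_fraction) * talent_gap_multiplier
print(f"2015 estimate: N ~ {n_2015:.0f}")  # ~30

# Current estimate: ratio of (b) value of direct work to (a) value of a GWWC pledge.
pledge_value_per_year = 10_000          # (a), $/y, my guess
direct_work_value_per_year = 3_000_000  # (b), $/y, my guess
n_now = direct_work_value_per_year / pledge_value_per_year
print(f"Current estimate: N ~ {n_now:.0f}")  # ~300
```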

Comments

"the primary constraint has shifted from money to people"

This seems like an incorrect, or at best misleading, description of the situation. EA plausibly now has more money than it knows what to do with (at least if you want to do better than GiveDirectly), but it also has more people than it knows what to do with. Exactly what the primary constraint is now is hard to know confidently or summarise succinctly, but it's pretty clearly neither of those. (80k discusses some of the issues with a "people-constrained" framing here.) In general, large-scale problems that can be solved by just throwing money or throwing people at them are the exception, not the rule.

For some cause areas the constraint is plausibly direct workers with some particular set of capabilities. But even most people who want to dedicate their careers to EA could not become effective AI safety researchers (for example) no matter how hard they tried. Indeed, merely trying may have negative impact in the typical case due to the opportunity cost of interviewers' time etc. (even if it's EV-positive given the information the applicant has). One of the nice things about money is that it basically can't hurt, and indeed arguments about the overhead of managing volunteer/unspecialised labour were part of how we wound up with the donation focus in the first place.

I think there is a large fraction of the population for whom donating remains the most good they can do, focusing on whatever problems are still constrained by money (GiveDirectly if nothing else) because the other problems are constrained by capabilities or resources which they don't personally have or control. The shift from a donation focus to a direct work focus isn't just increasing demandingness for these people, it's telling them they can't meaningfully contribute at all. Of course, inasmuch as it's true that a particular direct work job is more impactful than a very large amount of donations, it's important to be open and honest about this so those who actually do have the required capabilities can make the right decisions and tradeoffs. But this is fundamentally in tension with building a functioning and supportive community, because people need to feel like their community won't abandon them if they turn out to be unable to get a direct work job (and this is especially true when a lot of the direct work in question is "hits-based" longshots where failure is the norm). I worry that even people who could potentially have extraordinarily high impact as direct workers might be put off by a community that doesn't seem like it would continue to value them if their direct work plans didn't pan out.

I strongly agree with this comment, especially the last bit.

In line with the first two paragraphs, I think the primary constraint is plausibly founders [of orgs and mega-projects], rather than generically 'switching to direct work'.

Maybe, though given the unilateralist's curse and other issues of the sort discussed by 80k here I think it might not be good for many people currently on the fence about whether to found EA orgs/megaprojects to do so. There might be a shortage of "good" orgs but that's not necessarily a problem you can solve by throwing founders at it.

It also often seems to me that orgs with the right focus already exist (and founding additional ones with the same focus would just duplicate effort) but are unable to scale up well, and so I suspect "management capacity" is a significant bottleneck for EA. But scaling up organizations is a fundamentally hard problem, and it's entirely normal for companies doing so to see huge decreases in efficiency (which if they're lucky are compensated for by economies of scale elsewhere).

I think this post does a great job of capturing something I've heard from quite a few people recently.

Especially for longtermist EAs, it seems direct work is substantially more valuable relative to donations than it was in the past, and I think your thought experiment about the number of GWWC pledges it'd make sense to trade for one person working on an 80k priority pathway is a reasonably clear way of illustrating that point. 

But I think that this is a false dilemma (as you suggest it might be). This isn't just because I doubt that the pledge (or effective giving generally) and direct work are in tension, but because I think they're mutually supportive. Effective giving is a reasonably common way to enter the effective altruism community. Noticing that you can have an extraordinary impact with donations — which, even from a longtermist perspective, I still think you can have — can inspire people to begin taking action to improve the world, and potentially continue on to working directly. I think historically it's been a pretty common first step, and though I anticipate more direct efforts to recruit highly engaged EAs will become relatively more prominent in future, I still expect the path from effective giving --> priority path career, to continue much more often than effective giving --> someone not taking a priority path.

I've heard a lot of conflicting views on whether the above is right; it seems quite a few people disagree with me and think there's much more of a tension here than I do, and I'd be interested to hear why. (For disclosure, I work at GWWC and personally see getting more people into EA as one of the main ways GWWC can be impactful.)

I suppose the upshot of this, if I'm right, is that the norm that "10% and you're doing your part" can continue, and it's not so obvious that it's in tension with the fact that doing direct work may be many times more impactful. While it may be uncomfortable that there are significant differences in the impactfulness of members of the community, I think this is/was/always will be the case.

Another thing worth adding is that I think there's also room for multiple norms on what counts as "doing your part". For example, I think you should also be commended and feel like you've done your part if you apply to several priority paths, even if you don't get one / it doesn't work out for whatever reason. Maybe Holden's suggestion of trying to get kick-ass at something, while being on standby to use your skill for good, could be another.

By way of conclusion, I feel like what I've written above might seem dismissive of the general issue that EA has yet to figure out — given the new landscape — how to think about demandingness. But I really think there is something to work out here, and so I really appreciate this post for raising it quite explicitly as an issue.

"I still expect the path from effective giving --> priority path career, to continue much more often than effective giving --> someone not taking a priority path."

I parsed this as: over 50% of people who do effective giving or take the GWWC pledge or similar go on to (or you predict will go on to) do full-time impact work. Is that what was intended?

I interpreted the arrows to be causal and not just temporal. So effective giving is more often going to cause people to work in a priority path than it will cause people to not work in a priority path where they otherwise would.

What Bec Hawk said is right: my claim is that the number of people effective giving causes to go into direct work will be greater than the number of people it causes to not go into direct work (who otherwise would).

For what it's worth, I don't think >50% of people who take the GWWC pledge will go on to do direct work.

mic

Is it possible to have a 10% version of pursuing a high-impact career? Instead of donating 10% of your income, you would donate a couple hours a week to high-impact volunteering. I've listed a couple opportunities here. In my opinion, many of these would count as a high-impact career if you did full-time.

  • Organizing a local EA group
    • Or in-person/remote volunteering for a university EA group, to help with managing Airtable, handling operations, designing events, facilitating discussions, etc. Although I don't know that any local EA groups currently accept remote volunteers, from my experience with running EA at Georgia Tech, I know we'd really benefit from one!
    • If you're quite knowledgeable about EA/longtermism and like talking to people about EA, being something like an EA Guides Program mentor could be a great option. One-on-one chats can be quite helpful for enabling people to develop better plans for making an impact throughout their life. I don't know whether the Global Challenges Project is looking for more mentors for its EA Guides Program at this time, but it would be valuable if it had greater capacity.
  • Facilitating for EA programs that are constrained by the number of (good) facilitators. In Q1 2022, this included the AGI Safety Fundamentals technical alignment and governance tracks. (Edit) EA Virtual Programs is also constrained by the number of facilitators.
  • Signing up as a personal assistant for Pineapple Operations (assuming this is constrained by the number of PAs, though I have no idea whether it is)
  • Phone banking for Carrick Flynn's campaign (though this opportunity is only available through May 17)
  • Gaining experience that would be helpful for pursuing a high-impact career (e.g., by taking a MOOC on deep learning to test your fit for machine learning work for AI safety)
  • Distilling AI safety articles
  • Volunteering for Apart Research's AI safety or meta AI safety projects
  • Volunteering for projects from Impact CoLabs, perhaps
  • Running a workplace EA group, especially if you're able to foster discussion about working on pressing problems

Part-time volunteering might not provide as much of an opportunity to build unique skills, compared to working full-time on direct work, but I think it could still be pretty valuable depending on what you do.

In a way, sacrificing your time might be more demanding than sacrificing your excess income. But volunteering can help you feel more connected to the community and can feel more fulfilling than just donating money as an individual. It might not even be a sacrifice, as for some opportunities you could get paid, either directly (as in the case of Pineapple Operations) or through applying to the EA Infrastructure Fund or Long-Term Future Fund.

I expect 10 people donating 10% of their time to be less effective than 1 person using 100% of their time because you don't get to reap the benefits of learning for the 10% people. Example: if people work for 40 years, then 10 people donating 10% of their time gives you 10 years with 0 experience, 10 with 1 year, 10 with 2 years, and 10 with 3 years; however, if someone is doing EA work full-time, you get 1 year with 0 exp, 1 with 1, 1 with 2, etc. I expect 1 year with 20 years of experience to plausibly be as good/useful as 10 with 3 years of experience. Caveats to the simple model:

  • labor-years might be more valuable during the present
  • if you're volunteering for a thing that is similar to what you spend the other 90% of your time doing, then you still get better at the thing you're volunteering for

I make a similar argument here.
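A minimal sketch of the simple model above, under the same assumptions (40-year careers, task experience accruing only in proportion to time spent on the work):

```python
# Toy model: compare the experience behind each labor-year contributed by
# 10 people volunteering 10% of their time vs. 1 person working full-time.
CAREER_YEARS = 40

def experience_profile(n_people: int, fraction: float) -> list[int]:
    """Years of accumulated task experience behind each labor-year contributed."""
    profile = []
    for year in range(CAREER_YEARS):
        experience = int(fraction * year)          # experience each person has so far
        labor_years = round(n_people * fraction)   # labor-years the group adds this calendar year
        profile.extend([experience] * labor_years)
    return profile

part_timers = experience_profile(n_people=10, fraction=0.1)  # 10 labor-years each at 0, 1, 2, 3 years of experience
full_timer = experience_profile(n_people=1, fraction=1.0)    # 1 labor-year at each of 0..39 years of experience

print(len(part_timers), len(full_timer))  # 40 labor-years in both cases
print(max(part_timers), max(full_timer))  # but peak experience is 3 vs. 39 years
```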

"I expect 10 people donating 10% of their time to be less effective than 1 person using 100% of their time because you don't get to reap the benefits of learning for the 10% people" [emphasis mine]

"benefits of learning" doesn't feel like the only reason, or even the primary reason, why I expect full-time EA work to be much more impactful than part-time EA work, controlling for individual factors. To me, network/coordination costs seem much higher. E.g. it's very hard to manage a team of volunteer researchers or run an org where people volunteer 4h/week on average, and presumably less consistently.

My bad, I meant to write "Part-time volunteering might not provide as much of an opportunity to build unique skills, compared to working full-time on direct work". Fixed.

[anonymous]

I think in most cases, this doesn't look like using 10% of your time, but rather trading off an optimally effective career for a less effective career that improves along selfish dimensions such as salary, location, work/life balance, personal engagement, etc.

This picture is complicated by the fact that many of these characteristics are not independent of effectiveness, so it isn't clean. Personal fit for a career is a good example of this, because it's both a selfish consideration and you'll be better at your job if you find a career with relatively better fit.

[anonymous]

"assuming this is constrained by the number of PAs, though I have no idea whether it is"

It is.

Related: the Scalably using labour tag and the concept of Task Y

"(This has not applied evenly. People who were already planning to make EA central to their career are generally experiencing EA as less demanding: pay in EA organizations has gone up, there is less stress around fundraising, and there is less of a focus on frugality or other forms of personal sacrifice. In some cases these changes mean that if someone does decide to shift their career it is less of a sacrifice than it would've been, though that does depend on how the field you enter is funded.)"

Thanks, I found this discussion of the ways in which EA is now more vs. less demanding quite clarifying. I appreciate the point that for some people EA is much less demanding than it used to be, while for others it's much more so.

My original title for this post was actually "Increasing and Decreasing Demandingness", but as I got to writing I found I had a lot more to say on the increasing side.

Thank you for this post that touches on the important point of demandingness. Personally, I can see it in two ways.

On a global level, giving 10% to effective causes is relatively rare. Giving What We Can has grown impressively, but still, fewer than 1 in every 50,000[1] of the world's high-income population has taken its pledge. 10% is also higher than average donation rates, which are below 2% of GDP. Even in the EA survey, only 1/3 have said they donate at least this amount. While some of the top areas in EA seem less funding constrained, there is still much room for spending until, for example, GiveDirectly can't give away any more money. In that sense, I'm very grateful to anyone who is able and willing to commit to giving 10% or more of their income, and would not want to exclude them from seeing themselves as Effective Altruists. If we've funded everything that is equivalent to GiveDirectly's impact, or we have at least 50 million people donating 10+%, then I'd revisit this, but currently there is still enough to do.

On a personal level, the concept of demandingness has no limit. 10% is just a Schelling point, something that is easy to communicate to people new to the movement, a goal to be reached. Doing good better doesn't stop there, and it doesn't stop at thinking about donations. I like the framings of excited altruism or altruism as a central purpose better. Another framing could be that of aiming higher: continuously stretching for ways to have more impact while taking care of oneself. Each of these framings will have its supporters, and I would encourage anyone to select the one that motivates them best. At the same time, the community and its support structure are very important to keep people healthy and motivated when they feel they are failing at their self-set goals.

[1] Taking the number of 500 million high-income people in the world and 8,500 GWWC members.

Re footnote, the only public estimate I've seen is $400k-$4M here, so you're in the same ballpark.

Personally I think $3M/y is too high, though I too would like to see more opinions and discussion on this topic.

Thanks! I had missed that part of the article when skimming it again in writing this. Note that a bit earlier, in discussing the highest priority roles, they give "typically" over $3M and "often" over $10M.

Thank you, I needed to hear this stated clearly. The trend you point at closely tracks my own anxiety over time around having enough impact.

The relatively huge value of directing your whole career at EA is something that hasn't fully sunk in for me intuitively, and I expect the same for others who don't work at EA orgs.
