I recently published an op-ed in The Crimson advocating, more or less, for an Earning to Give strategy.

The Crimson is widely read among Harvard students, and its content runs through many circles — not just those who care about student journalism.

I thought the piece was important to write.

I’ve noticed a recurring trend in conversations about careers here at Harvard: people want to do good, but have no idea how. So they either give up and “sell out” for a comfortable lifestyle, or they follow their passions, work at an NGO, etc., without ever considering Earning to Give as a legitimate option.

I’m aware that orgs like 80,000 Hours have moved away from their (original) primary focus on Earning to Give as a career strategy.

But based on the folks I’ve talked to at Harvard, I think it’s still one of the most compelling ways to at least get people on board: it doesn’t require sacrificing a well-paid lifestyle, and more importantly, it doesn’t require sacrificing a prestigious career (which is what so many here care about).

80,000 Hours also has a set of bullet points intended to help you determine whether you’d be a good fit: https://80000hours.org/articles/earning-to-give/

They ask four questions:

  1. Do you have high earning potential? (Yes. As I note in the article, Harvard students are lucky enough to be recruited by some of the highest-paying firms in the world.)
  2. Do you want to gain skills and career capital in a higher-earning option? (Yes as well. Harvard kids want to preserve optionality.)
  3. Are you uncertain about which problems are most pressing? (Resounding yes. I commonly hear things like “I want to do good for the world, I just don’t know how.”)
  4. Do you want to contribute to an area that is funding-constrained? (This is fuzzier, I think, seeing as the answer to this question would probably have to come after the last one.)

Anyway, I would appreciate it if you gave my article a read. Feedback welcome!

https://www.thecrimson.com/article/2024/3/26/climaco-harvard-sell-out/


Nice punchy writing! I hope this sparks some interesting, good-faith discussions with classmates.

I think a powerful thing to bring up re earning to give is how it can strictly dominate some other options. e.g. a 4th or 5th year biglaw associate could very reasonably fund two fully paid public defender positions with something like 25-30% of their salary. A well-paid plastic surgeon could fund lots of critical medical workers in the developing world with less.
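As a rough sketch of that "strictly dominates" arithmetic (all salary and cost figures below are my own illustrative assumptions, not numbers from the comment):

```python
# Back-of-the-envelope version of the "fund two public defenders" claim.
# Every figure below is an illustrative assumption, not sourced data.

biglaw_salary = 450_000     # assumed 4th/5th-year biglaw compensation
donation_rate = 0.27        # middle of the 25-30% range mentioned above
pd_position_cost = 60_000   # assumed fully loaded cost of one PD position

donation = biglaw_salary * donation_rate
positions_funded = donation / pd_position_cost

print(f"${donation:,.0f} donated funds {positions_funded:.1f} PD positions")
```

On these assumed figures the donation just clears two positions; with different salary or overhead numbers the multiple shifts, which is exactly why this is a sketch rather than a claim about any particular lawyer.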

One important thing to keep in mind when you have these chats is that there are better options; they're just harder to carve out and evaluate. One toy model I play with is entrepreneurship. Most people inclined towards working for social good have a modesty/meekness about them where they just want to be a line-worker standing shoulder-to-shoulder with people doing the hard work of solving problems. This suggests there might be a dearth of people with this outlook looking to build, scale, and importantly sell novel solutions.

As you point out, there are a lot of rich people out there. Many/most of them just want to get richer, sure, but lots of them have foundations or would fund exciting/clever projects with exciting leaders, even if there wasn't enormous (or any) profitability in it. The problem is a dearth of good prosocial ideas – which Harvard students seem well positioned to spin up: you have four years to just think and learn about the world, right? What projects could exist that need to? Figure it out instead of soldiering away for existing things.  

I object to calling funding two public defenders "strictly dominating" being one yourself; while public defender isn't an especially high-variance role with respect to performance compared to e.g. federal public policy, it doesn't seem that crazy that a really talented and dedicated public defender could be more impactful than the 2 or 3 marginal PDs they'd fund while earning to give.

Yes, in general it's good to remember that people are far from 1:1 substitutes for each other for a given job title. I think the "1 into 2" reasoning is a decent intuition pump for how wide the option space becomes when you think laterally, though, and that lateral thinking of course shouldn't stop at earning to give.

A minor, not fully-endorsed object level point: I think people who do ~one-on-one service work like (most) doctors and lawyers are much less likely to 10x the median than e.g. software engineers. With rare exceptions, their work just isn't that scalable and in many cases output is a linear return to effort. I think this might be especially true in public defense where you sort of wear prosecutors down over a volume of cases.  

Yes, I absolutely agree.

I mention in another comment that I don't actually think "selling out" is the best career option for every single person, especially folks at Harvard.

I do think it's a persuasive one though — because it's a path of less resistance. It feels harder to say "Hey, you should go explore under-researched areas in search for the most effective way to do good," and actually persuade people.

The target audience was those who were generally uninformed about doing good, or people on the fence about it.

Interesting! I actually wrote a piece on "the ethics of 'selling out'" in The Crimson almost 6 years ago (jeez) that was somewhat more explicit in its EA justification, and I'm curious what you make of those arguments.

I think randomly selected Harvard students (among those who have the option to do so) deciding to take high-paying jobs and donate double-digit percentages of their salary to places like GiveWell is very likely better for the world than the random-ish other things they might have done, and for that reason I strongly support this op-ed. But I think for undergrads who are really committed to doing the most good, there are two things I would recommend instead. Both route through developing a solid understanding of the most important and tractable problems in the world, via reading widely, asking good questions of knowledgeable people, doing their own writing and seeking feedback, probably aggressively networking among the people working on these problems. 

This enables much more effective earning to give — I think very plugged-in and reasonably informed donors can outperform even top grantmaking organizations in various ways, including helping organizations diversify their funding, moving faster, spotting opportunities that the grantmakers don't, etc. 

And it's also basically necessary for doing direct work on the world's most important problems. I think the generic advice to earn to give misses the huge variation in performance between individuals in direct work; if I understand correctly, 80k agrees with this and thinks this should have been much more emphasized in their early writing and advice. Many Harvard students, in my view, could relatively quickly become excellent in roles like think tank research in AI policy or biosecurity or operations at very impactful organizations. A smaller but nontrivial number could be excellent researchers on important philosophical or technical questions. I think it takes a lot of earning potential to beat those.

Wow that's awesome. Great to connect with a Crimson alum!!

Your article is great — it covers a lot of bases, ones that I wish I had gotten the chance to talk about in my op-ed.

The original version was a lot heavier on the EA lingo: it discussed 80,000 Hours explicitly, didn't make such a strong claim that "selling out" was the best strategy, and so on. But I decided that a straightforward, focused approach to the problem would be most useful.

I don't think I'd truly say selling out is the "best" thing to do for everyone (which is the language my article uses), and that's for reasons others have laid out in this comment section.

But I do think it's a useful nudge. I've gotten a lot of reactions like "Wow, these stats are really eye-opening," and "That's a cool way to think about selling out," which was, honestly, the intention, so I'm glad it's played out that way.

It seems hard to EA-pill everyone from the outset. We all got here in small steps, not with everything thrust at us at once. I'm hopeful that it's at the very least a good start for a few people :)

[anonymous]

First of all, kudos on writing an op-ed! I think it’s a good thing to do, and I think earning to give is a much better path than what most Ivy League grads wind up doing, so if you persuade a few people, that’s good.

My basic problem with the argument you make here (and with earning to give in general) is that some bad things tend to go along with “selling out” (as you put it), rendering it difficult to maintain one’s initial commitment to earning to give. Some worries I have about college students deciding to do this:

  1. Erosion of values. When your social group becomes full of Meta employees (vs. idealistic college students), you find a partner (who may or may not be EA), you have kids, and so on, your values shift, and it becomes easier to justify not donating. I have seen a lot of people become gradually less motivated to do good between the ages of 20 and 30. Committing to a career path in, e.g., global health makes it harder for this value shift to be accompanied by a shift in the social value of one’s work (since most global health jobs are somewhat socially valuable); committing to a career path in earning to give presents no such barrier.

  2. Relatedly, lifestyle creep occurs. As you get richer (and befriend your colleagues at Meta and so on), people start inviting you to expensive birthday dinners and on nice trips and stuff. And so your ability to maintain a relatively more frugal lifestyle can be compromised by desire/pressure to buy nice stuff.

In other words, I think it’s harder to maintain your EA values when you’re earning to give vs. working at, eg, an NGO. These challenges are then further compounded by:

(3) Selection bias. I suspect that the group of EA-interested people who are drawn to earning to give in the first place are more interested in having a bougie lifestyle (etc) than the average EA who isn’t drawn to earning to give. And, correspondingly, I think they’re more likely to be affected by (1) and (2).

Although I agree that all of these are challenges, I don't really believe they're enough to undermine the basic case. It's not unusual in high-paying industries for some people to make several times as much as their colleagues. So there's potentially lots of room to have higher consumption than you would working at a nonprofit while also giving away more than half your salary.

Empirically, my impression is also that people who went into earning to give early tended to still be giving 10+ years later in their careers, although that's anecdotal rather than data-driven.

Do you know if there's empirical data on this? Like Owen, I think these are relatively minor risks.

Unsurprisingly, the people I know who donate the most are the ones who earn the most (although they probably don't give particularly effectively), and I know people with experience in non-profits who were disillusioned and moved back to "normal" jobs.
I could also imagine NGO workers might have strong incentives to work for a "White Landcruiser" NGO and be less focused on actually helping others. Basically, I'm not sure the value drift is actually that much higher for an earn-to-giver than for a worker at a median NGO (because it's high in both cases, and because most NGOs are not particularly impact-driven).

I'm also not sure how much of an issue lifestyle creep is in practice. If you earn twice as much, and spend twice as much on yourself because of lifestyle creep, you're still donating twice as much as before. And empirically, we see higher-income people give higher percentages of their incomes.
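A minimal sketch of that lifestyle-creep arithmetic (the income and spending figures are illustrative assumptions, and taxes are ignored for simplicity):

```python
# If income and personal spending both double, the residual donation
# doubles too. Figures are illustrative assumptions; taxes are ignored.

def annual_donation(income: int, spending: int) -> int:
    """Whatever isn't spent on yourself gets donated (simplified model)."""
    return income - spending

before = annual_donation(income=100_000, spending=60_000)
after = annual_donation(income=200_000, spending=120_000)

print(before, after)  # 40000 80000
assert after == 2 * before
```

The simplification only holds while spending scales no faster than income; if spending grows as a larger share of each marginal dollar, the donation multiple shrinks, which is the creep being worried about.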

Also, there are very significant "career capital" benefits of working in a high-paying job before moving into a direct role, some of which are highlighted in this recent post. The founders of the Against Malaria Foundation had lots of for-profit experience that I think probably helped them save so many lives when they switched to non-profit work. (Incidentally, it seems that Rob Mather has an MBA from Harvard itself)

That's all much better analyzed in the new 80,000 Hours page on Earning to Give, though, which makes many more points for and against.

I'm pretty pro-ETG. But I do agree with these points, Lilly.

I wonder if showcasing and building on the fun of giving effectively would be helpful? I actually have very little experience to draw on here myself, but it seems to me that doling out one's wealth can actually be pretty enjoyable, if we attempt to make it so.

There's the basic fuzzies - but also the sense of building something. Some people collect old cars or stock tropical aquariums. Insofar as value erosion is typified by declining interest when one leaves fertile EA social circles in college (I think ideology and lifestyle changes as causes are a little overemphasized by comparison), keeping up those networks might help. Giving as a fun hobby you do with your friends - just like other hobbies, but it's donation data spreadsheets and counterfactual impact that you collect instead of rare coins or vintage sneakers.

Relatedly - I've heard of parties where people came together to compile donations on giving days? Never been to one, but these seem great.

There's a good (but somewhat muddled) forum post on this: What to do with people? https://forum.effectivealtruism.org/posts/oNY76m8DDWFiLo7nH/what-to-do-with-people

Especially for people without direct involvement, like ETG people and recent grads: we can't just assume they'll stay in EA because "it's true/right"; some people need that social push. EAGs are good, but are simply too big, too costly, and too formal.

Cool article. I like the writing style a lot. I hope it helps convince others to do EtG or creates general interest in EA. I myself try to have an impact mainly through recruiting and outreach in academia, so I share much of your enthusiasm. I think encouraging future high-earners (or even low- and mid-earners!) to donate a portion of their income is a great part of that strategy, particularly with folks who would not enter direct work alternatively.

One thing that strikes me as important to add to this basic pitch is the extreme differences in effectiveness between charities. As I see it, a fair number of people in the US do donate a ton of their money after becoming rich, but they do so in dubious ways, e.g., donating to their alma mater or the local hospital.
 

I think there's some nice downstream effects of encouraging more people to do this. For one, donor diversity is always nice. I also think there may be a fair amount of folks doing this, and then at some point considering more direct involvement, be it part- or full time. 

I actually did include a pretty meaty paragraph about the effectiveness of charities (linking this article), and a brief explanation of GiveWell and its mission. Unfortunately we were over the word limit and my editor and I decided to cut it.

Though you're making me wish I'd chopped something else.

Great post!
Do note that, given the context and background, a lot of your peers are probably going to be nudged towards charitable ideas anyway. I would encourage you to be mindful of doing things that have counterfactual impact, while also taking into account the value of your own time and your own potential to do good.

I also encourage you to be cognizant of not epistemically taking over other people's world models with something like "AI is going to kill us all." I think an uncomfortable number of spaces inadvertently and unknowingly do this, and it's one of the key reasons why I never started an EA group at my university.
