
[EDIT: I realize that this is not always true and am definitely interested in arguments/evidence for that too]
For context, I lead a university group and constantly find myself talking to members about why I don't think there is a real sacrifice to wellbeing in choosing to work on the most pressing problems [as opposed to the ones that students gravitated to when they were young]. Any resources that address concerns about sacrificing happiness when using EA to inform career plans would be much appreciated!




Honestly, I don't think this is true for the top EA cause areas. These have been selected for impact, not for their ability to make use of people with a wide range of backgrounds and preferences.

OTOH, it's pretty self-evident that people can do "the most good they can do."

Direct work in the top cause areas is a relatively narrow interpretation of EA principles. And, personally, I find the broader interpretation more encouraging and even somewhat relaxing.

Hi Misha. Thanks for your answer. I was wondering why you believe top EA cause areas to not be capable of utilizing people with a wide range of backgrounds and preferences. It seems to me like many of the top causes require various backgrounds. For example, reducing existential risk seems to require people in academia doing research, in policy enacting insights, in the media raising concerns, in tech building solutions, etc. 

So let's be more specific: current existential risk reduction focuses primarily on AI risk and biosecurity. Contributing to these fields requires quite a bit of specialization and high levels of interest in AI or biotechnology — this is the first filter. Let's look at hypothetical positions DeepMind can hire for: they can absorb a lot of research scientists, some policy/strategy specialists, and a few general writers/communication specialists. DM probably doesn't hire many, if any, people who majored in business and management, nursing, education, criminal justice, anthropology, history, kinesiology, or the arts — and these are all very popular undergraduate majors. There is a limited number of organizations, and these organizations have their peculiarities and cultural issues — this is another filter.

Seconding Khorton's reply: as a community builder you deal with individuals, who you can help select the path of most impact. It might be in an EA cause area or it might not be. The aforementioned filters might be prohibitive to some and might not pose a problem to others. Everyday longtermism is likely the option available to most. But in any case, you deal with individuals, and individuals are peculiar :)

This makes a lot of sense and thanks for sharing that post! It's certainly true that my role is to help individuals and as such it's important to recognize their individuality and other priorities. 

I suppose I also believe that one can contribute to these fields in the long run by building aptitudes, as Ines' response discusses, but maybe these problems are urgent and require direct work soon, in which case I can see what you are saying about the high levels of specialization.

Agree; moving into "EA-approved" direct work later in your career while initially doing skill- or network-building is also a good option for some. I would actually think that if someone can achieve a lot in a conventional career, e.g., gaining some local prominence (either as a goal in itself or as preparation to move into a more "directly EA" role), that's great. My thinking here was especially influenced by an article about the neoliberalism community.

(Urgency of some problems, most prominently AI risk, might indeed be a decisive factor under some worldviews held in the community. I guess most people should plan their career in the way that makes the most sense to them under their own worldviews, but I can imagine changing my mind here. I need to acknowledge that I think short timelines and existential risk concerns are "psychoactive," and people should be carefully exposed to them to avoid various failure modes.)

Why not encourage students to experiment for themselves? They could try a summer internship, a volunteer position, or a class on a topic where they could help solve one of the world's top problems, as well as exploring areas they've been drawn to since childhood, and keep an open mind as they explore.

I think a lot of people will find it really satisfying to see how they can help people, but some people might genuinely be happier working on something they've been interested in since childhood, and we shouldn't try to deceive those people!

This is a good option. I hadn't really considered this. And I agree that we definitely shouldn't try to deceive anyone. 

I believe this primarily because of arguments in So Good They Can't Ignore You by Cal Newport, which suggest that applying skills we excel at is what makes work enjoyable, rather than a pre-existing passion for a specific job or cause. I also think that community and purpose are super important for happiness, and most top EA causes seem to provide both.

I think this is often a real tradeoff, but there are other ways of framing it that might help:

A) You should work on something you at least somewhat enjoy and have a good personal fit for in order to avoid burnout (I think this is 80k's position as well). Within the range of things that meet these criteria, some will be more impactful than others, and you should choose the most impactful one. EA frameworks are very useful for discerning which one this might be.

B) The aptitude-building approach (from Holden Karnofsky's 80k podcast episode): You should become great at something you like and are very good at, and then wield it in the most impactful way you can, which knowledge of EA is again useful for. (Even if it is not initially obvious how, most skills can be applied to EA in some way—for example, creative writing like HPMOR has served as a great tool for community building.)

If someone is unwilling to move away from a low-impact cause, there are still ways EA can be useful for helping them be more impactful within their cause. Similarly, if someone is set on a certain skill, EA can help them use it to do good effectively. 

Thanks Ines for this thoughtful answer! It makes me want to emphasize the aptitude-building approach more at my group.

This article by 80,000 Hours on job satisfaction is probably a useful resource on how working on the most pressing problems doesn't necessarily have to involve sacrificing happiness.

Thanks Ben for sharing this!

Cal Newport argues in favor of this in his book So Good They Can't Ignore You; as a uni group leader myself, I've found his points useful when talking with new members.

I believe Cal Newport's career advice has been quite influential on 80,000 Hours' own advice, so you might not find anything terribly new there, but I do think it's worth checking out.
