"That article made me angry. All this time was spent on making sure your job was helping others and no time was spent on whether it brought you joy." I assigned high school seniors in our economics class to read Benjamin Todd's summary essay on planning a high-impact career. Some of the students were offended by the essay. Here's how it went.

In general, I teach career advice using the What Color is Your Parachute? framework. I really like the framework because it provides a lot of direction for thinking about one's own desires and how those can be corralled to serve a higher mission. This year I added two segments to the career advice: one on globalization, and another on EA. This is an Econ class, remember, so I felt justified in leaning into these frameworks.

MRU has a nice little segment about how to think about a career given globalization. We started with international trade and made our way to a discussion of globalization, the elephant graph, and what they mean for careers. I found this depressing, because of the narrow range of careers in which one can expect to see wage growth. So we also discussed how much easier it is to be the best person with a set of three skills than the best person with one skill, as an example of how one can still be competitive amid global competition. That combinatorial insight is very helpful.

But there is much more to a career than maximizing earnings in the face of globalization. I would be surprised if more than 10% of that class end up in jobs with the term 'engineer' attached, so we talked about other high-impact careers. I gave them this Benjamin Todd essay, and they came to class with a ton of pushback.

In the first ten minutes of class, before I even had said a word, they laid out a series of arguments against the EA approach to career advice. I relished this and took notes as they went. Such good material, such familiar objections! These very normal, American students from median American households in a non-coastal city sensed something deeply challenging about EA, but these are challenges which I think the community has already addressed.

Objection 1: Giving your life to others in this obsessive way will result in burnout.

Objection 2: This type of advice is for "people-pleasers", not independent, self-sufficient people.

Objection 3: This type of advice is for only super-self-sufficient people who want to take on huge responsibilities. I might not want big responsibilities.

Objection 4: This type of advice ignores the indirect value of normal careers, like working in shipping logistics.

Objection 5: If everyone tried to have a high-impact career, then the career wouldn't be high-impact anymore. 

Against these objections is the material in the essay itself. Close reading is hard! "You can divide career aims into three categories: (i) personal priorities, (ii) impartial positive impact, and (iii) other moral values." And:

"Turning to personal priorities, research suggests that people are most satisfied when they have work that’s:

  1. Meaningful
  2. Something they’re good at
  3. Engaging & with autonomy"

Properly understood, these two quotes answer Objections 1, 2, and 3. Burnout is antithetical to what should be a personal priority: personal flourishing with a healthy mind and healthy body. But I found it fascinating that several students, having no previous exposure to EA, immediately worried about burnout. I wonder if the lack of vocabulary about individual psychology sounds an internal alarm for postulants? Secondly, EA is about finding the right fit, and that does mean knowing your own strengths, weaknesses, and desire for responsibility. 

A close reading together cleared up these objections.

A student, one of my quiet thoughtful ones, leaned back in his chair and responded slowly to Objection 5. "Well," he said, "There are diminishing returns. If too many people go into a field, it's not a neglected problem anymore."

Then we turned to the final objection: indirect impact. This is one I could only address because I have been in the EA community for six years. A low-key, not-consensus, but common enough EA opinion is that "normal" jobs provide goods too, and that if you are able to be exceptional at an important "normal" job, or high-impact within that career, that can be good too.

The objections addressed, they read the next essay, on the three career stages.

The next day I asked each student if they disagreed with the EA advice here or had any new objections to the previous article.

I was mildly disappointed to find that they no longer resisted Ben's advice. They loved Big EA. 

Comments



I really enjoyed this post.  I personally feel as though I don't understand our users enough or have detailed enough models of how they are likely to react to our content, and so I appreciate write-ups like this.  

This should probably be its own essay at some point, but here's the short and sloppy version:

Against these objections is the material in the essay itself. Close reading is hard.

I think this line touches on something important to understand. My college required me to take an English class, and I took it online last summer. This gave me the opportunity to read almost every essay and scrap of writing produced by the thirty-odd people in the class, in the context of their writing their thoughts in response to essays about better writing, a lovely bit of recursion which I found enlightening. I think I have a better model now of how slightly-above-average people (if they were worse they wouldn't be in college; if they were better they'd have skipped the class) engage with the written word.

The teacher linked an essay begging new college students not to worry about the pointless things high schoolers are graded on, and instead to focus on writing compellingly--those pointless things were enumerated, by way of example. This essay was quite scathing, strident! Several students replied saying that while they'd forgotten those pointless rules they were glad for the reminder, expressed concern that they hadn't conformed to the rules in their introductory essays, and vowed to obey them in the future.

This wasn't an isolated occurrence; there were always people who read something and got exactly the opposite of the author's point. Those who didn't often got a point so utterly removed that I'd have to dig through the essay to see which line they'd misread if I wanted to understand them. People who took from the essay what the instructor hoped the class would were maybe a tenth of the total (myself not among them; I learned a lot in that class, but nothing the teacher had set out to teach).

When asked to choose which of three works expressed a particular point best--there was a personal essay, an analytic essay, and an inane video--the class overwhelmingly preferred the video. Detailing why, they said that they could hear inflection and tone in the video, that they didn't have to struggle with individual words and lose their place in the sentence, that they didn't have to reread things to understand what was being said. 

My conclusion is that if something is expressed only in writing it cannot reach the absolute majority of the population, any more than a particularly well-written verse in French can permeate the Anglosphere. I think that in many cases where highly literate people think they've identified an important problem, they've instead failed to diagnose illiteracy. (I watched the course instructor struggle with that; they didn't seem any more able to understand that the class couldn't understand them than the class was able to understand them. They were always engaging with the class on a level which implied they didn't realize the vast gulf of inferential distance.)

Fascinating! I would appreciate an essay arguing for this rather strong claim:

My conclusion is that if something is expressed only in writing it cannot reach the absolute majority of the population, any more than a particularly well-written verse in French can permeate the Anglosphere.

I have read weaker versions of the claim that successful communication is hard, such as Double Illusion of Transparency and You Have About Five Words – but I think that your example is even stronger and an interesting addition.

Personally, I think I also belong to the group of 2nd-order-illiterate people, in that I need to push my concentration a lot in order to read with sufficient care. My default way of reading is nowhere near enough, and I need to read a text several times until I feel that it doesn't contain 'new thoughts', even if it is well written. I do profit a lot from podcasts and lectures, even if it is just by 'watching a person think about the topic' and the content is the same as in a textbook.

Mau

Thanks for sharing this!

I can imagine people coming away from this with the impression that impact-oriented career communications like those of 80K should change their framings to better pre-empt these reactions, e.g. by more strongly emphasizing that taking on big problems is not for everyone (not because the author explicitly drew this conclusion, but because it seems like a natural one). It's pretty non-obvious to me that this is a right takeaway. Arguably, a majority of the impact of places like 80K comes from supporting people who are very dedicated to impact. Catering to audiences with lukewarm interest in impact will have some benefits, but I worry these might come at the cost of e.g. 80K failing to do very well at motivating and guiding people who are most excited to prioritize impartial positive impact. At least personally, I found it very motivating to come across a site that works with assumptions like "of course we're happy to take on big problems/responsibilities--how could we look away and do nothing, when these problems are out there?"

A bit more about where my intuitions are coming from:

  • I suspect there's lots of motivated reasoning behind objections 1-5 (since 1 & 4 are strawmen, 2 is name-calling, and 5 is irrelevant). Addressing weak objections that come from motivated reasoning seems like a doomed time sink--if people are looking for reasons to believe something, they'll find them.
  • I've seen groups that focus on supporting people in getting into high-impact careers spend lots of time trying to engage people who aren't that interested. It doesn't seem to have paid off much--people who aren't all that interested seem to drift away often, and not do very impact-targeted work even when they do remain somewhat engaged.
  • "if you are able to be exceptional at an important "normal" job or be high-impact within that career, that can be good too"--some version of this seems plausible, but as stated this feels close to watering down career advice in a way that seems very risky (by de-emphasizing the potentially huge difference between different career paths' impacts).

Related to this, I am wondering about the extent to which (I'm being slightly hyperbolic here)

  • they accepted you had won the argument logically, but were looking for ways to recover and marshal another rhetorical attack, and/or
  • you 'browbeat them' into thinking 'it's easier to agree with this'.

I'd be very curious to know whether these students really follow up on this: whether they would take time-consuming/costly steps to pursue and learn about impactful careers on their own time, after you leave the room, in the future.

In my opinion, objection 4 is a result of some people taking the (good) idea that “all people are equal” and developing the (bad) intuition that “all professions / causes are equally important”.

Subsequently, they’re offended by EA ideas of some career paths / causes being higher impact than others, because “professions / causes aren’t equally important” starts to sound like “people aren’t equally important” to them.

I have noticed this line of thinking with one friend, but I don’t know how prevalent it is. We could consider adding clarifying statements like “people in lower impact careers do not have less intrinsic value as human beings” to EA careers advice. But my guess is that it would not be worth it, because I think people whose intuitions are against prioritising between careers and cause areas are very unlikely to ever be influenced by EA ideas.

This thinking has come up in a few separate intro fellowship cohorts I’ve facilitated. Usually, somebody tries to flesh it out by asking whether it’s “more effective” to save one doctor (who could then be expected to save five more lives) or two mechanics (who wouldn’t save any other lives) in trolley-problem scenarios. This discussion often gets muddled, and many people have the impression that “EAs” would think it’s better to save the doctor, even though I doubt that’s a consensus opinion among EAs. I’ve found this to be a surprisingly large snag point that isn’t discussed much in community-building circles.

I think it would be worth it to clarify the difference between intrinsic and instrumental value in career advice/intro fellowships/other first interactions with the EA community, because there are some people who might agree with other EA ideas but find that this argument undermines our basic principles (as well as the claim that you don’t need to be utilitarian to be an EA). Maybe we could extend current messaging about ideological diversity within EA.

That said, I read Objection 4 differently. Many people (especially in cultures that glorify work) tie their sense of self-worth to their jobs. I don’t know how universal this is, but at least in my middle-class American upbringing, there was a strong sense that your career choice and achievement are a large part of your value as a person.

As a result, some people feel personally judged when their intended careers aren’t branded as “effective”. If you equate your career value with your personal value, you won’t feel very good if someone tells you that your career isn’t very valuable, and so you’ll resist that judgment.

I don’t think that this feeling precludes people from being EAs. It takes time to separate yourself from your current or intended career, and Objection 4 strikes me as a knee-jerk defensive reaction. Students planning to work in shipping logistics won’t immediately like the idea that the job they’ve been working hard to prepare for is “ineffective,” but they might come around to it after some deeper reflection. 

I could be misreading Objection 4, though. It could also mean something like “shipping logistics is valuable because the world would grind to a halt if nobody worked in shipping logistics,” but then that’s just a variant of Objection 5.

I’m very curious to know more about the sense in which these students gave Objection 4. 

You write well. Funny at the end.

This may be the best execution I've seen of one of my EA Forum writing prompts:

Have you tried to explain EA to anyone recently? How did it go? Based on your experience, are there any frames or phrasings that you would/wouldn’t recommend?

Wonderful work!
