
This is a quick outline of two worries that come up for me when I consider EA's focus on community-building amongst university-age people, and sometimes younger. I am mostly focussed on possible negative consequences for young people rather than for EA itself. I don’t offer potential solutions to these worries; instead I try to explain my thinking and then pose the questions sitting at the top of my mind.

Intro

At a past lunch with coworkers, I brought up the topic of “Sprout EAs”. Currently, this is the term I’m using to describe people who have spent their entire full-time professional career in the EA ecosystem, having become involved at university age or, occasionally, high-school age.[1]

Anyways, there are two things I worry about with this group: 

Worry one: Sprout EAs stay in EA because it is often easier to stay in things than to leave, especially when you’re young

There’s your standard status quo bias, which can get particularly salient around graduation time. At that point, many people are under-resourced and pushing towards more stable self-reliance, uncertain what next steps to take, and relatively early in their journey through life and their professional career. Many undergraduate students are familiar with the “unsure what to do next? Just do grad school!” meme, because when so much of your adult life is ahead of you and you’re confused, it’s enticing to do more of what you know.

In a similar vein: I think those entering the professional world who have become heavily embedded in EA during their time as a student have a lot of force pushing them to remain in the EA ecosystem. Maybe this doesn’t really matter, because maybe lots of them will find jobs they really enjoy, have an impact, and develop into their adult life, and it’s all good. And maybe it’s kind of a moot point because you have to choose something. This is just a fact about life and being young, and how is anyone supposed to address the reality that “young people have to make choices and there are lots of uncontrollable factors influencing those choices”?

But, if EA is going to put concerted effort into community building on university campuses, and sometimes with high school students, these are probably important dynamics to think about. Additionally, EA has some unique and potent qualities that can grab young people:

  • It can offer a very clear career path, which is incredibly comforting 
  • It can offer a sense of meaning
  • It can offer a social community

All these things have the potential to make "off-boarding" from EA extra difficult, especially at a time in life when people generally have fewer internal, social, experiential, and material resources. I worry about young people who could gain a lot of personal benefit from "off-boarding" or just distancing themselves a bit more from EA, yet struggle to do so (for reasons of the flavour described above), forget this is even an option, or find it too mentally aversive to consider.

Worry two: EA offers young people things it isn’t “trying to” or “built to” offer, which can lead to negative outcomes for individuals

I think this is an important point that can get muddled. There’s the thing EA “actually is,” which is debatable and a bit abstract. It’s a community, an idea, maybe a question? It’s not a solved, prescriptive, infallible philosophy. It is, maybe, a powerful framework with a highly active professional and social community built around it, attempting to do good. But the way it hits people differs quite a bit. No one can control whether EA fills holes in people’s lives, even if that isn't an express or even desirable goal.

On one level, EA can easily hit as a straightforward career plan and life purpose that young people can scoop up and run with, if they’re positioned to do so. That anyone can scoop up, of course. But young people, being young and often more impressionable, less established, etc., can be particularly positioned to scoop. I don’t know how to avoid that, or if avoiding it is even possible. However, this reality leads to a whole host of outcomes, many of which are not that concerning, and some of which are a little concerning.

Example of something not concerning to me: Undergraduate student A hears about EA and gets really excited. They like biology already, but weren't sure what career path to pursue. They decide to pursue biosecurity research at graduate school based on 80,000 Hours.[2] They join an EA-aligned biosecurity lab, they enjoy it, and they are good at their work. They conduct rigorous research and also live their life, and that’s that. Maybe they just “scooped up the idea and ran” without thinking too hard about whether they’re a longtermist or what their cause prioritization is, and really, that’s completely fine. It’s not to say they didn’t think at all, but perhaps, in no small part, they chose biosecurity research because they liked the sense of community and meaning that came with their work, they were planning to do something research-related anyway, and they didn’t scrutinize that too much.

Also not concerning: Undergraduate student B hears about EA and gets really excited. They become actively involved in their local EA group and make lots of friends. They think deeply about the core principles of EA and potential career paths they could pursue. They spend one summer completing a research internship at their university and another summer completing a research internship on existential risk. After weighing many factors, they decide to build career capital by working for the US government. They really enjoy participating in EA (through meet-ups and online spaces) and find it personally fulfilling, but have also tried to stress-test it with friends outside EA. They live with a friend from university who is also interested in a policy career, but isn’t particularly into EA.

Example of something more concerning: Undergraduate student C hears about EA and gets really excited. They go to all their university’s EA meet-ups, and soon, all of their friends are “EAs”. They graduate and aren’t sure what they want to do, but they know they want it to be at an explicitly "EA org". This feels emotionally important to them because EA has sort of totalised their social environment and mental space. They’ve really internalised the idea that having an impact is imperative and the way to do that is to do EA labeled things. They live in an EA house with people working at various EA organisations. They…

  1. Keep trying to get a job at an EA organisation (in research or operations, whatever really), and it’s hard, so they accept sub-optimal work environments. 
  2. Can’t find a position at an explicitly EA-aligned organisation and this is really upsetting. They overextend themselves throughout the job-hunting process and take on significant personal costs, well past the point at which they should have pivoted to investigating roles that are not explicitly EA-aligned.
  3. Over-optimise on “getting an EA job” and learn how to “talk the talk”. They’re good at sounding as though they have reasoning transparency and have thought deeply about their values. On some level, they genuinely have. But the core thing driving them is “get an EA job” (whether consciously or not; probably not).

Numbers 1 and 2 are damaging to individuals and number 3 could be damaging to the community/epistemic environment. I have no idea how frequently these things are happening. There is anecdotal evidence that 1 has happened at least a handful of times. I can think of a few examples that map onto 2 pretty well, though maybe less extreme.

3 is one outcome that could lead to the slow erosion of epistemic rigour over time. This concern is certainly not novel and has been discussed at length in different places (e.g. Bad Omens in Current Community Building).

I also want to caveat: worries 1 and 2 are surely present in other (many?) professional spaces. There is an extent to which these realities are unavoidable. But they are worries nonetheless and worth thinking about.

Questions

As a result of these worries, the questions I have are: 

  1. Are we considering these types of dynamics when deciding how to structure community-building efforts targeted at “young people”? I'm especially worried about potential programs targeted at high school students, for whom I imagine this is all further amplified (generally speaking). 
  2. How much of these worries is just “the reality of life and the world and trying to do things”, and how much is “dynamics we could better consider and try to avoid”? In what ways do we currently encourage or guard against these dynamics, if any?
  3. How can we all keep some of these dynamics in mind when offering advice to more junior people interested in working within EA? Especially those of us explicitly working on community-building. What does that look like?
  1. ^

    "Sprout EA" is not a great term, I’m sorry. I tried asking ChatGPT and it suggested "Eternal Change Agents" and "Continuous Impact Enthusiasts," among others, which are bad. My friend Kirsten suggested "Career EAs," but then the acronym is CEAs and abbreviations are only meant to bear so many loads (ideally, one, but EA tends to push it). Let me know if you have any ideas. The actual term I like is "EA Babies," but comes across infantilising :( 

  2. ^

    My friend wanted to point out that biosecurity is, in fact, an interdisciplinary cause area and that you should check out Biosecurity needs engineers and materials scientists and Laboratory Biorisk Management Needs More Social Scientists.

Comments



Strongly endorse this post. I came to the EA movement relatively late in life. And I notice that A LOT of younger EAs are really invested in getting an EA org job, to an extent that makes me uncomfortable. I think this is actually unhealthy for the movement, not just the individuals. But there's an incentives problem: the orgs all say "no, people should apply even though it's super-competitive; let us make the decision about who's the best fit", and that really is optimal from the narrow perspective of each org. But especially given the time commitment required to put together a serious application, it doesn't adequately account for all the time being wasted by unsuccessful job applicants. I can't blame the orgs for following their incentives, but those of us with a more neutral perspective should probably somewhat discourage the current levels of persistence in applying to EA org jobs, and/or encourage EA orgs to use less time-consuming hiring processes.

 

A few years ago I told a college-aged EA that I would be inconvenienced but not devastated if the movement collapsed, because as much as I enjoy being a part of it and having people to collaborate with or even just share ideas with, the underlying principles and not the movement are the thing that really matters, and I would still do my best to live up to the principles even if I had to do it all alone. This person was visibly offended by my comment. That's not a healthy response, and I've been worried about the young'uns ever since. 

the underlying principles and not the movement are the thing that really matters

I think both matter a lot. I want the principles to be acted upon, and in a coordinated fashion.

How about “early-start EA (EEA)”? As a term, could sit neatly beside “highly-engaged EA (HEA)”.

Hi Frances,
thanks for putting this out here, I enjoyed reading it as another Community Builder and someone who works on Community Health. 
I'll give my opinions on your questions (as I don't think of these as answers). I'd be keen to read what you think :) 

1. I can't really speak for programs reaching high schoolers; these don't currently exist in Germany. If they did, I'd want to thoroughly lace them with lots of messages about making sure to explore other paths, opinions, and ideas than just EA, and about having friends that don't have anything to do with EA. 
For our regular outreach at universities: our young team in Germany has taken the general attitude of developing EA as a professional environment with space for personal connection, but the focus lies on professional networking, learning, and mentorship. Plus, I personally recommend and encourage the people I speak to to have non-EA friends, try out non-EA work environments, and learn from non-EA sources. I think that's one way to avoid filling a hole and having people lock into the EA system for fear of changing gears. 

2. I would say a mix of both. EA is a system (with its informal hierarchies, emphasis on networking, and express focus on ambition) that is imo more likely than other systems to create people looking to stay "inside", or to get burnt out while trying to prove themselves. I think recent efforts to professionalize the movement more are one way of guarding against this. 

3. Advice for younger people (all this is pretty basic, but I think important to stress especially with the non-conforming people we often attract):

  • Follow simple rules about work, like: Don't accept work without a contract, maintain a degree of separation between work and private life, take breaks & vacations to recharge.
  • Make sure to explore at the start of your career, and explore outside of EA too. 
  • Find a mentor with more work experience than you, ideally in the field you aspire to enter. 
  • This is a marathon, not a sprint. If you want to make an impact with your career, make sure you can sustain one for 30-odd years.
  • Most of us are still pretty young and in the process of figuring it out. Some of the work culture at central EA orgs might change over time as people age and mature.

Executive summary: The post outlines two potential pitfalls when building community among young people in effective altruism: 1) Difficulty leaving EA due to youth and uncertainty, and 2) EA filling unintentional roles in young people's lives.

Key points:

  1. Young people may stay in EA due to status quo bias and EA's ability to offer career paths, meaning, and community. This may prevent valuable "off-boarding."
  2. EA can fill holes in young people's lives in unintended ways. This could lead some to accept suboptimal jobs or personal costs in pursuit of EA alignment.
  3. Some may overoptimize on "getting an EA job" without internalizing EA principles. This could erode epistemic rigor over time.
  4. Worth considering whether these dynamics factor into outreach to youth and university students, and how to encourage reflection on whether EA meets personal needs beyond stated goals.
  5. Advice-givers should consider potential biases and pressures those new to EA may face. Highlight options beyond explicit EA alignment.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
