
A witty politician once said, "The world belongs to the old. The young ones just look better." He had a point. Power, seniority, and influence tend to come with age. Walking around the conference halls this February at EAG Global in the Bay Area, I estimated the average age to be in the mid-20s or so. While youthfulness has its benefits, such as fewer family or career commitments, openness to pivoting occupations, and potentially more time for volunteering, I believe we miss out on the power, wisdom, and diversity that older participants could bring. Even people in their mid-to-late 40s appeared quite rare at the conference.

Why is it important to have age diversity and more older participants?

  • Power and Influence: Leaders and managers are typically older than their employees. The median age in the U.S. House of Representatives is around 58. Baby Boomers (now mostly in their 60s and 70s) hold about 50% of America's wealth. I assume similar trends apply globally.
  • Experience and Wisdom: Older individuals bring experience gained through years of trial and error.
  • Diversity: Opinions, perspectives, and priorities naturally evolve with age.

The Effective Altruism movement remains notably youthful. According to the EA Survey 2020, participants had a median age of 27 (mean 29).

Why is our movement, after being around for ~15 years (depending on how you count), less appealing to older participants?

How can Effective Altruism become more attractive to older demographics?

Comments (10)



I agree this is a potential concern.

As it happens, since 2020, the community has continued to age. As of the end of last year, the median age was 31 and the mean 32.4, and we can see that it has steadily aged across years.

It's clear that a key contributor to our age distribution is the age at which people first get involved with EA (median 24, mean 26.7), though this age of first involvement has itself increased over time.

I think people sometimes point to our outreach focusing on things like university groups to explain this pattern. But I think this is likely overstated: university groups account for only a small minority of our recruiting, and most of the ways people first hear about EA seem to be more passive mechanisms, not tied to direct outreach, which would be accessible to people at older ages (we'll discuss this in more detail in the 2024 iteration of this post).

That said, different age ranges do appear to have different levels of awareness of EA, with awareness seeming to be highest in the 25-34 and 35-44 age ranges. (Though our sample size is large, the number of people whom we count as aware of EA is very low, so these estimates are quite uncertain. Our confidence in them will increase as we run more surveys.) This suggests that awareness of EA may be reaching different groups unevenly, which could partly contribute to lower engagement from older age groups. But this need not be the result of differences in our outreach; it could result from different levels of interest from the different groups.

Nice! By the way, I really appreciate how consistently your team helps ground discussions like this in data. I opened this post and as I was scrolling through, I thought, "oh I bet David has already responded with something helpful." It's a great public service!

As someone who heard of EA at age 40 and is now 50, I find this question comes up often in discussions with more experienced professionals. I wrote about my personal journey, which I couldn't have pursued without the luxury of ample free time to learn and explore this space. A recent post by Jim Chapman also describes the effort that can be needed to transition into the space.

In my local group, I'm usually the oldest participant; many other people my age don't feel very welcome in a group of younger people. They also mostly see donations as their pathway to contributing, which doesn't require the level of community involvement that people with career ambitions need. The group is simply less useful to them in an instrumental way; the same applies to EA conferences.

In organizations, hiring can sometimes be more focused on younger candidates. Some organizations prioritize strong value alignment, assessed through hard-to-fake signals, such as investing in networking, volunteering, attending conferences and retreats, and taking a pledge, that can present a significant challenge for individuals with family responsibilities.

I applied for jobs for the first time in my late 40s, at EA organizations; previously I had been accustomed to networking and receiving invitations for work opportunities. Completing work trials under time pressure often seemed tailored to those with more fluid intelligence, which is typically higher in younger individuals, as opposed to the crystallized intelligence that develops later in life.

Sometimes experienced professionals vent about how they were invited to apply for a job and then had to start at the first stage of the process, how they were treated unprofessionally during hiring, or how, once they started, their expertise was not valued in an organization led by people with little prior leadership experience. This can lead to losing more experienced people. (At Successif, we help mid-career and more senior people navigate these challenges in the area of AI risk.)

This leads me to the question of whether EA is the right place for more senior people. When I talk to people my age about impact, I'm more likely to recommend the donation opportunities at Effektiv Spenden, the 10% Pledge, or the book Moral Ambition for career inspiration than the global or national EA websites. While I often enjoy being the oldest person and spending much time in deep discussions with philosophically minded people 20 years my junior, I expect this to be the exception. People with families and busy jobs are probably looking for a quick way to shift their focus and connect to people at a similar point in their life. Other services and brands are probably now better suited to cater to this need than EA.

It's unfortunate that more senior professionals are having this experience, particularly as experience and expertise are so important for tackling big issues. 

"Other services and brands are probably now better suited to cater to this need than EA." Do you have examples of these?

Walking around the conference halls this February at EAG Global in the Bay Area, I estimated the average age to be in the mid-20s or so.

The average age of EAG Bay Area 2025 feedback survey respondents was 30, FYI. 

I don't think this blunts the thrust of your questions, which are good and important, but people do seem to consistently underestimate the average age of EA Global attendees.

(30 is the mean, median is 29)

One project aimed at mid-career people: https://www.successif.org/our-work

I expect there are some cohort effects (people in more recent generations have a higher probability of being involved). In particular, many people get into EA via university groups (although it may not be the place they 'first heard' about it; see David Moss's reply), and these groups have only been around for a decade or so.

But I also imagine some pure age effects (as people age, they leave/are less likely to enter), perhaps driven by things like:

1. Homophily/identity/herding: If you only see people unlike you (age-wise), you're less likely to think you belong. This leads to inertia.

2. Cost and family priorities: EA ~expects/encourages people to donate a substantial share of their income, or to do directly impactful work (which may be less remunerative or secure). For older people the donation share/lost income could seem more substantial, esp. if they are used to their lifestyle. Or, probably more significantly, for parents it may be harder to do what seems like 'taking money away from their children'.

3. Status and prestige issues: EA leaders tend to be young, and EA doesn't value seniority or credentials as much (which is probably a good thing). But older people might feel ~disrespected by this. Or, second order: they might think that their age-peers and colleagues will think less of them if they are following or 'taking direction' from ~'a bunch of kids'. E.g., as a junior professor at an academic conference, if you are seated at the grad students' table you might feel insecure.

4. The relevant issues and expertise tend to be 'new stuff' that older people won't have learned or won't be familiar with. AI Safety is the biggest one, but there are other examples, like Bayesian approaches.

(Identity politics bit: I'm 48 years old, and some of this is based on my own impressions, but not all of it.)
 

I expect that some of the older EAs are more senior and therefore have more responsibilities competing against attending EA Global.

Here are a few reasons off the top of my head:

  • The movement is still young. People who were introduced to it in college are still only in their mid-30s.
  • Generational differences in moral partiality vs. impartiality. Moral impartiality is still not widely accepted.
  • Age discrimination
  • A good amount of funding is geared towards those coming out of post-grad.
  • Technological advances are typically adopted by the younger generations first (relevant to AI risk).
  • Being underprepared for retirement. Not everyone can (or wants to) accept a decrease in compensation, especially in the years when retirement savings need to be accumulated.