
Hi reader! This is the first post in a sequence I’m trying out, and I’m still figuring out how to present these findings and make the posts useful to you. Please let me know in the comments if you have any ideas or suggestions on how to do this.


Epistemic Status: I am 100% sure people have told me what I’ve reported, but I am very unsure about their conclusions or the validity of their claims.

 

Intro

In my role as a community builder in Boston (and online) for the past year, I have heard the same theme/concern arise in ~10 different conversations I have had about the EA community with non-EAs. Unfortunately, none of them were willing to write this up themselves, but I felt their views were significant enough that I would try to offer a very low-effort way for them to be shared, which they did think was important to do.

So what I’ve done here (with permission) is summarize and collate these concerns into a succinct couple of points, which I hope will be taken into consideration alongside other points about EA community building and mental health within the EA community. I also hope it might (even temporarily) cause us to pay closer attention to our own and our EA friends’ mental health.


For the sake of readability I am writing the first two sections in the voice of the collective of people who expressed these views - these are not my views (my views are the last two sections). 

“EA as a Crutch

We don’t know many EAs (some of us know one, some of us know a couple, some of us have attended a house party or an event with EAs also in attendance), but those we do know seem to have poor mental health and don’t recognize it. One way we see this manifesting is that our EA friends seem to get more and more sucked into EA as their depression or anxiety gets worse - they spend more time with other EAs, or spend more time working long stints, and don’t seek help. There are two ways this seems to work:

  1. EAs tend to focus on really drastic and dire topics and, by working on them, we think our friends are able to ‘justify’ their depression or anxiety as being a rational response to what they’re thinking about all the time. 
  2. We think our EA friends are using discussions with other EAs (or within the EA community) and with us as a way of trying to address some of these issues instead of seeking professional therapy, which we think they need.”



 

"How this relates to Selecting for “Rationalism” 

We’ve come to think that EAs tend to lean in the direction of ‘emotionally inexpressive’ or ‘emotionally unaware’. We think this might be a trait which exists prior to people joining EA, rather than something that happens to them as a result of joining. This seems important because rationalism seems to lead practitioners to engage with ‘heavy’ topics using logic and reason rather than emotion, which one might then try to suppress in order to ‘become better’ at rational thinking.

We think the kind of ‘emotional expressiveness’ we’re worried about is when someone is experiencing physiological pain or dealing with complex and uncomfortable emotions, but doesn’t feel comfortable or articulate enough to ask for help or support. It seems like this could lead to very bad outcomes, because the types of causes EAs think about are likely to make mental health worse, and then these friends get trapped in a downward spiral.

Another thing we think EA might be selecting for is people who are bad at conversing with normal people/having regular conversations. While this is not a bad thing in itself, it does likely mean that when a non-EA meets an EA, the EA is more likely to behave in a socially unpleasant way. That is probably not good for the public impression of EA and of what EAs are like - but, more worryingly, for what EA ‘does’ to the people in it.”

 

My Commentary on these views

I think it’s plausible that these claims are true. It’s interesting to think about what “EA” selects for - we already think it likely selects for people with some background in economics and philosophy, wealthier people, and younger people, but as far as I know we haven’t looked much into selection based on psychological or temperamental traits. The idea that EA selects for lower EQ or for the presence of mental health issues seems like a massive and spurious claim, but it’d be super interesting to investigate. It’d be especially interesting if it were true that something about EA selects for (or causes) poor social interaction or low personability, as I think this is a limiting factor in current outreach/community-building efforts on the ground.

 

Notes on the methodology of this post

This is my first time reporting on what is essentially a single-blind focus group, which is not something I think I’ve seen on the forum before. That’s understandable, given how iffy and contentious most of the information and sourcing is, and how it’s probably not epistemically sound.



 

Comments

There's a really good point there, I'll restate it: people act like the difficult problems in front of them are the reason for the low moods they are having. As a result of misidentifying the source of their low mood, they try to solve the mood problem by pouring themselves into working on the issue, but this often just won't work.

I think this is resting on a common myth about human psychology. There actually doesn't need to be a relationship between the difficulty of the problems in front of us and our emotional affect or energy levels. It's a non sequitur. No matter what problem is in front of you, there's always something you can do, some next step to take (if you don't know what the next step is, then the next step is figuring out the next step!), and if you are walking forward as well as you can, you should be able to take satisfaction in that. If not, it's a health thing.

People act like the difficult problems in front of them are the reason for the low moods they are having.

Sometimes this is true! In which case I recommend contemplating "Detach the grim-o-meter."

Here's a separate error that I've made many times: People believe that their intellectual knowledge of the world's problems causes them to act a certain way, when in reality they act that way because of their mood.

I should elaborate the model a little: I think it's common to have your mood influenced by the difficulty of problems (I've experienced that a lot), but it doesn't need to be. This is usually a result of not respecting the problems enough to acknowledge that every small step towards solving them counts for a lot, or of not having enough faith that you'll be able to continue making progress, or of believing too much that you are trapped on a particular course.

Good post!  I think that addressing these concerns is definitely important; based on some similar conversations I have had recently, I have updated toward thinking the non-EA perception of EA is worse than I thought, for reasons like these. However, on the object level, while I think the mental health and social skills claims are probably true, I would be very surprised if EAs were particularly bad at paying attention to their own feelings. Particularly in the Bay Area EA community, but AFAIK more broadly as well, I feel like there is a lot of focus on mental health, techniques like rationalist "focusing," meditation, IFS, etc. to get you in touch with your feelings, and lots of community discourse about these topics. Similarly, there is a lot of community attention to problems like EA being a social bubble, burnout, and other ways in which EA can become too all-encompassing. Am I missing something?
