
(Cross-posted from my website.)

I recently resigned as Columbia EA President and have stepped away from the EA community. This post aims to explain my EA experience and some reasons why I am leaving EA. I will discuss poor epistemic norms in university groups, why retreats can be manipulative, and why paying university group organizers may be harmful. Most of my views on university group dynamics are informed by my experience with Columbia EA. My knowledge of other university groups comes from conversations with other organizers from selective US universities, but I don’t claim to have a complete picture of the university group ecosystem.

Disclaimer: I’ve written this piece in a more aggressive tone than I initially intended. I suppose the writing style reflects my feelings of EA disillusionment and betrayal.

My EA Experience

During my freshman year, I heard about a club called Columbia Effective Altruism. Rumor on the street told me it was a cult, but I was intrigued. Every week, my friend would return from the fellowship and share what he learned. I was fascinated. Once spring rolled around, I applied for the spring Arete (Introductory) Fellowship.

After enrolling in the fellowship, I quickly fell in love with effective altruism. Everything about EA seemed just right—it was the perfect club for me. EAs were talking about the biggest and most important ideas of our time. The EA community was everything I had hoped college would be. I felt like I had found my people. I found people who actually cared about improving the world. I found people who strived to tear down the sellout culture at Columbia.

After completing the Arete Fellowship, I reached out to the organizers asking how I could get more involved. They told me about EA Global San Francisco (EAG SF) and a longtermist community builder retreat. Excited, I applied to both and was accepted. Just three months after getting involved with EA, I was flown out to San Francisco for a fancy conference and a seemingly exclusive retreat.

EAG SF was a lovely experience. I met many people who inspired me to be more ambitious. My love for EA further cemented itself. I felt psychologically safe and welcomed. After about thirty one-on-ones, the conference was over, and I was on my way to an ~exclusive~ retreat.

I like to think I can navigate social situations elegantly, but at this retreat, I felt totally lost. All these people around me were talking about so many weird ideas I knew nothing about. When I'd hear these ideas, I didn't really know what to do besides nod my head and occasionally say "that makes sense." After each one-on-one, I knew that I shouldn't update my beliefs too much, but after hearing almost every person talk about how AI safety is the most important cause area, I couldn't help but be convinced. By the end of the retreat, I went home a self-proclaimed longtermist who prioritized AI safety.

It took several months to sober up. After rereading some notable EA criticisms (Bad Omens, Doing EA Better, etc.), I realized I got duped. My poor epistemics led me astray, but weirdly enough, my poor epistemics gained me some social points in EA circles. While at the retreat and at EA events afterwards, I was socially rewarded for telling people that I was a longtermist who cared about AI safety. Nowadays, when I tell people I might not be a longtermist and don't prioritize AI safety, the burden of proof is on me to explain why I "dissent" from EA. If you're a longtermist AI safety person, there's no need to offer evidence to defend your view.

(I would be really excited if more experienced EAs asked EA newbies why they take AI safety seriously more often. I think what normally happens is that the experienced EA gets super excited and thinks to themselves “how can I accelerate this person on their path to impact?” The naïve answer is to point them only towards upskilling and internship opportunities. Asking the newbie why they prioritize AI safety may not seem immediately useful and may even convince them not to prioritize AI safety, God forbid!)

I became President of Columbia EA shortly after returning home from EAG SF and the retreat, and I'm afraid I did some suboptimal community building. Here are two mistakes I made:

  1. In the final week of the Arete Fellowship (I was facilitating), I asked the participants what they thought the most pressing problem was. One said climate change, two said global health, and two said AI safety. Neither of the people who said AI safety had any background in AI. If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong (Note: prioritizing any non-mainstream cause area after Arete is epistemically shaky. By mainstream, I mean a cause area that someone would have a high prior on). I think that poor epistemics may often be a central part of the mechanism that leads people to prioritize AIS after completing the Arete Fellowship. Unfortunately, rather than flagging this as epistemically shaky and supporting those members to better develop their epistemics, I instead dedicated my time and resources to pushing them to apply to EAG(x)'s, GCP workshops, and our other advanced fellowships. I did not follow up with the others in the cohort.
  2. I hosted a retreat with students from Columbia, Cornell, NYU, and UPenn. All participants were new EAs (either still completing Arete or just finished Arete). I think I felt pressure to host a retreat because "that's what all good community builders do." The social dynamics at this retreat were pretty solid (in my opinion), but afterwards I felt discontent. I had not convinced any of the participants to take EA seriously, and I felt like I had failed. Even though I knew that convincing people of EA wasn't necessarily the goal, I still implicitly aimed for that goal.

I served as president for a year and have since stepped down and dissociated myself from EA. I don't know if/when I will rejoin the community, but I was asked to share my concerns about EA, particularly university groups, so here they are!

Epistemic Problems in Undergraduate EA Communities

Every highly engaged EA I know has converged on AI safety as the most pressing problem. Whether or not they have a background in AI, they have converged on AI safety. The notable exceptions are those who were already deeply committed to animal welfare or those who have a strong background in biology. The pre-EA animal welfare folks pursue careers in animal welfare, and the pre-EA biology folks pursue careers in biosecurity. To me, it seems that some of these notable exceptions may not have performed rigorous cause prioritization. For students who converge on AI Safety, I also think it's unlikely that they have performed rigorous cause prioritization. I don't think this is that bad, because cause prioritization is super hard, especially if your cause prioritization leads you to work on a cause you have no prior experience in. But I am scared of a community that emphasizes the importance of cause prioritization yet in which few people actually do it.

Perhaps people are okay with deferring their cause prioritization to EA organizations like 80,000 Hours, but I don't think many people would have the guts to openly admit that their cause prioritization is a result of deferral. We often think of cause prioritization as key to the EA project, and admitting to deferring on one's cause prioritization is to reject a part of the Effective Altruism project. I understand that everyone has to defer on significant parts of their cause prioritization, but I am very concerned with just how little cause prioritization seems to be happening at my university group. I think it would be great if more university group organizers encouraged their members to focus on cause prioritization. I think if groups started organizing writing fellowships where people focus on working through their cause prioritization, we could make significant improvements.

My Best Guess on Why AI Safety Grips Undergraduate Students

The college groups that I know best, including Columbia EA, seem to function as factories for churning out people who care about existential risk reduction. Here's how I see each week of the Arete (Intro) Fellowship play out.

  1. Woah! There's an immense opportunity to do good! You can use your money and your time to change the world!
  2. Wow! Some charities are way better than others!
  3. Empathy! That's nice. Let's empathize with animals!
  4. Doom! The world might end?! You should take this more seriously than everything we've talked about before in this fellowship.
  5. Longtermism! You should care about future beings. Oh, you think that's a weird thing to say? Well, you should take ideas more seriously!
  6. AI is going to kill us all! You should be working on this. 80k told me to tell you that you should work on this.
  7. This week we'll be discussing WHAT ~YOU~ THINK! But if you say anything against EA, I (your facilitator) will lecture for a few minutes defending EA (sometimes rightfully so, other times not so much).
  8. Time to actually do stuff! Go to EAG! Go to a retreat! Go to the Bay!

I'm obviously exaggerating what the EA fellowship experience is like, but I think this is pretty close to describing the dynamics of EA fellowships, especially when the fellowship is run by an inexperienced, excited, new organizer. Once the fellowship is over, the people who stick around are those who were sold on the ideas espoused in weeks 4, 5, and 6 (existential risks, longtermism, and AI), either because their facilitators were passionate about those topics, because they were tech bros, or because they were inclined toward those ideas due to social pressure or emotional appeal. The folks who were intrigued by weeks 1, 2, and 3 (animal welfare, global health, and cost-effectiveness) but dismissed longtermism, x-risks, or AI safety may (mistakenly) think there is no place for them in EA. Over time, the EA group continues to select for people with those values, and before you know it, your EA group is a factory that churns out x-risk reducers, longtermists, and AI safety prioritizers. I am especially fearful that almost every person who becomes highly engaged through their college group will have worldviews and cause prioritizations strikingly similar to those of the people who compiled the EA handbook (intro fellowship syllabus) and AGISF.

It may be that AI safety is in fact the most important problem of our time, but there is an epistemic problem in EA groups that cannot be ignored. I’m not willing to trade off epistemic health for churning out more excellent AI safety researchers (This is an oversimplification. I understand that some of the best AI researchers have excellent epistemics as well). Some acclaimed EA groups might be excellent at churning out competent AI safety prioritizers, but I would rather have a smaller, epistemically healthy group that embarks on the project of effective altruism.

Caveats

I suspect that I overestimate how much facilitators influence fellows' thinking. I think that people don't become highly engaged because their facilitator was very persuasive (persuasiveness plays a smaller part); rather, people become highly engaged because they already had worldviews that mapped closely onto EA.

How Retreats Can Foster an Epistemically Unhealthy Culture

In this section, I will argue that retreats cause people to take ideas seriously when they perhaps shouldn't. Retreats make people more susceptible to buying into weird ideas. Those weird ideas may in fact be true, but the process of buying into them rests on shaky epistemic grounds.

Against Taking Ideas Seriously

According to LessWrong, "Taking Ideas Seriously is the skill/habit of noticing when a new idea should have major ramifications." I think taking ideas seriously can be a useful skill, but I'm hesitant when people encourage new EAs to take ideas seriously.

Scott Alexander warns against taking ideas seriously:

for 99% of people, 99% of the time, taking ideas seriously is the wrong strategy. Or, at the very least, it should be the last skill you learn, after you’ve learned every other skill that allows you to know which ideas are or are not correct. The people I know who are best at taking ideas seriously are those who are smartest and most rational. I think people are working off a model where these co-occur because you need to be very clever to resist your natural and detrimental tendency not to take ideas seriously. But I think they might instead co-occur because you have to be really smart in order for taking ideas seriously not to be immediately disastrous. You have to be really smart not to have been talked into enough terrible arguments.

Why Do People Take Ideas Seriously in Retreats?

Retreats are sometimes believed to be one of the most effective university community building strategies. Retreats heavily increase people's engagement with EA. People cite retreats as being key to their onramp to EA and taking ideas like AI safety, x-risks, and longtermism more seriously. I think retreats make people take ideas more seriously because retreats disable people's epistemic immune system.

  1. Retreats are a foreign place. You might feel uncomfortable and less likely to “put yourself out there." Disagreeing with the organizers, for example, “puts you out there." Thus, you are unlikely to dissent from the views of the organizers and speakers. You may also paper over your discontents/disagreements so you can be part of the in-group.
  2. When people make claims confidently about topics you know little about, there's not much to do. For five days, you are bombarded with arguments for AI safety, and what can you do in response? Sit in your room and try to read arguments and counterarguments so you can be better prepared to talk about these issues the next day? Absolutely not. The point of this retreat is to talk to people about big ideas that will change the world. There’s not enough time to do the due diligence of thinking through all the new, foreign ideas you’re hearing. At this retreat, you are encouraged to take advantage of all the networking opportunities. With no opportunity to do your due diligence to read into what people are confidently talking about, you are forced to implicitly trust your fellow retreat participants. Suddenly, you will have unusually high credence in everything that people have been talking about. Even if you decide to do your due diligence after the retreat, you will be fighting an uphill battle against your unusually high prior on those "out there" takes from those really smart people at the retreat.

Other Retreat Issues

  1. Social dynamics are super weird. It can feel very alienating if you don't know anyone at the retreat while everyone else seems to know each other. More speed friending with people you’ve never met before would be great.
  2. Lack of psychological safety
    1. I think it's fine for conversations at retreats to be focused on sharing ideas and generating impact, but it shouldn't feel like the only point of the conversation is impact. Friendships shouldn't feel centered around impact. It’s a bad sign if people feel that they will jeopardize a relationship if they stop appearing to be impactful.
    2. The pressure to appear to be “in the know” and send the right virtue signals can be overwhelming, especially in group settings.
  3. Not related to retreats but similar: sending people to the Bay Area is weird. Why do people suddenly start to take longtermist, x-risk, AI safety ideas more seriously when they move to the Bay? I suspect moving to the Bay Area has similar effects as going to retreats.

University Group Organizer Funding

University group organizers should not be paid so much. I was paid an outrageous amount of money to lead my university's EA group. I will not apply for university organizer funding again even if I do community build in the future.

Why I Think Paying Organizers May Be Bad

  1. Being paid to run a college club is weird. All other college students volunteer to run their clubs. If my campus newspaper found out I was being paid this much, I am sure an EA take-down article would be published shortly after.
  2. I doubt paying university group organizers this much is increasing their counterfactual impact much. I don't think organizers are spending much more time because of this payment. Most EA organizers are from wealthy backgrounds, so the money is not clearing many bottlenecks (need-based funding would be great—see potential fixes section).
    1. Getting paid to organize did not make me take my role more seriously, and I suspect that other organizers did not take their roles much more seriously because of being paid. I'd be curious to read the results of the university group organizer funding exit survey to learn more about how impactful the funding was.

Potential Solutions

  1. Turn the University Group Organizer Fellowship into a need-based fellowship. This is likely to eliminate financial bottlenecks in people's lives and accelerate their path to impact, while not wasting money on those who do not face financial bottlenecks.
  2. If the University Group Organizer Fellowship exit survey indicates that funding was somewhat helpful in increasing people's commitment to quality community building, then reduce funding to $15/hour (I’m just throwing this number out there; bottom line is reduce the hourly rate significantly). If the results indicate that funding had little to no impact, abandon funding (not worth the reputational risks and weirdness). I think it’s unlikely that the results of the survey indicate that the funding was exceptionally impactful.

Final Remarks

I found an awesome community at Columbia EA, and I plan to continue hanging out with the organizers. But I think it's time I stop organizing, both for my mental health and for the reasons outlined above. I plan to spend the next year focusing on my cause prioritization and building general competencies. If you are a university group organizer and have concerns about your community's health, please don't hesitate to reach out.

Comments (113)
Some comments are truncated.

Hey,

I'm really sorry to hear about this experience. I've also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it's something I also don't like about the movement. My biggest worries with my own beliefs stem from the worry that I'd have very different views if I'd found myself in a different social environment. It's just simply very hard to successfully have a group of people who are trying both to figure out what's correct and to change the world: from the perspective of someone who thinks the end of the world is imminent, someone who doesn't agree is at best useless and at worst harmful (because they are promoting misinformation).

In local groups in particular, I can see how this issue can get aggravated: people want their local group to be successful, and it’s much easier to track success with a metric like “number of new AI safety researchers” than “number of people who have thought really deeply about the most pressing issues and have come to their own well-considered conclusions”. 

One thing I’ll say is that core researchers ... (read more)

As someone who is extremely pro investing in big-tent EA, my question is, "what does it look like, in practice, to implement 'AI safety...should have its own movement, separate from EA'?"

I do think it is extremely important to maintain EA as a movement centered on the general idea of doing as much good as we can with limited resources. There is serious risk of AIS eating EA, but the answer to that cannot be to carve AIS out of EA. If people come to prioritize AIS from EA principles, as I do, I think it would be anathema to the movement to try to push their actions and movement building outside the EA umbrella. In addition, EA being ahead of the curve on AIS is, in my opinion, a fact to embrace and treat as evidence of the value of EA principles, individuals, and movement building methodology.

To avoid AIS eating EA, we have to keep reinvesting in EA fundamentals. I am so grateful and impressed that Dave published this post, because it's exactly the kind of effort that I think is necessary to keep EA EA. I think he highlights specific failures in exploiting known methods of inducing epistemic ... untetheredness? 

For example, I worked with CFAR where the workshops deliberately em... (read more)

"what does it look like, in practice, to implement 'AI safety...should have its own movement, separate from EA'?"

Creating AI Safety-focused conferences, AI Safety university groups, and AI Safety local meet-up groups? Obviously attendees will initially overlap very heavily with EA conferences and groups, but having them separated out will lead to a bit of divergence over time.

Wouldn't this run the risk of worsening the lack of intellectual diversity and epistemic health that the post mentions? The growing divide between long/neartermism might have led to tensions, but I'm happy that at least there's still conferences, groups and meet-ups where these different people are still talking to each other!

There might be an important trade-off here, and it's not clear to me what direction makes more sense.

freedomandutility:
I don’t think there’s much of a trade-off, I’d expect a decent proportion of AI Safety people to still be coming to EA conferences
Arthur Malone:
I am all for efforts to do AIS movement building distinct from EA movement building by people who are convinced by AIS reasoning and not swayed by EA principles. There's all kinds of discussion about AIS in academic/professional/media circles that never reference EA at all. And while I'd love for everyone involved to learn about and embrace EA, I'm not expecting that. So I'm just glad they're doing their thing and hope they're doing it well. I could probably have asked the question better and made it, "what should EAs do (if anything), in practice to implement a separate AIS movement?" Because then it sounds like we're talking about making a choice to divert movement building dollars and hours away from EA movement building to distinct AI safety movement building, under the theoretical guise of trying to bolster the EA movement against getting eaten by AIS? Seems obviously backwards to me. I think EA movement building is already under-resourced, and owning our relationship with AIS is the best strategic choice to achieve broad EA goals and AIS goals.

What should be done? I have a few thoughts, but my most major best guess is that, now that AI safety is big enough and getting so much attention, it should have its own movement, separate from EA.

Or, the ideal form for the AI safety community might not be a "movement" at all! This would be one of the most straightforward ways to ward off groupthink and related harms, and it has been possible for other cause areas; for instance, global health work mostly doesn't operate as a social movement.

Global health outside of EA may not have the issues associated with being a movement, but it has even bigger issues.

At the moment, I’m pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I’m very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety even than we have done, I still think that AI safety eating EA would be a major loss.

I wonder how this would look different from the current status quo:

  • Wytham Abbey cost £15m, and its site advertises it as basically being primarily for AI/x-risk use (as far as I can see it doesn't advertise what it's been used for to date)
  • Projects already seem to be highly preferentially supported based on how longtermist/AI-themed they are. I recently had a conversation with someone at OpenPhil in which, if I understood/remembered correctly, they said the proportion of OP funding going to nonlongtermist stuff was about 10%. [ETA sounds like this is wrong]
  • The global health and development fund seems to have been discontinued. The infrastructure fund, I've heard on the grapevine, strongly prioritises projects with a longtermist/AI focus. The other major source of money in the EA space is the Survival and Flourishing Fund, which lists its goal as 'to
... (read more)

Regarding the funding aspect:

  • As far as I can tell, Open Phil has always given the majority of their budget to non-longtermist focus areas.
    • This is also true of the EA portfolio more broadly.
  • GiveWell has made grants to less established orgs for several years, and that amount has increased dramatically of late.

Holden also stated in his recent 80k podcast episode that <50% of OP's grantmaking goes to longtermist areas.

Arepo:
I realise I didn't make this distinction, so I'm shifting the goalposts slightly, but I think it's worth distinguishing between 'direct work' organisations and EA infrastructure. It seems pretty clear from the OP that the latter is being strongly encouraged to primarily support EA/longtermist work.
Rebecca:
I'm a bit confused about the grammar of the last sentence - are you saying that EA infrastructure is getting more emphasis than direct work, or that people interested in infrastructural work are being encouraged to primarily support longtermism?
Arepo:
Sorry - the latter.
Rebecca:
I’d imagine it’s much harder to argue that something like community building is cost-effective within something like global health, than within longtermist focused areas? There’s much more capacity to turn money into direct work/bednets, and those direct options seem pretty hard to beat in terms of cost effectiveness.
Arepo:
Community building can be nonspecific, where you try to build a group of people who have some common interest (such as something under big tent EA), or specific, where you try to get people who are working on some specific thing (such as working on AI/longtermist projects, or moving in that direction). My sense is that (per the OP) community builders are being pressured to do the latter.

The theory of change for community building is much stronger for long-termist cause areas than for global poverty.

For global poverty, it's much easier to take a bunch of money and just pay people outside of the community to do things like hand out bed nets.

For x-risk, it seems much more valuable to develop a community of people who deeply care about the problem so that you can hire people who will autonomously figure out what needs to be done. This compares favourably to just throwing money at the  problem, in which case you’re just likely to get work that sounds good, rather than work advancing your objective.

Jason:
Right, although one has to watch for a possible effect on community composition. If not careful, this will end up with a community full of x-risk folks not necessarily because x-risk is correct cause prioritization, but because it was recruited for due to the theory of change issue you identify.
Arepo:
This seems like a self-fulfilling prophecy. If we never put effort into building a community around ways to reduce global poverty, we'll never know what value it could have generated. Also, it seems a priori really implausible that longtermists could usefully do more things in their sphere alone than EAs focusing on the whole of the rest of EA-concern-space could.
Chris Leong:
Well EA did build a community around it and we’ve seen that talent is a greater bottleneck for longtermism than it is for global poverty.

The flipside argument would be that funding is a greater bottleneck for global poverty than longtermism, and one might convince university students focused on global poverty to go into earning-to-give (including entrepreneurship-to-give). So the goals of community building may well be different between fields, and community building in each cause area should be primarily judged on its contribution to that cause area's bottleneck.

Chris Leong:
I could see a world in which the maths works out for that. I guess the tricky thing there is that you need the amount raised with discount factor applied to exceed the cost, incl. the opportunity cost of community builders potentially earning to give themselves. And this seems to be a much tighter constraint than that imposed by longtermist theories of change.
Jason:
True -- although I think the costs would be much lower for university groups run by (e.g.) undergraduate student organizers who were paid typical student-worker wages (at most). The opportunity costs would seem much stronger for community organizing by college graduates than by students working a few hours a week.

Most of the researchers at GPI are pretty sceptical of AI x-risk.


Not really responding to the comment (sorry), just noting that I'd really like to understand why these researchers at GPI and careful-thinking AI alignment people - like Paul Christiano - have such different risk estimates!  Can someone facilitate and record a conversation? 

David Thorstad, who worked at GPI, blogs about the reasons for his AI skepticism (and other EA critiques) here: https://ineffectivealtruismblog.com/

Achim:
Which of David's posts would you recommend as a particularly good example and starting point?
JWS:
Imo it would be his Existential Risk Pessimism and the Time of Perils series (it's based on a GPI paper of his that he also links to). Clearly written, well-argued, and up there amongst his best work, and I think one of the better criticisms of xRisk/longtermist EA that I've seen. I think he's pointed out a fundamental tension in utilitarian calculus here, and pointed out the additional assumption that xRisk-focused EAs have to make for this to work ("the time of perils"), and he plausibly argues that this assumption is more difficult to argue for than the initial two (Existential Risk Pessimism and the Astronomical Value Thesis)[1]. I think it's a rich vein of criticism that I'd like to see more xRisk-inclined EAs respond to further (myself included!) 1. ^ I don't want to spell the whole thing out here, go read those posts :)
Achim:
Thanks! I read it, it's an interesting post, but it's not "about reasons for his AI skepticism". Browsing the blog, I assume I should read this?
mhendric:
Depends entirely on your interests! They are sorted thematically: https://ineffectivealtruismblog.com/post-series/ Specific recommendations if your interests overlap with Aaron_mai's: 1(a), on a tension between thinking X-risks are likely and thinking reducing X-risks has astronomical value; 1(b), on the expected value calculation in X-risk; 6(a), as a critical review of the Carlsmith report on AI risk.

The object-level reasons are probably the most interesting and fruitful, but for a complete understanding of how the differences might arise, it's probably also valuable to consider:

  • sociological reasons
  • meta-level incentive reasons 
  • selection effects

An interesting exercise could be to go through the categories and elucidate 1-3 reasons in each category for why AI alignment people might believe X and cause prio people might believe not X.

Larks:

One said climate change, two said global health, and two said AI safety. Neither of the people who said AI safety had any background in AI. If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong (Note: prioritizing any non-mainstream cause area after Arete is epistemically shaky. By mainstream, I mean a cause area that someone would have a high prior on).

This seems like a strange position to me. Do you think people have to have a background in climate science to decide that climate change is the most important problem, or development economics to decide that global poverty is the moral imperative of our time? Many people will not have a background relevant to any major problem; are they permitted to have any top priority?

I think (apologies if I am mis-understanding you) you try to get around this by suggesting that 'mainstream' causes can have much higher priors and lower evidential burdens. But that just seems like deference to wider society, and the process by which mainstream causes became dominant does not seem very epistemically reliable to me.

If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong

I would like to second the objection to this. I feel as though most intros to AI Safety, such as AGISF, are detached enough from technical AI details that one could do the course without any past AI background.

(This isn't an objection to the epistemics related to picking up a non-mainstream cause area quickly, but rather about the need to have an AI background to do so)

I guess I'm unclear about what sort of background is important. ML isn't actually that sophisticated, as it turns out; it could have been, but "climb a hill" or "think about an automaton but with probability distributions and annotated with rewards" just don't rely on more than a few semesters of math.

2/5 doesn’t seem like very strong evidence of groupthink to me.

I also wouldn’t focus on their background, but on things like whether they were able to explain the reasons for their beliefs in their own words or tended to simply fall back on particular phrases they’d heard.

(I lead the CEA uni groups team but don’t intend to respond on behalf of CEA as a whole and others may disagree with some of my points)
 

Hi Dave,

I just want to say that I appreciate you writing this. The ideas in this post are ones we have been tracking for a while and you are certainly not alone in feeling them. 

I think there is a lot of fruitful discussion in the comments here about strategy-level considerations within the entire EA ecosystem and I am personally quite compelled by many of the points in Will’s comment. So, I will focus specifically on some of the considerations we have on the uni group level and what we are trying to do about this. (I will also flag that I could say a lot more on each of these but my response was already getting quite long and we wanted to keep it somewhat concise)

Epistemics

  • We are also quite worried about epistemic norms in university groups. We have published some of our advice around this on the forum here (though maybe we should have led with more concrete examples) and I gave a talk at EAG Bay Area on it.
  • We also try to screen that people actually understand the arguments behind the claims they are making & common argumen
... (read more)
freedomandutility:
Hi Jessica, if you have time, I’d love to get your thoughts on some of my suggestions to improve university group epistemics via the content of introductory fellowships: https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing?commentId=z7rPNpaqNZPXH2oBb

Hi Dave,

Thanks for taking the time to write this. I had an almost identical experience at my university. I helped re-start the club, with every intention to lead the club, but I am no longer associated with it because of the lack of willingness from others to engage with AI safety criticisms or to challenge their own beliefs regarding AI safety/Existential risk.

I also felt that those in our group who prioritized AI safety had an advantage as far as getting recognition from more senior members of the city group, ability to form connections with other EAs in the club, and to get funding from EA orgs. I was quite certain I could get funding from the CEA too, as long as I lied and said I prioritized AI safety/Existential risk, but I wasn’t willing to do that. I also felt the money given to other organizers in the club was not necessary and did not have any positive outcomes other than for that individual.

I am now basically fully estranged from the club (which sucks, because I actually enjoyed the company of everyone) because I do not feel like my values, and the values I originally became interested in EA for (such as epistemic humility) exist in the space I was in.

I did manage to have... (read more)

Thanks for writing this. This comment, in connection with Dave's, reminds me that paying people -- especially paying them too much -- can compromise their epistemics. Of course, paying people is often a practical necessity for any number of reasons, so I'm not suggesting that EA transforms into a volunteer-only movement.

I'm not talking about grift but something that has insidious onset in the medical sense: slow, subtle, and without the person's awareness. If one believes that financial incentives matter (and they seemingly must for the theory of change behind paying university organizers to make much sense), it's important to consider the various ways in which those incentives could lead to bad epistemics for the paid organizer. 

If student organizers believe they will be well-funded for promoting AI safety/x-risk much more so than broad-tent EA, we would expect that to influence how they approach their organizing work. Moreover, reduction of cognitive dissonance can be a powerful drive -- so the organizer may actually (but subconsciously) start favoring the viewpoint they are emphasizing in order to reduce that dissonance rather than for sound reasons. If a significant number... (read more)

It seems like a lot of criticism of EA stems from concern about "groupthink" dynamics. At least, that is my read on the main reason Dave dislikes retreats. This is a major concern of mine as well.

I know groups like CEA and Open Phil have encouraged and funded EA criticism. My difficulty is that I don't know where to find that criticism. I suppose the EA forum frequently posts criticisms, but fighting groupthink by reading the forum seems counterproductive.

I've personally found a lot of benefit in reading Reflective Altruism's blog. 

What I'm saying is, I know EA orgs want to encourage criticism, and good criticisms do exist, but I don't think orgs have found a great way to disseminate those criticisms yet. I would want criticism dissemination to be more of a focus.

For example, there is an AI Safety reading list an EA group put out. It's very helpful, but I haven't seen any substantive criticism linked to in that list, while arguments in favor of longtermism comprise most of the list.

I've only been to a handful of the conferences, but I've not seen a "Why to be skeptical of longtermism" talk posted.

Has there been an 80k podcast episode that centers longtermism skepticism befor... (read more)

If you're an animal welfare EA I'd highly recommend joining the wholesome refuge that is the newly minted Impactful Animal Advocacy (IAA).

Website and details here. I volunteered for them at the AVA Summit, which I strongly recommend as the premier conference and community-builder for animal welfare-focused EAs. The AVA Summit has some features I have long thought missing from EAGs - namely people arguing in good faith about deep deep disagreements (e.g. why don't we ever see a panel with prominent longtermist and shorttermist EAs arguing for over an hour straight at EAGs?). There was an entire panel addressing quantification bias which turned into a discussion of how some believe EA has done more harm than good for the animal advocacy movement... but that people are afraid to speak out against EA, given it is a movement that has brought over 100 million dollars into animal advocacy. Personally, I loved there being a space for these kinds of discussions.

Also, one of my favourite things about the IAA community is they don't ignore AI, they take it seriously and try to think about how to get ahead of AI developments to help animals. It is a community where you'll bump into people who can talk about x-risk and take it seriously, but for whatever reason are prioritizing animals.

People have been having similar thoughts to yours for many years, including myself. Navigating through EA epistemic currents is treacherous. To be sure, so is navigating epistemic currents in lots of other environments, including the "default" environment for most people. But EA is sometimes presented as being "neutral" in certain ways, so it feels jarring to see that it is clearly not.

Nearly everyone I know who has been around EA long enough to do things like run a university group eventually confronts the fact that their beliefs have been shaped socially by the community in ways that are hard to understand, including by people paid to shape your beliefs. It's challenging to know what to do in light of that. Some people reject EA. Others, like you, take breaks to figure things out more for themselves. And others press on, while trying to course correct some. Many try to create more emotional distance, regardless of what they do. There's not really an obvious answer, and I don't feel I've figured it fully out myself. All this is to just say: you're not alone. If you or anyone else reading this wants to talk, I'm here.

Finally, I really like this related post, as well as this comment... (read more)

I'm really glad you chose to make this post and I'm grateful for your presence and insights during our NYC Community Builders gatherings over the past ~half year. I worry about organizers with criticisms leaving the community and the perpetuation of an echo chamber, so I'm happy you not only shared your takes but also are open to resuming involvement after taking the time to learn, reflect, and reprioritize.

Adding to the solutions outlined above, some ideas I have:

• Normalize asking people, "What is the strongest counterargument to the claim you just made?" I think this is particularly important in a university setting, but also helpful in EA and the world at large. A uni professor recently told me one of the biggest recent shifts in their undergrad students has been a fear of steelmanning, lest people incorrectly believe it's the position they hold. That seems really bad. And it seems like establishing this as a new norm could have helped in many of the situations described in the post, e.g. "What are some reasons someone who knows everything you do might not choose to prioritize AI?" • Greater support for uni students trialing projects through their club, including projects spann... (read more)

I remember speaking with a few people who were employed doing AI-type EA work (people who appear to have fully devoted their careers to the mainstream narrative of EA-style longtermism). I was a bit surprised that when I asked them "What are the strongest arguments against longtermism?" none were able to provide much of an answer. I was perplexed that people who had decided to devote their careers (and lives?) to this particular cause area weren't able to clearly articulate the main weaknesses/problems.

Part of me interpreted this as "Yeah, that makes sense. I wouldn't be able to speak about strong arguments against gravity or evolution either, because it seems so clear that this particular framework is correct." But I also feel some concern if the strongest counterargument is something fairly weak, such as "too many white men" or "what if we should discount future people."

Mad props for going off anon. Connecting it to your resignation from Columbia makes me take you way more seriously and is a cheap way to make the post 1000x more valuable than an anon version.

Linch:
Hmm, 1000x feels too strong to me, maybe by >100x. EDIT: lol controversial comment
Zane:

Thanks for making this post. Many commenters are disputing your claim that "Being paid to run a college club is weird", and I want to describe why I think it is in fact distorting.

One real reason you don't want to pay the leadership of a college club a notably large amount of money is that you expose yourself to brutal adverse selection: the more you pay above the market rate for a campus job, the more attractive the executive positions are to people who are purely financially motivated rather than motivated by the mission of the club. This is, loosely speaking, a problem faced by all efforts to hire everywhere, but is usually resolved in a corporate environment through having precise and dispassionate performance evaluation, and the ability to remove people who aren't acting "aligned", if you will. I think the lack of mechanisms like this at the college org level basically means this adverse selection problem blows up, and you simply can't bestow excess money or status on executives without corrupting the org. I saw how miserable college-org politics were in other settings, with a lot less money to go around than EA.

At the core of philanthropic mission is a principal-agent probl... (read more)

...seeing that the Columbia EA club pays its executives so much...

To the best of my knowledge, I don't think Columbia EA gives out salaries to their "executives." University group organizers who meet specific requirements (for instance, time invested per week) can independently apply for funding and have to undergo an application and interview process. So, the dynamics you describe in the beginning would be somewhat different because of self-selection effects; there isn't a bulletin board or a LinkedIn post where these positions are advertised. I say somewhat because I can imagine a situation where a solely money-driven individual gets highly engaged in the club, learns about the Group Organizer Fellowship, applies, and manages to secure funding. However, I don't expect this to be that likely.

...you are constantly being nudged by your corrupted hardware to justify spending money on luxuries and conveniences.

For group funding, at least, there are strict requirements for what money can and cannot be spent on. This is true for most university EA clubs unless they have an independent funding source.

 All that said, I agree that "notably large amount[s] of money" for university organizers is not ideal.

Kirsten:
The most analogous position I can think of is that university chaplains get paid to work with university students to help teach and mentor them.
Jason:
Chaplains don't raise all of the same concerns here. They generally aren't getting above-market salaries (either for professional-degree holders generally, or compared to other holders of their degree), and there's a very large barrier to entry (in the US, often a three-year grad degree costing quite a bit of money). So there's much less incentive and opportunity for someone to grift into a chaplain position; chaplains tend to be doing it because they really believe in their work.

My poor epistemics led me astray, but weirdly enough, my poor epistemics gained me some social points in EA circles. While at the retreat and at EA events afterwards, I was socially rewarded for telling people that I was a longtermist who cared about AI safety.

This is odd to me because I have a couple memories of feeling like sr EAs were not taking me seriously because I was being sloppy in my justification for agreeing with them. Though admittedly one such anecdote was pre-pandemic, and I have a few longstanding reasons to expect the post-pandemic community builder industrial complex would not have performed as well as the individuals I'm thinking about.

Can confirm that:

"sr EAs [not taking someone seriously if they were] sloppy in their justification for agreeing with them"

sounds right based on my experience being on both sides of the "meeting senior EAs" equation at various times.

(I don't think I've met Quinn, so this isn't a comment on anyone's impression of them or their reasoning)

I think that a very simplified ordering for how to impress/gain status within EA is:

Disagreement well-justified ≈ Agreement well-justified >>> Agreement sloppily justified > Disagreement sloppily justified

Looking back on my early days interacting with EAs, I generally couldn't present well-justified arguments. I then did feel pressure to agree on shaky epistemic grounds. Because I sometimes disagreed nevertheless, I suspect that some parts of the community were less accessible to me back then.

I'm not sure about what hurdles to overcome if you want EA communities to push towards 'Agreement sloppily justified' and 'Disagreement sloppily justified' being treated similarly.

RohanS:
I think both things happen in different contexts. (Being socially rewarded just for saying you care about AI Safety, and not being taken seriously because (it seems like) you have not thought it through carefully, that is.)
Linch:
I dunno, I think it can be the case that sloppy reasoning and disagreement with your conversational partner are independently penalized, or that there's an interaction effect between the two. Especially in quick conversations, I can definitely see times where I'm more attuned to bad (by my lights) arguments for (by my lights) wrong conclusions than bad arguments for what I consider to be right conclusions. This is especially true if "bad arguments for right conclusions" really just means people who don't actually understand deep arguments paraphrasing better arguments that they've heard.
nonn:
My experience is that it's more that group leaders & other students in EA groups might reward poor epistemics in this way. And that when people are being more casual, it 'fits in' to say AI risk & people won't press for reasons in those contexts as much, but would push if you said something unusual. Agree my experience with senior EAs in the SF Bay was often the opposite–I was pressed to explain why I'm concerned about AI risk & to respond to various counterarguments.

For what it's worth, I run an EA university group outside of the U.S (at the University of Waterloo in Canada). I haven't observed any of the points you mentioned in my experience with the EA group:

  • We don't run intro to EA fellowships because we're a smaller group. We're not trying to convert more students to be 'EA'. We more so focus on supporting whoever's interested in working on EA-relevant projects (ex: a cheap air purifier, a donations advisory site, a cybersecurity algorithm, etc.). Whether they identify with the EA movement or not. 
  • Since we're not trying to get people to become EA members, we're not hosting any discussions where a group organiser could convince people to work on AI safety over all else. 
  • No one's getting paid here. We have grant money that we've used for things like hosting an AI governance hackathon. But that money gets used for things like marketing, catering, prizes, etc. - not salaries. 

Which university EA groups specifically did you talk to before proclaiming "University EA Groups Need Fixing"? Based only on what I read in your article, a more accurate title seems to be "Columbia EA Needs Fixing" 

...we're not hosting any discussions where a group organiser could convince people to work on AI safety over all else. 

I feel it is important to mention that this isn't supposed to happen during introductory fellowship discussions. CEA and other group organizers have compiled recommendations for facilitators (here is one, for example), and all the ones I have read quite clearly state that the role of the facilitator is to help guide the conversation, not overly opine or convince participants to believe in x over y.

Thanks for writing this, these are important critiques. I think it can be healthy to disengage from EA in order to sort through some of the weird ideas for yourself, without all the social pressures.

A few comments:

Being paid to run a college club is weird. All other college students volunteer to run their clubs.

I actually don't think it's that weird to pay organizers. I know PETA has a student program that pays organizers, and The Humane League once did this too. I'd imagine you can find similar programs in other movements, though I don't know for sure.

I suspect the amount that EA pays organizers is unusual though, and I strongly agree with you that paying a lot for university organizing introduces weird and epistemically corrosive incentives. The PETA program pays students $60 per event they run, so at most ~$600 per semester. Idk exactly how much EA group leaders are paid, but I think it's a lot more than that.

I definitely share your sense that EA's message of "think critically about how to do the most good" can sometimes feel like code for "figure out that we're right about longtermism so you can work on AI risk." The free money, retreats etc. can wind up feeling more like bribe... (read more)

For UK universities (I see a few have EA clubs) - it is really weird that student volunteers receive individual funding. I think this applies to US as well but can't be 100% sure:

UK student clubs fall under the banner of their respective student union, which is a charitable organisation to support the needs, interests and development of the students at the university. They have oversight of clubs, and a pot of money that clubs can access (i.e. they submit a budget for their running costs/events for the year and the union decides what is/isn't reasonable and what it can/can't fund). They also have a platform to promote all clubs through the union website, Freshers' week, university brochures, etc.

Some external organisations sponsor clubs. This is usually to make up 'gaps' in funding from the union e.g. If a bank wanted to fund a finance club so they can provide free caviar and wine at all events to encourage students to attend, in return for their logo appearing in club newsletters, this makes sense; the union would not be funding the 'caviar and wine' line item in the budget as this is not considered essential to supporting the running of the finance club as per the union's charita... (read more)

zchuang:
In Australia it is the norm for student union leaders to be paid a decently large sum, in the 20k to 30k range from memory.
JoshuaBlake:
The UK has this too. But they are full-time employees, either taking a year off from their studies or in the year after they graduate. Open Phil pays a lot more than this.
zchuang:
Yeah, in Australia they don't really do much, having been friends with them.
Jason:
Open Phil's University Organizer Fellowship quotes the following ranges, which may be useful as a ballpark. Funding starting in the following ranges for full-time organizers, pro-rated for part-time organizers:
  • In the US: $45,000 – $80,000 per year for undergraduates, and $60,000 – $95,000 per year for non-undergraduates (including those no longer at university).
  • In the UK: £31,800 – £47,800 per year for undergraduates, and £35,800 – £55,900 per year for non-undergraduates.
  • Funding amounts in other countries will be set according to cost-of-living and other location-specific factors.
Exact funding amounts will depend on a number of factors, including city-specific cost-of-living, role, track record, and university. Most grantees are "working 15 hours per week or less."
JoshuaBlake:
For context, a UK graduate at their first job at a top 100 employer earns around £30,000 per year, which is pretty close to the national median salary. So these are well-paying jobs.
Linch:
It's always wild to me that English-speaking countries with seemingly competent people (like the UK and Singapore) pay their programmers 1/2 or less that of programmers in America. I still don't understand the economics behind that.
Joseph Pusey:
As in, paying UK undergrads ~£50/hr (assuming they work 15 hours all year round, including in the very lengthy university holidays)? (!) Or am I missing something here?
Jason:
It is "pro-rated for part-time organizers," and most are part-time. In the US, proration is commonly done off of around 2000 hrs/year for full time, but I don't know how Open Phil does it.
Rebecca:
It's a similar situation with at least some universities in Australia, with the added complication that club office-holders are elected by club members, so no conventional application process is allowed, and there's always the chance that a random, non-CEA-vetted member will turn up and manage to win the election.
Vaidehi Agarwalla:
+1 to the amount of money being really high relative to other clubs (and - importantly - other on-campus jobs). At my college (Haverford College, small liberal arts in the US) the only "club" that was paid (to my knowledge) was the environmental committee, and this was because 1) it was a committee which liaised with the other offices on campus (e.g. president's office, arboretum, faculty) and 2) it was funded by an independent donor. Only the org leaders were compensated, and this was at the college-wide student rate of between $9-10 (depending on your work experience). I don't think $10/hour is a reasonable wage to pay anyone, and other unis probably have higher wages ($15, possibly higher?), but it gives you a sense of the discrepancy in pay for on-campus jobs and student organizers. I think it's reasonable to pay students higher wages during the summer where some students may have competitive offers from for-profit companies. I'd weight it higher if they are needs-based (e.g. some schools like Haverford have a mandatory co-pay of ~$2500 a year, which many students earn during the summer).

Is it actually bad if AI, longtermism, or x-risk are dominant in EA? That seems to depend crucially on whether these cause areas are actually the ones in which the most good can be done - and whether we should believe that depends on how strong the arguments backing these cause areas are. Assume, for example, that we can do by far the most good by focusing on AI x-risks and that there are compelling arguments for this. Then this cause area should receive significantly more resources and should be much more talked about, and promoted, than other cause areas. Treating it just like other cause areas would be a big mistake: the (assumed) fact that we can do much more good in this cause area is a great reason to treat it differently!

To be clear: my point is not that AI, longtermism, or anything else should be dominant in EA, but that how these cause areas should be represented in EA (including whether they should be dominant) depends on the object-level discourse about their cost-effectiveness. It is therefore not obvious, and depends on difficult object-level questions, whether a given degree of dominance of AI, longtermism, or any other cause area is justified. (I take this to be in tension with some points of the post and some of the comments, but not incompatible with most of its points.)

8
Pablo
9mo
I am puzzled that, at the time of writing, this comment has received as many disagreement votes as agreement votes. Shouldn't we all agree that the EA community should allocate significantly more resources to an area, if by far the most good can be done by this allocation and there are sound public arguments for this conclusion? What are the main reasons for disagreement?
  1. Different people in EA define 'good' in different ways. You can argue that some cause is better under some family of definitions, but the aim is, I think, to help people with different definitions achieve their goals too.
  2. You say "if by far the most good can be done by this allocation and there are sound public arguments for this conclusion", but the idea of 'sound public arguments' is tricky. We're not scientists with very well-tested models. You're never going to have arguments conclusive enough to shut down other causes, even if it sometimes seems to some people here that they do.
8
Jason
9mo
In my view, the comment isn't particularly responsive to the post. I take the post's main critique to be something like: groups present themselves as devoted to EA as a question and to helping participants find their own path in EA, but in practice steer participants heavily toward certain approved conclusions. That critique is not inconsistent with "EA resources should be focused on AI and longtermism," or maybe even "EA funding for university groups should concentrate on x-risk/AI groups that don't present themselves as full-spectrum EA groups."
8
Pablo
9mo
Shouldn’t we expect people who believe that a comment isn’t responsive to its parent post to downvote it rather than to disagree-vote it, if they don’t have any substantive disagreements with it?

Sorry to hear that you've had this experience.

I think you've raised a really important point - in practice, cause prioritisation by individual EAs is heavily irrational, shaped by social dynamics, groupthink, and deference to people who don't want to be deferred to. Eliminating this irrationality entirely is impossible, but we can still try to minimise it.

I think one problem we have is that cause prioritisation by orgs like 80000 Hours genuinely is more rational than that of many other communities aiming to make the world a better place. However, the bar here is extremely low, and I think some EAs (especially new EAs) see cause prioritisation by 80000 Hours as 100% rational. I think a better framing is to see their cause prioritisation as less irrational.

As someone who is not very involved with EA socially because of where I live, I'd also like to add that, from the outside, there seems to be a fairly strong, widespread consensus that EAs think AI Safety is the most important cause area. But then I've found that when I meet "core EAs", e.g. people working at CEA, 80k, FHI etc., there is far more divergence in views around AI x-risk than I'd expect, and this consen... (read more)

But then I've found that when I meet "core EAs", e.g. people working at CEA, 80k, FHI etc., there is far more divergence in views around AI x-risk than I'd expect, and this consensus does not seem to be present. I'm not sure why this discrepancy exists and I'm not sure how this could be fixed - maybe staff at these orgs could publish their "cause ranking" lists.

This post is now three years old but is roughly what you suggest. For convenience I will copy one of the more relevant graphs into this comment:

What (rough) percentage of resources should the EA community devote to the following areas over the next five years? Think of the resources of the community as something like some fraction of Open Phil's funding, possible donations from other large donors, and the human capital and influence of the ~1000 most engaged people.

6
Conor McGurk
9mo
Hey - thanks for the suggestions! I work on the Virtual Programs team at CEA, and we're actually thinking of making some updates to the handbook in the coming months. I've noted down your recommendations and we'll definitely consider adding some of the resources you shared. In particular, I'd be excited to add the empirical data point about cause prio, and maybe something discussing deference and groupthink dynamics.

I do want to mention that some of these resources, or similar ones, already exist within the EA Handbook intro curriculum. To note a few:

  • Moral Progress & Cause X, Week 3
  • Crucial Conversations, Week 4 (I think this gets at some similar ideas, although not exactly the same content as anything you listed)
  • Big List of Cause Candidates, Week 7

Also, I want to mention that while we are taking another look at the curriculum - and we will apply this lens when we do - my guess is that a lot of the issue here (as you point out!) actually happens through interpersonal dynamics rather than being driven by the curriculum itself, and hence requires different solutions.

One data point to add in support: I once spoke to a relatively new EA who was part of a uni group and who said they "should" believe that longtermism/AI safety is the top cause, but when I asked them what their actual prio was, they said it was mental health.

By "their actual prio", which of these do you think they meant (if any)?

  • The area where they could personally do the most good with their work
  • The area that should absorb the highest fraction of EA-interested people, because it has the most strong opportunities to do good
  • The area they personally cared most about, to the point where it would feel wrong to answer otherwise (even if they technically thought they could do more good in other areas)

I've sometimes had three different areas in mind for these three categories, and have struggled to talk about my own priorities as a result.

A combination of one and three, but it's hard to say exactly where the boundaries lie. E.g. I think they thought it was the best cause area for themselves (and maybe people in their country) but not for everyone globally, or something like that.

I think they may not have really thought about two in depth, because of the feeling that they "should" care about one and prioritize it, and they appeared somewhat guilty or hesitant to share their actual views because they thought they would be judged. They mentioned having spoken to a bunch of others and feeling like that was what everyone else was saying.

It's possible they did think two, though (it was a few years ago, so I'm not sure).

First, I am sorry to hear about your experience. I am sympathetic to the idea that a high level of deference and lack of rigorous thinking is likely rampant amongst the university EA crowd, and I hope this is remedied. That said, I strongly disagree with your takeaways about funding and have some other reflections as well:

  • "Being paid to run a college club is weird. All other college students volunteer to run their clubs."

    This seems incorrect. I used to feel this way, but I changed my mind because I noticed that every "serious" club (i.e., any club wanting to achieve its goals reliably) on my campus pays students or hires paid interns. For instance, my university has a well-established environmental science ecosystem, and at least two of the associated clubs are supported via some university funding mechanism (this is now so advanced that they also do grantmaking for student projects, ranging from a couple thousand dollars to a maximum of $100,000). I can also think of a few larger Christian groups on campus which do the same. Some computer science/data-related clubs also do this, but I might be wrong.

    Most college clubs are indeed run on a volunteer basis. But most are run quite casually. T
... (read more)

I don't think most people should be doing cause prioritisation with 80000 Hours's level of rigour, but I think everyone is capable of doing some sort of cause prioritisation - at least working out where their values may differ from those of 80000 Hours, or identifying where they disagree with some of 80K's claims and working out how that would affect how they rank causes.

3
akash
9mo
I agree. I was imagining too rigorous (and narrow) a cause prioritization exercise when commenting.
8
Rebecca
9mo
I agree with all of this up until the cause prioritisation part. I’m confused about why you think it would be a mental health concern? There’s a very big space of options between feeling like there’s only one socially valid opinion about a cause area and feeling like you have to do a rigorous piece of analysis of all causes in 6 weeks. I gather the OP wants something that’s more just an extension of ‘developing better ways of thinking and forming opinions’ about causes, and not quashing people’s organic critical reflections about the ideas they encounter. Surely we want more analytical people who can think clearly and are net contributors to important intellectual debates in EA, rather than people who just jump on bandwagons and regurgitate consensus arguments.
1
akash
9mo
I don't! I meant to say that students who have mental health concerns may find it harder to do cause prioritization while balancing everything else. I was unsure if this was what the OP meant; if yes, then I fully agree.
Wim
9mo24
2
0

Your description of retreats matches my experience almost disconcertingly; it even described things I didn't realize I took away from the retreat I went to. I felt like the only one who had those experiences. Thanks for writing this up. I hope things work out for you!

but I am very concerned with just how little cause prioritization seems to be happening at my university group

I've heard this critique in different places and never really understood it. Presumably undergraduates who have only recently heard of the empirical and philosophical work related to cause prioritization are not in the best position to do original work on it. Instead they should review arguments others have made and judge them, as you do in the Arete Fellowship. It's not surprising to me that most people converge on the most popular position within the broader movement. 

9
Thomas Larsen
9mo
IMO there's a difference between evaluating arguments to the best of your ability and just deferring to the consensus around you. I think most people probably shouldn't spend lots of time doing cause prio from scratch, but I do think most people should engage with the existing cause prio literature on the object level and judge it to the best of their ability. My read of the sentence is that there was too much deferring and not enough thinking through the arguments oneself.

IMO there's a difference between evaluating arguments to the best of your ability and just deferring to the consensus around you.

Of course. I just think evaluating and deferring can look quite similar (and a mix of the two is usually taking place). 

OP seems to believe students are deferring because of other frustrations. As many have quoted: "If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong". 

I've attended Arete seminars at Ivy League universities and seen what looked like fairly sophisticated evaluation to me.

2
Jakob Lohmar
9mo
I'd say that critically examining arguments in cause prioritization is an important part of doing cause prioritization, just as examining the philosophical arguments of others is part of doing philosophy. At least, reviewing and judging arguments does not amount to deferring - which is what the post seems mainly concerned about. Perhaps there is actually no disagreement?

Thank you for the post; as a new uni group organizer, I'll take this into account.

I think a major problem may lie in the intro fellowship curriculum offered by CEA. It says it is an "intro" fellowship, but the program spends a disproportionate three weeks on the longtermism/x-risk framework. For a person newly encountering EA ideas, this could create two problems:

First, as Dave mentioned, some people may want to do as much good as possible but don't buy longtermism. We might lose these people, who could do amazing good.

Second, EA is weird and unintuitive. Even without the AI stuff, it is still weird because of ideas like impartial altruism, prioritization, and earning to give. And if we give all this weirdness plus the "most important century" narrative to would-be EAs, we might lose people who could have become EAs if they had encountered the ideas with time to digest them.

This was definitely the case for me. I had a vegan advocacy background when I enrolled in my first fellowship. It was only 6 weeks, and only one week was given to longtermism. Now, after a lot of time thinking and reading, I do believe we are in the most important century, but if I was given this weir... (read more)

5
akash
9mo
I disagree-voted and briefly wanted to explain why.

  1. "some people may want to do as much good as possible but don't buy longtermism. We might lose these people, who could do amazing good." I agree that university groups should feel welcoming to those interested in non-longtermist causes, but it is perfectly possible to create this atmosphere without nixing key parts of the syllabus; I don't think the syllabus has much to do with creating it. Rockwell and freedomandutility (and others) have listed some great points on this, and I think the conversations you have (and how you have them) and the opportunities you share with your group could help folks be more cause-neutral. One idea I liked was the "local expert" model, where you have members deeply exploring various cause areas. When there is a new member interested in cause X, you can simply redirect them to the member who has studied it or done internships related to that cause. If you have different "experts" spanning different areas, this could help maintain a broad range of interests in the club and feel welcoming to a broader range of newcomers.
  2. "And if we give all this weirdness plus the "most important century" narrative to would-be EAs, we might lose people who could have become EAs if they had encountered the ideas with time to digest them." I think this assumes that people won't already be put off by the weirdness by, let's say, week 1 or week 3. I could see situations where people would find caring about animals weirder than caring about future humans, or both of these weirder than pandemic prevention or global poverty reduction. I don't know what the solution is, except reminding people to be open-minded and critical as they go through the reading, and cultivating an environment where people understand that they don't have to agree with everything to be a part of the club.
  3. A host of other reasons that I will quickly mention: 1. I don't think those three w

First, I’m sorry you’ve had this bad experience. I’m wary of creating environments that put a lot of pressure on young people to come to particular conclusions, and I’m bothered when AI Safety recruitment takes place in more isolated environments that minimize inferential distance because it means new people are not figuring it out for themselves.

I relate a lot to the feeling that AI Safety invaded as a cause without having to prove itself in a lot of the ways the other causes had to rigorously prove impact. No doubt it’s the highest prestige cause and attractive to think about (math, computer science, speculating about ginormous longterm impact) in many ways that global health or animal welfare stuff is often not. (You can even basically work on AI capabilities at a big fancy company while getting credit from EAs for doing the most important altruism in the world! There’s nothing like that for the other causes.)

Although I have my own ideas about some bad epistemics going on with prioritizing AI Safety, I want to hear your thoughts about it spelled out more. Is it mainly the deference you’re talking about?

Thanks so much for sharing your thoughts and reasons for disillusionment. I found this section the most concerning. If this has even a moderate amount of truth to it (especially the bit about discouraging new potential near-termist EAs), then these kinds of fellowships might need serious rethinking.

"Once the fellowship is over, the people who stick around are those who were sold on the ideas espoused in weeks 4, 5, and 6 (existential risks, longtermism, and AI) either because their facilitators were passionate about those topics, they were tech bros, or they were inclined to those ideas due to social pressure or emotional appeal. The folks who were intrigued by weeks 1, 2, and 3 (animal welfare, global health, and cost-effectiveness) but dismissed longtermism, x-risks, or AI safety may (mistakenly) think there is no place for them in EA. Over time, the EA group continues to select for people with those values, and before you know it your EA group is now a factory that churns out x-risk reducers, longtermists, and AI safety prioritizers."

Thanks so much for writing this post Dave; I find this really helpful for pinning down some of the perceived and real issues with the EA community.

I think some people have two stable equilibria: one being ~“do normal things” and the other being “take ideas seriously” (obviously an oversimplification). I think getting from the former to the latter often requires some pressure, but the latter can be inhabited without sacrificing good epistemics and can be much more impactful. Plus, people who make this transition often end up grateful that they made it and wish they’d made it earlier. I think other people basically don’t have these two stable equilibria, but some of them have an unstable “take ideas seriously” equilibrium that is epistemically unsound and only becomes stable through social dynamics rather than through thinking the ideas through carefully, which is bad… but also potentially good for the world if they can do good work despite the unsound epistemic foundation… This is messy and I don’t straightforwardly endorse it, but I also can’t honestly say it’s obvious to me that we should always prioritize pure epistemic health if it trades off against impact here. Reducing “the ... (read more)

I'm not really an EA, but EA-adjacent. I am quite concerned about AI safety, and think it's probably the most important problem we're dealing with right now. 

It sounds like your post is trying to point out some general issues in EA university groups, and you do point out specific dynamics that one can reasonably be concerned about. It does seem, however, like you have an issue with the predominance of concerns around AI that is separate from this and that strongly shines through in the post. I find this dilutes your message, and it might be better separated from the rest of your post.

To counter this: I'm also worried about AI safety despite having mostly withdrawn from EA, but I think the EA focus and discussion on AI safety is weird and bad, and people in EA get sold on specific ideas way too easily. Some examples of ideas that are common but that I believe to be very shoddy: "most important century", "automatic doom from AGI", "AGI is likely to be developed in the next decade", "AGI would create superintelligence".

5
David Mathers
9mo
What are your reasons for being worried? 
1
Guy Raveh
9mo
More simplistic ones. Machines are getting smarter and more complex and have the potential to surpass humans in intelligence, in the sense of being able to do the things we can do or harder things we haven't cracked yet, all the while having a vast advantage in computing power and speed. Stories we invent about how machines can get out of control are often weird and require them to 'think outside the box' and reason about themselves - but since we ourselves can do it, there's no reason a machine couldn't. All of this, together with the perils of maximization.

The thing is, every part of this might or might not happen. Machine intelligence may remain too narrow to do any of this. Or may not decide to break out of its cage. Or we may find ways to contain it by the time any of this happens. Given the current state of AI, I strongly think none of this will happen soon.
2
David Mathers
9mo
Mostly agree, though maybe not with the last sentence on certain readings (i.e. I'm "only" 95% confident we won't have human-like agents by 2032, not 99.9% confident). But strong agreement that the basic "hey, intelligent agents could be dangerous; humans are" framing is much more convincing than detailed AI doomer stuff.
1
Nick K.
9mo
You're not really countering me! It's very easy to imagine that group dynamics like this get out of hand, and people tend to repeat certain talking points without due consideration. But if your problem is bad discourse around an issue, it would be better to present that separately from your personal opinions on the issue itself.
2
Guy Raveh
9mo
I don't think the two issues are separate. The bad dynamics and discourse in EA are heavily intertwined with the ubiquity of weakly supported but widely held ideas, many of which fuel the AI safety focus of the community. The subgroups of the community where these dynamics are worst are exactly those where AI safety as a cause area is the most popular.

Hello,

I am sorry that this was your experience in your university group. I would also like to thank you for being bold and sharing your concerns, because it will help bring about necessary changes in other groups that are having the same experience. This kind of effort is important because it will keep the priorities, actions, and overall efforts of EA groups in check.

There are some actions that my university facilitator took to help people "think better" about issues they are particularly interested in and that fall under the EA umbrella (or would make the world a bet... (read more)

As someone who organizes and is in touch with various EA/AI safety groups, I can definitely see where you're coming from! I think many of the concerns here boil down to group culture and social dynamics that could arise irrespective of which cause areas people in the group end up focusing on.

You could imagine two communities whose members in practice work on very similar things, but whose culture couldn't be further apart:

  • An intellectually isolated community where the utmost importance of longtermism/AI safety is seen as self-evident. There are social dynami
... (read more)

Sorry to hear that you had such a rough experience.

I agree that there can be downsides to being too trapped within an EA bubble, and it seems worthwhile suggesting to people that, after spending some extended time in the Bay, they may benefit from getting away from it for a bit.

Regarding retreats, I think it can be beneficial for facilitators to try to act like philosophy lecturers, who are there to ensure you understand the arguments for and against rather than to get you to agree with them.

I also think that it would be possible to create an alt... (read more)

Thank you for taking the time to write this. In 2020, I had the opportunity to start a city group or a university group in Cyprus, given the resources and connections at my disposal. After thinking long and hard about the homogenization of the group towards a certain cause area, I opted not to, but rather focused on being a facilitator for the virtual program, where I believe I will have more impact by introducing EA to newcomers from a more nuanced perspective. Facilitators of the virtual program have the ability to maintain a perfect balance between caus... (read more)

Kudos to you for having the courage to write this post. One of the things I like most about it is the uncanny understanding and acknowledgement of how people feel when they are trying to enter a new social group. EAs tend to focus on logic and rationality, but humans are still emotional beings, and I think we may underrate how much these feelings drive our behavior. I didn't know that university organizers were paid - that, to me, seems kind of insane and counter to the spirit of altruism. I really like the idea of making it need-based. One other thing your ... (read more)

@Lizka Apologies if this was raised and answered elsewhere, but I just noticed, in relation to this article, that your reading estimate says 12 minutes, but when I press "listen to this article" it says 19 minutes at normal speed. Is there a reason for the discrepancy? How is the reading time calculated?

 

Also, when I tried to look for who else from the Forum team to tag, I couldn't find any obvious page/link that lists the current team members. How can I find this in the future?

Most people can read faster than they can talk, right? So 60% longer for the audio version than the predicted reading time seems reasonable to me?

"The moderation team
The current moderators (as of July 2023) are Lorenzo Buonanno, Victoria Brook, Will Aldred, Francis Burke, JP Addison, and Lizka Vaintrob (we will likely grow the team in the near future). Julia Wise, Ollie Base, Edo Arad, Ben West, and Aaron Gertler are on the moderation team as active advisors. The moderation team uses the email address forum-moderation@effectivealtruism.org. Please feel free to contact us with questions or feedback."
https://forum.effectivealtruism.org/posts/yND9aGJgobm5dEXqF/guide-to-norms-on-the-forum#The_moderation_team 

And there is also the online team:
https://www.centreforeffectivealtruism.org/team/

For questions like this I would use the Intercom; read here how the team wants to be contacted:
https://forum.effectivealtruism.org/contact 

I don't know the formula, but I think the reading-time estimate looks at the number of words and estimates how long someone would need to read that much text.
 
"The general adult population read 150 – 250 words per minute, while adults with college education read 200 – 300 words per minute. However, on average, adults read around 250 words per minute."
https://www.linkedin.com/pulse/how-fast-considered... (read more)
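
For intuition, here is a minimal sketch of how such an estimate could work, assuming a ~250 wpm reading speed (per the quote above) and a ~150 wpm text-to-speech rate; the Forum's actual formula and the audio rate are assumptions on my part:

```python
# Minimal sketch of a reading/listening time estimate; the rates below are
# assumptions, not the Forum's actual formula.

READING_WPM = 250    # average adult reading speed, per the quote above
SPEAKING_WPM = 150   # assumed text-to-speech / speaking rate

def estimate_minutes(word_count: int) -> tuple[float, float]:
    """Return (reading_minutes, listening_minutes) for a given word count."""
    return word_count / READING_WPM, word_count / SPEAKING_WPM

# A ~3,000-word post: about 12 minutes to read but ~20 minutes to listen to,
# which is roughly the 12 vs 19 minute discrepancy discussed above.
print(estimate_minutes(3000))  # -> (12.0, 20.0)
```

On these assumptions, a 12-minute read mapping to a ~19–20-minute listen is about what you'd expect, since most people read faster than audio narration plays.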
