If you’re seeing things on the forum right now that boggle your mind, you’re not alone.
Forum users are only a subset of the EA community. As a professional community builder, I’m fortunate enough to know many people in the EA community IRL, and I suspect most of them would think it’d be ridiculous to give a platform to someone like Hanania.
If you’re like most EAs I know, please don’t be dissuaded from contributing to the forum.
I’m very glad CEA handles its events differently.
To be clear, my best guess, based on my experiences talking to hundreds of community builders and student group organizers over the years, is that the general sentiment amongst organizers leans substantially more towards the "I don't think we should micromanage the attendance decisions of external events" position than the forum discussion does.
This kind of stuff is hard to get an objective sense of, so I am not confident here, but I think the biases in what positions people feel comfortable expressing publicly clearly go more in the direction of "complaining about who attended"[1] here.
My best guess is there is also a large U.S./EU difference here. My sense is the European (non-German, for some reason) EA community leans substantially more towards controlling access and reputation tightly here. You can also see this in the voting patterns on many of the relevant posts, which wax and wane with the U.S./EU time difference.
(edit: "outrage" seems like a bad choice of words due to connotation, so I am replacing it with something more neutral)
I don't think the crux here is whether one ought to micromanage the attendance decisions of external events. It's more about:
- The more common sentiment, and the one I think is mostly attracting upvotes, seems to me to be "who are we to tell other people who to talk to?"
- I don't know much about him, but from what I do know I think the guy sounds like a jerk and I'd be meaningfully less interested in going to events he was at; I can't really imagine inviting him to speak at anything
- But it also seems to me that it's important to respect people's autonomy and ability to choose differently
Criticizing someone's decisions is not denying them autonomy or ability to choose.
To use a legal metaphor, one way of thinking about this is personal jurisdiction -- what has Manifest done that gives the EA community a right to criticize? After all, it would be uncool to start criticizing random people on the Forum with no link to EA, and it would generally be uncool to start criticizing random EAs for their private non-EA/EA-adjacent actions.
I have two answers to that:
I want to believe this, but it's difficult for me to assess the evidence for or against it very well. Any suggestions?
As with most of us, "the people I know" is not a randomly-selected or representative group. Moreover, presumably many people who hold positions subject to general social stigma will not advocate for their position in front of people they know to be non-receptive. So the personal experience of people who are known to oppose Hanania will likely underestimate support for him.
Suggestions for assessing the claim, "forum users are only a subset of the EA community"? Or the claim, "most of them [EAs I know] would think it'd be ridiculous to give a platform to someone like Hanania"?
I don't think there's great evidence for either claim, unfortunately. For the former, I guess we can look at this and observe that forum use is quite unequal between users, which suggests something.
For the latter, I could survey EAs I know with the question, "Do you think it'd be a good idea to invite Hanania to speak at an event?". However, even typing that out feels absurd, which perhaps indicates how confident I am that most EAs I know would think it's a ridiculous idea.
Regarding stigma, my impression is that quite a few people would like to say on the forum, "Giving a platform to Hanania is a ridiculous idea", but don't because they worry the forum will not be receptive to this view. I think this is because people perceive there to be a stigma on the forum against anyone who expresses discomfort at seeing people dispassionately discuss whether it's okay to give a platform to someone like Hanania.
Maybe this stigma is a good thing. I'm not sure. I like what Isa said: "I w...
If anyone wants to see what making EA enormous might look like, check out Rutger Bregman's School for Moral Ambition (SMA).
It isn't an EA project (and his accompanying book has a chapter on EA that is quite critical), but the inspiration is clear and I'm sure there will be things we can learn from it.
For their pilot, they're launching in the Netherlands, but it's already pretty huge, and they have plans to launch in the UK and the US next year.
To give you an idea of size, despite the official launch being only yesterday, their growth on LinkedIn is significant. For the 90 days preceding the launch date, they added 13,800 followers (their total is now 16,300). The two EA orgs with the biggest LinkedIn presence I know of are 80k and GWWC. In the same period, 80k gained 1,200 followers (their total is now 18,400), and GWWC gained 700 (their total is now 8,100).[1]
And it's not like SMA has been spamming the post button; they've only posted four times. The growth in followers comes from media coverage and the founding team posting about it on their personal LinkedIn pages (Bregman has over 200k followers).
EA Netherlands gained 137, giving us a total of 2900 - wooo!
When I translated their three "O's" to English (they're O's in Dutch, not in English), they were...
Sounds a lot like important, neglected and tractable to me?
And then they interviewed Rob Mathers from the Against Malaria Foundation...
I completely agree with James that these guys are showing EA a different way of movement building which might end up being effective (we'll see). It seems like they are building on the moral philosophy foundations of EA, then packaging it in a way that will be attractive to the wider population - and they've done it well. I love this page with their "7 principles" and found it inspiring - I would sign up to those principles, and I appreciated that the scout mindset is in there as well.
https://www.moreleambitie.nl/grondbeginselen
I do wonder what his major criticisms of EA are though, given that this looks pretty much like EA packaged for the masses, unless I'm missing something.
Yes, although I guess it's good that people know the link. We shouldn't hide our mistakes, and I know Bregman likes some of what we do, so there are worse people to have sharing this info with the Dutch population.
Saying it isn't an EA project seems too strong - another co-founder of SMA is Jan-Willem van Putten, who also co-founded Training for Good which does the EU tech policy and Tarbell journalism fellowships, and at one point piloted grantmaker training and 'coaching for EA leaders' programs. TfG was incubated by Charity Entrepreneurship.
You missed the most impressive part of Jan-Willem’s EA CV - he used to co-direct EA Netherlands, and I hear that's a real signal of talent ;)
But yes, I guess it depends on how you define ‘EA project’. They're intentionally trying to do something different, so that's why I don't describe them as one, but the line is very blurred when you take into account the personal and philosophical ties.
If EA were a broad and decentralised movement, similar to e.g. environmentalism, I'd classify SMA as an EA project. But right now EA isn't quite that. Personally, I hope we one day get there.
Yeah good Q. Let me try to explain what I'm thinking. One random offshoot that explicitly distances itself from EA might just look like an outside project. But if there are three or more such offshoots, then from an external viewpoint, they start to clump into an ‘EA diaspora,’ even if they all say they’re not part of EA. In other words, we’ve crossed a threshold from a single anomaly to a bona fide emergent ecosystem—one that might well be called a decentralised movement. Does that make sense?
I think of EA as a broad movement, similar to environmentalism — much smaller, of course, which leads to some natural centralization in terms of e.g. the number of big conferences, but still relatively spread-out and heterogeneous in terms of what people think about and work on.
Anything that spans GiveWell, MIRI, and Mercy for Animals already seems broad to me, and that's not accounting for hundreds of university/city meetups around the world (some of which have funding, some of which don't, and which I'm sure host people with a very wide range of views — if my time in such groups is any indication).
That's my way of saying that SMA seems at least EA-flavored, given the people behind it and many of the causes name-checked on the website. At a glance, it seems pretty low on the "measuring impact" scale, but you could say the same of many orgs that are EA-flavored. I'd be totally unsurprised to see people go through an SMA program and end up at EA Global, or to see an SMA alumnus create a charity that Open Phil eventually funds.
(There may be some other factor you're thinking of when you think of breadth — I could see arguments for both sides of the question!)
Looks like Charity Navigator is taking a leaf from the EA book!
Here they're previewing a new ‘cause-based giving’ tool - they talk about rating charities based on effectiveness and refer to research by Founders Pledge.
My recommended readings/resources for community builders/organisers
I've put the ones I think others are less likely to know towards the top.
The latest episode of the Philosophy Bites podcast is about Derek Parfit.[1] It's an interview with his biographer (and fellow philosopher) David Edmonds. It's quite accessible and only 20 mins long. Very nice listening if you fancy a walk and want a primer on Parfit's work.
What should the content split at EAGxUtrecht[1] be? Below is our first stab. One of our subgoals is to inspire people to start new projects, hence the heavy focus on entrepreneurship under 'Meta'.
July 5-7 - be there or be square. Or be there and do square things like check out the world's largest bicycle garage. You do you.
Yeah, I don't like the terms 'neartermism' and 'longtermism' either, and it's messy, but this is our attempt at organising things. We used RP's 2022 survey's categorisation of the two to guide us, with some small modifications.
Given that EAGx Utrecht might be the most convenient EAGx for a good chunk of Western Europe, I'm not sure how important it is to have a goal for a % speakers with strong Dutch connections rather than Europe connections. But the density of talented Dutch folk in the community is very high, so you might hit 35% without any specific goal to do so.
Sounds good overall. 1% each for priorities, community building, and giving seems pretty low. 1.75% for mental health might also be on the low side, as there appears to be quite a bit of interest in global mental health in NL. I think the focus on entrepreneurship is great!
I think events are underrated in EA community building.
I have heard many people argue against organising relatively simple events such as, 'get a venue, get a speaker, invite people'. I think the early success of the Tien Procent Club in the Netherlands should make people doubt that advice.
Why? Well, the first thing to mention is that they simply get great attendance, and their attendees are not typical EAs. I think their biggest event so far has drawn 400 people, and the typical attendee is a professional in their 30s or 40s. It also does an amazing job of generating buzz. For example, if a journalist is writing an article about your community, it's pretty cool to be able to invite them to an event with hundreds of regular people in attendance.
Now, of course, attendance doesn't translate to impact. However, I think we can see the early signs of people actually changing their behaviour.
For example, running a quick check on GWWC's referral dashboard, I can see four pledges that refer to the Tien Procent Club (2 trial, 2 full). Based on GWWC's March 2023 impact evaluation, they can therefore self-attribute ~$44k of 2022-equivalent donations to high-impact funding...
Politico just published a fairly negative article about EA and UK politics. Previously they’ve published similar articles about EA and Brussels.
I think EA tends to focus on the inside game, or narrow EA, and I believe this increases the likelihood of articles such as this. I worry articles such as this will make people in positions of influence less likely to want to be associated with EA, and that this in the long run will undermine efforts to bring about the policy changes we desire. Still, of course, this focus on the inside game is also pretty cost-effective (for the short term, at least). Is it worth the trade-off? What do people think?
My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the 'narrow EA' strategy is a mistake because there's a good chance it is wrong to try to guide society without broader societal participation.
In other words, if MacAskill argues here that we should get our shit together first and then either a) collectively decide on a way forward or b) allow for everyone to make their own way forward, I think it's also important that the 'getting our shit together' has broad societal participation...
My guess is this is mostly just a product of success, and insofar as the political system increasingly takes AI X-risk seriously, we should expect to see stuff like this from time to time. If the tables were flipped and Sunak was instead pooh-poohing AI X-risk and saying things like "the safest path forward for AI is accelerating progress as fast as we can – slowing down would be Luddism", then I wouldn't be surprised to see articles saying "How Silicon Valley accelerationists are shaping Rishi Sunak's AI plans". That doesn't mean we should ignore the negative pieces, and there very well may be things we can do to reduce them at the margin, but ultimately, I'd be surprised if there was a way around it. I also think it's notable how much press there is that agrees with AI X-risk concerns; it's not like there's a consensus in the media that it should be dismissed.
I agree, and this is why I'm in favour of a Big Tent approach to EA. This risk stems from a lack of understanding of the diversity of thought within EA, and of the fact that it isn't claiming to have all the answers. There is a danger that poor behaviour from one part of the movement can impact other parts.
Broadly EA is about taking a Scout Mindset approach to doing good with your donations, career and time. Individual EAs and organisations can have opinions on what cause areas need more resources at the margin but "EA" can't - it isn't a person, it's a network.
I really liked this post from @Shakeel Hashim, "How CEA's communications team is thinking about EA communications at the moment", and hope that, whatever happens in terms of shake-ups at CEA, communications and clarity around the EA brand are prioritised.
This is really interesting. Thanks for sharing!
I think:
Thanks!
I agree that negative articles are inevitable if you get influence, but I think there are various angles these negative articles might come from, and this is a particularly bad one.
The Soros point is an excellent analogy, but I worry we could be headed for something worse than that. Soros gets criticism from people like Orban but praise from outlets like the FT and Politico. Meanwhile, with EA, people like Orban don't give a damn about EA, but Politico is already publishing scathing pieces.
I don't think reputation management is as hard as is often supposed in EA. I think it's just it hasn't been prioritised much until recently (e.g., CEA didn't have a head of comms until September 2022). I can imagine many national organisations such as mine would love to have a Campaign Officer or something to help us manage it, but we don't have the funding.
Do you have any encouraging examples of progress on 2? Some of the prominent people are incredibly hostile (i.e. they genuinely believe we are all literal fascists and also Machiavellian naive utilitarians who lie automatically whenever it's in our short-term interests) so I'm a bit pessimistic, though I agree it is a good idea to try. What's a good goal to help them accomplish in your view?
Some are hostile but not all, and there are disagreements and divisions just as deep if not deeper in AI ethics as there are in EA or any other broad community with multiple important aims that you can think of.
External oversight over the power of big tech is a good goal to help accomplish. This is from one of the leading AI ethics orgs; it could almost as easily have come from an org like GovAI:
https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act
epistemic status: a frustrated outlet for sad thoughts, could definitely be reworded with more nuance
I really wish I had your positive view on this Sean, but I really don't think there's much chance of inroads unless capabilities advance to an extent that makes xRisk seem even more salient.
Gebru is, imo, never going to view EA positively. And she'll use her influence as strongly as possible in the 'AI Ethics' community.
Seth Lazar also seems intractably anti-EA. It's annoying how much of this dialogue happens on Twitter/X, especially since it's very difficult for me as a non-Twitter user to find them, but I remember he posted one terrible anti-longtermist thread and later deleted it.
Shannon Vallor once also posted a similarly anti-longtermist thread, and then responded to Jess Whittlestone once, lamenting the gap between the Safety and Ethics fields. I just really haven't seen where the Safety->Ethics hostility has been; I've really only ever seen the reverse, but of course I'm 100% sure my sample is biased here.
The Belfield<>McInerney collaboration is extremely promising for sure, and I look forward to the outputs. I hope my impression is wrong and more work along...
just really haven't seen where the Safety->Ethics hostility has been
From the perspective of the AI Ethics researchers, AI Safety researchers and engineers contributed to the development of "everything for everyone" models – and also distracted away from the increasing harms that result from the development and use of those models.
Both of which, frankly, are true, given how much people in AI Safety collaborated and mingled with people in large AI labs.
I understand that on Twitter, AI Ethics researchers are explicitly critiquing AI Safety folk (and longtermist tech folk in general) more than the other way around.
That feels unfair if we focus on the explicit exchange in the moment.
But there is more to it.
AI Ethics folk are responding with words to harms that resulted from misguided efforts by some key people in AI Safety in the past. There are implicit background goings-on they are concerned about that are hard to convey, and not immediately obvious from their writing.
It might not feel like we in AI Safety have much power in steering the development of large AI models, but historically the AI Safety community has been able to exert way more influence here than the AI Ethics...
I think this is imprecise. In my mind there are two categories:
I totally buy "there are lots of good sensible AI ethics people with good ideas, we should co-operate with them". I don't actually think that all of the criticisms of EA from the harshest critics are entirely wrong either. It's only the idea that "be co-operative" will have much effect on whether articles like this get written and hostile quotes from some prominent AI ethics people turn up in them, that I'm a bit skeptical of. My claim is not "AI ethics bad", but "you are unlikely to be able to persuade the most AI hostile figures within AI ethics".
The UK seems to take the existential risk from AI much more seriously than I would have expected a year ago. To me, this seems very important for the survival of our species, and seems well worth a few negative articles.
I'll note that I stopped reading the linked article after "Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs." This is inaccurate imo. In general, having low-quality negative articles written about EA will be hard to avoid, no matter if you do "narrow EA" or "global EA".
I agree that's a good argument why that article is a bigger deal than it seems, but I'd still be quite surprised if it were at all comparable to the EV of having the UK so switched on when it comes to alignment.
I think there are truths that are not so far from it. Some rationalists believe Superintelligent AI is necessary for an amazing future. Strong versions of AI Safety and AI capabilities are complementary memes that start from similar assumptions.
Where I think most EAs would strongly disagree is that they would find pursuing SAI "at all costs" abhorrent and counter to their fundamental goals. But I also suspect that showing survey data about EAs' professed beliefs wouldn't be entirely convincing to some people, given the close connections between EAs and rationalists in AI.
I don't think CEA has a public theory of change; it just has a strategy. If I were to recreate its theory of change based on what I know of the org, it'd have three target groups:
Per target group, I'd say it has the following main activities:
Per target group, these activities are aiming for the following short-term outcomes:
If you're interested, you can see EA Netherlands' theory of change here.
EAGxUtrecht (July 5-7) is now inviting applicants from the UK (alongside other Western European regions that don't currently have an upcoming EAGx).[1] Apply here!
Ticket discounts are available and we have limited travel support.
Utrecht is very easy to get to. You can fly/Eurostar to Amsterdam and then every 15 mins there's a direct train to Utrecht, which only takes 35 mins (and costs €10.20).
Applicants from elsewhere are encouraged to apply but the bar for getting in is much higher.
EA should take seriously its shift from a lifestyle movement to a social movement.
The debate surrounding EA and its classification has always been a lively one. Is it a movement? A philosophy? A question? An ideology? Or something else? I think part of the confusion comes from its shift from a lifestyle movement to a social movement.
In its early days, EA seemed to bear many characteristics of a lifestyle movement. Initial advocates often concentrated on individual actions—such as personal charitable donations optimised for maximum impact or career decisions that could yield the greatest benefit. The movement championed the notion that our day-to-day decisions, from where we donate to how we earn our keep, could be channelled in ways that maximised positive outcomes globally. In this regard, it centred around personal transformation and the choices one made in their daily life.
However, as EA has evolved and matured, there's been a discernible shift. Today, whilst personal decisions and commitments remain at its heart, there's an increasing emphasis on broader, systemic changes. The community now acknowledges that while individual actions are crucial, tackling the underlying...
Last chance to apply for EAGxUtrecht! The deadline is today.
Apply now! Among our speakers, you'll have the chance to meet:
I wanted to figure out where EA community building has been successful. Therefore, I asked Claude to use EAG London 2024 data to assess the relative strength of EA communities across different countries. This quick take is the result.
The report presents an analysis of factors influencing the strength of effective altruism communities across different countries. Using attendance data from EA Global London 2024 as a proxy for community engagement, we employed multiple regression analysis to identify key predictors of EA participation. The model incorporated...
Rutger Bregman has just written a very nice story on how Rob Mather came to found AMF! Apart from a GWWC interview, I think this is the first time anyone has told this tale in detail. There are a few good lessons in there if you're looking to start a high-impact org.
It's in Dutch, but Google Translate works very well!
What do you believe is the ideal size for the Dutch EA community?
We recently posed this question in our national WhatsApp community. I was surprised by the result, and others I've spoken to were also surprised. I thought I'd post it here to get other takes.
We defined 'being a member' as "someone who is motivated in part by an impartial care for others, is thinking very carefully about how they can best help others, and who is taking significant actions to help (most likely through their careers). In practice, this might look like selecting a job or degree ...