
Hey y'all,

My TikTok algorithm recently presented me with this video about effective altruism, with over 100k likes and (TikTok claims) almost 1 million views. That isn't a ridiculous amount, but it's a pretty broad audience to reach with one video, and the framing isn't particularly kind to EA. As far as criticisms go, it's not the worst: it starts with Peter Singer's thought experiment and takes the moral imperative seriously as a concept, but it also frames several EA and EA-adjacent activities negatively, saying EA "has an enormously well funded branch ... that is spending millions on hosting AI safety conferences."

I think there's a lot to take from it. The first point relates to @Bella's recent argument that EA should be doing more to actively define itself. This is what happens when it doesn't. EA is legitimately an interesting topic to learn about because it asks an interesting question; that's what I assume drew many of us here to begin with. It's interesting enough that when outsiders make videos like this, even when they're not the picture we'd prefer,[1] they will capture the attention of many. This video is a significant impression, but it's not the end-all-be-all, and we should seek to define ourselves lest we be defined by videos like it.

The second is about zero-sum attitudes and leftism's relation to EA. Many of the comments expressed views like these:

TikTok comment: @JoJo Stone Austin: I agree with you. Putting that weight on the average citizen isn't effective. It's like saying I need to carpool to lower my carbon footprint when Taylor Swift's private jet does more than a family can do in a lifetime. We should all do our part, but some people are infinitely worse than others. 3345 likes

TikTok comment: @bloatware boi: If they took their extreme utilitarianism seriously they should've started a revolution years ago. Nothing produces as much death and suffering as capitalism and the inextricably linked settler-colonialism. 3827 likes

@LennoxJohnson really thoughtfully grappled with this a few months ago, when he described his journey from a zero-sum form of leftism focused on the need for structural change towards greater sympathy for the orthodox EA approach. But I don't think we can necessarily depend on similar reckonings happening to everyone, all at the same time. Here I think the solution is much less clear than for the PR problem: on the one hand, EA sometimes doesn't grapple enough with systemic change; on the other, society would be dramatically better if more people took a more EA outlook towards alleviating suffering.

I'm partial towards demonstrating virtue as one of the primary ways of showing that it's possible to create improvement without systemic change. If EAs are directly helping people out, with whatever they might have, it becomes harder for others to position themselves above people doing that. I keep hearing about GiveDirectly in particular as a way of doing this. When you're directly giving money to people much poorer than yourself, there's something to that which really can't be ignored. Money is agency in today's society, and when you're directly giving someone money, that's a form of charity that is much harder to interpret as paternalistic or narrow-sighted; it's just altruistic.
GiveDirectly is already the benchmark against which GiveWell evaluates charities; it's worth emphasizing it even more within the movement and in our outreach efforts.

That isn't to say I think it should supplant x-risk reduction and AI safety work; I think those are still extremely important and neglected in society at large. But EA as a whole has a fundamental issue with what it is if it wants to be a mass movement. A few months ago, I ran into a service worker who could not be regarded as an EA by any stretch. But he was telling me about this new charity he'd heard about, GiveDirectly, and how giving to it felt like going around the charity industry and helping without working with existing power structures. In my opinion, people like this should form the core of a broader EA movement. I think it's possible to have a movement primarily based on the idea of doing good, where many members donate 1-10% of their income to charity, engage with EA ideas roughly weekly, and can be activated when they see something that's clearly dangerous to our long-term future. I think, to some extent, that's what EA the movement should strive for. EA and 80k should be separate, and right now there is no distinction. @Mjreard expressed this a few months ago as EA needing fans (donors) rather than players (direct workers). We can and should work towards that world.


  1. The speaker says EA spends "millions on AI safety conferences," which is somewhat inaccurate though not 100% wrong: that is roughly EA Global's budget, and AI safety is a major topic there though not the only focus. She also says AI safety is "particularly well-funded," which is basically untrue right now in the broader world, but isn't pants-on-fire wrong in strictly the EA world. ↩︎ I've retracted this section following @NickLaing's comment.

I thought her main point was pretty good.

"We should be suspicious of people who decide the most important thing to do is what they would have the most fun doing anyway". (regarding AI safety).

I am also suspicious about this, and suspect it to be a source of bias towards AI safety at the expense of other cause areas, regardless of the "true" importance of AI safety (FWIW I think it's important).

Also, I think she's broadly right that EA is spending millions hosting AI safety conferences. I would imagine EAG Bay Area is over 50% AI safety focused, and millions are spent on that.

I also think saying AI safety is "particularly well funded" is a subjective call, and I wouldn't even say "basically untrue". It's not an unreasonable take given all the jobs in AI companies plus EA-funded AI safety jobs outside of labs. As a comparison, I'm not sure what animal welfare spend vs. AI safety spend is, but I imagine it wouldn't be an order of magnitude higher?

Despite all this, I disagreed with much of what she said, but I would put this in the top 30% of EA criticism I've seen (not hard given how much dross there is out there).

And one should probably give some weight to limitations imposed by the medium -- a 3-minute video on a platform whose users are on average not known for having long attention spans.

For what it's worth, I would guess that the "funness" of AI safety research, and maybe especially technical AI safety research, is probably a factor in how many people are interested in working on it, but I would be surprised if it's a factor in how much money is allocated towards it as a field.

Thanks for the response, and to be honest it's something I'd agree with too. I've edited my initial comment to better reflect what's actually true. I wouldn't call the EA Global that I've been to an "AI Safety Conference," but if Bay Area is truly different it wouldn't surprise me. "Well-funded" is also subjective, and I think it's likely that I was letting my reflexive defensiveness get in the way of engaging directly. That said, I think the broader points (that the video exposes a weakness in EA comms, and that the comments reflect broad low-trust attitudes towards ideas like EA) still stand, and I hope people continue to engage with them.

Yep 100% agree with the weakness in EA comms. I'm happy there's been a fair amount of chat recently about this on the forum.

My hobby horse around these parts has been that EA should be less scared of reaching out to the left (where I'm politically rooted) and of thinking about what commonalities we have. This is something I have already seen in the animal welfare movement, where EAs are unafraid to work with existing vegan activism and have done a good job of selling them on philanthropic funding, despite having large differences in opinion on the margins.

As you note, it's not unreasonable that EA looks very far left from some perspectives. GiveDirectly is about direct empowerment, and I would argue that a lot of global development work, especially economic development, can be anti-imperialist and generally accord with Marxist ideas of the internationale. Some better outreach and PR management in these communities would go a long way, in the same way that it has for the political centre-left, who seem to get lots more attention from EA.

Okay. I actually watched the TikTok. That shoulda been step 1 — I committed the cardinal sin of commenting without watching. (My previous comment was more responding to the screenshotted comments, based on my past experience with leftist discourse on TikTok and Twitter.)

The TikTok is 100% correct. The creator’s points and arguments are absolutely correct. Every factual claim she makes is correct. The video is extremely reasonable, fair-minded, and even-handed. The creator is eloquent, perceptive, and clearly very intelligent. She comes across as earnest, sincere, kind, open-minded, and well-meaning. I really liked her brief discussion of Strangers Drowning. Just from this brief video, I already feel some fondness toward her. Based on this first impression, I like her.

If I still had a TikTok account, I would give the video a like.

Her exegesis of Peter Singer’s parable of the drowning child is really, really good — quick, breezy, and straight to the point, in a way that should be the envy of any explainer. The only part that was a question mark for me was her use of the term "extreme utilitarians". It’s not exactly inaccurate, though, and it does get the point across, so, now that I’m thinking about it, I guess it’s actually fine. Come to think of it, if I were trying to explain this idea casually to a friend or an acquaintance or a general audience, I might use a similar phrase like "hardcore utilitarians" or something. 

It isn't a technical term, but she is referring to the extreme personal sacrifices some people will make for their moral views, or to people who take moral views to more of an extreme than the typical person will (probably even more than the typical utilitarian or the typical moral philosopher).

Her suspicion of the emotional motivations of people in EA who have pivoted from what tends to be more boring, humble, and sometimes gruelling work in global poverty to high-paying, sexy, glamorous, luxurious, fun, exciting work in AI safety is incredibly perceptive and just a really great point. I have said (and others have said) similar things in the past, and even so, the way she said it was so clear and perceptive that I feel I now better understand the point I was trying to make because she said it (and thought it) better. So, kudos to her on that.

I would say your instinct should not be to treat this as a PR or marketing or media problem, or to want to leap into the fray to provide a "counternarrative". I would say this is actually just perceptive, substantive, eloquently expressed criticism or skepticism. I think the appropriate response is to take it as a substantive argument or point.

There are many things people in EA could do if they wanted to do more to establish the credibility of AI safety for a wider audience or for mainstream society. Doing vastly more academic publishing on the topic is one idea. People are right not to take seriously ideas written only on blogs, forums, or Twitter, or in books that don't go through any more rigour or academic review than those three mediums. Science and academia provide a blueprint for how to establish mainstream credibility for obscure technical ideas.

I’m sure there are other good ideas out there too. For example, why not get more curious about why AI safety critics, skeptics, and dissenters disagree? Why not figure out their arguments, engage deeply, and respond to them? This could be in informal mediums and not through academic publishing. I think it would be a meaningful step toward persuasion. It’s kind of embarrassing for AI safety that it’s fairly easy for critics and skeptics to lob up plausible-sounding objections to the AI safety thesis/worldview and there isn’t really a convincing (to me, and to many others) response. Why not do the intellectual work, first, and focus on the PR/marketing later?

Something that would go a long way for me, personally, toward establishing at least a bit more good faith and credibility would be if AI safety advocates were willing to burn bad arguments that don’t make sense. For instance, if an AI safety advocate were willing to concede the fundamental, glaring flaws in AI 2027 or Situational Awareness, I would personally be willing to listen to them more carefully and take them more seriously. On the other hand, if someone can’t acknowledge that this is an atrocious, ridiculous graph, then I sort of feel like I can safely ignore what they say, since overall they haven’t demonstrated to me a level of seriousness, credibility, or reasonableness that I would feel is needed if it’s going to be worthwhile for me to engage with their ideas.

Right now, whatever the best arguments in AI safety are, it feels like they're all lumped in with the worst arguments, and it's hard for me not to judge it all based on the worst arguments. I imagine this will be a recurring problem if AI safety tries to gain more mainstream, widespread acceptance. If like 10% of people in EA were constantly talking about how great homeopathy is and how it's curing all their ailments, and how foolish the medical and scientific establishment is for saying it's just a placebo, would you be as willing to take EA arguments about pandemic risk seriously? Or would you just figure that this community doesn't know what it's talking about? That's the situation for me with AI safety, and I'm sure others feel the same way, or would if they encountered AI safety ideas from an initial position of reasonable skepticism.

Those are just my first 2-3 ideas. Other people could probably brainstorm others. Overall, I think the intellectual work is lacking. More marketing/PR work would either fail or deserve to fail (even if it succeeded), in my view, because the intellectual foundation isn’t there yet.

I actually share a lot of your read here. I think the video is a very strong explanation of Singer's argument (the shoes-for-suit swap is a nice touch), and the observation about the motivation for AI safety warrants engagement rather than dismissal.

My one quibble with the video's content is the "extreme utilitarians" framing; as I'm one of maybe five EA virtue ethicists, I bristle a bit at the implication that EA requires utilitarianism, and in this context it reads as dismissive. It's a pretty minor issue though.

I think the video is still worth providing a counter-narrative to, though, and I think that's actually going to be my primary disagreement. For me, that counter-narrative isn't that EA is perfect, but that taking a principled EA mindset towards problems actually leads to better solutions, and has led to a lot of good being done in the world already.

The issue with the video, which I should've been more explicit about in the original comment, is that when taken in the context of TikTok, it reinforces the view of people who think you can't try to make the world better. She presents a vision of EA in which it initially tried to do good (while not mentioning any of the good it actually did, just the sacrifices that people made for it), was then corrupted by people with impure intentions, and now no longer does good.

Regardless of what you or I think of the AI safety movement, I think the people who believe in it believe in it seriously, and got there primarily through reasoning from EA principles. It isn't a corruption of EA ideas of doing good, just a different way of accomplishing them, though we can (and should) disagree on how the weighting of these factors plays out. And for the most part it hasn't supplanted the other ways that people within the movement are doing good; it's supplemented them.

When people's first exposure to EA ideas leads them towards the "things can't be better" meme, that's something I think is worth combatting. I don't think EA is perfect, but I think that thinking about and acting on EA principles really can help make the world better, and that's what an ideal simple EA counter-narrative would emphasize to me.

I agree there should be a counter-narrative. It is also important to realize that people who create, like, and comment on mean-spirited TikToks, absorbed in their own misguided ideology, are far enough from the target market that you really shouldn't worry about changing their behavior.

That's the thing that gets me here: the TikTok itself is mostly not mean-spirited (I would recommend watching it; it's 3 minutes, and while it did make me cringe, there was definitely a decent amount of thought put into it!). Some of the commenters are a bit mean-spirited, I won't deny, but some are also just jaded. The problem, to me, is that the "thoughtful media" idea of EA, which this person embodies, says that EA has interesting philosophical grounding but also a lot of weird Silicon Valley stuff going on. Content like this is exactly what we should be hoping to influence.

Good characterization; I should have watched the video. Seems like she may be unwilling to consider that the weird Silicon Valley stuff is correct, but explicitly says she's just raising the question of motivated reasoning.

The "writing scifi with your smart friends" line is quite an unfair characterization, but it's fundamentally on us to counter it. I think it will all turn on whether people find AI risk compelling.

For that, there's always going to be a large constituency scoffing. There's a level at which we should just tolerate that, but we're still at a place where communicating the nature of AI risk work more broadly and more clearly is important on the margin.

The figure I've seen people throw out a few times for the number of people who identify with the effective altruism movement is 10,000, although I don't know where that comes from. In one survey/poll I read (I think it was Pew or Gallup), 5% of Americans identify as being on the far left. 5% of the American population is 17 million.

If the American far left is going to change ideologically or culturally, it probably won’t be because of anything the effective altruism movement does. It’s just too big in comparison. I think there’s a sense in which you’ve just gotta resign yourself to the idea that many people on the far left will dislike effective altruism, insofar as they know anything about it, indefinitely into the future. 

I think you have some interesting thoughts about messaging and outreach. For people who are concerned with paternalism or neocolonialism, or who are distrustful of charities, GiveDirectly is a great option. So, promoting GiveDirectly to people with these concerns seems like a good idea. I wonder if explaining charities that do simple things, like the Against Malaria Foundation giving out bednets, might be appealing to people too. I feel like that's so simple, it's hard to imagine it somehow being secretly evil.

I’m personally fairly worn out and discouraged from trying, over many years, to talk to far leftist friends, acquaintances, and members of various communities (online and local). Despite voting for a social democratic party and having many strongly socially progressive and economically progressive/social democratic views, I’ve often had a hard time finding common ground with many people on the far left, to the extent that I’ve ended relationships with friends and acquaintances and left certain communities. Some of the views I hold that I was in several cases not able to find common ground on: 

-Governments should be democratic rather than authoritarian 

-It is morally unacceptable to commit terrorist attacks against civilians, or to murder your political enemies, and certainly not something to celebrate or glorify

-Joseph Stalin and Mao Zedong were brutal dictators and not praiseworthy or figures to celebrate in any way

I find this very discouraging and depressing, and sad, and infuriating, and scary, and disturbing. I don’t know what to do about it. I have no energy left for this kind of engagement, so I’m not the right person to ask. I guess I’m just trying to warn you about some of the sort of stuff you might encounter and find yourself having to argue with if you do go down this road of engaging with the far left. 

Overall, I find that getting into politics or topical “discourse” on TikTok or Twitter pretty much just sucks up time, attention, energy, and emotional stamina without spitting anything back out (like a black hole). There’s just an infinite amount of time-wasting and aggravation that can happen. And what good ever comes of it?

I wonder if there’s meaningfully such a thing as trying to make better TikTok videos or better tweets or if that’s like trying to make better cigarettes. I mean, in a sense, yes, you can obviously make better ones. There are lots of people who just do comedy videos on TikTok that I used to enjoy, and Hank Green does some good educational videos I see on YouTube Shorts. But I wonder if going in with the explicit intention of fighting discourse with discourse is going to get anywhere. (I commented on Bella’s quick take with my thoughts on this as well.) 

(Please don’t interpret this as dismissive, I don’t mean it that way, but I thought about this comic.)

5% of Americans identify as being on the far left

However, I would strongly wager that the majority of this sample does not hold the fringe positions you describe around authoritarianism, terrorist attacks, and Stalin & Mao (I think it is also quite unlikely that the people viewing the TikTok in question would believe these things either). Those beliefs are extremely fringe.

Two years ago, I thought these sorts of ideas were way more fringe among the far left than I do now. I could just have terrible luck, but I encountered these sorts of ideas way, way, way more than I ever expected I would. And it wasn't just once or twice or with people all in the same social circle. There were at least nine different unconnected individuals or unconnected social circles/social contexts/communities where someone expressed support for at least one of these ideas. Since it's happened so many times, it's hard for me to write it off.

In conversations with friends I still have now, who don't endorse any of these extreme opinions, they've told me their experiences are similar to mine. So, still anecdotal, but hard to write off as just my bad luck.

I would find it comforting to see polling that found these to be truly fringe positions within the far left, so if anyone knows of any, please share it.

None of the nine examples I'm thinking of were algorithmic social media feeds (some were people I knew in real life, some were local people in my community posting online, some were small and semi-private online communities). However, algorithmic social media feeds tend to amplify extreme views. So, if you step into that arena, even if only a minority of a minority believes something (e.g. 10-20% of the far left, which is itself 5% of the U.S. population, so 0.5-1% of Americans overall), it might get disproportionate attention (e.g. it might look like 10% of the overall American population believes it).

Overall, this is just a warning to anyone who wants to get into the fray of these sort of TikTok/Twitter short-form algorithmic social media debates with the far left that it might be disconcerting and crazymaking. And a concern that this format/medium, in general, may just not be a productive way of changing people’s minds about anything or having serious conversations.

Running EA Oxford Socials: What Worked (and What Didn't)

After someone reached out to me about my experience running EA socials for the Oxford group, I wrote up what I'd found and was encouraged to share it more widely. As such, here's a brief summary of what I learned from a few terms of hosting EA Oxford socials.

The Power of Consistency

Every week at the same time, we would host an event. I strongly recommend this, or having some other kind of strong schedule, as it lets people form a routine around your events and can help create EA-aligned friend groups. Regardless of the event we were hosting, we had a solid core of around five people who were there basically every week, which was very helpful. We tended to have 15 to 20 people per event, with fewer at the end of the term as people got busy finishing tutorials.

Board Game Socials

Board game socials tended to work the best of the types of socials I tried. No real structure was necessary: have a few strong EAs there to set the tone, so it really feels like "EA board games," and then just let people play. Having the games acts as a natural conversation starter. Casual games especially are recommended; "Codenames" and "Coup" were particular favorites at my socials, but I can imagine many others working too. Deeper games have a place as well, but they generally weren't primary. In the first two terms, we would hold one of these every week. They felt like ways for people to talk about EA stuff in a more casual environment than the discussion groups or fellowships.

"Lightning Talks"

We also ran "Lightning Talks," basically EA PowerPoint nights, pretty effectively. As this was in Oxford, we could typically get at least one EA-aligned researcher or worker there every week we did it (which was every other week), and the rest of the time would be filled with community member presentations (typically 5-10 minutes each). These seemed best at re-engaging people who had signed up once but lost contact with EA; my guess is that's primarily because EA-inclined people tend to have joined partly because of that lecture-appreciating personality. In the third term, we ended up alternating weeks between the lightning talks and board game socials.

Other Formats

Other formats, including pub socials and one-off games (like the estimation game or speed updating), seemed less effective, possibly just due to lower name recognition. Everyone knows what they're getting with board games, and they can figure out lightning talks, but getting too creative seemed to result in lower turnout.

Execution Above All

Probably more important than which events we ran was running them well. We found that having (vegan) pizza and drinks ready before the social, and arriving 20 minutes early to set things up, dramatically improved retention. People really like well-run events; it helps them relax and enjoy themselves rather than wondering when the pizza will arrive, and I think that's especially true of student clubs, where that organizational competence is never guaranteed.

On May 27, 2024, Zach Stein-Perlman argued here that Anthropic's Long-Term Benefit Trust (LTBT) might be toothless, pointing to unclear voting thresholds and the potential for dominance by large shareholders such as Amazon and Google.


On May 30, 2024, TIME ran a deeply reported piece confirming key governance details, e.g., that a shareholder supermajority can rewrite LTBT rules but that (per Anthropic's GC) Amazon and Google don't hold voting shares, speaking directly to the concerns raised three days earlier. It also reviewed the incorporation documents with permission granted by Anthropic and interviewed experts about them, confirming some details about when exactly the LTBT would gain control of board seats.

I don't claim that this is causal, but the fact that the TIME piece addressed specific points raised by Stein-Perlman's post which weren't previously widely examined, together with the timeline of the two articles, suggests some degree of conversation between them to me. It points toward this being an example of how EA Forum posts can shape discourse around AI safety. It also suggests that if you see addressable concerns for Anthropic in particular, or AI safety companies in general, posting them here could be a way of influencing the conversation.

I'm confident the timing was a coincidence. I agree that (novel, thoughtful, careful) posting can make things happen.

I agree that the timing is to some extent a coincidence, especially considering that the TIME piece followed an Anthropic board appointment which would have to have been months in the making, but I'm also fairly confident that your piece shaped at least part of the TIME article. As far as I can tell, you were the first person to bring up the concern that large shareholders, in particular potentially Amazon and Google, could end up overruling the LTBT and annulling it. The TIME piece quite directly addressed that concern, saying,

The Amazon and Google question

According to Anthropic’s incorporation documents, there is a caveat to the agreement governing the Long Term Benefit Trust. If a supermajority of shareholders votes to do so, they can rewrite the rules that govern the LTBT without the consent of its five members. This mechanism was designed as a “failsafe” to account for the possibility of the structure being flawed in unexpected ways, Anthropic says. But it also raises the specter that Google and Amazon could force a change to Anthropic’s corporate governance.

But according to Israel, this would be impossible. Amazon and Google, he says, do not own voting shares in Anthropic, meaning they cannot elect board members and their votes would not be counted in any supermajority required to rewrite the rules governing the LTBT. (Holders of Anthropic’s Series B stock, much of which was initially bought by the defunct cryptocurrency exchange FTX, also do not have voting rights, Israel says.)  

To me, it would be surprising if this section was added without your post in mind. Again, your post is the only time prior to the publication of this article (AFAICT) that this concern was raised.
