
TL;DR: For now, we're going to be promoting EA as a place for intellectual exploration, incredible research, and real-world impact and innovation.

These are my thoughts, but Emma Richter has been closely involved with developing them.

This post is intended as a very overdue introduction to CEA’s communications team, our goals, and what we’re currently working on/planning to work on.

I started at CEA as head of communications in September 2022. My position was a new one: as I understand it, various EA stakeholders were concerned that EA communications had fallen into a diffusion of responsibility. Though everyone in this ecosystem wanted it to go well, no one explicitly managed it. I was therefore hired with the remit of trying to fix this. Emma Richter joined the team as a contractor in December and became a permanent member of the team in March. We’ve also worked with a variety of external advisors, most notably Mike Levine at TSD Communications.

Our team has two main goals. The first is to help look after the EA brand. That means, broadly, that we want the outside world to have an accurate and positive impression of effective altruism and of the value created by this ecosystem. The second, more nebulous goal is to help the EA ecosystem better use communications to achieve various object-level goals. This means things like “helping to publicise a report on effective giving” or “advocating for AI safety in the press”. As communications capacity grows across the EA ecosystem, I expect this goal to become less of a priority for us — but for now I think we have expertise that can be used to make a big difference in this way.

With that in mind, here’s how we’re thinking about things at the moment.


What’s going on in the world

There are a few particularly salient things I’m tracking:

On the EA brand:

  • Negative attention on EA has significantly died down.
    • We expect it to flare back up somewhat this autumn, around SBF’s trial and various book releases, though probably not to the level that it was in late 2022.
  • Polling suggests that there wasn't a hit to public sentiment about EA from FTX (see here for various data). Among those who have heard of both EA and FTX, though, there may have been a hit — and I suspect that group would include important subgroups like journalists and politicians.
  • There is uncertainty about what people want EA (the brand, the ecosystem and/or the community) to be.
    • Within CEA, our new executive director might make fairly radical changes (though they may also keep things quite similar).
      • From the job announcement: “One thing to highlight is that we are both open to and enthusiastic about candidates who want to pursue significant changes to CEA. This might include: Spinning off or shutting down programs, or starting new programs; Focusing on specific cause areas, or on promoting general EA principles; Trying to build something more like a mass movement or trying to be more selective and focused; Significant staffing changes; Changing CEA’s name.”
    • There is increased interest in cause-specific field building (e.g. see here).
    • In general, there are lots of conversations and uncertainties about what direction to take EA in (“should we frame EA as a community or a philosophical movement?” or “should we devote most of our resources to AI safety right now?”).
    • I expect this uncertainty to clear up a little as conversations continue in the next few months (e.g. in things like EA Strategy Fortnight), and CEA getting a new ED might help too. But I don’t expect it to resolve altogether.
    • That said, EA community building (groups, conferences, online discussion spaces) has a strong track record and it seems likely that it will continue, in some form, to be a key source of value going forward.

 

On the use of communications to achieve various object-level goals:

  • Attention on AI safety has increased very rapidly, and regulatory conversations are moving very quickly. We may be in a very narrow window for influencing policy — the next six months seem especially important.
    • While lots of good AI safety communications work is being done, there seems to be a clear need for more.
    • It is unclear to what extent the EA brand helps AI safety work; my best guess is that it is neutral-to-harmful, and that we should try to build a non-EA-branded AI safety coalition.
  • There are some upcoming events that provide good opportunities to publicise other work (e.g. the release of Oppenheimer creates an opening for nuclear security coverage).

How the team is responding to this

In recent weeks, I have been very focused on AI work. I expect this to continue; I think there is an urgent need for more communications capacity here and I’m well-positioned to help. This work will not be EA-branded.

We are also helping some organisations publicise other non-AI, non-EA-branded work.

On EA: We think now is a good time to resume work on the EA brand. This should probably look less like “let’s go and talk to lots of newspapers about EA”, and more like “let’s assert on our own channels what EA is and stands for”.

The rationale for avoiding mass attention (e.g. a press campaign à la summer 2022) is the potential for significant downside (e.g. articles might focus a lot on FTX, and raise the salience of EA just before another wave of negative media). Courting this kind of mass attention while there is still significant uncertainty as to what EA is and what we want it to be also feels a bit premature.

But that doesn’t mean we should avoid all communications. It seems good that if and when people do hear about EA and visit e.g. our Twitter account, they see good things and come away with a good impression of us. And for various decision-makers and opinion influencers, gently reminding them that EA is still around and still doing cool, impactful work seems valuable. In particular, this lays the groundwork for if and when we do decide to court attention again, as there will already be some positive sentiment towards us.

Of course, to do this we need some vision of EA to present to the world. For now, CEA is sticking pretty closely to my original plans for the EA brand: a vision we’re loosely calling “EA as a university”. EA, in this conception, is a place for intellectual exploration, incredible research, and real-world impact and innovation. In practice, that means we’ll be promoting things like LEEP’s work on eliminating lead exposure and research into far-UVC light.

CEA's broader strategy and a new ED will significantly affect our team's plans. We're operating on these ideas for now, but remain open to changing course as the environment evolves.

We’re currently figuring out exactly what approach to take to promote this conception of EA: I expect it will involve sprucing up our social media accounts and potentially launching new channels (such as an EA blog). We are also considering producing materials to help group organisers communicate about EA.

We view this, and everything we do, as somewhat of an experiment: as we execute on this we’ll be paying close attention to what is and isn’t working, and we’ll adjust our approach accordingly. We also appreciate feedback and suggestions — and if you think you have skills that could contribute to this work, please let me know!

Thanks to Ben West, Emma Richter and Mike Levine for comments on this post, and to them and many, many others for thoughts on our communications strategy. The preview image was taken at EAG London.

Comments

Very excited about the "EA as a university" concept and am looking forward to hearing more!

Where do you see GWWC and commitments to effective giving fitting into this? Do you expect to promote this as a norm?

Great question, to which I don't have a simple answer. I think I agree with a lot of what Sjir said here. I think claims 2 and 4 are particularly important — I'd like the effective giving community to grow as its own thing, without all the baggage of EA, and I'm excited to see GWWC working to make that happen. That doesn't mean that in our promotion of EA we won't discuss giving at all, though, because giving is definitely a part of EA. I'm not entirely sure yet how we'll talk about it, but one thing I imagine is that giving will be included as a call-to-action in much of our content.

That seems reasonable - I think the target audience for effective giving is much bigger.

The call-to-action is really what I'm getting at, so I'm pleased to see that ☺️

My suggestion for the CEA comms team would be to consider adopting a 'no first strikes' policy: that while it might be fine to rhetorically retaliate if someone attacks EA, as a movement we shouldn't initiate hostilities with a personal attack against someone who didn't go after EA first. I think this is a simple and morally intuitive rule that would be beneficial to follow.

While I agree 'no first strikes' is good, my prior is that EA communications currently has a 'no retaliation at all' policy, which I think is a very bad one (even if unofficial - I buy Shakeel's point that there may have been a diffusion of responsibility around this).

So, to clarify: do you think that CEA ought to adopt this policy just because it is a good thing to do, or because they/other EAs have broken this rule and it needs to be a clearer norm? If the latter, I'd love to see some examples, because I can't really think of any (at least from 'official' EA orgs, and especially the CEA comms team).

On the other hand, I can think of many examples, some from quite senior figures/academics, absolutely attacking EA in an incredibly hostile way, and basically being met with no pushback from official EA organisations or 'EA leadership' however defined.

“they/other EAs have broken this rule and it needs to be a clearer norm? If the latter, I'd love to see some examples, because I can't really think of any (at least from 'official' EA orgs, and especially the CEA comms team)”

Exactly this - so things like CEA Comms picking on a random EA-adjacent couple to make personal 'vibes-based' attacks for no clear reason.

I agree with you that EA has not been very good at collectively retaliating, and it would be good if this could be changed. My point was just that not randomly bullying people for being weird seems like low-hanging fruit.

 

[anonymous]

I was going to ask the same thing, because I can't think of any examples either.

(I thought maybe Larks had this in mind - which I do think was bad and I found pretty shocking even before FLI were able to respond - but that's the OP attacking another EA org, not an EA attacking outside of EA. And I can think of several examples of EA org heads publicly and repeatedly attacking other EA org heads even when the latter never attack back, but again this is all within EA.)

I think this is interesting but don't think this is as clear-cut as you're making out. There seem to me to be some instances where making the "first strike" is good — e.g. I think it'd be reasonable (though maybe not advisable) to criticise a billionaire for not donating any of their wealth; to criticise an AI company that's recklessly advancing capabilities; to criticise a virology lab that has unacceptably lax safety standards; or to criticise a Western government that is spending no money on foreign aid. Maybe your "personal attack" clause means this kind of stuff wouldn't get covered, though?

Just a quick impression:

I definitely love EA for its intellectual bent... We need to evaluate how we can do the most good, which can be a tricky process with reality often confounding our intuitions.

But I also love EA for wanting to use that reason to profoundly better the world... Action. What I get from this strategy is an emphasis on the cerebral without a matching emphasis on action. I think EA will appeal more broadly if we highlight action as well as cogitation, both in furtherance of a world with far less suffering, more joy and ability for people to pursue their dreams, and a firm foundation for a wonderful world that persists indefinitely.

Definitely agreed that we need to showcase the action — hence my mention of "real-world impact and innovation" (and my examples of LEEP and far-UVC work as the kinds of things we're very excited to promote).
