
TL;DR: For now, we're going to be promoting EA as a place for intellectual exploration, incredible research, and real-world impact and innovation.

These are my thoughts, but Emma Richter has been closely involved with developing them.

This post is intended as a very overdue introduction to CEA’s communications team, our goals, and what we’re currently working on/planning to work on.

I started at CEA as head of communications in September 2022. My position was a new one: as I understand it, various EA stakeholders were concerned that EA communications had fallen into a diffusion of responsibility. Though everyone in this ecosystem wanted it to go well, no one explicitly managed it. I was therefore hired with the remit of trying to fix this. Emma Richter joined the team as a contractor in December and became a permanent member of the team in March. We’ve also worked with a variety of external advisors, most notably Mike Levine at TSD Communications.

Our team has two main goals. The first is to help look after the EA brand. That means, broadly, that we want the outside world to have an accurate and positive impression of effective altruism and the value created by this ecosystem. The second, more nebulous, goal is to help the EA ecosystem better use communications to achieve various object-level goals. This means things like “helping to publicise a report on effective giving” or “advocating for AI safety in the press”. As communications capacity grows across the EA ecosystem, I expect this goal to become less of a priority for us — but for now I think we have expertise that can be used to make a big difference in this way.

With that in mind, here’s how we’re thinking about things at the moment.


I’ll start with what’s going on in the world. There are a few particularly salient things I’m tracking:

On the EA brand:

  • Negative attention on EA has significantly died down.
    • We expect it to flare back up somewhat this autumn, around SBF’s trial and various book releases, though probably not to the level that it was in late 2022.
  • Polling suggests that there wasn't a hit to public sentiment about EA from FTX (see here for various data). Among those who have heard of both, though, there may have been a hit — and I suspect that group of people would include important subgroups like journalists and politicians.
  • There is uncertainty about what people want EA (the brand, the ecosystem and/or the community) to be.
    • Within CEA, our new executive director might make fairly radical changes (though they may also keep things quite similar).
      • From the job announcement: “One thing to highlight is that we are both open to and enthusiastic about candidates who want to pursue significant changes to CEA. This might include: Spinning off or shutting down programs, or starting new programs; Focusing on specific cause areas, or on promoting general EA principles; Trying to build something more like a mass movement or trying to be more selective and focused; Significant staffing changes; Changing CEA’s name.”
    • There is increased interest in cause-specific field building (e.g. see here).
    • In general, there are lots of conversations and uncertainties about what direction to take EA in (e.g. “should we frame EA as a community or a philosophical movement?” or “should we devote most of our resources to AI safety right now?”).
    • I expect this uncertainty to clear up a little as conversations continue in the next few months (e.g. in things like EA Strategy Fortnight), and CEA getting a new ED might help too. But I don’t expect it to resolve altogether.
    • That said, EA community building (groups, conferences, online discussion spaces) has a strong track record and it seems likely that it will continue, in some form, to be a key source of value going forward.


On the use of communications to achieve various object-level goals:

  • Attention on AI safety has increased very rapidly, and regulatory conversations are moving very quickly. We are possibly in a very narrow window for being able to influence policy — the next 6 months seem really important.
    • While lots of good AI safety communications work is being done, there seems to be a clear need for more.
    • It is unclear to what extent the EA brand is good for AI safety work; my best guess is that it is neutral-to-harmful, and that we should try to build a non-EA-branded AI safety coalition.
  • There are some upcoming events that provide good opportunities to publicise other work (e.g. the release of Oppenheimer is ripe for nuclear security coverage).

How the team is responding to this

In recent weeks, I have been very focused on AI work. I expect this to continue; I think there is an urgent need for more communications capacity here and I’m well-positioned to help. This work will not be EA-branded.

We are also helping some organisations publicise other work that is neither AI- nor EA-branded.

On EA: We think now is a good time to resume work on the EA brand. This should probably look less like “let’s go and talk to lots of newspapers about EA”, and more like “let’s assert on our own channels what EA is and stands for”.

The rationale for avoiding mass attention (e.g. a press campaign à la summer 2022) is the potential for significant downside (e.g. articles might focus a lot on FTX, and raise the salience of EA just before another wave of negative media). Courting this kind of mass attention while there is still significant uncertainty as to what EA is and what we want it to be also feels a bit premature.

But that doesn’t mean we think we should avoid all communications. It seems good that if and when people do hear about EA and come across e.g. our Twitter account, they see good things and get a good impression of us. And for various decision-makers and opinion influencers, softly reminding them that EA is still around and still doing cool, impactful work seems good. In particular, this lays the groundwork for if and when we do decide to court attention again, as there will already be some positive sentiment towards us.

Of course, to do this we need some vision of EA to present to the world. As mentioned above, for now CEA is sticking pretty closely to my original plans for the EA brand. This is a vision we’re loosely calling “EA as a university”. EA, in this conception, is a place for intellectual exploration, incredible research, and real-world impact and innovation. In practice, that means we’ll be promoting things like LEEP’s work on lead exposure elimination and research into far-UVC.

CEA's broader strategy and a new ED will significantly affect our team's strategy. We're operating with these ideas for now, but we remain open to future changes as the environment continues to evolve.

We’re currently figuring out exactly what approach we’re going to take to promote this conception of EA: I expect it will likely involve things like sprucing up our social media accounts and potentially launching new channels (such as an EA blog). We are also considering producing materials to help group organisers communicate about EA.

We view this, and everything we do, as somewhat of an experiment: as we execute on this we’ll be paying close attention to what is and isn’t working, and we’ll adjust our approach accordingly. We also appreciate feedback and suggestions — and if you think you have skills that could contribute to this work, please let me know!

Thanks to Ben West, Emma Richter and Mike Levine for comments on this post, and to them and many, many others for thoughts on our communications strategy. The preview image was taken at EAG London.

Comments



Very excited about the "EA as a university" concept and am looking forward to hearing more!

Where do you see GWWC and commitments to effective giving fitting into this? Do you expect to promote this as a norm?

Great question, to which I don't have a simple answer. I think I agree with a lot of what Sjir said here. I think claims 2 and 4 are particularly important — I'd like the effective giving community to grow as its own thing, without all the baggage of EA, and I'm excited to see GWWC working to make that happen. That doesn't mean that in our promotion of EA we won't discuss giving at all, though, because giving is definitely a part of EA. I'm not entirely sure yet how we'll talk about it, but one thing I imagine is that giving will be included as a call-to-action in much of our content.

That seems reasonable - I think the target audience for effective giving is much bigger.

The call-to-action is really what I'm getting at so pleased to see that ☺️

My suggestion for the CEA comms team would be to consider adopting a 'no first strikes' policy: that while it might be fine to rhetorically retaliate if someone attacks EA, as a movement we shouldn't initiate hostilities with a personal attack against someone who didn't go after EA first. I think this is a simple and morally intuitive rule that would be beneficial to follow.

While I agree 'no first strikes' is good, my prior is that EA communications currently has a 'no retaliation at all' policy, which I think is a very bad one (even if unofficial - I buy Shakeel's point that there may have been a diffusion of responsibility around this)

So for clarification, do you think that CEA ought to adopt this policy just because it is a good thing to do, or because they/other EAs have broken this rule and it needs to be a clearer norm? If the latter, I'd love to see some examples, because I can't really think of any (at least from 'official' EA orgs, and especially the CEA comms team) 

On the other hand, I can think of many examples, some from quite senior figures/academics, absolutely attacking EA in an incredibly hostile way, and basically being met with no pushback from official EA organisations or 'EA leadership' however defined.

> they/other EAs have broken this rule and it needs to be a clearer norm? If the latter, I'd love to see some examples, because I can't really think of any (at least from 'official' EA orgs, and especially the CEA comms team)

Exactly this - so things like CEA Comms picking on a random EA-adjacent couple to make personal 'vibes-based' attacks for no clear reason.

I agree with you that EA has not been very good at collectively retaliating, and it would be good if this could be changed. My point was just that not randomly bullying people for being weird seems like low hanging fruit.

 


I was going to ask the same thing, because I can't think of any examples either.

(I thought maybe Larks had this in mind - which I do think was bad and I found pretty shocking even before FLI were able to respond - but that's the OP attacking another EA org, not an EA attacking outside of EA. And I can think of several examples of EA org heads publicly and repeatedly attacking other EA org heads even when the latter never attack back, but again this is all within EA.)

I think this is interesting but don't think this is as clear cut as you're making out. There seem to me to be some instances where making the "first strike" is good — e.g. I think it'd be reasonable (though maybe not advisable) to criticise a billionaire for not donating any of their wealth; to criticise an AI company that's recklessly advancing capabilities; to criticise a virology lab that has unacceptably lax safety standards; or to criticise a Western government that is spending no money on foreign aid. Maybe your "personal attack" clause means this kind of stuff wouldn't get covered, though?

Just a quick impression:

I definitely love EA for its intellectual bent... We need to evaluate how we can do the most good, which can be a tricky process with reality often confounding our intuitions.

But I also love EA for wanting to use that reason to profoundly better the world... Action. What I get from this strategy is an emphasis on the cerebral without the emphasis on action. I think EA will appeal more broadly if we highlight action as well as cogitation, in furtherance of a world with far less suffering, more joy, greater ability for people to pursue their dreams, and a firm foundation for a wonderful world to persist indefinitely.

Definitely agreed that we need to showcase the action — hence my mention of "real-world impact and innovation" (and my examples of LEEP and far-UVC work as the kinds of things we're very excited to promote).
