Shakeel Hashim

2640 karma · Joined Feb 2022


Head of Communications at the Centre for Effective Altruism. Previously: News Editor at The Economist; journalist and growth manager at Protocol; journalist at Finimize.


As someone who works on comms stuff, I struggle with this a lot too! One thing I've found helpful is just asking decision makers, or people close to decision makers, why they did something. It's imperfect, but often helpful — e.g. when I've asked DC people what catalysed the increased political interest in AI safety, they overwhelmingly cited the CAIS letter, which seems like a fairly good sign that it worked. (Similarly, I've heard from people that Ian Hogarth's FT article may have achieved a similar effect in the UK.)

There are also proxies that can be kind of useful — if an article is incredibly widely read, and is the main topic on certain corners of Twitter for the day, and then the policy suggestions from that article end up happening, it's probably at least in part because of the article. If readership/discussion was low, you're probably not the cause.

This is really cool, thanks for organising it!

GiveWell has previously recommended MSF as a good disaster relief org, so that would be my best guess. I'd love to know more, though.

“is there no EA press or comms unit that journalists contact before publishing such articles” — sometimes CEA or Forethought get asked for comment on pieces, but the vast majority of the time no one contacts us. It’s quite frustrating.

Yeah, the phrase "woke mob" (and similar) is extremely common in conservative media!

I don’t have an answer, but would suggest you talk to the folks at the Good Food Institute if you haven’t already — they might have advice, or at the very least be able to point you towards other people you could ask about this.

This is great, thanks for highlighting it. Evidence Action is another excellent charity that's nominated; here's the link to vote for them: Action Inc.

Thanks for this. I agree that we've been neglecting social media; the main reason for this, as far as I can tell, is that no one at CEA was primarily focused on comms/marketing until I was hired in September, and then other events proved to be attention-stealing.

Social media is going to be a major part of the communications strategy I outlined here; I expect you'll see us being more active in the coming months.

This is interesting and I broadly agree with you (though I think Habryka’s comment is important and right). On point 2, I’d want us to think very hard before adopting these as principles. It’s not obvious to me that non-violence is always the correct option — e.g. in World War 2 I think violence against the Nazis was a moral course of action.

As EA becomes increasingly involved in campaigning for states to act one way or another, a blanket non-violence policy could meaningfully and harmfully constrain us. (You could amend the clause to be “no non-state-sanctioned violence” but even then you’re in difficult territory — were the French resistance wrong to take up arms?)

I think there are similar issues with the honesty clause, too — it just isn’t the case that being honest is always the moral course of action (e.g. the lying to the Nazis about Jews in your basement example).

These are of course edge cases, and I do believe that in ~99% of cases one should be honest and non-violent. But formalising that into a core value of EA is hard, and I’m not sure it’d actually do much because basically everyone agrees that e.g. honesty is important; when they’re dishonest they just think (often incorrectly!) that they’re operating in one of those edge cases.

Thanks for this post, it's a really important issue. On tractability, do you think we'll be best off with technical fixes (e.g. maybe we should just try not to make sentient AIs?), or will it have to be policy? (Maybe it's way too early to even begin to guess).
