This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! 
Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated. 

This is a response to "The fidelity model of spreading ideas" on the EA Forum, the associated CEA blog post, and those who have referenced it in good-spirited debate about how EA comms ought to develop in the coming years.

Epistemic status: As confident as anyone experienced enough in manipulating the whims of social media algorithms for social change can ever be, given that those algorithms change semi-regularly. I have 6+ years of experience using social media to effect social change, including advising politicians on their use of social media, conducting experimental social-media-for-social-change projects that have informed national political party strategies, and training upwards of 500 activists.


This post is an exploration of the Fidelity Model, an approach to the spreading of ideas that is widely referenced within EA spaces. 

While I am broadly supportive of the fidelity model and agree with its central premises, I disagree with one influential interpretation of it: that we shouldn't use social media to communicate key EA ideas.

This interpretation is claimed to be central to CEA's social media scepticism, which I argue is misplaced. I am more sympathetic to the mass media scepticism the same argument motivates.

I explain why having an "official presence"[1] on social media is important using the Fidelity Model. Social media is an important communications model, even if social media itself is low-fidelity.

The Fidelity Model

I am taking this from the original post, so as to keep our arguments clear and consistent.


Fidelity

The key term in this model is "fidelity." Therefore it will be useful to define this term. By fidelity [the original author has] in mind nothing more than the classic dictionary definition of "adherence to fact or detail" or "accuracy; exactness."

As an example, imagine I am shooting a movie on an old camera. If the image captured by the camera causes it to seem as though I am wearing a blue shirt when actually my shirt is red, then the image captured by the camera is low fidelity.

The problem

With reference to The Telephone Game, the author of the original argument accepts that "EA" ideas and arguments are nuanced and worries that some methods of communication require that the nuance be stripped back in order to be easily communicable.

When the context gets stripped away, those who receive the ideas leave with something that's similar to effective altruism, but different. Thus, when we hear the EA message repeated back to us, we get sentences like "EA is about earning all the money you can and donating it to GiveWell charities" or "EAs only care about interventions that are supported by randomized controlled trials." To a certain extent we can influence the sentences we get back by being more clever about how we frame our ideas, but it seems unlikely that framing can do all the work.

How we get to social media scepticism

Very few people who use social media believe it to be a high-fidelity information exchange space. Platforms like Twitter, with low character counts and a norm towards fast-paced argument threads, are particularly bad at fostering nuanced debate.

EA ideas are nuanced, and when they aren't covered with that nuance in mind, it tends to end quite badly. It is difficult to communicate with nuance and retain it throughout social sharing chains, so, if you aren't careful, the idea you set out to communicate isn't the one that makes it to the audience.

An example of an extremely low fidelity method of communicating EA would be during a heated political discussion on Twitter.

Given Twitter's character limit you can neither explore many ideas nor explore the ideas in any depth, nor get much useful feedback, and since politics is a very bad environment for updating, this would fail all four components [of the fidelity model].

Because it is difficult to do well and the consequences of doing it poorly are potentially severe, many perfectly reasonable people believe that social media should not be used to communicate complex, nuanced ideas.

On the surface, the trade-off between the benefits of doing it well (some people engaging positively, albeit for a limited time) and the costs of getting it wrong (attracting bad-faith critics, reputation damage, negative mass media coverage) appears to favour not doing it at all over trying to do it well.

Why I think this is a mistake

I will start by saying I broadly agree with the Fidelity Model.

Higher-fidelity modes of communication, like one-on-one in-person discussions and books/podcasts, should always form the backbone of the EA communications model. They are a better way of communicating complex ideas.

While Twitter threads are a personal favourite communication means of mine, I don't regard them as high-fidelity. I regard them as a low-barrier-to-entry, fast-exchange model of communication that helps me get complex ideas in front of people who wouldn't otherwise consume them in the form of a blog post or journal article. They are also a fairly efficient means, once you know what you're doing, to get ideas in front of people within your own niche who would not otherwise give your ideas credence. I've had some luck influencing decisions this way.

I do not believe that social media is a high-fidelity space for information exchange; rather, I believe that there are higher-fidelity ways to do it and that not doing it at all in an "official" capacity creates a vacuum within which low-fidelity exchanges thrive and brand reputations go to die.

The Awareness/Inclination Model

The original argument for the Fidelity Model references Owen Cotton-Barratt's 2015 CEA working paper on the Awareness/Inclination Model.

Owen Cotton-Barratt has developed a model of movement growth called the Awareness/Inclination Model according to which we can compress knowledge of EA into two dimensions: how much they know about the ideas (awareness) and how favorably they feel, or would feel, towards the ideas (inclination). One implication of this model is that increasing awareness without increasing inclination can cause increased adoption of the ideas in the short term but at the expense of decreasing the total potential number of people that might adopt the ideas in the future (given some assumptions).

Given the assumption that many people would respond favorably to EA ideas if they understood them, it seems plausible that low fidelity mechanisms for spreading EA ideas increase the probability that we increase awareness for the ideas without increasing inclination. Articles like "Join Wall Street. Save the world." may be doing a good job of increasing awareness without doing a good job of increasing inclination (as some criticisms of the piece suggest) and thus may be decreasing the maximum number of people that might adopt the ideas in the future.

Again, I believe this argument is broadly correct, and we apply similar approaches within political and social-media-for-social-change spaces.

What I do not believe is that social media scepticism, specifically the sort that leads organisations not to invest in "official" or movement-building social media, ought to follow from such an argument.

Social media can be a high-awareness space. It is a low-cost, low-effort alternative to traditional communications in a high-contagion communications environment. Because of the triangulation approach social media platforms like Facebook use to suggest content and map social connections, it is relatively easy for an idea that reaches Person A to reach Persons B, C, D, E, and F, who hold similar views or outlooks on the world.

This is why smart politicians make good use of social media: it does a fairly good job of finding people who resemble your supporters in important ways and putting your message in front of them.

Social media can also be a place where inclination is shaped.

The dangers of meeting 'Almost you'

Those of us who have been active in relatively tight-knit communities will be familiar with this experience.

You meet someone at an event who knows of you through someone else, and the impression they've formed from that person is very different from the one you'd have liked to start out with.

Could be overly positive. Could be overly negative. Could just be slightly off in some important way.

They know a kind of uncanny valley "almost you," and then, if you're lucky, they update their picture of you as they get to know you.

  • If their "almost you" is wildly good, they may find themselves jaded and frustrated by the real you.
  • If "almost you" is missing crucial aspects of who you are, you may miss out on really useful connections and valuable personal relationships built on shared interests and perspectives neither of you knows you share.
  • If "almost you" is particularly bad, they might not attempt to get to know you at all.

The same thing can happen with arguments: When you become a trusted source of information in a tight-knit community, people making similar arguments or trying to convince others of things will often refer to you and your arguments in doing so.

If you're lucky, they do this in a high-fidelity way. More likely, they do it in a way that makes it seem like you agree with them by leaving out important nuances. This, too, tends to end incredibly badly.

Why doing social media - right - is the answer

Social media, for all its vast expanse, is in many ways a tight-knit community.

You will not build a useful level of inclination towards your movement if it is based on a series of low-fidelity "almost you" impressions gleaned from someone who knows someone who knows you or from people using your arguments as justification for their own.

This is why being present as the "real you" and conveying your own position are so important, particularly in high-contagion spaces like social media, where ideas travel fast.

Just ask Elon Musk how reputations are built and destroyed on social media. You can have some control over the reputation you build by putting those building blocks in place yourself. That is, again with the Elon Musk example, a double-edged sword.

Either you shape that inclination by framing the arguments the way you want to and responding in a contextual fashion to emerging arguments, or someone else will attempt to do it for you.

Someone else, even if well-intentioned, is unlikely to convey the ideas the way you would and with the same sense of fidelity. They're working off of an interpretation of what you said and believe, a copy, which may be very good or may have missed the point entirely. You have no control over this and shouldn't. That's not how idea exchange works online.

What you do have control over is whether or not you put your best foot forward: whether you frame your own arguments online, whether you produce the highest-fidelity content possible under less-than-ideal conditions, or whether you leave that gap - a vacuum - for anyone else to fill.

This is why I am so concerned by the lack of "official," for lack of a better word, EA communications in short-form social media settings.



[1] I need a better term for this, so as not to seem exclusionary of lots of the really good stuff already going on. I am 100% pro EA meme Twitter, and was even before the incredible amount of money it has raised as of late. More of that!






The last four paragraphs are well said👌

I love this. I think the arguments by analogy to knowing “Almost you” are particularly helpful.

However, I would love to see more on how you think EA could get social media right. Assuming it has a presence, should those accounts respond to criticism? Should they campaign for donations? Should they invite to EAGs? Should they distill core ideas?

Thank you! Very kind.

I think, in an ideal world, what you end up with is an ecosystem of accounts that cover a lot of the niches you've raised.

Some niches are compatible, so you could cover more than one with a single account. Example: you could have an account that produces and shares accessible content on effective giving research and seeks donations (if done tastefully), but that account shouldn't be getting into arguments.

Responding to criticism is important, and it's often done better by individual user accounts than by central org accounts. A good rule of thumb has always been: don't reply with anything you wouldn't be happy to see taken as a good representation of your views, that you don't think adds clarity or additional value, or that you wouldn't want plastered above your name on a ballot paper (in the political context). It's hard to do well, but good-faith actors appreciate it and trust and respect you more for doing it.

I think, purely from an outside perspective without having experimented, that a better route to respond to criticism more centrally would probably just be creating and sharing more accessible content that reaffirms where EA stands (where the criticism isn’t or isn’t entirely valid).

EA as a community has a habit of referencing EA texts and principles in response to criticism, and while this is great for people already within the community or adjacent ones like the rationalists, it isn’t necessarily accessible (plain language, core ideas explained, typical blog or news story format) to the general public, while a lot of the critique pieces are.
