Epistemic status: confident that the structural incentives of major platforms degrade our core ideas, but highly uncertain about whether the extra outreach is actually worth the loss of nuance.
In my previous roles in tech product growth, the golden rule was always simple: remove cognitive friction. You want the user to feel something instantly and to have a very low-effort way to share it.
But effective altruism is entirely built on cognitive friction. Thinking rigorously about expected value, counterfactuals, or global risks requires readers to slow down, bypass their immediate intuitions, and process complex information. Lately I have been watching how EA concepts perform when they hit mainstream feeds on Twitter or TikTok, and the dynamic is honestly pretty worrying.
When a nuanced EA concept goes viral, it is never because people read the underlying research; it is because the idea got compressed into a provocative soundbite. The algorithm rewards the most extreme version of the idea. We saw this clearly during the recent public debates around AI timelines and longtermism: the versions of these ideas that trended online were often unrecognizable to the people actually doing the research. By playing the social media game, we are not just reaching more people. We are actively subsidizing the mutation of our own cause areas.
There is also a huge opportunity cost here. Social media platforms are designed to make you feel an urgent need to reply. I constantly see incredibly smart researchers spending hours in long Twitter threads trying to correct bad-faith interpretations of their work. From an expected value standpoint, having our top researchers act as frontline social media managers defending their epistemics is a terrible allocation of cognitive resources.
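To make that expected value claim concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is a hypothetical placeholder of my own, not an estimate from any real data, and the comparison flips entirely depending on what you plug in; the point is only that the trade-off can be made explicit rather than argued by vibes.

```python
# Back-of-envelope EV comparison for one researcher-hour.
# All numbers are hypothetical placeholders for illustration only.

AUDIENCE = 10_000          # people who see the correction thread (assumed)
P_UPDATE = 0.01            # chance a reader genuinely updates (assumed)
VALUE_PER_UPDATE = 0.001   # value of one reader updating, in research-hour units (assumed)
RESEARCH_VALUE = 1.0       # value of one hour of object-level research (baseline unit)

ev_debate = AUDIENCE * P_UPDATE * VALUE_PER_UPDATE  # expected value of an hour spent debating
ev_research = RESEARCH_VALUE                        # expected value of an hour spent researching

print(f"EV of an hour debating:    {ev_debate:.2f} research-hour equivalents")
print(f"EV of an hour of research: {ev_research:.2f} research-hour equivalents")
```

Under these (again, made-up) inputs, an hour of thread-correcting is worth about a tenth of an hour of research; if you think the reach or update probability is much higher, the conclusion changes, which is exactly the estimate I would want outreach decisions to be explicit about.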
Growth is generally good, but algorithmic growth is completely indiscriminate. If an EA topic goes viral via a polarizing tweet, it brings an influx of attention from people looking for an internet argument, not from people looking to thoughtfully allocate their time or donations. We risk diluting the community's culture just to chase vanity metrics.
I do not think deleting all our accounts is a realistic answer. But treating a big public social media presence as a default good seems naive at this point.
I am really curious how others here balance the need for outreach with the reality of how these algorithms actually work. Should EA organizations actively discourage their core researchers from getting into public internet debates? Do we need a dedicated layer of communicators whose only job is to translate complex research into accessible content so researchers do not have to absorb that friction themselves?
