
Hey everyone,

I’m writing this because I’ve honestly been pulling my hair out over the last few months trying to figure out our comms strategy. If you manage any sort of EA-aligned account on X, especially if you're posting about AI safety, global health, or animal welfare, you’ve probably noticed your impressions are absolutely tanking.

I thought it was just me, or maybe people were just burned out on heavy topics. But after looking at our analytics and chatting with a few other comms managers in our space, it’s definitely not our messaging. It’s the platform itself.

So, what's actually happening to our reach?

Here’s the deal. A lot of us (myself included, no shame) have been using Claude or ChatGPT to help turn massive, dense forum posts into digestible Twitter threads. It’s a total lifesaver for our workload, and honestly it worked great last year.

But lately? Threads that even hint at being drafted by an LLM are getting completely buried. No warning, no notification. You just stop showing up on the "For You" page.

The 2026 Algorithm Update

From what I’ve been digging into, X quietly rolled out some intense backend updates for 2026. They're obviously trying to nuke bot farms and spam, but science comms and advocacy accounts are getting caught in the crossfire.

Trust Signals vs. Spam Filters

It really comes down to "trust signals." When we use AI to help draft a thread, it leans on predictable structures like perfect bullet points, super clean transitions, or those cliché hook sentences. The new algorithm flags that exact pattern as non-human. Once you trigger that flag a few times, your account's "Entity SEO" (basically your underlying trust score) takes a massive nosedive.
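To make this concrete, here's a rough self-audit script I've been playing with. To be clear, the bullet-point and stock-transition heuristic is entirely my own guess at what "predictable structure" might mean; it's a way to eyeball your own drafts, not a reconstruction of whatever X is actually doing:

```python
# Homemade heuristic for "does this thread look formulaic?"
# Counts tweets that start with a bullet or lean on stock transitions.
# The phrase list and scoring are my own assumptions, not X's classifier.

TRANSITIONS = ["furthermore", "moreover", "in conclusion", "additionally"]

def formulaic_score(thread: list[str]) -> float:
    """Fraction of tweets that start with a bullet or use a stock transition."""
    if not thread:
        return 0.0
    hits = 0
    for tweet in thread:
        text = tweet.strip().lower()
        if text.startswith(("-", "•", "*")) or any(t in text for t in TRANSITIONS):
            hits += 1
    return hits / len(thread)
```

If most of your thread trips this kind of check, that's probably a sign to rewrite before posting, whatever the algorithm is really keying on.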

The Shadowban Effect

To be clear, we aren't getting permanently banned. It’s a silent de-boost. Your hardcore followers might still see your stuff if they scroll forever, but X completely stops pushing your content to new people. For a movement that literally depends on reaching outside our own bubble, this is a nightmare.

How we can actually fix this

If we don't want to waste hours of work for, like, 12 views, we have to change how we post. Here is what I’m implementing on our accounts this week to bypass the filters:

1. Stop the copy-paste

We just can't take a Claude summary and hit post anymore. Even if the facts are 100% right, rewrite it. Make it sound a little messier. Use your actual voice, drop a casual phrase, or structure the argument in a way a machine wouldn't naturally choose.

2. Kill the cliché hooks

Please, no more "Here is a breakdown of..." or "Let's dive into...". The algorithm practically treats those as spam now. Just start your threads with a direct opinion, a raw thought, or an observation.
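If you batch-draft threads, even a dumb pre-post check can catch these before they go out. The phrase list below is just my own starting set of the hooks I keep seeing, nothing official; extend it with whatever clichés your drafts lean on:

```python
# Illustrative pre-post check: flags common AI-sounding hook phrases
# in a draft tweet. The phrase list is my own guess at what gets
# flagged, not anything X has documented.

CLICHE_HOOKS = [
    "here is a breakdown of",
    "let's dive into",
    "let's explore",
    "in this thread",
]

def flag_cliche_hooks(draft: str) -> list[str]:
    """Return any cliché hook phrases found in the draft (case-insensitive)."""
    lowered = draft.lower()
    return [phrase for phrase in CLICHE_HOOKS if phrase in lowered]
```

Anything it returns, rewrite as a direct opinion or observation instead.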

3. Rehab your account

If your reach is already in the gutter, spend a week acting like a normal human on the app to rebuild your trust score. Reply to other researchers naturally, quote tweet with a quick genuine reaction, and lay off the giant walls of text for a bit.

Where I found this (and a quick disclaimer)

I really hope this saves some of you the headache I’ve been dealing with. We’re communicating really important stuff, and getting silenced by a spam filter sucks.

If you want to see the actual mechanics behind this, I’m linking the breakdown I found below. Full disclosure: the site belongs to a commercial social media tool, which is normally not something I'd ever link here. But honestly, their technical breakdown of X’s 2026 AI rules, shadowbans, and revenue hits matches the exact data my team is seeing right now. It's worth a skim if you manage socials for an org.

Here's the link: https://fameviso.com/blog/x-ai-content-rules-2026-visibility-drops-revenue-risk/

Have you guys been seeing this in your analytics too? Would love to know if anyone has found other ways around it!
