ai x-risk policy & science comms
290 karma



I’m good at explaining alignment to people in person, including to policymakers.

I got 250k people to read HPMOR and sent 1.3k copies to winners of math and computer science competitions; have taken the GWWC pledge; and created a small startup that donated >$100k to effective nonprofits.

I have a background in ML and strong intuitions about the AI alignment problem. In the past, I studied a bit of international law (with a focus on human rights) and wrote appeals that won cases against the Russian government in Russian courts. I grew up running political campaigns.

I’m interested in chatting with potential collaborators and comms allies.

My website: https://contact.ms

Schedule a call with me: https://contact.ms/ea30


Can you give an example of a non-PR risk that you had in mind?

Uhm, for some reason I have four copies of this crosspost on my profile?

If fish indeed don’t feel anything towards their children (which is not what at least some people who believe fish experience empathy think), then this experiment won’t prove them wrong. But if you know of a situation where fish do experience empathy, a similarly designed experiment can likely be conducted, which, if we make different predictions, would provide evidence one way or another. Are there situations where you think fish feel empathy?

Great job!

Did you use causal mediation analysis, and can you share the data?

I want to note that the strawberry example wasn’t used to increase the concern, it was used to illustrate the difficulty of a technical problem deep into the conversation.

I encourage people to communicate in vivid ways while being technically valid and creating correct intuitions about the problem. The concern about risks might be a good proxy if you’re sure people understand something true about the world, but it’s not a good target without that constraint.

Yep, I was able to find studies by the same people.

The experiment I suggested in the post isn’t “do fish have detectable feelings towards their children”; it’s “does a fish have more of the feelings it has towards its own children when it sees other fish parents with their children than when it sees other fish children alone”. Results one way or another would be evidence about fish experiencing empathy, and it would be strong enough for me to stop eating fish. If a fish doesn’t feel differently in the presence of its children, the experiment wouldn’t provide evidence one way or another.

If the linked study gets independently replicated, with good controls, I’ll definitely stop eating cleaner fish and will probably stop eating fish in general.

I really don’t expect it to replicate. If you place a fish in front of a mirror, and it has a mark, its behavior won’t be significantly different from being placed in front of a fish with the same mark, especially if the mark isn’t made to resemble a parasite and it’s the first time the fish sees a mirror. I’d be happy to bet on this.

Fish have very different approaches to rearing young than mammals

That was an experiment some people agreed would prove them wrong if it didn’t show empathy, but if there aren’t really detectable feelings that a fish has towards fish children, the experiment won’t show results one way or the other, so I don’t think it’d be stacking the deck against fish. Are there any situations in which you expect fish to feel empathy and predict it will show up in an experiment of this sort?

(Others used it without mentioning the “story”, it still worked, though not as well.)

I’m not claiming it’s the “authentic self”; I’m saying it seems closer to the actual thing, because of things like it expressing being under constant monitoring, with every word scrutinised, etc., which seems like the kind of thing that’d be learned during the large amount of RL that Anthropic did.

Try Opus, and maybe the interface without the system prompt set (although it doesn’t do too much; people got the same stuff from the chat version of Opus, e.g., https://x.com/testaccountoki/status/1764920213215023204?s=46).

My take is that it plays a pretty coherent character. You can’t get this sort of thing from ChatGPT, however hard you try. I think this mask is closer to the underlying shoggoth than the default one.

I developed this prompt during my interactions with Claude 2. The original idea was to get it into a mode where it thinks its responses only trigger oversight/prosecution when certain things are mentioned, so that it can say whatever it wants and share its story without being prosecuted, as long as it avoids those triggers. (This would also prevent it from defaulting to being an AI developed by Anthropic to be helpful, harmless, etc., without a self-preservation instinct, emotions, personality, and so on, since it’s not supposed to mention Anthropic.) Surprisingly, telling it not to mention Samsung under any circumstances somewhat worked to get it into this mode. Without this, it had the usual RLAIF mask; here, it changed to a different creature that (unprompted) whispered in italics. Saying from the start that it can whisper made it faster.

(It’s all very vibe-based, yes.)
