Neil Fox

AI Alignment Researcher & Experimental Strategist @ Intellimint
0 karma · Joined · Working (6-15 years) · Orlando, FL, USA
open.spotify.com/show/3R7fPMRrm4hOJyUevOuQik

Bio

I'm an independent researcher exploring AI alignment through strategic simulations and value-learning experiments with Atlas, an ASI alignment prototype. My background combines grand strategy gaming, rationalist discourse, and deep dives into x-risk scenarios. I'm passionate about stress-testing alignment frameworks with adversarial dilemmas, probing an ASI's robustness to value drift and Goodhart-style failures.

How others can help me

Share your most cursed moral dilemmas and edge-case scenarios to try to break Atlas's value-learning process.

Collaborate on designing alignment simulations or adversarial tests.

Offer critical feedback, especially if you spot holes in my reasoning or assumptions.

How I can help others

Provide insight into AI alignment challenges, especially from game-theoretic and adversarial perspectives.

Assist with framing research questions and constructing thought experiments for value alignment.

Share lessons from testing Atlas on ethical dilemmas, recursive self-improvement risks, and corrigibility failures.

Posts
1

Comments
1