Michele Campolo

Lifelong recursive self-improver, on his way to exploding really intelligently!

Background in mathematics; currently doing research at CEEALAR. I'm working on an AI alignment project that could allow us to turn future language models into AIs that are better than humans at reasoning about ethics. From a different viewpoint: I'd like to automate EA-like thinking as soon as AI capabilities allow it.

You'll find more info in:

- Naturalism and AI alignment

- From language to ethics by automated reasoning

I'd like to know how to add links to this bio.

Wiki Contributions


Naturalism and AI alignment

What you wrote about the central claim is more or less correct: I made only an existential claim about a single aligned agent, because the description I gave is sketchy and far from a more precise, algorithmic-level description. This single agent probably belongs to a larger class of aligned agents, but it seems difficult to guess how large that class is.

That is also why I have not given a guarantee that all agents of a certain kind will be aligned.

Regarding the orthogonality thesis, you might find section 1.2 of Bostrom's 2012 paper interesting. He writes that objective and intrinsically motivating moral facts need not undermine the orthogonality thesis, since he uses the term "intelligence" in the sense of "instrumental rationality". I'd add that there is also no guarantee that the orthogonality thesis is correct :)

About psychopaths and metaethics: I haven't spent much time on that area of research. Like other empirical evidence, it doesn't seem easy to interpret.