Michele Campolo

84 · Joined Jul 2020


Lifelong recursive self-improver, on his way to exploding really intelligently :D

Background in mathematics; currently doing research at CEEALAR. I focus on AI alignment, with an eye towards moral progress rather than just risk.

You'll find more info in:
- Naturalism and AI alignment
- From language to ethics by automated reasoning


Maybe "only person in the world" is a bit excessive :)

As far as I know, no one else in AI safety is working directly on it. There is some research in machine ethics, on Artificial Moral Agents, that has a similar motivation or objective. My guess is that, overall, very few people are working on this.

What you wrote about the central claim is more or less correct: I actually made only an existential claim about a single aligned agent, because the description I gave is sketchy and far from a precise algorithmic-level description. This single agent probably belongs to a class of other aligned agents, but it seems difficult to guess how large that class is.

That is also why I have not given a guarantee that all agents of a certain kind will be aligned.

Regarding the orthogonality thesis, you might find section 1.2 of Bostrom's 2012 paper interesting. He writes that objective and intrinsically motivating moral facts need not undermine the orthogonality thesis, since he uses the term "intelligence" in the sense of "instrumental rationality". I would add that there is also no guarantee that the orthogonality thesis is correct :)

About psychopaths and metaethics, I haven't spent much time on that area of research. Like other empirical evidence, it doesn't seem easy to interpret.