AI Stuff
Interesting podcast - I read the transcript.
My main takeaway was that building AI systems to have self-interest is dangerous, because their interests could explicitly conflict with humanity's - a major existential risk once AIs become super-intelligent.
I wonder, though, whether self-interest offers any advantage to an AI. Is there any way it could make an AI more effective at accomplishing its goals? In biological entities, self-interest obviously helps with e.g. avoiding threats, seeking more favourable living conditions, etc. I wonder whether this applies in a similar manner to AIs, or whether self-interest in an AI is inconsequential at best.
I'm also curious: what exactly is the worry with AGI development in e.g. Russia and China? Is the concern that they are somehow less invested in building safe AGI (which would seem to conflict strongly with their own self-interest)?
Or is the concern that they could build AGI that selectively harms people or countries of their choosing? In that case the problem seems to me an exclusively human one, no different ethically from concerns about super-lethal computer viruses or bio/nuclear weapons. It's not clear how this particular risk is specific to AI/AGI.
Hey Jan, thanks for your comment.
I published this post on LessWrong as well, and someone there made exactly the same point as you. Their tone, however, was unproductive and condescending - it was clear they weren't trying to have a conversation. It's good to know there's an alternative platform where people actually want constructive discussion.
I'm aware of this possibility - it was one of the potential issues I noted even before writing the post. I have some ideas on how to navigate it; it may be the subject of a subsequent post.