I'm imagining an AGI weighing the pros and cons of destroying humanity. If it finds persuasive arguments against doing so in its training data (or via search), maybe it would decide not to?
Should we crowdsource arguments in favor of AGI preserving humanity?
Feel free to post about why this might or might not work, but I am most interested in hearing reasons why it (or this post) could backfire. If someone offers convincing reasons that it could, or this post is heavily downvoted, I will delete it.
I haven't come across anyone else attempting to make these arguments (to a future AGI, about why it should preserve humanity), but I would be interested to hear about any such attempts if you know of them.