ThinKing
Great discussion! I appreciate your post; it helped me form a more nuanced view of AI risk rather than subscribing to full-on doomerism.

I would, however, like to comment on one of your statements: "this is human stupidity NOT AI super intelligence. And this is the real risk of AI!"

I agree with this assessment. Moreover, it seems to me that this "human stupidity" problem, our inability to specify sufficiently good goals for AI, is exactly what the alignment field is trying to solve.

It is true that no computer program has a will of its own, and there is no reason to believe that some future superintelligent program will suddenly stop following its programmed instructions. However, our current models optimize for proxy goals that capture our intent only loosely (as in the example below), so we need to develop good methods for encoding our true intentions into these models.

I think it's best explained with an example: GPT-based chatbots are trained simply to predict the next word in a sequence, and it is not clear at a technical level how to extend such a narrow, specific objective so that it also respects broad, complex instructions like "don't agree with someone suicidal". Current alignment methods like RLHF help to some extent, but no existing method guarantees, for example, that a model will never validate someone's suicidal thoughts. This lack of guarantees and control over our current training algorithms, and therefore over our models, is the problem, and it seems to me that this is exactly what alignment research tries to solve.
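
To make concrete how narrow that training objective is, here is a minimal sketch of the next-token-prediction loss that GPT-style models are trained on. This is my own toy illustration, not code from the post or from any real GPT implementation: `TinyLM`, the vocabulary size, and the random batch of tokens are all placeholder assumptions. The point is simply that the only training signal is "make the next token likely"; nothing in the loss mentions intentions like "don't agree with someone suicidal".

```python
# Toy sketch of the next-token-prediction objective (assumed setup, not a real GPT).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLM(nn.Module):
    """A placeholder language model: embed tokens, project back to vocabulary logits."""
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        return self.proj(self.embed(tokens))    # logits: (batch, seq_len, vocab_size)

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(0, 100, (8, 16))         # toy batch of token ids
logits = model(tokens[:, :-1])                  # predict token t+1 from tokens up to t

# The entire training signal: cross-entropy on the next token.
loss = F.cross_entropy(
    logits.reshape(-1, logits.size(-1)),
    tokens[:, 1:].reshape(-1),
)
loss.backward()
optimizer.step()
```

RLHF and similar methods fine-tune on top of a model trained this way, nudging it toward outputs humans prefer, but that still does not give a formal guarantee about behaviours the model was never shown during preference training.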