I have recently found myself in the situation of having to explain AI Safety to someone who has never heard of it before. I think I have polished a simple three-sentence explanation that got me a couple of "wow, that does seem important" reactions. The basic idea is:

It is quite a cool scientific aspiration to build Artificial General Intelligence, because it could be useful for so many things. But just as it is not feasible to specify each individual action the system might take, we also cannot specify a single objective for the system to pursue. AI Safety is about making AI systems that learn what we want (and carry it out), and I want to work on it because there are very few people trying to understand this important problem.

Although this is a somewhat restrictive definition, note that you don't need to use "existential risk" or any fancy wording.
