I'm not sure we can dismiss Yann LeCun's statements so easily, mostly because I do not understand how Meta works. How influential is he there? Does he set general policy around things like AI risk?
I feel there is an unhealthy dynamic where he has become the figurehead of a kind of "anti-doomerism" – and I'm under the impression that he and his Twitter crowd do not engage with the arguments of the debate at all. I'm pretty much looking at this from the outside, but LeCun's arguments seem to be far behind the state of the discussion. If he drives Meta's AI safety policy, that honestly worries me. Meta is hardly an insignificant player.
Thanks, this post is pretty relevant to me. I'm currently very interested in trying to understand Yann LeCun better. It's a sub-project in my attempt to form an opinion on AI and AI risk in general. Yann's Twitter persona really gets under my skin, so I decided to look at him more broadly and try to see what I think when not perceiving him through the lens of the most toxic communication environment ever devised by mankind ;)
I'm trying to understand how he can come to conclusions that seem so different from nearly everyone else's in the field. Am I suffering from a selection bias? (EA was one of the first perspectives I found when looking into the topic; I'm mostly branching out from there and feel somewhat bubbled in.)
Any recommendations on what to read to get the clearest, strongest version of Yann's thinking?
P.S.: Just 4h ago he tweeted this: "No one is 'unconcerned'. But almost everyone thinks making superhuman AI safe is eminently doable. And almost no one believes that doing so imperfectly will spell doom on humanity. It will cause a risk that is worth the benefits, like driving, flying, or paying online."
It really feels like he is tweeting from a different reality than mine.