Jörg Weiß

10 karma · Joined Apr 2023


I'm not sure we can dismiss Yann LeCun's statements so easily, mostly because I don't understand how Meta works. How influential is he there? Does he set general policy around things like AI risk?

I feel there is this unhealthy dynamic where he represents the leader of some kind of "anti-doomerism" – and I'm under the impression that he and his Twitter crowd don't engage with the arguments of the debate at all. I'm pretty much looking at this from the outside, but LeCun's arguments seem to lag far behind the debate. If he drives Meta's AI safety policy, I'm honestly worried about that. Meta just doesn't seem to be an insignificant player.

Thanks, this post is pretty relevant to me. I'm currently very interested in trying to understand Yann LeCun better. It's a sub-project in my attempt to form an opinion on AI and AI risk in general. Yann's Twitter persona really gets under my skin, so I decided to look at him more broadly and try to see what I think when not perceiving him through the lens of the most toxic communication environment ever devised by mankind ;)

I'm trying to understand how he can come to conclusions that seem so different from nearly everyone else's in the field. Am I suffering from selection bias? (EA was one of the first things/perspectives I found when looking at the topic; I'm mostly branching out from here and feel somewhat bubbled in.)

Any recommendations on what to read to get the clearest, strongest version of Yann's thinking?


P. S.: Just 4h ago he tweeted this: "No one is 'unconcerned'. But almost everyone thinks making superhuman AI safe is eminently doable. And almost no one believes that doing so imperfectly will spell doom on humanity. It will cause a risk that is worth the benefits, like driving, flying, or paying online."

It really feels like he is tweeting from a different reality than mine.

Two suggestions:

  1. Shorten the text and go even easier on the language/complexity: I'm an outsider to the AI field and this was a great overview for me. But: I got up at 3 am to watch AlphaGo beat Lee Sedol on livestream, and I've spent the last three months in full immersion mode, reading up on things every day. I'd say I'm a highly interested, somewhat knowledgeable outsider. If your intended audience is people less driven to the topic, it might be worth both shortening the text and spending more time on key points. Example: I'm not sure all the players really need to be introduced here. At the same time, I'm under the impression people often don't really grasp what it means that neural nets are black boxes nobody understands. If you look at the discourse on Reddit, even people highly interested in the topic often conceptualize neural nets as being programmed. I feel people really struggle with the idea that engineers don't understand their own product.
  2. Strengthen the paragraph about what people can do. I would give it a shout-out in the introduction, separate from sections 7–9. Things I would add:
    1. Use the tools available, and use them all the time! If an AI apocalypse happens, it won't help. But it's almost certain that the job market will be highly impacted, and familiarity with current tools just might help you stay relevant on the job market for longer. Practical experience will also help you better appreciate the strengths and limitations of the current tools.
    2. Be in favor of AI alignment and act accordingly: support research financially, and bring up the topic in conversations and at political events (but don't inject it into partisan settings).