This is a special post for quick takes by JanPro. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Reflection on a university discussion about the future and safety of AI:

Background: ~20 people, 2 assistant professors, other BSc and MSc level students of AI/ML at Charles University

One of the professors sent us the FLI letter, the petition by LAION (in my view accelerationist, and one that insufficiently addresses the issues raised by the AI x-risk community), and On the Dangers of Stochastic Parrots by Bender et al. Supplemental materials were criticisms of longtermism (by Torres and Thorn).

First we gave reasons for and against each petition. In my view, the professor framed it as follows: FLI is a suspicious organization (because of its longtermist ties), x-risk issues are on the same level as or less important than the issues raised in Stochastic Parrots (bias, environmental concerns), and academics support LAION more.

  • the open-source development proposed by LAION resonated with the students as an obviously good thing (at the start, 16 said they would be in favor)
  • only 2 people (including me) would be in favor of the FLI letter

In spite of the framing, after a one-hour discussion, more than half of the students were convinced that AI x-risk is a serious issue, and rejected the claim that the issues from Stochastic Parrots are as important.

No one changed their mind about LAION.

One student (he was in a group discussion with me) changed his mind to be in favor of the FLI letter.

I showed the professors the AI Impacts survey and they found it interesting.

Some people seemed interested in the Lex Fridman podcast episode with Eliezer Yudkowsky.


I'm curious what they thought of Torres?

Almost none of the students read it (it wasn't required, and it seemed a bit out of place in the context of the other materials).

One of the professors seemed on board with Torres's arguments, such as the claim that longtermists will justify anything (especially the interests of "the rich") by arguing from bogus probabilities and quantities of future people.

(In the end we exchanged a few words about how funny it is that our Twitter bubbles lead us to think that opposite things are the consensus.)