Reflection on a university discussion about the future and safety of AI:
Background: ~20 people, 2 assistant professors, and the rest BSc- and MSc-level AI/ML students at Charles University.
One of the professors sent us the FLI letter, the petition by LAION (in my view accelerationist and insufficiently responsive to the issues raised by the AI x-risk community), and On the Dangers of Stochastic Parrots by Bender et al. Supplemental materials were criticisms of longtermism (by Torres and Thorn).
First we gave reasons for and against each petition. The professor, in my view, framed the discussion as: FLI is a suspicious organization (because of its longtermist ties), x-risk concerns are at most as important as the issues raised in Stochastic Parrots (bias, environmental costs), and academics tend to support LAION.
In spite of this framing, after a one-hour discussion more than half of the students were convinced that AI x-risk is a serious issue, and rejected the claim that the issues from Stochastic Parrots are equally important.
No one changed their mind about LAION.
One student (who was in a group discussion with me) changed his mind in favor of the FLI letter.
I showed the professors the AI Impacts survey, and they found it interesting.
Some people seemed interested in the Lex Fridman podcast episode with Eliezer Yudkowsky.
Almost none of the students had read it (it wasn't required, and seemed a bit out of place alongside the other materials).
One of the professors seemed on board with Torres's arguments, such as the claim that longtermists will justify anything (especially the interests of "the rich") by invoking bogus probabilities and quantities of future people.
(In the end, we exchanged a few words about how funny it is that our Twitter bubbles lead each of us to think that opposite positions are the consensus.)