
JanPro

13 karma · Joined Apr 2022 · Pursuing an undergraduate degree · Prague, Czechia
janpro.dev

Bio

Participation
5

How others can help me

Looking to skill up in x-risk research; seeking summer 2023 internship opportunities and employment from summer 2024.

Posts
1

Sorted by New
1 · JanPro · 1y ago · 1m read

Comments
5

No, not really; I'm confused myself and wanted to prompt those who know more to reply and clarify. (James Herbert has already done so a bit, and I hope more direct information will surface.)

Many of the EAs I know who work in policy feel like they ought to keep their involvement in EA a secret. I once attended an event in Brussels where the host asked me to hide the fact I work for EA Netherlands. This was because they were worried their opponents would use their links with EA to discredit them. This seems like a very bad state of affairs. 

epistemic status: gossip

I've heard it's quite harmful to label oneself as an EA in the EU policy space after the Politico article.

I have the opposite stance: it's a cool and cute shorthand, so I'd like it to become the widely accepted meaning of "rat".

Almost none of the students read it (it wasn't required, and seemed a bit out of place in the context of the other materials).

One of the professors seemed on board with Torres's arguments, such as the claim that longtermists will justify anything (especially the interests of "the rich") by appealing to bogus probabilities and vast numbers of future people.

(In the end we exchanged a few words about how funny it is that our Twitter bubbles lead each of us to think opposite things are the consensus.)

Reflection on a university discussion about the future and safety of AI:

Background: ~20 people at Charles University: 2 assistant professors, the rest BSc- and MSc-level students of AI/ML.

One of the professors sent us the FLI letter, the petition by LAION (in my view accelerationist, and insufficiently addressing the issues raised by the AI x-risk community), and On the Dangers of Stochastic Parrots by Bender et al. Supplemental materials were criticisms of longtermism (by Torres and Thorn).

First we gave reasons for and against each petition. In my view, the professor framed it as follows: FLI is a suspicious organization (because of its longtermist ties), x-risk issues are at most as important as the issues raised in Stochastic Parrots (bias, environmental concerns), and academics support LAION more.

  • The open-source development proposed in the LAION petition resonated with the students as an obviously good thing (at the start, 16 said they would be in favor).
  • Only 2 people (including me) were in favor of the FLI letter.

Despite the framing, after an hour of discussion more than half of the students were convinced that AI x-risk is a serious issue and rejected the claim that the Stochastic Parrots issues are equally important.

No one changed their mind about LAION.

One student (who was in a group discussion with me) changed his mind and came out in favor of the FLI letter.

I showed the professors the AI Impacts survey, and they found it interesting.

Some people seemed interested in the Lex Fridman podcast episode with Eliezer Yudkowsky.