RomanHauksson

Computer Science student @ University of Texas at Dallas
146 karma · Joined Aug 2022 · Pursuing an undergraduate degree · Dallas, TX, USA
roman.computer

Bio

Participation (5)

Organizing my university's EA student group and self-studying to become an AI alignment researcher. I want to maximally reduce the risk of superintelligent agents killing everyone and/or of sentient digital systems experiencing astronomical amounts of suffering. Also interested in entrepreneurship and changing institutions to use better-designed incentive systems (see mechanism design).

Comments (23)

I agree that my answer isn't very useful by itself, without any object-level explanations, but I do think it is useful to bring up the anthropic principle if it hasn't been mentioned already. In hindsight, I think my answer comes off as unnecessarily dismissive.

Isn't the opposite end of the p(doom)–longtermism quadrant also relevant? E.g. my p(doom) is 2%, but I take the arguments for longtermism seriously and think that's a high enough chance to justify working on the alignment problem.

80,000 Hours has an article with advice for new college students, and a section towards the end touches on your question.

Make sure to check out OpenPhil's undergraduate scholarship if you haven't yet.

Here are a couple of excerpts from relevant comments on the Astral Codex Ten post about the tournament. From the anecdotes, it seems as though this tournament had some flaws in execution, namely that the "superforecasters" weren't all that. But I want to see more context if anyone has it.

From Jacob:

I signed up for this tournament (I think? My emails relate to a Hybrid Forecasting-Persuasion tournament that at the very least shares many authors), was selected, and partially participated. I found this tournament from it being referenced on ACX and am not an academic, superforecaster, or in any way involved or qualified whatsoever. I got the Stage 1 email on June 15.

From magic9mushroom:

I participated and AIUI got counted as a superforecaster, but I'm really not. There was one guy in my group (I don't know what happened in other groups) who said X-risk can't happen unless God decides to end the world. And in general the discourse was barely above "normal Internet person" level, and only about a third of us even participated in said discourse. Like I said, haven't read the full paper so there might have been some technique to fix this, but overall I wasn't impressed.

Same reason we haven't been destroyed by a nuclear apocalypse yet: if we had, we wouldn't be here talking about it.

As for the question "why haven't we encountered a power-seeking AGI from elsewhere in the universe that didn't destroy us?", I don't know.

I can look into how to set up a torrent link tomorrow and let you know how it goes!
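
For reference, here's a minimal sketch of one way to generate a .torrent file using the libtorrent Python bindings. The data directory, tracker URL, and output filename below are placeholders, and this may not be the exact approach I end up using:

```python
# Minimal sketch: build a .torrent file with the libtorrent Python bindings.
# "report-data/" and the tracker URL are placeholders for illustration only.
import libtorrent as lt

fs = lt.file_storage()
lt.add_files(fs, "report-data")        # index every file under this directory
t = lt.create_torrent(fs)
t.add_tracker("udp://tracker.opentrackr.org:1337/announce")
t.set_creator("RomanHauksson")         # optional metadata
lt.set_piece_hashes(t, ".")            # hash pieces; path is the parent of "report-data"

with open("report-data.torrent", "wb") as f:
    f.write(lt.bencode(t.generate()))  # serialize the metadata to a .torrent file
```

The resulting file can then be seeded from any BitTorrent client, and a magnet link can be derived from its info-hash.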

Can we set up a torrent link for this?

Rational Animations is probably the YouTube channel the report is referring to, in case anyone's curious.

Where did you copy the quote from?
