Writer

Head and main scriptwriter @ Rational Animations
659 karma · Joined May 2021
www.youtube.com/RationalAnimations

Comments (68)

Rational Animations has a subreddit: https://www.reddit.com/r/RationalAnimations/

I hadn't advertised it until now because I had to find someone to help moderate it. 

I want people here to be among the first to join since I expect having EA Forum users early on would help foster a good epistemic culture.

Writer
2mo · 82

I think the photo of the Yoruba folks might be a bit misleading in the context of this post, and I wouldn't include it.

I'm not entirely sure if I agree, but I removed them out of an abundance of caution.

Edit: yeah, you are correct actually. 

Writer
3mo · 166

I wonder why performance on AP English Literature and AP English Language stalled.

Writer
3mo · 11

I was considering downvoting, but after looking at that page, maybe it's good not to have it copy-pasted.

[This comment is no longer endorsed by its author]
Writer
3mo · 20

This article is evidence that Elon Musk will focus on the "wokeness" of ChatGPT rather than do something useful about AI alignment. Still, we should keep in mind that news reports are very often incomplete or simply false.

Also, I can't access the article. 

Related: I've recently created a prediction market about whether Elon Musk is going to do something positive for AI risk (or at least not do something counterproductive) according to Eliezer Yudkowsky's judgment: https://manifold.markets/Writer/if-elon-musk-does-something-as-a-re?r=V3JpdGVy

Writer
3mo · 74

Hard agree, the shoggoth memes are great.

Writer
3mo · 10

It would probably be really valuable if people could forecast the ability to build/deploy AGI to within roughly 1 year, as it could inform many people’s career planning and policy analysis (e.g., when to clamp down on export controls). In this regard, an error/uncertainty of 3 years could potentially have a huge impact.

Yeah, having that kind of forecasting precision would be amazing. It's too bad it's unrealistic (what forecasting process would enable such magic?). It would mean we could see exactly when AGI is coming and make extremely tailored plans that could be super high-leverage.

Writer
3mo · 129

This post was an excellent read, and I think you should publish it on LessWrong too.

I have the intuition that, at the moment, getting an answer to "how fast is AI takeoff going to be?" has the most strategic leverage, and that this topic, together with timelines, most influences the probability that we go extinct due to AI (although it seems to me that we're less uncertain about timelines than about takeoff speeds). I also think that a big part of why the other AI forecasting questions are important is that they inform takeoff speeds (and timelines). Do you agree with these intuitions?

Relatedly: If you had to rank AI-forecasting questions according to their strategic importance and influence on P(doom), what would those rankings look like?

Writer
4mo · 74

One class of examples could be when there's an adversarial or "dangerous" environment. For example:

  • Bots generating low-quality content.
  • Voting rings.
  • Many newcomers entering at once and greatly outnumbering the locals. Example: I wouldn't be comfortable directing many people from Rational Animations to the EA Forum and LW, but a karma system based on EigenKarma might make this much less dangerous.

Another class of examples could be when a given topic requires some complex technical understanding. In that case, a community might want to see only posts from people who have demonstrated a certain level of technical knowledge, and it could use EigenKarma to select them. Of course, there must be some way to enable the discovery of new users, but how much of a problem this is depends on implementation details. For example, you could have an unfiltered tab and a filtered one, or you could give higher visibility to new users. There could be many potential solutions.
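
To make the idea concrete, here's a minimal sketch that assumes EigenKarma-style trust propagates through the endorsement/upvote graph roughly like personalized PageRank; the function, parameters, and threshold below are hypothetical illustrations, not the actual EigenKarma implementation.

```python
# Minimal sketch of EigenKarma-style trust scoring (hypothetical, not the
# actual implementation): trust flows from a set of trusted "seed" users
# through the endorsement/upvote graph, roughly like personalized PageRank.

def trust_scores(endorsements, seed_users, damping=0.85, iterations=50):
    """endorsements: dict mapping each user to the list of users they upvote.
    seed_users: users whose judgment anchors the trust scores."""
    users = set(endorsements) | {u for ts in endorsements.values() for u in ts} | set(seed_users)
    seed_weight = 1.0 / len(seed_users)
    scores = {u: (seed_weight if u in seed_users else 0.0) for u in users}

    for _ in range(iterations):
        # Seed users retain a share of trust each round; everyone else starts at zero.
        new_scores = {u: ((1 - damping) * seed_weight if u in seed_users else 0.0)
                      for u in users}
        for user, endorsed in endorsements.items():
            if not endorsed:
                continue
            # Each user passes a damped share of their trust to whoever they endorse.
            share = damping * scores[user] / len(endorsed)
            for target in endorsed:
                new_scores[target] += share
        scores = new_scores
    return scores


# Hypothetical usage: only show posts from authors above a trust threshold.
endorsements = {"alice": ["bob"], "bob": ["carol"], "carol": ["bob"]}
scores = trust_scores(endorsements, seed_users=["alice"])
visible_authors = [u for u, s in scores.items() if s > 0.05]
```

Under this kind of scheme, an unfiltered tab would simply skip the threshold, and new users could be discovered by temporarily boosting accounts that don't yet have incoming endorsements.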
