This article is evidence that Elon Musk will focus on the "wokeness" of ChatGPT rather than doing something useful about AI alignment. Still, we should keep in mind that news reports are very often incomplete or simply false.
Also, I can't access the article.
Related: I've recently created a prediction market about whether Elon Musk is going to do something positive for AI risk (or at least not do something counterproductive) according to Eliezer Yudkowsky's judgment: https://manifold.markets/Writer/if-elon-musk-does-something-as-a-re?r=V3JpdGVy
It would probably be really valuable if people could forecast when AGI will be built/deployed to within roughly one year, as that could inform many people's career planning and policy analysis (e.g., when to clamp down on export controls). In this regard, an error/uncertainty of three years could make a huge difference.
Yeah, being able to have such forecasting precision would be amazing. It's too bad it's unrealistic (what forecasting process would enable such magic?). It would mean we could see exactly when it's coming and make extremely tailored plans that could be super high-leverage.
This post was an excellent read, and I think you should publish it on LessWrong too.
I have the intuition that, at the moment, getting an answer to "how fast is AI takeoff going to be?" has the most strategic leverage, and that this question, together with timelines, influences the probability that we go extinct due to AI the most (although it seems to me that we're less uncertain about timelines than about takeoff speeds). I also think that a big part of why the other AI forecasting questions are important is that they inform takeoff speeds (and timelines). Do you agree with these intuitions?
Relatedly: If you had to rank AI-forecasting questions according to their strategic importance and influence on P(doom), what would those rankings look like?
One class of examples could be when there's an adversarial or "dangerous" environment. For example:
Another class of examples could be when a given topic requires complex technical understanding. In that case, a community might want to see only posts put forward by people who have demonstrated a certain level of technical knowledge, and it could use EigenKarma to select them. Of course, there must be some way to enable the discovery of new users, though how much of a problem this is depends on implementation details. For example, you could have an unfiltered tab alongside a filtered one, or you could give extra visibility to new users; there are many potential solutions (one possible setup is sketched below).
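To make the "filtered tab plus new-user visibility" idea concrete, here is a minimal sketch of one way such a feed could work. Everything here (the `Post` type, the `eigenkarma` stub, the thresholds) is a hypothetical placeholder of mine, not part of any actual EigenKarma implementation:

```python
# Hypothetical sketch: a feed with an unfiltered tab and a trust-filtered tab,
# where new accounts get a temporary visibility boost so they can be discovered.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    title: str
    author_age_days: float  # days since the author joined


def eigenkarma(viewer: str, author: str) -> float:
    """Placeholder for the viewer-specific trust score EigenKarma would provide."""
    return 0.0  # stub


KARMA_THRESHOLD = 10.0  # minimum trust score to appear in the filtered tab
NEW_USER_BOOST = 5.0    # extra visibility for recently created accounts
NEW_USER_DAYS = 30      # how long an account counts as "new"


def visible_in_filtered_tab(viewer: str, post: Post) -> bool:
    score = eigenkarma(viewer, post.author)
    if post.author_age_days < NEW_USER_DAYS:
        score += NEW_USER_BOOST  # let new users surface despite low trust
    return score >= KARMA_THRESHOLD


def build_feed(viewer: str, posts: list[Post], filtered: bool) -> list[Post]:
    """The unfiltered tab shows everything; the filtered tab applies the trust check."""
    if not filtered:
        return posts
    return [p for p in posts if visible_in_filtered_tab(viewer, p)]
```

The particular numbers and the boost mechanism are arbitrary choices made for illustration; the point is only that discovery of new users and trust-based filtering can coexist in the same interface.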
Rational Animations has a subreddit: https://www.reddit.com/r/RationalAnimations/
I hadn't advertised it until now because I had to find someone to help moderate it.
I want people here to be among the first to join, since I expect that having EA Forum users early on will help foster a good epistemic culture.