Matrice Jacobine

Student in fundamental and applied mathematics
183 karma · Joined · Pursuing a graduate degree (e.g. Master's)

Comments (30)

If there's no humans left after AGI, then that's also true for "weak general AI". Transformative AI is also a far better target for what we're talking about than "weak general AI".

The "AI Dystopia" scenario is significantly different from what PauseAI rhetoric is centered about.

PauseAI rhetoric is also very much centered on just scaling LLMs, without acknowledging the other ingredients of AGI.

Metaculus (which is significantly more bullish than actual AI/ML experts, and populated with rationalists/EAs) puts a <25% chance on transformative AI happening by the end of the decade and a <8% chance of that leading to the traditional AI-go-foom scenario, hence <2% p(doom) by the end of the decade. I can't find a Metaculus poll on this, but I would halve that to <1% for the probability that such transformative AI is reached by simply scaling LLMs.
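The estimate above is just a chain of conditional probabilities multiplied together; a minimal sketch (the numbers are the upper bounds quoted in the comment, not independent data):

```python
# Upper-bound estimates attributed to Metaculus in the comment above.
p_tai_by_2030 = 0.25       # P(transformative AI by end of decade)
p_foom_given_tai = 0.08    # P(AI-go-foom | transformative AI)

# Chained estimate: P(doom by 2030) < 0.25 * 0.08 = 0.02
p_doom_by_2030 = p_tai_by_2030 * p_foom_given_tai
print(p_doom_by_2030)  # 0.02

# Halving again for the extra condition that transformative AI is
# reached by simply scaling LLMs (the comment's own guess).
p_doom_llm_scaling = p_doom_by_2030 / 2
print(p_doom_llm_scaling)  # 0.01
```

Note that each factor is itself an upper bound, so the product is an upper bound on the joint probability only under the stated conditioning.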

To be clear, my point is that (1) even inside the environmental movement, calling for an immediate pause on all industry on the basis of the argument you're using is extremely fringe, and (2) the reputational costs incurred in 99% of worlds will themselves increase existential risk in the (far more likely) case that AGI happens when, or after, most experts think it will.

Industry regulations tend to be based on statistical averages (i.e., from a global perspective, on certainties), not on multiplications of subjective-Bayesian guesses. I don't think the general public's acceptance of industry regulations commits them to Pascal's-mugging-adjacent views. After all, 1% existential risk (or at least global catastrophic risk) from climate change, biodiversity collapse, or zoonotic pandemics seems plausible too. And given any realistic amount of risk aversion, it matters whether the remaining 99% of futures (even from a strictly strong-longtermist perspective) are improved by pausing, worse, by flippant militant advocacy for pausing built on alarmist slogans that will carry extreme reputational costs in the 99% of worlds where no x-risk from LLMs materializes!

Crucially, p(doom)=1% isn't the claim PauseAI protesters are making. Discussed outcomes should be distributed fairly over probable futures, if only to make sure your preferred policy is an improvement in most or all of them (this is where I would weakly agree with @Matthew_Barnett's comment).

Most surveys of AI/ML researchers (subject to significant selection effects and very high variance) indicate p(doom)s of ~10% (spread across a variety of global risks beyond the traditional AI-go-foom scenario), and (like Ajeya Cotra's report on AI timelines) a predicted AGI date in mid-century by one definition and in the next century by another.

Pausing LLM scaling above a given magnitude will do ~nothing for non-x-risk AI worries. Pausing any subcategory below that threshold (e.g. AI art generators, open-source AI) will do ~nothing for x-risk AI worries (and indeed will probably be a net negative).

Those are mostly meta-level epistemological/methodological critiques, but such critiques can still be substantive and are not reducible to mere psychologization of adversaries.

In addition to what @gw said on the public being in favor of slowing down AI, I'm mostly basing this on reactions to news about PauseAI protests on generic social media websites. The idea that scaling LLMs without further technological breakthroughs will for sure lead to superintelligence in the coming decade is controversial by EA standards, fringe by general AI-community standards, and roundly mocked by the general public.

If other stakeholders agree with the existential risk perspective, then that is of course great and should be encouraged. To develop further on what I meant (though see also the linked post): I am extremely skeptical that allying with copyright lobbyists is good by any EA/longtermist metric, when ~nobody thinks art generators pose any existential risk and big AI companies are already negotiating deals with copyright giants (or the latter are creating their own AI divisions, as with Adobe Firefly or Disney's new AI division), while independent EA-aligned research groups like EleutherAI are heavily dependent on the existence of open-source datasets.

https://nickbostrom.com/papers/astronomical-waste/

In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development.

However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.

Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.
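The "10 million years" figure in the quoted passage follows from a simple expected-value comparison; a minimal sketch, assuming (my assumption, consistent with the passage's "billions of years") a future lifespan on the order of 10^9 years:

```python
# Expected-value comparison behind Bostrom's "10 million years" figure.
# Assumption (mine): a reachable-future lifespan of ~1 billion years,
# the order of magnitude implied by "the lifespan of galaxies is
# measured in billions of years".
future_lifespan_years = 1e9
risk_reduction = 0.01  # one percentage point of existential risk

# A delay of d years forfeits d years of expected future value, while a
# risk reduction of r gains r * future_lifespan years in expectation.
# The break-even delay is therefore r * future_lifespan.
break_even_delay = risk_reduction * future_lifespan_years
print(break_even_delay)  # 10000000.0, i.e. 10 million years
```

A longer assumed lifespan only strengthens the conclusion, since the break-even delay scales linearly with it.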
