If an anti-AI backlash gets formalized into strong laws and regulations against AGI development, leading governments could make it prohibitively difficult, costly, and risky to develop AGI. This doesn’t necessarily require a global totalitarian government panopticon monitoring all computer research. Instead, the moral stigmatization automatically imposes the panopticon. If most people in the world agree that AGI development is evil, they will be motivated to monitor their friends, family, colleagues, neighbors, and everybody else who might be involved in AI. They become the eyes and ears ensuring compliance. They can report evil-doers (AGI developers) to the relevant authorities – just as they would be motivated to report human traffickers or terrorists. And, unlike traffickers and terrorists, AI researchers are unlikely to have the capacity or willingness to use violence to deter whistle-blowers from whistle-blowing.

Something to add is that this sort of outcome can be augmented/bootstrapped into reality with economic incentives: make it financially risky to work on developing AGI-like systems while simultaneously rewarding those who report such work -- and again, without any sort of nightmare global-world-government totalitarian thought-police panopticon (the spectre of which certain AI accelerationists commonly invoke as a reason not to regulate/stop work towards AGI).

These two posts (by the same person, I think) give an example of a scheme like this (ironically inspired by Hanson's writings on fine-insured-bounties): https://andrew-quinn.me/ai-bounties/ and https://www.lesswrong.com/posts/AAueKp9TcBBhRYe3K/fine-insured-bounties-as-ai-deterrent

Some things worth noting that aren't in either of those posts (though they may be in other writings by the author[s]):

  • the technical capabilities for robust, decentralized coordination that creates and responds to real-world monetary incentives have improved drastically in the past decade. It is an incredibly hackneyed phrase, but... cryptocurrency does provide a scaffold onto which such systems can be built (a toy sketch of the incentive flow follows this list).

  • even putting aside the extinction/x-risk stuff, the median person has financial incentives to support systems which can peaceably yet robustly deter the creation of AI systems that would take any job they could get ("AGI") and thereby leave them in an abyssal state of dependence: no income, no stake, and no meaningful role in society for the rest of their life.
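To make the incentive structure concrete, here's a minimal toy sketch of how a fine-insured-bounty scheme might cash out, in the spirit of the posts linked above. Every number, name, and parameter below is hypothetical and purely illustrative -- this is not the authors' actual scheme, just one way the arithmetic could look:

```python
# Toy sketch of a fine-insured-bounty scheme (hypothetical numbers throughout).
# Idea: anyone caught doing the prohibited work pays a large fine, a big chunk
# of which is routed to whoever reported them, making reporting lucrative and
# the prohibited work a negative-expected-value bet.

from dataclasses import dataclass

@dataclass
class Params:
    fine: float = 10_000_000.0           # fine levied on a convicted AGI developer
    bounty_share: float = 0.5            # fraction of the fine paid to the reporter
    detection_prob: float = 0.3          # chance a given project gets reported & proven
    project_payoff: float = 2_000_000.0  # what the developer hoped to gain

def reporter_reward(p: Params) -> float:
    # Reporting is cheap, so the reporter's upside is essentially the bounty.
    return p.bounty_share * p.fine

def developer_expected_value(p: Params) -> float:
    # Expected value of pursuing the prohibited project under the scheme.
    return p.project_payoff - p.detection_prob * p.fine

if __name__ == "__main__":
    p = Params()
    print(f"Reporter's bounty if a case sticks:    ${reporter_reward(p):,.0f}")
    print(f"Developer's expected value of project: ${developer_expected_value(p):,.0f}")
    # With these made-up numbers the project has negative expected value
    # (-$1,000,000), while reporting it pays $5,000,000 -- the whole point.
```

The mechanism only deters if the detection probability and fine are large enough that the developer's expected value goes negative; the bounty is what recruits the "eyes and ears" described in the quoted passage.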

I'd recommend looking into vitamin K2 (along with the other fat-soluble vitamins involved) and Weston Price's work. Some of that stuff goes a bit into quackery territory (like the massive fish oil consumption), but I think the case for the utility of vitamin K2 for bone/teeth health, along with its possible underconsumption in modern western diets (due to lack of eating liver/etc), is defensible.

The main challenge at that point will be to keep the anti-AI backlash peaceful and effective, rather than counter-productively violent. This will basically require 'just the right amount' of moral stigmatization of AI: enough to pause it for a few decades, but not so much that AI researchers or AI business leaders get physically hurt.

I agree with keeping it peaceful and effective, but I don't think that trying to calibrate "just the right amount" is really feasible or desirable -- at least not the way that EA/LW-style approaches to AGI risk have handled this, which very easily leads to totalizing fear/avoidance/endless rumination/scrupulosity around doing anything which might be too unilateral or rash. The exact same sentiment is expressed in this post and the post it's replying to, so I am confident this is very much a real problem: https://twitter.com/AISafetyMemes/status/1661853454074105858

"Too unilateral or rash" is not a euphemism for "non-peaceful": I really do specifically mean that in these EA/LW/etc circles there's a tendency to have a pathological fear (that can only be discharged by fully assuaging the scrupulosity of oneself and one's peers) of taking decisive impactful action.

To get a bit more object-level, I believe it is cowardice and pathological scrupulosity to not take a strong assertive line against very dangerous work (GoF virology on pandemic pathogens, AGI/ASI work, etc) because of fears that some unrelated psychotic wacko might use it as an excuse to do something violent. Frankly, if unstable wackos do violent things then the guilt falls quite squarely on them, not on whatever internet posts they might have been reading.

"Don't upset the AGI labs" and "don't do PR to normies about how the efforts towards AGI/ASI explicitly seek out human obsolescence/replacement/disempowerment" and the like feel to me like outgrowths of the same dynamic that led to the pathological (and continuing!) aversion to being open about the probable lab origin of Covid because of fears that people (normies, wackos, political leaders, funding agencies alike) might react in bad ways. I don't think this is good at all. I think that contorting truth, misrepresenting, and dissembling in order to subtly steer/manipulate society and public/elite sentiment leads to far worse outcomes than the consequences of just telling the truth.

To get very object-level, the AGI-accelerationist side of things does not hold themselves to any sort of scrupulosity or rumination about consequences. If anything, that side of things is defined by reckless and aggressive unilateral action; both in terms of discourse and capabilities development. Unilateral to the point that there is a nontrivial contingent of those working towards AGI who openly cheer that AI progress is so fast that political systems cannot react fast enough before AGI happens.

pmarca's piece about AI (published while I was writing this), "Why AI Will Save the World", has the thesis that fears about AI are the outgrowth of "irrational" "hysterical" "moral panic" from Luddite economic innumerates, millenarianist doom-cultists, and ever-present hall-monitors, and that accelerating AI "as fast and aggressively" as possible is a "moral obligation"; it also strongly insinuates that those not on board act as enablers of the PRC's dreams of AI-fueled world domination.

There is, of course, no counterargument to the contention that at a certain level of AGI capability the costs of human involvement will exceed the value of human involvement, and that it will then be more efficient to simply remove humans from the loop (gains from trade / comparative advantage / productivity growth didn't save draught horses from the glue factory when tractors/trucks/cars rolled around) -- which leads to near-total human technological unemployment/disempowerment. There is no mention that the work towards AGI explicitly and openly seeks to make the overwhelming majority of human work economically unviable. There is no mention of any of the openly-admitted motivations, consequences, and goals of AGI research which are grievously opposed to most of humanity (those which I had highlighted in my previous comment).
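To spell out the underlying arithmetic (with entirely made-up numbers, just to illustrate the shape of the argument): a human stays in the loop only while their marginal contribution on top of the machine exceeds the all-in cost of keeping them there; as AGI capability grows, that marginal contribution shrinks toward zero while the cost floor doesn't, so at some point the rational headcount is zero regardless of comparative advantage.

```python
# Toy illustration of the "horses vs. tractors" point (all numbers hypothetical).
# A human is kept in the loop only while marginal_product >= employment_cost.

def worth_keeping_human(marginal_product: float, employment_cost: float) -> bool:
    """True if keeping a human in the loop is economically rational."""
    return marginal_product >= employment_cost

employment_cost = 50_000.0  # wages, training, management, error-handling, etc.

# As AGI capability rises, the human's *marginal* contribution on top of the
# machine shrinks, even though the human's absolute skill never changed.
for human_marginal_product in (120_000.0, 60_000.0, 30_000.0, 5_000.0):
    keep = worth_keeping_human(human_marginal_product, employment_cost)
    print(f"marginal product ${human_marginal_product:>9,.0f} -> keep human in loop: {keep}")

# Once marginal product falls below the cost floor, the answer flips to False
# and stays False -- comparative advantage doesn't rescue the horse.
```

This is the same arithmetic that emptied the stables: the horse's absolute abilities never declined, only its marginal value relative to its upkeep.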

But that sort of piece doesn't need any of that, since the point isn't to steelman a case against AI doomers, but to degrade the credibility of AI doomers and shatter their ability to coordinate by depicting them as innumerates, Luddites, cultists, disingenuous, or otherwise useful idiots of the PRC.

I cannot help but see the AGI-accelerationist side of things winning decisively, soon, and irreversibly if those who are opposed continue to be so self-limitingly scrupulous about taking action because of incredibly nebulous fears. At some point those who aren't on-board with acceleration towards AGI/ASI have to start assertively taking the initiative if they don't want to lose by default.

Again, to be totally explicit and clear, "assertively taking the initiative" does not mean "violence". I agree with keeping it peaceful and effective. But it does mean things like "start being unflinchingly open about the true motivations, goals, and likely/intended/acceptable consequences of AGI/ASI research" and "stop torturing yourselves with scrupulosity and fears of what might possibly conceivably go wrong".

My guess: The point at which “AI took my job” changes from low-status to an influential rallying cry is the point when a critical mass of people “wake up” to the fact that AGI is going to take their jobs (in fact, everyone’s) and that this will happen in the near future.

My fear is that there won't be enough time in the window between "critical mass of people “wake up” to the fact that AGI is going to take their jobs (in fact, everyone’s)" and AGI/ASI actually being capable of doing so (which would nullify human social/economic power). To be slightly cynical about it, I feel like the focus on doom/foom outcomes ends up preventing the start of a societal immune response.

In the public eye, AI work that attempts to reach human-level and beyond-human-level capabilities currently seems to live in the same category as Elon's Starship/Super Heavy adventures: an ambitious eccentric project that could cause some very serious damage if it goes wrong -- except with more at stake than a launchpad. All the current discourse is downstream of this: opposition towards AGI work thus gets described as pro-stagnation / anti-progress / pro-[euro]sclerosis / anti-tech / anti-freedom, and put in the same slot as anti-nuclear-power environmentalists, anti-cryptocurrency/anti-encryption efforts, etc.

There's growing public realization that there are ambitious eccentric billionaires/corporations working on a project which might be Really Dangerous If Things Go Wrong -- "AI researchers believe that super-powerful AI might kill us all, we should make sure it doesn't" is entering the Overton window -- but this ignores the cataclysmic human consequences even if things go right, even if the mythical "alignment with human values" is reached (which human values? which humans? how is this supposed to be durable against AI systems creating AIs, or against economic selection pressures to extract more profit and resources?).

Even today, "work towards AGI explicitly and openly seeks to make the overwhelming majority of human work economically unviable" is still not in the Overton window of what it's acceptable/high-status to express, fear, and coordinate around, even though "nobody finds your work to be valuable, it's not worth it to train you to be better, and there are better replacements for what you used to do" is something which:

  1. most people can easily understand the implications of (people in SF can literally go outside and see what happens to humans that are rendered economically unviable by society)

  2. is openly desired by the AGI labs: they're not just trying to create better protein-folding AIs, they're not trying to create better warfighting or missile-guidance AIs. They're trying to make "highly autonomous systems that outperform humans at most economically valuable work". Says it right on OpenAI's website.

  3. is not something that the supposed "alignment" work is even pretending to be able to prevent.

I think a concerted effort to make the public aware of some of the underlying motivations, consequences, and goals of AGI research would likely trigger public backlash:

  • the singularity-flavored motivations of many AGI researchers: creation of superior successor beings to realize quasi-religious promises of a heavenly future, etc

  • the economic-flavored motivations of AGI labs: "highly autonomous systems that outperform humans at most economically valuable work". This is literally on OpenAI's website

  • increasing the likelihood of total human extinction is regarded as acceptable collateral damage in pursuit of the above goals. Some are even fine with human extinction if it's replaced by sufficiently-advanced computers that are deemed to be able to experience more happiness than humans can.

  • human economic/social disempowerment is sought out in pursuit of these goals, is realistically achievable well within our lifetime, and is likely to occur even with "aligned" AGI

  • AI "alignment" is theater: even the rosiest visions of futures with AGI are ones in which humans are rendered obsolete and powerless, and at the mercy of AGI systems: at best an abyssal, inescapable, perpetual pseudo-childhood with no real work to do, no agency, no meaningful pursuits nor purpose -- with economic and social decision-making and bargaining power stripped away

  • post-scarcity of human-level intelligence directly translates to human work being worthless, and "human work output is valuable" underpins whatever stability and beneficence have existed in every social and economic system we have and have ever had. AGI shreds these assumptions and worsens the situation: it leaves us at the mercy of powerful systems at whose hands we are entirely powerless.

  • specifically, the promises of AGI-granted universal basic income used to legitimate the AGI-driven economic cataclysm are unlikely to be upheld: with human labor output being worthless, there's nothing humans can do if the promise is reneged on. What are we going to do, go on strike? AGI's doing all the work. Wage an armed insurrection? AGI's a powerful military tool. Vote the AGI overlords out of office? AGI shreds the assumptions that make democracy a workable system (and not solely a box-ticking theater like it is in totalitarian states).

  • the accelerationist and fundamentalist-utilitarian ideologies driving and legitimating this work place vanishingly little value on human power/agency -- or even continued human existence. Some regard human desires for power/agency/meaningful work/existence as pathological, outdated, and to be eliminated by psychotechnological means, in order for humans to better cope with being relegated to the status of pets or zoo animals.

  • many of those working towards AGI openly revel in the thought of humanity's obsolescence and powerlessness in the face of the systems they're building, cheer that AI progress is so fast that political systems cannot react fast enough before AGI happens, and are stricken by a lack of faith in humanity's ability to cope with the world and its problems -- to be solved by domination and/or replacement by AGI.

  • the outcomes of enduring, fruitful cooperation between AGI and humans (because of alleged comparative advantage) are laughably implausible: at a certain level of AGI capabilities, the costs of human involvement in AGI systems will be greater than the value of human involvement, and it will be more efficient to simply remove humans from the loop. By analogy, there's no horse-attachment point on even the first automobile, because there are no gains to be had from having a horse in the loop; the same will be true of humans and AGI systems. To put it crudely, "centaurs" get sent to the glue factory too.

  • strategic deceit is used to obscure both the "total technological unemployment" cluster of motivations and the "singularity" cluster of motivations: for instance, the arguments that accelerating AGI is necessary for surviving problems which don't actually need AI (climate change, pandemics, asteroid impacts, etc), and that the risks of AGI are therefore justified. "We need to build AGI so I can become immortal in a really powerful computer because I don't want to die" doesn't quite have the same ring to it.

  • RLHF'ing language models is not even "alignment": it's mostly meant to evade society cracking down on the AGI industry by making it unlikely that their language models will say politically sensitive things or obscenities. This matters because it is remarkably easier to crystallize political crackdowns on entities whose products wantonly say racial slurs than it is to crystallize political crackdowns on entities that are working towards the creation of systems that will render humans obsolete and powerless. The Lovecraftian absurdity is withering.

"AI took my job" is low-status ("can't adapt? skill issue") to admit seriously thinking about, but even in the dream AGI/ASI-is-aligned scenarios, the catastrophic consequences of AGI/ASI will likely look like "AI took my job" extrapolated to the entire human species: full-spectrum human obsolescence, total technological unemployment, loss of human socioeconomic bargaining power, loss of purpose, loss of human role in keeping civilization running, and a degradation of humanity to perma-NEETs tranquilized via AI-generated AR-media/games/pornography, etc.

To put it very bluntly, the overwhelming majority of humanity doesn't want to be aboard a metaphorical Flight 93 piloted by nihilists with visions of a paradisiacal post-scarcity post-human future dancing in their heads as they make the final turns towards the singularity.