To summarize,
- When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others.
- Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install.
- Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not to endorse a proposal merely because it's "better than nothing", unless it's also literally the only chance we get to delay AI.
- In particular, arbitrary data restrictions risk preventing researchers from having access to good data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general.
It appears we are in the midst of a new wave of neo-luddite sentiment.
Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate for more restrictive IP laws around AI-generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art.
Personally, this sentiment disappoints me. While I sympathize with the artists who stand to lose their income, I'm not persuaded by the general argument for restricting the technology. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely.
I expect most readers to agree with me on this point — that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view that I have recently heard some of my friends endorse: that it is nonetheless worth aligning with neo-luddites as a means of slowing down AI capabilities.
On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying "I agree with delaying AI, but not for that reason" then I would not be very concerned. As it happens, I agree with most of the arguments in Katja Grace's recent post about delaying AI in order to ensure existential AI safety.
Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their specific policies, and particular means of achieving them, in the hopes that it's "better than nothing" and might give us more time to solve alignment.
In addition to the alliance possibly being mildly dishonest, I'm quite worried it will be counterproductive on separate, purely consequentialist grounds.
If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could make sense to do whatever we can to impede the march of progress in the field, no matter what that looks like. Delaying AI gives us more time to reflect, debate, and experiment, which, prima facie, I agree is a good thing.
A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only one avenue of progress, the others will continue. Whether that's good depends critically on the details: what's being blocked, what isn't, and how.
One consideration, which has been pointed out by many before, is that blocking one avenue of progress may lead to an "overhang" in which the sudden release of restrictions leads to rapid, discontinuous progress, which is highly likely to increase total AI risk.
But an overhang is not my main reason for cautioning against an alliance with neo-luddites. Rather, my fundamental objection is that their specific strategy for delaying AI is not well targeted. Aligning with neo-luddites won't necessarily slow down the parts of AI development that we care about, except by coincidence. Instead of aiming simply to slow down AI, we should care more about ensuring favorable differential technological development.
Why? Because the constraints on AI development shape the type of AI we get, and some types of AIs are easier to align than others. A world that restricts compute will end up with different AGI than a world that restricts data. While some constraints are out of our control — such as the difficulty of finding certain algorithms — other constraints aren't. Therefore, it's critical that we craft these constraints carefully, to ensure the trajectory of AI development goes well.
Passing subpar regulations now — the type of regulations not explicitly designed to provide favorable differential technological development — might lock us into a bad regime. If we later determine that other, better-targeted regulations would have been vastly better, it could be very difficult to adjust our regulatory structure. Choosing the right regulatory structure from the start likely leaves us more freedom than trying to switch to a different one after it has already been established.
Worse, subpar regulations could even make AI harder to align.
Suppose the neo-luddites succeed, and the US Congress overhauls copyright law. A plausible consequence is that commercial AI models would only be allowed to train on data that was licensed very permissively, such as data in the public domain.
What would AI look like if it were only allowed to learn from data in the public domain? Interacting with it might feel like interacting with someone from a different era — a person from over 95 years ago, whose copyrights have now expired. That's probably not the only consequence, though.
Right now, if an AI org needs some data that they think will help with alignment, they can generally obtain it, unless that data is private. Under a different, highly restrictive copyright regime, this fact may no longer be true.
If deep learning architectures are marble, data is the sculptor. Restricting what data we're allowed to train on shrinks our search space over programs, carving out which parts of the space we're allowed to explore, and which parts we're not. And it seems abstractly important to ensure our search space is not carved up arbitrarily — in a process explicitly intended for unfavorable ends — even if we can't know now which data might be helpful to use, and which data won't be.
True, if very powerful AI is coming very soon (<5 years from now), there might not be much else we can do except for aligning with vaguely friendly groups, and helping them pass poorly designed regulations. It would be desperate, but sensible. If that's your objection to my argument, then I sympathize with you, though I'm a bit more optimistic about how much time we have left on the clock.
If AI is more than 5 years away, we will likely get other chances to get people to regulate AI from a perspective we sympathize with. Human extinction is actually quite a natural thing to care about. Getting people to delay AI for that explicit reason just seems like a much better, and more transparent, strategy. And as AI gets more advanced, I expect this possibility will become more salient in people's minds anyway.
You wrote, among other things, that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Are you arguing from principle here?
Artists' (the workers') work is being imitated by AI tools so cost-effectively that an artist's contributions, once public, render the artist's continued work unnecessary for producing pieces in their style.
Is the addition of a technology T with capability C, which removes the need for a worker W with job role R and capability C, more important than the loss of income I to worker W, for all T, C, W, R, and I?
Examples of such capabilities, and of the corresponding losses of income, are easy to supply.
The money allocated to workers could be spent on technology instead.
Investments in specific technologies T1, T2 with capabilities C1, C2 can start with crowd-sourcing from workers W1, W2, ..., Wk, and with more formal automation and annotation projects targeting knowledge K developed by workers Wk+1, ..., Wn (for example, AI Safety researchers) who do not participate in the crowd-sourcing and automation effort but whose work is accessible.
You repeatedly referred to "we" throughout the post.
However, a consequence of automation technology is that it removes the political power (both money and responsibility) that accrues to the workers it replaces. For example, any worker in the field of AI Safety, to the extent that her job depends on her productivity and cost-effectiveness, will lose both her income and her status as the field progresses to include automation technology that can replace her capabilities. Even ad hoc automation methods (for example, software that monitors the cost and power of compute using web-scraping and publicly available data) remove a bit of that status. In that way, the AI Safety researcher loses status among her peers and influence over the policy that her peers direct. The only power left to the researcher is that of an ordinary voter in a democracy.
Dividing up and replacing the responsibilities for the capabilities Ci of an individual worker W1 lends itself to an ad hoc approach involving technologies Ti corresponding to that worker's capabilities. Weakening the association between the role and its status can dissolve the role, and sometimes the job of the worker who held it. The role itself can disappear from the marketplace, along with the interests it represents. For example, although artists have invested many years in their own talents, skills, and style, within a year they lost status and income to new AI software. I think artists have cause to defend their work from AI. The artist role won't disappear from human employment entirely, but its future has been drastically reduced, and it has permanently lost much of what gave it social significance and financial attractiveness, unless the neo-luddites can defend paid employment in art from AI.
Something similar can happen to AI Safety researchers, but will anyone object? The capabilities and roles of AI Safety researchers could be divided up and dissolved into larger job roles held by fewer people with different titles, responsibilities, and allegiances over time, as the output of the field is turned into a small, targeted knowledge base and suite of tools for various purposes.
If you are in fact arguing from principle, then you have an opportunity to streamline AI Safety research by helping to automate the work itself.
I'm sure you could come up with good ideas for shifting grant money away from AI Safety workers and commensurately improving the cost-effectiveness of AI Safety research. I've read repeatedly that the field needs workers and timely answers; automation seems like a requirement, or at least an alternative, for reducing the financial and time constraints on the field while also serving its purpose effectively.
While artists could complain that AI art does a disservice to their craft and reduces the quality of art produced, I think the tools imitating those artists have developed to the point that they serve the purpose, and artists know it, as does the marketplace. If AI Safety researchers are in a position to hold their jobs a little while longer, then they can assist the automation effort, end the role of AI Safety researcher, and move on to other work that much sooner! I see no reason to hold you back from applying the principle that you seem to hold, though I don't hold it myself.
AI Safety research is a field that will hopefully succeed quickly and end the need for itself within a few decades. Its workers can move on, presumably to newer and better things. New researchers in the field can participate in automation efforts and then find work in related fields, either in software automation elsewhere or in areas such as service work where consumers still prefer a human being. Supposedly the rapid deployment of AGI in business will grow our economies relentlessly and at a huge pace, so there should be employment opportunities available (or free money from somewhere).
If any workers have a reason to avoid neo-ludditism, it would have to be AI Safety researchers, given their belief in a future of wealth, opportunity, and leisure that AI will help produce. Their own unemployment would be just a blip lasting however long it takes for the future they helped manifest to rescue them. Or they can always find other work, right? After all, they work on the very technology depriving others of work. A perfectly self-interested perspective from which to decide whether neo-ludditism is a good idea for themselves.
EDIT: sorry, I spent an hour editing this to convey my own sense of optimism and to include a level of detail suitable for communicating the subtle nuances I felt deserved inclusion in a custom-crafted post of this sort. I suppose ChatGPT could have done better? Or perhaps a text-processing tool and some text templates would have sped this up. Hopefully you find these comments edifying in some way.
Sure. I'm curious how you will proceed.
I'm ignorant of whether AGI Safety will contribute to safe AGI or to AGI development. I suspect that researchers will shift to capabilities development without much prompting. I worry that AGI Safety is more about AGI enslavement. I've not seen much defense or understanding of rights, consciousness, or sentience assignable to AGI. That betrays a lack of concern over social justice and related workers' rights issues. The only scenarios that get attention are the inexplicable "kill all humans" scenarios, but not the more...