
Disclaimer 1: This essay doesn’t purport to offer many original ideas, and I am certainly a non-expert on AI Governance, so please don’t take my word on these things too seriously. I have linked sources throughout the text, and reference some other similar pieces later on, but this should merely be treated as another data point among people saying very similar things; far smarter people than I have written on this.

Disclaimer 2: This post is quite long, so I recommend reading the sections "A choice not an inevitability" and "It's all about power" for the core of my argument.

My argument is essentially as follows: under most plausible understandings of how harms arise from very advanced AI systems, be these AGI, narrow AI or systems somewhere in between, the actors responsible, and the actions that must be taken to reduce or avert the harm, are broadly similar whether you care about existential harms, non-existential harms, or both. I will then go on to argue that this calls for a broad, coalitional politics of people who disagree vastly on the specifics of AI systems’ harms, because we essentially have the same goals.

It's important to note that calls like these have happened before. Whilst I will be taking a slightly different argument to them, Prunkl & Whittlestone, Baum, Stix & Maas, and Cave & Ó hÉigeartaigh have all made arguments attempting to bridge near-term and long-term concerns. In general, these proposals (with the exception of Baum) have called for narrower cooperation between ‘AI Ethics’ and ‘AI Safety’ than I will, and are all considerably less focused on the common source of harm than I will be. None go as far as I do in suggesting that essentially all the key forms of harm we worry about are instances of the same phenomenon of power concentration in and through AI. These pieces are in many ways more research focused, whilst mine is considerably more politically focused. Nonetheless, there is considerable overlap in spirit: all identify that the near-term/ethics and long-term/safety distinction is overemphasised and not as analytically useful as is made out, and all these pieces, like mine, aim to reconcile the two factions for mutual benefit.

A choice not an inevitability

At present, there is no AI inevitably coming to harm us. Those AIs that do cause harm will have been given capabilities, and power to cause harm, by developers. If the AI companies stopped developing their AIs now, and people chose to stop deploying them, then both existential and non-existential harms would stop. These harms are in our hands, and whilst the technologies clearly act as important intermediaries, ultimately it is a human choice, a social choice, and perhaps most importantly a political choice to carry on developing more and more powerful AI systems when such dangers are apparent (or merely plausible or possible). The attempted development of AGI is far from value neutral, far from inevitable and very much in the realm of legitimate political contestation. Thus far, we have simply accepted the right of powerful tech companies to decide our future for us; this is both unnecessary and dangerous. Our current acceptance of the right of companies to legislate for our future is historically contingent. In the past, corporate power has been curbed, from colonial era companies, to Progressive Era trust-busting, to postwar Germany and more, and it could be curbed again. Whilst governments have often taken a leading role, civil society has also been significant in curbing corporate power and technology development throughout history. Acceptance of corporate dominance is far from inevitable.

I also think it's wrong to just point the finger at humanity, as if we are all complicit in this. In reality, the development of more and more dangerous AI systems seems to be driven by a very small number of corporate actors (often propped up by a very small number of individuals supporting them). OpenAI seem committed to shortening timelines as much as possible, had half their safety team leave to form another company in response to their lax approach to safety, and seem to see themselves as empowered to risk all of humanity because they cast themselves as its saviours. Google sacked prominent members of its ethics team for speaking out on the dangers of LLMs. Microsoft sacked an entire ethics team, despite the ethical issues that increasing AI in their products has brought. None of this seems like the behaviour of companies that have earned the trust that society (implicitly) gives them.

It’s all about power

Ultimately, the root cause of the harms that AI causes (whether or not you think they are existential) is the power that AI companies have to make unilateral decisions that affect large swathes of humanity without any oversight. They certainly don’t paint their actions as political, despite clearly attempting to gain power and guide the course of humanity from their ivory towers in Silicon Valley. They feel empowered to risk huge amounts of harm (e.g. Bender and Gebru et al, Birhane, Carlsmith, Yudkowsky) by developing more powerful AI systems, partially because there is little political opposition despite growing public worry. Whilst there is some mounting opposition to these companies’ unsafe deployments (activism, legislation, hardware control), there is so much further to go, in particular in restricting research into advanced AI.

If we see it like this, whether AI is closer to a stupid ‘stochastic parrot’ or on the ‘verge of superintelligence’ doesn’t really matter; whichever world we are in, it’s the same processes and actors that ultimately generate the harm. The root cause, powerful, unaccountable, unregulated AI companies with little opposition playing fast and loose with risk and social responsibility, with utopian and quasi-messianic visions of their mission, causes the risk, irrespective of what you think that risk is. As capitalists like Sam Altman take in their private benefits, the rest of humanity is saddled with the risk they place on all of us. Of course, these risks and harms aren’t placed on everyone equally, and as always, more harm is done to the less powerful and privileged (in healthcare, on women’s working lives, in facial recognition etc.); but nonetheless, these AI companies are happy to run roughshod over the rights of everyone in order to pursue their own ends. They have their vision of the future, and are happy to impose significant risk on the rest of humanity to achieve it without our consent. A lot of the researchers helping these companies think what they are doing has a high probability of being extremely bad, and yet they carry on!

Irrespective of which sort of dangers we worry about, it's clear who we need to worry about: the AI companies, chiefly (although not exclusively) OpenAI and DeepMind. Whether you care about ‘AI Ethics’ or ‘AI Safety’, no matter what type of harms you worry about, if you look at the issue politically the source of the harms looks the same. It's clear who has power and who is trying to gain more of it, and it is clear that everyone else is put at extreme risk. If the problem is power, then we ought to fight it with power. We cannot merely allow these powerful actors to imagine and create futures for us, crowding out alternatives; we need to build coalitions that give us the power to imagine and create safer and more equitable futures.

Thus, the importance of making a distinction between existential and non-existential harms starts to dissolve away, because both are possible hugely negative consequences of the same phenomenon, with similar political solutions: slow down or stop companies trying to develop AGI and other risky ‘advanced AI’ systems. If we buy this, then the strategy needs to be much broader than the current status quo in the ‘AI XRisk’ community of merely empowering a narrow range of ‘value-aligned’ individuals to research ‘technical alignment’ or even friendly technocratic ‘Existential AI Governance’ (I’m not saying this is bad, far from it, or shouldn’t be hugely expanded, but it is very, very far from sufficient). Rather, it likely looks like bringing together coalitions of actors, with perhaps different underlying ethical concerns, but the same political concern that the growing unaccountable power of dangerously hubristic AI companies needs to be curbed. It requires building coalitions to engage in the politics of technology, imagining futures we can perform into existence, and asserting power to challenge the inherently risky pathways these AI companies want to put us on.

It's important to note that this isn't saying we can't or shouldn't do AI research. But the current type of research, moving towards larger and larger models with less accountability for the companies, trying to get more and more general systems with more destructive capabilities, with almost no regulatory oversight, is simply not a viable safe pathway forward. There is good reason to think that, within our current paradigm and political structures, AGI development may be inherently dangerous; this is a demon we ought not to make. If this recklessness is synonymous with innovation, then those dreaming of innovation have lost a lot of their spark.

In whatever world we are in, putting ‘AIs in charge’ of powerful systems is dangerous

Whether we are in the ‘stochastic parrot’ or the ‘verge of superintelligence’ world, giving AIs power is deeply dangerous. ‘Stupid’ AIs are already causing fatalities, reinforcing biases, and causing other harms, all of which will likely get worse if they are given more power. ‘Stupid’ systems could even cause harm of existential proportions, for example if they are integrated into nuclear command and control, or used to make more powerful new biochemical weapons. Superintelligent AIs, if given power, could similarly cause tremendous amounts of harm, scaling up to existential harm. It is also important to note that an AI needn’t be an agent in the typical, anthropomorphised sense for it to be useful to describe it as ‘having power’, and that is what I mean here.

Once again, unaccountable, opaque ‘machine power’ generally allows for an increase in the harm that can be done, and a reduction in the ability of society to respond to that harm as systems get entrenched and remake the social world we live in, which is incredibly dangerous. And once again, these harms are often imposed on the rest of the world, without our consent, by companies, militaries and governments looking to rely on AI systems, normally due to hype from the same few AGI companies. In this way, irrespective of the world we are in, hype is dangerous, because once again it provides the dangerously risky AI companies with more power, which they almost certainly use to impose risks of unacceptable harm on the world population.

In whatever world we are in, AGI research is dangerous

If we are in the ‘stochastic parrot’ world, research into AGI is used as an excuse and a fig leaf to hide enormous harms imposed by dangerously stupid AI systems. In this world, AGI research serves to increase the power of a few unaccountable, powerful companies, and causes harm for the rest of us, whilst failing to deliver on its promises. By controlling visions of the future, actors gain control over the present. Visions of utopia allow more mundane harms to be ignored, with these companies given a free pass.

If we are in the ‘verge of superintelligence’ world, research into AGI is flirting with the apocalypse in a way that is unacceptably dangerous. Stories of the inevitability of AGI development are useful excuses for those who care little about the existential risk that developing these systems could bring in comparison to their desire to impose their vision of a glorious future upon mankind.

There may be a counterargument to this: that research isn’t dangerous, deployment is, so it is deployment we need to regulate, not research. I think in both worlds this is flawed. In the ‘stochastic parrot’ world, even with regulated deployment, unrestricted research is likely to lead to a slippery slope to deployment (as worried about in geoengineering, for example), where research enables the AI companies to gain financial, intellectual and discursive power in a way that makes dangerous deployment of technologies much more likely. And in a ‘verge of superintelligence’ world, having powerful doomsday devices developed is probably already an unacceptable risk no matter how strict the governance of deployment is. Even if we think our regulation of deployment is sound, governance mechanisms can break down, the existence of technologies can induce social changes affecting governance, and deceptive alignment is enough of a problem that it seems better simply never to try to develop these systems in the first place. Moreover, to suggest the problem doesn’t start with research fails to reckon with the risk of bad actors; one could say that guns don’t murder, people do, but had guns never been invented far fewer people would have been killed in violence than are now.


Why this is a shared battle

I hope the previous paragraphs have shown that whilst the disagreements between the AI Safety and AI Ethics crowds are significant, they are not massively analytically useful or core to understanding the key challenge we are facing. The relevant question isn’t “are the important harms to be prioritised the existential harms or the non-existential ones?”, “will AI be agents or not?”, nor “will AI be stochastic parrots or superintelligences?” Rather, the relevant question is whether we think that power accumulation and concentration in and through AI systems, at different scales of capability, is extremely risky. On this, I think we agree, and so whilst scientifically our differences may be significant, in the realm of political analysis they aren’t. Ultimately, it is this power concentration that has the potential to cause harm, and it is ultimately this which we normatively care about.

Moreover, many of these surface-level disagreements also aren’t politically or strategically relevant: once we understand that the source of all these risks is a small group of AI companies recklessly forging ahead and concentrating power, it becomes much clearer that both communities in fact share interests in finding ways to (1) slow down or halt research; (2) avert and cool down AI hype; and (3) spur much greater public and government scrutiny into whether (and if so, how) we want to develop advanced AI technologies.

What we gain from each other

This essay framed itself as suggesting that the ‘AI Ethics’ and ‘AI Safety’ crowds can benefit each other. Thus far, I’ve mostly suggested that the AI Safety crowd should realise that even if the AI Ethics crowd were incorrect to dismiss the importance of existential risks from AI, their core analysis is correct: power accumulation and concentration through and by AI, originating from a small number of powerful and unaccountable corporations, is the major cause of the threats we face. From this perspective, the AI Safety crowd probably should come and fight in the trenches with the AI Ethics people, recognising that their identification of the core of the issue has been broadly correct, even if they underestimated how bad these corporations could make things. Moreover, the AI Ethics crowd seems to have been more effective at tempering AI hype, in contrast to the way the AI Safety crowd has potentially sped up AI development, so practically there may be significant benefit in collaboration.

However, I’m not sure the exchange here is so one-sided. I think the AI Safety community has a lot to offer the AI Ethics community as well. Technical AI Safety techniques, like RLHF or Constitutional AI, whilst potentially not very beneficial from an AI Safety perspective, seem to have had a meaningful impact on making systems more ethical. Moreover, the moral inflation and urgency that existential harms can bring seems to resonate with the public, and so may be a very useful political tool if utilised to fight the companies rather than empower them. Intellectually, AI Safety provides much greater urgency and impetus for governing research and cutting the problem off at the source (which has been underexplored so far), a concern which would likely be more muted in AI Ethics discussions. By regulating these problems at the source, AI Ethics work can be made a lot easier and less reactive. Moreover, the focus from the AI Safety crowd on risks from systems that look vastly different from those we face now may be useful even if we don’t develop AGI; risks and harms will change in the future just as they have changed in the past, and anticipatory governance may be absolutely essential at reducing them. So even if one doesn’t buy my suggestion that we are on the same side of the most analytically relevant distinction, I hope that the insights and political benefits the two communities have to offer each other will be enough cause to start working together.

Coalitional Politics

If one accepts my (not particularly groundbreaking) analysis that the ultimate problem is the power of the AI companies, how do we combat this? There are lots of ways, from narrow technocratic governance to broad political salience-raising, to ethics teams within corporations, to broad governance frameworks and many other approaches. Each of these is necessary and useful, and I don’t argue against any of them. Rather, I’m arguing for a broad, pluralistic coalition taking a variety of approaches to AI governance, with more focus than at present on raising the political salience of restricting AI research.

Given that AI Ethics and AI Safety people are actually concerned with the same phenomenon, harms arising from the unaccountable power enabling dangerously risky behaviour by a very small number of AI companies, we also have the same solution: take them on. Use all the discursive and political tools at our disposal to curb their power and hold them to account. We need a big tent to take them on. We need op-eds in major newspapers attesting to the dangerous power and harms these AI companies have and are happy to risk. We need to (continue to) expose just how their messianic arrogance endangers people, and let the public see what these few key leaders have said about the world they are pushing us towards. We need to mobilise people’s worries such that politicians will react, establishing a culture against the unaccountable power of these AI companies. We need to show people across the political spectrum (even those we disagree with!) how this new power base of AI companies has no one's interests at heart but their own, so no matter where you fall, they are a danger to your vision of a better world. There are nascent public worries around AGI and these AI companies; we just need to activate them through a broad coalition to challenge the power of these companies and wrest control of humanity’s future from them. Hopefully this can lay the groundwork for formal governance, and at the very least quickly create a political culture reflective of the degree of worry that ought to be held about these companies’ power.


There is nothing inevitable about technology development, and there is nothing inevitable about the status quo. In my ‘home field’ of Solar Geoengineering, coalitions considerably smaller, less well funded and less powerful than what we could build in the AI space in a few months successfully halted technology development for at least the last decade. Similar coalitions have constrained GMOs in various regions of the world, nuclear energy, and nuclear weapons for peaceful purposes. There are enough reasons to oppose the development of AGI systems from the perspective of all sorts of worldviews and ethical systems to build such coalitions; this has successfully occurred in a number of the above examples, and it may be even easier in the context of AI. Some have tried to make a start on this (e.g. Piper, Marcus and Garner, Gebru, etc.), but a larger and more diverse coalition trying to raise the political salience of curbing AI companies’ power is key. Bringing genuine restriction of these companies’ power into the Overton window, building coalitions across political divides to do this, building constituencies of people who care about regulating the power of AI companies, raising the salience of the issue in the media, and crafting and envisioning new futures for ourselves are all vital steps that can be taken. We can build a relevant civil society to act as a powerful counterbalance to corporate power.

This isn’t an argument to shut down existing narrow technocratic initiatives, or academic research presenting alternative directions for AI; rather, it is an argument that we need to do more, and do it together. There seems to be a gaping narrative hole (despite the admirable attempts of a few people to fill it) in pushing for a public political response to these AI companies. These discourses, social constructions and visions of the future matter to technology development and governance. They exert pressure and establish norms that guide near-term corporate decision making, government policy, and how society and the public relate to technology and its governance.

Urgency

I would also argue that this issue is urgent. Firstly, around ChatGPT, Bing/Sydney and now GPT-4, AI is experiencing a period of heightened political attention. Public and government attention at the moment is good, and plausibly as good as it will ever be, for a politics of slowing AGI development, and we are most powerful pushing for this together, rather than fighting and mocking each other in an attempt to gain political influence. This may be a vital moment in which coalitional politics can be a powerful lever for enacting change, where the issue is suitably malleable to political contestation, to the formation of a governance object and to framing of how the issue could be solved; these are exactly the times when power can be asserted over governance, and so assembling a coalition may give us that power.

There is also a risk that if we don’t foster such a coalition soon, both of our communities get outmanoeuvred by a new wave of tech enthusiasts who are currently pushing very hard to accelerate AI, remove all content or alignment filters, open-source and disseminate all capabilities with little care for the harms caused, and more. Indeed, many tech boosters are beginning to paint AI ethics and AI risk advocates as two sides of the same coin. To counteract this movement, it is key for both communities to bury the hatchet and combat these plausibly rising threats together. Divided we fall.

So what does coalitional politics look like?

I think this question is an open one, something we will need to continue to iterate on in this context, learn by doing, and generally work on together. Nonetheless, I will offer some thoughts.

Firstly, it involves trying to build bridges with people who we think have wrong conceptions of the harms of AI development. I hope my argument that the political source of harm looks the same has convinced you, so let's work together to address it, rather than mocking, insulting and refusing to talk to one another. I understand that people from AI Safety and AI Ethics have serious personal and ethical problems with one another; that needn’t translate into political issues. Building these bridges not only increases the number of people in our shared coalition, but also the diversity of views and thinkers, allowing new ideas to develop. This broad, pluralistic and diverse ecosystem will likely come not just with political, but with epistemic benefits as well.

Secondly, it involves using the opportunities we have to raise the political salience of the issue of the power of AI companies as much as we can. At present, we are at something of a moment of public attention towards AI; rather than competing with one another for attention and discursive control, we ought to focus on our common concern. Whether the impetus for regulating these companies comes from worries about concentration of corporate power or about existential harms, it raises the salience of the issue and increases the pressure to regulate these systems, as well as the pressure on companies to self-regulate. We must recognise our shared interest in joining into a single knowledge network and work out how best to construct a governance object to achieve our shared ends. At the moment, there is a strange discursive vacuum despite the salience of AI. We can fill this vacuum, and this will be most effective if done together. Only by filling it can we create a landscape that allows the curbing of the power of these corporations. People are already trying to do this, but the louder and broader the united front against these companies is, the better.

Then, we need to try to create a culture that pressures political leaders and corporations, no matter where they fall politically, with the message that these unaccountable companies have no right to legislate our future for us. We can drive this agenda and culture shift through activism, through the media, through the law, through protests and through political parties, as well as more broadly through how discourses and imaginaries are shaped in key fora (social media, traditional media, fiction, academic work, conversations); the power of discourses has long been recognised in the development and stabilisation of socio-technical systems. Democracy is rarely ensured by technocracy alone; it often requires large-scale cultural forces. Luckily, most people seem to support this!

We then need suggestions for policy and direct legal action to restrict the power and ability of these AI companies to do what they currently do. Again, luckily these exist. Compute governance, utilising competition law, holding companies legally accountable for harmful outputs of generative AI (and, slightly more tangentially, platforms), supporting copyright suits and more seem like ways we can attack these companies and curb their power. Human rights suits may be possible. In general, there is an argument that the use of the courts is an important and underexplored lever to keep these companies accountable. Moreover, given the risks these companies themselves suggest they are imposing, other more speculative suits based on various other rights and principles, as has occurred in the climate context, may be possible. This is just part of a shopping list of policy and direct actions a broad coalitional movement could push for. People are already pushing for these things, but with the better organisation that comes with more groups, the ability to push some of these into practice may be significantly enhanced. With a broader, more diverse coalition, our demands can become stronger.

Sure, this coalitional politics will be hard. Building bridges might sometimes feel like losing sight of the prize, as we focus on restricting the power of these agents of doom via arguments and means other than whatever each of our most salient concerns is. It will be hard to form coalitions with people you feel very culturally different from. Ultimately, though, if we want to curb the ability of AI companies to do harm, we need all the support we can get, not just from those in one culture, but from those in many. I hope many people, a lot of whom have already contributed so much to this fight in both AI Safety and AI Ethics, will take up such an offer of coalitional politics at this potentially vital moment.

Acknowledgements: Matthijs Maas gave me substantial advice and help, despite substantive disagreements with aspects of the essay, and conversations with Matthijs Maas and Seth Lazar provided a lot of the inspiration for me to write this.

Comments

Some thoughts I had while reading that I expect you'd agree with:

  1. There is probably a lot of overlap in the kinds of interventions that (some) AI safety folks would be on board with and the kinds of interventions that (some) AI ethics folks would be on board with. For example, it seems like (many people in) both groups have concerns about the rate of AI progress and would endorse regulations/policies that promote safe/responsible AI development.
  2. Given recent developments in AI, and apparent interest in regulation that promotes safety, it seems like now might be a particularly good time for people to think seriously about how the AIS community and the AI ethics community could work together.
  3. Despite differences, it would be surprising if there was rather little that the "two" communities could learn from each other.
  4. I appreciate the links and examples. I'll probably go through them at some point soon and possibly DM you. I think a lot of people are interested in this topic, but few have the time/background to actually "do research" and "compile resources". It seems plausible to me that more "lists of resources/examples/case studies" could improve reasoning on this topic (even moreso than high-level argumentation, and I say that as someone who's often advocating for more high-level argumentation!)

Some thoughts I had while reading that you might disagree with (or at least I didn't see acknowledged much in the post):

  1. The differences between the two groups are not trivial, and they'll often lead to different recommendations. For example, if you brought ARC Evals together with (hypothetical) AI Ethics Evals, I imagine they would both agree "evals are important" but they would have strong and serious disagreements about what kinds of evals should be implemented.
  2. In general, when two groups with different worldviews/priorities join coalitions, a major risk is that one (or both) of the groups' goals get diluted. 
  3. It's harder to maintain good epistemics and strong reasoning + reasoning transparency in large coalitions of groups who have different worldviews/goals. ("We shouldn't say X because our allies in AI ethics will think it's weird.") I don't think "X is bad for epistemics" means "we definitely shouldn't consider X", but I think it's a pretty high cost that often goes underappreciated/underacknowledged (Holden made a similar point recently). 
  4. In general, I think the piece could have benefitted from expressing more uncertainty around certain claims, acknowledging counterarguments more, and trying to get an ITT of people who disagree with you. 

'It's harder to maintain good epistemics and strong reasoning + reasoning transparency in large coalitions of groups who have different worldviews/goals. ("We shouldn't say X because our allies in AI ethics will think it's weird.") I don't think "X is bad for epistemics" means "we definitely shouldn't consider X", but I think it's a pretty high cost that often goes underappreciated/underacknowledged'

This is probably a real epistemic cost in my view, but it takes more than identifying a cost to establish that forming a coalition with people with different goals/beliefs is overall epistemically costly, given that doing so also has positive effects like bringing in knowledge that we don't have because no group knows everything. 

Just quickly on that last point: I recognise there is a lot of uncertainty (hence the disclaimer at the beginning). I didn't go through the possible counterarguments because the piece was already so long! Thanks for your comment though, and I will get to the rest of it later!

Interesting that you don't think the post acknowledged your second collection of points. I thought it mostly did. 
1. The post did say it was not suggesting to shut down existing initiatives. So where people disagree on (for example) which evals to do, they can just do the ones they think are important, and then both kinds get done. I think the post was identifying a third set of things we can do together, and this was not specific evals, but more about a big narrative alliance when influencing large/important audiences. The post also suggested some other areas of collaboration, on policy and regulation, and some of these may relate to evals, so there could be room for collaboration there, but I'd guess that more demand, funding and infrastructure for evals helps both kinds of evals.
2. Again I think the post addresses this issue: it talks about how there is this specific set of things the two groups can work on together that is both in their interest to do. It doesn't mean that all people from each group will only work on this new third thing (coalition building), but if a substantial number do, it'll help. I don't think the OP was suggesting a full merger of the groups. They acknowledge the 'personal and ethical problems with one another; [and say] that needn’t translate to political issues'. The call is specifically for political coalition building.
3. Again I don't think the OP is calling for a merger of the groups. They are calling for collaborating on something.
4. OK, the post didn't do this much, but I don't think every post needs to, and I personally really liked that this one made its point so clearly. I would read with interest a post responding to this with some counterarguments, so maybe that implies I think it'd benefit from them too, but I wouldn't want a rule/social expectation that every post lists counterarguments, as that can raise the barrier to entry for posting, and people are free to comment with disagreements and write counter-posts.

Yeah, I basically agree with this.

  1. On evals, I think it is good for us to be doing as many evals as possible, firstly because both sorts of evaluations are important, but also because the more (even self-imposed) regulatory hurdles there are to jump through, the better. Slow it down and bring the companies under control.
  2. Indeed, the call is for broader political coalition building. Not everyone, not all the time, not on everything. But on substantially more than we currently are.
  3. Yes
  4. There are a number of counterarguments to this post, but I didn't include them because a) I probably can't give the strongest counterarguments to my own beliefs, b) this post was already very long, and I had to cut sections on Actor-Network Theory and Agency and something else I can't remember, and c) I felt it might muddle the case I'm trying to make here if it was interspersed with counterarguments. One quick point on counterarguments: I think a counterargument would need to be strong enough to show not just that the extreme end result is bad (a lot more coalition building would be bad), but probably that the post is directionally bad (some more coalition building would be bad).

I am not speaking for the DoD, the US government, or any of my employers.

I think that your claim about technological inevitability is premised on the desire of states to regulate key technologies, sometimes mediated by public pressure. All of the examples listed were blocked for decades by regulation, sometimes supplemented with public fear, soft regulation, etc. That's fine so long as, say, governments don't consider advancements in the field a core national interest. The US and China do, and often in an explicitly securitized form.

Quoting CNAS

China’s leadership – including President Xi Jinping – believes that being at the forefront in AI technology is critical to the future of global military and economic power competition.

English-language coverage of the US tends to avoid such sweeping statements, because readers have more local context, because political disagreement is more public, and because readers expect it.

But the DoD in the most recent National Defense Strategy identified AI as a secondary priority. Trump and Biden identified it as an area to maintain and advance national leadership in. And, of course, with the US at the head they don't need to do as much in the way of directing people, since the existing system is delivering adequate results.

Convincing the two global superpowers not to develop a militarily useful technology while tensions are rising would be a first in history.

That's not to say that we can't slow it down. But AI very much is inevitable if it is useful, and it seems like it will be very useful.

A number of things. Firstly, this criticism may be straightforwardly correct; it may be that this is pursuing something achieved for the first time in history (though I'm less convinced, e.g. bioweapons regulation); nonetheless, other approaches to TAI governance seem similar (e.g. trust one actor to develop a transformative and risky technology and not use it for ill). It may indeed require such change, or at least a change of perception of the potential and danger of AI (which is possible). Secondly, this may not be the case. Foundation models (our present worry) may be no more (or even less) beneficial in military contexts than narrow systems. Moreover, foundation models, developed by private actors, seem to pose a real challenge to state power in a way that neither the Chinese government nor the US military is likely to accept. Thus, AI development may continue without dangerous model growth. Finally, very little development of foundation models is driven by military actors, and the actors that do develop them may be constructed as legitimately trying to challenge state power. If we are on a path to TAI (we may not be), then it seems in the near term only a very small number of actors, all private, could develop it. Maybe the US military could gain the capacity to, but it seems hard at the moment for them to do so.

Since the expected harms from AI are obviously much smaller in expectation in an extreme "stochastic parrot" world where we don't have to worry at all about X-risk from superintelligent systems, it actually does very much matter whether you're in that world if you're proposing a general attempt to block AI progress:  if the expected harms from further commercial development of AI are much smaller, they are much more likely to be outweighed by the expected benefits. 

I think this is a) not necessarily true (as shown in the essay, it could still lead to existential catastrophe, e.g. by integration with nuclear command and control), and b) if we ought to all be pulling in the same direction against these companies, why is the magnitude difference relevant? Moreover, your claims would suggest that the AI Ethics crowd would be more pro-AI-development than the AI Safety crowd. In practice, the opposite is true, so I'm not really sure why your argument holds.

'I think this is a) not necessarily true (as shown in the essay, it could still lead to existential catastrophe, e.g. by integration with nuclear command and control)'

The *expected* harm can still be much lower, even if the threat is not zero. I also think 'they might get integrated with nuclear command and control' naturally suggests much more targeted action than does "they are close to superintelligent systems and any superintelligent system is mega dangerous no matter what it's designed for".

'if we ought to all be pulling in the same direction against these companies, why is the magnitude difference relevant'

Well, it's not relevant if X-risk from superintelligence is in fact significant. But I was talking about the world where it isn't. In that world, we possibly shouldn't be pulling against the companies overall at all: merely showing that there are still some harms from their actions is not enough to show that we should be all-things-considered against them. Wind farms impose some externalities on wildlife, but that doesn't mean they are overall bad.

'Moreover, your claims would suggest that the AI Ethics crowd would be more pro-AI development than the AI Safety crowd. In practice, the opposite is true, so I'm not really sure why your argument holds'

I don't think so. Firstly, people are not always rational. I am suspicious that a lot of the ethics crowd sees AI/tech companies/enthusiasm about AI as a symbol of a particular kind of masculinity that they, as a particular kind of American liberal feminist, dislike. This, in my view, biases them in favor of the harms outweighing the benefits, and is also related to a particular style of US liberal identity politics where, once a harm has been identified and associated with white maleness, the harmful thing must be rhetorically nuked from orbit, and any attempt to think about trade-offs is pathetic excuse-making. Secondly, I think many of the AI safety crowd just really like AI and think it's cool: roughly, they see it as a symbol of the same kind of stuff as their opponents do; it's just that they like that stuff, and it's tied up with their self-esteem. Thirdly, I think many of them hope strongly for something like paradise/immortality through 'good AI' just as much as they fear the bad stuff. Maybe that's all excessively cynical, and I don't hold the first view about the ethics people all that strongly, but I think a wider 'people are not always rational' point applies. In particular, people are often quite scope-insensitive. So just because Bob and Alice both think X is harmful, but Alice's view implies it is super-mega deadly harmful and Bob's view just that it is pretty harmful, doesn't necessarily mean Bob will denounce it less passionately than Alice.
 

'Expected harm can still be much lower': this may be correct, but I'm not convinced it's orders of magnitude lower. It also hugely depends on one's ethical viewpoint. My argument here isn't that under all ethical theories this difference doesn't matter (it obviously does), but that to the actions of my proposed combined AI Safety and Ethics knowledge network this distinction actually matters very little. This, I think, answers your second point as well; I am addressing this call to people who broadly think that, on the current path, risks are too high. If you think we are nowhere near AGI and that near-term AI harms aren't that important, then this essay simply isn't addressed to you.

I think this is the core point I'm making. It is not that the stochastic parrots vs superintelligence distinction is necessarily irrelevant if one is deciding for oneself whether to care about AI. However, once one thinks that the dangers of the status quo are too high, for whatever reason, the distinction stops mattering very much.
