
GideonF

2109 karma

Bio

(Slowly) shifting from XRisk into insect welfare. Currently working on slowing AI and SRM and GCR. 

How I can help others

Reach out to me if you have questions about SRM/Solar geoengineering

Comments
182

I do think we have to argue that national securitisation is more dangerous than humanity securitisation, or non-securitised alternatives. I think it's important to note that whilst I explicitly discuss humanity macrosecuritisation, there are other alternatives as well that Aschenbrenner's national securitisation compromises, as I briefly argue in the piece.

Of course, I have not and was not intending to provide an entire and complete argument for this (it is only 6,000 words), although I think I go further towards proving this than you give me credit for here. As I summarise in the piece, the Sears (2023) thesis provides a convincing argument from empirical examples that national securitisation (and a failure of humanity macrosecuritisation) is the most common factor in the failure of Great Powers to adequately combat existential threats (eg the failure of the Baruch Plan/international control of nuclear energy, the promotion of technology competition around AI vs arms agreements in the face of the threat of nuclear winter, the BWC, the Montreal Protocol). Given this limited but still significant data that I draw on, I do think it is unfair to suggest that I haven't provided an argument that national securitisation is more dangerous on net. Moreover, as I address in the piece, Aschenbrenner fails to provide any convincing track record of success for national securitisation, whilst his use of historical analogies (Szilard, Oppenheimer and Teller) all indicate he is pursuing a course of action that probably isn't safe. Whilst of course I didn't go through every argument, I think Section 1 provides arguments that national securitisation isn't inevitable, and Section 2 provides the argument that, at least from historical case studies, humanity macrosecuritisation is safer than national securitisation. The other sections show why I think Aschenbrenner's argument is dangerous rather than just wrong, and how he ignores important other factors.

The core of Aschenbrenner's argument is that national securitisation is desirable and thus we ought to promote and embrace it ('see you in the desert'). Yet he fails to engage with the generally poor track record of national securitisation at promoting existential safety, or to provide a legitimate counter-argument. He also, as we both acknowledge, fails to adequately deal with possibilities for international collaboration. His argument for why we need national securitisation seems to be premised on three main ideas: it is inevitable (/there are no alternatives); the values of the USA 'winning' the future is our most important concern (whilst alignment is important, I do think for Aschenbrenner it is secondary to this); and the US natsec establishment is the way to ensure that we get a maximally good future. I think Aschenbrenner is wrong on the first point (and certainly fails to adequately justify it). On the second point, he overestimates the importance of the US winning compared to the difficulty of alignment, and certainly, I think his argument for this fails to deal with many of the thorny questions here (what about non-humans? how does this freedom remain in a world of AGI? etc). On the third point, I think he goes some way to justifying why the US natsec establishment would be more likely to 'win' a race, but fails to show why such a race would be safe (particularly given its track record). He also fails to argue that natsec would allow the values we care about to be preserved (US natsec doesn't have the best track record with reference to freedom, human rights etc).

On the point around the instability of international agreements: I do think this is the strongest argument against my model of humanity macrosecuritisation leading to a regime that stops the development of AGI. However, as I allude to in the essay, this isn't the only alternative to national securitisation. Since publishing the piece, this is the biggest mistake in reasoning (and I'm happy to call it that) that I see people making. The chain of logic that goes 'humanity macrosecuritisation leading to an agreement would be unstable, therefore promoting national securitisation is the best course of action' is flawed; one needs to show that the plethora of other alternatives (depolitical/political/riskified decision-making, or humanity macrosecuritisation without an agreement) are not viable, and Aschenbrenner doesn't address this at all. I also, as I think you do, see Aschenbrenner's argument against an agreement as containing very little substance; I don't mean to say it's obviously wrong, but he hardly even argues for it.

I do think stronger arguments for the need to nationally securitise AI could be provided, and I also think they are probably wrong. Similarly, stronger arguments than mine can be provided for why we need to humanity macrosecuritise superintelligence and for how international collaboration on controlling AI development could work (I am working on something like this), in a way that addresses some of the concerns one may have. But the point of this piece is to engage with the narratives and arguments in Aschenbrenner's piece. I think he fails to justify national securitisation whilst also taking action that endangers us (and I'm hearing from people connected to US politics that the impact of his piece may actually be worse than I feared).

On the stable totalitarianism point, I also think it's useful to note that it is not at all obvious that the risk of stable totalitarianism is greater under some form of global collaboration than it is under a nationally securitised race.

On these three points:

  • Yes, the Project is a significant possibility. People like Aschenbrenner make this more likely to happen, and we should be trying to oppose it as much as possible. Certainly, there is a major 'missing mood' in Aschenbrenner's piece (and the interview), where he seems to greet the possibility of the Project with glee.
  • I'm actually pretty unsure whether improving cybersecurity is very important. The benefits are well known. However, if you don't improve cybersecurity (or can't), then advancing AI becomes much more dangerous with much less upside, so racing becomes harder. With worse cybersecurity, a pause may be more likely. Basically, I'm unsure, and I don't think it's as simple as most people think. It's also not obvious to me that, for example, America directly sharing model weights with China wouldn't be a positive thing.
  • Certainly according to my ethics I am not 'neutral pro-humanity', but rather care about a flourishing and just future for all sentient beings. On this axis, I do think the difference is more marginal than many would expect. I would probably guess that it would be better for the US/the free world to have relatively greater power, although with some caveats (eg I'm not sure I trust the CIA very much to have a large amount of control). I think both groups 'as-is', particularly in a nationally securitised 'race', are rather far from the optimal, and this difference is very morally significant. So I think I'm definitely MUCH more concerned than Aschenbrenner is about avoiding a nationally securitised race (also because I'm more concerned with misalignment than I think he is).

Thanks for this reply Stephen, and sorry for my late reply, I was away.

I think it's true that Aschenbrenner gives (marginally) more consideration than I gave him credit for - not actually sure how I missed that paragraph, to be honest! Even then, whilst there is some merit to that argument, I think he needs to justify his dismissal of an international treaty much better (along similar lines to your shortform piece). As I argue in the essay, such instability requires a particular reading of how states act - for example, if we buy a form of defensive realism, states may in fact be more inclined to reach a stable equilibrium. Moreover, as I argue, I think Aschenbrenner fails to acknowledge how his ideas on this may well become a self-fulfilling prophecy.

I actually think I just disagree with your characterisation of my second point, although it could well be a flaw in my communication, and if so I apologise. My argument isn't even that values of freedom and democracy, or a narrower form of 'American values', wouldn't be better for the future (see below for more discussion on that); it's that national securitisation has a bad track record of promoting collaboration and dealing with extreme risk, and we have good reason to think it may be bad in the case of AI. So even if Aschenbrenner doesn't frame it as national securitisation for the sake of nationalism, but rather national securitisation for the sake of all humanity, the impacts will be the same. The point of that paragraph was simply to preempt a critique that is exactly what you say. I also think it's clear that Aschenbrenner in his piece is happy to conflate those values with 'American nationalism/dominance' (eg 'America must win'), so I'm not sure him making this distinction actually matters.

I also probably am much less bullish on American dominance than Aschenbrenner is. I'm not sure the American national security establishment actually has a good track record of preserving a 'raucous plurality', and if (as Aschenbrenner wants) we expect superintelligence to be developed through that institution, I'm not overly confident in how good it will be. Whilst I am no friend of dictatorships, I'm also unconvinced that, if one cares about raucous pluralism, US dominance, certainly to the extent that Aschenbrenner envisions, is necessarily a good thing. Moreover, even in American democracy, the vast majority of moral patients aren't represented at all. I'm essentially unconvinced that the benefits of America 'winning' a nationally securitised AI race come anywhere near outweighing the geopolitical risk, misalignment risk, and most importantly the risk of not taking our time to construct a mutually beneficial future for all sentient beings. I think I have put this paragraph quite crudely, and would be happy to elaborate further, although it isn't actually central to my argument.

I think it's wrong to say that my argument doesn't work without significant argument against those two premises. Firstly, my argument was that Aschenbrenner was 'dangerous', which required highlighting why the narrative choice was problematic. Secondly, yes, there is more to do on those points, but given Aschenbrenner's failure to give in-depth argumentation on them, I thought they would be better dealt with as their own pieces (which I may or may not write). In my view, the most important aspect of the piece was Aschenbrenner's claim that national securitisation is necessary to secure the safest outcomes, and I do feel the piece was broadly successful at arguing that this is a dangerous narrative to propagate. I do think that if you hold Aschenbrenner's assumptions strongly, namely that cooperation is very difficult, alignment is easy-ish, and the most important thing is an American AI lead because this leads to a maximally good future by maximising free expression and political expression, then my argument is not convincing. I do, however, think this model is based on some rather controversial assumptions, and given the dangers involved, it is woefully insufficiently justified by Aschenbrenner in his essay.

One final point is that it is still entirely non-obvious, as I mention in the essay, that national securitisation is the best frame even if a pause is impossible, or, more weakly, even if it is an unstable equilibrium.

Answer by GideonF

Non-consequentialist effective altruism/animal welfare/cause prio/longtermism

I assume this is an accidental misspelling of Quakerism

There seems to be this belief that arthropod welfare is some ridiculous idea only justified by extreme utilitarian calculations, and that loads of EA animal welfare money goes to it at the expense of many other things, and this just seems really wrong to me. Firstly, arthropods hardly get any money at all; they are possibly the most neglected, and certainly amongst the most neglected, areas of animal welfare. Secondly, the argument for arthropod welfare is essentially exactly the same as your classic antispeciesist arguments: there aren't morally relevant differences between arthropods and other animals that justify not equally considering their interests (or, if you want to be non-utilitarian, not equally considering them). Insects can feel pain (or certainly, the evidence is strong enough that they would probably pass the bar of sentience under UK law), and have other sentient experiences, so why would we not care about their welfare? Indeed, non-utilitarian philosophers also take this idea seriously: Christine Korsgaard, one of the most prominent Kantian philosophers today, sees insects as part of the circle of animals that are under moral consideration, and Nussbaum's capabilities approach is restricted to sentient animals, and I think we have good reason to think insects are sentient as well. Many insects seem to have potentially rich inner lives, and have things that go well and badly for them, things they strive to do, feelings of pain etc. What principled reason could we give for their exclusion that wouldn't be objectionably speciesist? Also, all arthropod welfare work at present is about farmed animals; those farmed animals just happen to be arthropods!

Some useful practical ideas that could emerge:

  • Inform what welfare requirements ought to be put into law when farming insects
  • Inform and lobby the insect farming industry to protect these welfare requirements (eg corporate campaigns); do this in a similar way to how decapod welfare research has informed the work of the Shrimp Welfare Project
  • Understand the impacts of pesticides on insect welfare, and use this to lobby for pesticide substitutes
  • Improve the evidence base on insect sentience such that insects can be incorporated into law (although I think the evidence is probably at least as strong as for decapods, which are already recognised as sentient under UK law).

Insect suffering is here now and real, and there are a lot of practical things we could do about it; dismissing it as 'head in the cloud philosophers' seems misguided to me.

I think it's probably important to note that some people (ie me) do in fact think a unilateral pause by one of the major players (eg USA, China, UK, EU) may actually be pretty effective if done in the right way with the right messaging (likely to be useful in pushing towards a coordinated or uncoordinated global pause). Particularly if the US paused, I could very much see this starting a chain reaction.

I think this is untrue with regards to animal protests. My impression is that a decently significant percentage of EA people working on animals have participated in protests.

As another former fellow and research manager (climate change), this seems perhaps a bit of a strange justification.

The infrastructure is here - similar to Moritz's point, whilst Cambridge clearly has very strong AI infrastructure, the comparative advantage of Cambridge over any other location would, at least to my mind, be the fact that it has always been a place of collaboration across different cause areas and of consideration of the intersections and synergies involved (ie through CSER). It strikes me that other locales, such as London (which probably has one of the highest concentrations of AI Governance talent in the world), may in fact have been a better location than Cambridge. I think this idea that Cambridge is best suited to a purely AI focus seems surprising, when many fellows (me included) commented on the usefulness of having people from lots of different cause areas around, and the events we managed to organise (largely due to the Cambridge location) were mostly non-AI yet got good attendance across the cause areas.

Success of AI-safety alumni - similar to Moritz, I remain skeptical of this point (though there is a closely related point which I probably endorse, which I will discuss later). It doesn't seem obvious that, when accounting for career level and for whether participants were currently in education, AI safety actually scores better. Firstly, you have the problem of differing sample sizes. For example, take climate change: there have only been 7 climate change fellows (5 of whom were last summer), and of those, depending on how you judge it, only 3 have been available for job opportunities for more than 3 months after the fellowship, so the sample size is much smaller than for AI Safety and Governance (and they have achieved a lot in that time). It's also, ironically, not clear that the AI Safety and Governance cause areas have been more successful on the metric of 'engaging in AI Safety projects'; for example, 75% of one of the non-AI cause areas' fellows from 2022 are currently employed in, or have PhD offers for, AI XRisk related projects, which seems a similar rate of success to AI in 2022.

I think the bigger thing that acts in favour of making it AI-focused is that it is much easier for junior people to get jobs or internships in AI Safety and Governance than in XRisk-focused work in some other cause areas; there simply are more roles available for talented junior people that are clearly XRisk related. This is clearly one reason to make ERA about AI. However, whilst I mostly buy this argument, it's not 100% clear to me that this means counterfactual impact is higher. Many of the people entering into the AI safety part of the programme may have gone on to fill these roles anyway (I know of something similar to this being the case with a few rejected applicants), or the person over whom they got the role may have been only marginally worse. Whereas, for some of the other cause areas, the participants leaned less XRisk-y by background, so ERA's counterfactual impact may be stronger, although it also may be higher variance. I think on balance this does seem to support the AI switch, but I am by no means sure of this.
