Gideon Futerman

1723 karma · Joined Dec 2021



Independent researcher on SRM and GCR/XRisk and on pluralisms in existential risk studies

How I can help others

Reach out to me if you have questions about SRM/Solar geoengineering


I feel that in a number of areas this post relies on AI being constructed/securitised in ways that seem contradictory to me. (By 'constructed', I am referring to the way the technology is understood, perceived and anticipated, what narratives it fits into and how we understand it as a social object. By 'securitised', I mean brought into a limited policy discourse centred on national security, one that justifies the use of extraordinary measures (e.g. mass surveillance or conflict) and is concerned narrowly with combatting the existential threat to the state, which is roughly equal to the government, the state's territory and its society.)

For example, you claim that hardware would be unlikely to be part of any pause effort, which would imply that AI is constructed as important, but not necessarily exceptional (perhaps akin to climate change). This is also likely what would allow companies to relocate easily without major issues. You then claim it is likely that international tensions and conflict would occur over the pause, which would imply securitisation so thorough that breaching the pause would be considered enough of a threat to national security that conflict could be countenanced; exceptional measures to combat the existential threat would then be entirely justified (perhaps akin to nuclear weapons, or even more severe). Many of your claims about what is 'likely' seem to oscillate between these two conditions, which within a single jurisdiction seem unlikely to occur simultaneously. You then need a third construction of AI: a technology powerful and important enough to your country that you would risk conflict with the country that has thoroughly securitised it. Similarly, there must be powerful actors in the paused country who also believe it is a very important technology that can be very useful, despite its thorough securitisation (or because of it; I don't wish to suggest securitisation is necessarily safe or good! Indeed, the links to military development, which could be facilitated by a pause, may be very dangerous indeed).

You may argue back on two points. First, that whilst all the outcomes couldn't occur simultaneously, they are all plausible. Here I agree, but then the confidence of your language would need to be toned down. Secondly, that these different constructions of AI may differ across jurisdictions, meaning that all of these outcomes are likely. This also seems unlikely, as countries are influenced by each other; narratives do spread, particularly in an interconnected world and particularly if they are held by powerful actors. Moreover, if powerful states came anywhere close to risking conflict over this, other economic or diplomatic measures would be utilised first, likely meaning the only countries that would continue developing AI would be those who construct it as supremely important (those who didn't would likely give in to the pressure). In a world where the US or China constructed the AI pause as a vital matter of national security, middle-ground countries in their orbit allowing its development would not be countenanced.

I'm not saying a variety of constructions is implausible. Nor am I saying that we necessarily fall to the extreme painted in the above paragraph (honestly this seems unlikely to me, but if we don't, then a pause by global cooperation seems more plausible). Rather, I am suggesting that your 'likely outcomes', taken together, are very unlikely to happen, as they rely on different worlds from one another.

I think the most likely thing is that on a post like this the downvote vs disagree-vote distinction isn't very strong. It's a list of suggestions, so one would upvote the suggestions one likes most and downvote those one likes least (to influence visibility). If this is the case, I think that's pretty fair, to be honest.

If not, then I can only posit a few potential reasons, but these all seem bad enough to me that I would assume the above is true:

  • People think 80K platforming people who think climate change could contribute to XRisk would be actively harmful (eg by distracting people from more important problems)
  • People think 80K platforming Luke (due to his criticism of EA- which I assume they think is wrong or bad faith) would be actively harmful, so it shouldn't be considered
  • People think having a podcast specifically talking about what EA gets wrong about XRisk would be actively harmful (perhaps it would turn newbies off, so we shouldn't have it)
  • People think Luke is trolling, because they think there is no chance that 80K would platform him (this would feel very uncharitable towards 80K imo)

Christine Korsgaard on Kantian Approaches to animal welfare/ about her recent-ish book 'Fellow Creatures'

Some of the scholars who've worked on insect or decapod pain/sentience (Jonathan Birch, Meghan Barrett, Lars Chittka, etc.)

Bob Fischer on comparing interspecies welfare

I think another discussion presenting SRM in the context of GCR might be good; there has now been a decent amount of research on this, which probably proposes actions rather different from those SilverLining presents.

SilverLining is also decently controversial in the SRM community, so some alternative perspectives would probably be better than Kelly's.

Send me a DM if you're interested; I'd be happy to provide a bunch of resources and to put you in contact with some people who could help.

Hi John,

Sorry to revisit this, and I understand if you don't want to. I must apologise if my previous comments felt a bit defensive on my side; I do feel your statements about me were untrue, but I now have more clarity on the perspective you've come from and some of the possible baggage brought to this conversation, and I'm truly sorry if I've been ignorant of relevant context.

I think this comment is going to address the overall conversation between the two of us here and where I perceive it to have gone, although I may be wrong, and I am open to corrections.

Firstly, I think you have assumed this statement is essentially a product of CSER, perhaps because it has come from me, who was a visitor at CSER and has been critical of your work in a way that I know some at CSER have been. [I should say, for the record, that I do think your work is of high quality, and I hope you've never got the impression that I don't. Perhaps some of my criticisms last year of the review process your report went through were of poor quality (I can't remember what they were and may not stand by them today), and if so, I am sorry.] Nonetheless, I think it's really important to keep in mind that this statement is absolutely not a 'CSER' statement. I'd like to remind you of the signatories, and whilst not every signatory agrees with everything, I hope you can see why I got so defensive when you claimed that the signatories weren't being transparent and were actually attempting to make EA just another left-wing movement. I tried really hard to get a plurality of voices into this document, which is why such an accusation offended me, but ultimately I shouldn't have got defensive over this, and I must apologise.

Secondly, on that point, I think we may have been talking about different things when you said 'heterodox CSER approaches to EA.' Certainly, I think Ehrlich and much of what he has called for is deeply morally reprehensible, and the capacity for ideas like his to gain ground is a genuine danger of pluralistic xrisk, because it is harder to police which ideas are acceptable (similarly, I have received criticism because this letter fails to call out eugenics explicitly, another danger). Nonetheless, I think we can trust that as a more pluralistic community develops, it would better navigate where the bounds of acceptable and unacceptable views and behaviours lie, and that this would be better than us simply stipulating them now. Maybe this is a crux on which we/the signatories and much of the comments section disagree. I think we can push for more pluralism and diversity in response to our situation whilst trusting that the more pluralistic ERS community will police how far this can go. You disagree, and think we need to lay this out now, otherwise it will either a) end up with anything goes, including views we find morally reprehensible, or b) mean EA is hijacked by the left. I think the second argument is weaker, particularly because this statement is not about EA but about building a broader field of Existential Risk Studies, although perhaps you see this as a bit of a Trojan horse. I understand I am missing some of the historical context that makes you think it is, but I hope that the signatories list may be enough to show you that I really do mean what I say when I call for pluralism.

I also must apologise if the call to retract certain parts of your comment seemed uncollegial or disrespectful to you; this was certainly not my intention. However, I felt that your painting of my views was incorrect, and thought you might, in light of this, be happy to change them. Given you are not happy to retract, I assume you are arguing that these are in fact my underlying beliefs (or that I am being dishonest, although I have no reason to suspect you would say this!).

I think there are a few more substantive points we disagree on, but to me this seems like the crux of the more heated discussion, and I must apologise that it got so heated.

In response to your first point, I think one of the hopes of creating a pluralistic xrisk community is that different parts of the community would actually understand what work each is doing and what perspectives each holds, rather than caricaturing/misrepresenting them (for example, I've heard people outside EA assume all EA xrisk work is basically just what Bostrom says) or simply not knowing what others have to say. Ultimately, I think the workshop that this statement came out of did this really well, and so I hope that if there is desire to move towards a more pluralistic community (which, perhaps judging from this forum, there isn't), then we would better understand each other's perspectives and why we disagree, and gain value from this disagreement. One example: I think I personally have gained huge value from my discussions with John Halstead on climate, and from really trying to understand his position.

I agree with the last paragraph, and it is definitely a tension we will have to try to resolve over time. This is one of the reasons we wrote that "we suggest that the power to confer support for different approaches should be distributed among the community rather than allocated by a few actors and funders, as no single individual can adequately manifest the epistemic and ethical diversity we deem necessary", which would hopefully go some way towards making sure that more forms of pluralism can assert themselves. Obviously, though, this won't be perfect, and we will have to create spaces where voices that may previously not have been heard, because they don't have the money or aren't loud and assertive, do get heard; this will be hard, and will definitely be difficult for someone like me, who is clearly quite loud and likes to get my opinion out there.

NB: I would also like to comment (and I really don't want to be antagonistic towards John, as I do deeply respect him) that his representation of 'CSER-type heterodoxy', or at least how he's framed it with his two chief examples being me and Luke, seems to me to be a misrepresentation. I know this may be arguing back too much, but given he's said I believe something I don't, I think it's important to set the record straight (I'd hope it's unintentional, although we have actually spoken a lot about my views).
