
GideonF

1793 karma · Joined Dec 2021

Bio


Independent researcher on SRM and GCR/XRisk and on pluralisms in existential risk studies

How I can help others

Reach out to me if you have questions about SRM/Solar geoengineering

Comments (173)

As another former fellow and research manager (climate change), this seems like a somewhat strange justification to me.

The infrastructure is here - similar to Moritz's point, whilst Cambridge clearly has very strong AI infrastructure, the comparative advantage of Cambridge over any other location would, at least to my mind, be the fact that it has always been a place of collaboration across different cause areas and of consideration of the intersections and synergies involved (i.e. through CSER). It strikes me that other locales, such as London (which probably has one of the highest concentrations of AI governance talent in the world), may in fact have been a better location than Cambridge for a purely AI-focused programme. The idea that Cambridge is best suited to purely AI work is surprising, given that many fellows (me included) commented on the usefulness of having people from lots of different cause areas around, and the events we managed to organise (largely thanks to the Cambridge location) were mostly non-AI yet got good attendance across the cause areas.

Success of AI-safety alumni - similar to Moritz, I remain skeptical of this point (there is a closely related point which I probably do endorse, which I will discuss later). It doesn't seem obvious that, when accounting for career level and whether participants were currently in education, AI safety actually scores better. Firstly, you have the problem of differing sample sizes. Take climate change, for example: there have only been 7 climate change fellows (5 of whom were last summer), and of those (depending on how you judge it) only 3 have been available for job opportunities for more than 3 months after the fellowship, so the sample size is much smaller than for AI safety and governance (and they have achieved a lot in that time). It's also, ironically, not clear that the AI safety and governance cause areas have been more successful on the metric of 'engaging in AI safety projects'; for example, 75% of one of the non-AI cause areas' fellows from 2022 are currently employed in, or have PhD offers for, AI XRisk-related projects, which seems a similar rate of success to AI in 2022.

I think the bigger thing that acts in favour of making it AI-focused is that it is much easier for junior people to get jobs or internships in AI safety and governance than in XRisk-focused work in some other cause areas; there are simply more roles available for talented junior people that are clearly XRisk-related. This is clearly one reason to make ERA about AI. However, whilst I mostly buy this argument, it's not 100% clear to me that this means counterfactual impact is higher. Many of the people entering the AI safety part of the programme may have gone on to fill these roles anyway (I know of something similar to this being the case with a few rejected applicants), or the person they got the role over may have been only marginally worse. Whereas, for some of the other cause areas, the participants leaned less XRisk-y by background, so ERA's counterfactual impact may be stronger, although it also may be higher variance. On balance, I think this does support the AI switch, but by no margin am I sure of it.

It seems that the successful opposition to previous technologies was indeed explicitly against those technologies, so I'm not sure the softening of the message you suggest is necessarily a good idea. @charlieh943's recent case study into GM crops highlighted some of this (https://forum.effectivealtruism.org/posts/6jxrzk99eEjsBxoMA/go-mobilize-lessons-from-gm-protests-for-pausing-ai - he suggests emphasising the injustice of the technology might be good); anti-SRM activists have been explicitly against SRM (https://www.saamicouncil.net/news-archive/support-the-indigenous-voices-call-on-harvard-to-shut-down-the-scopex-project), anti-nuclear activists are explicitly against nuclear energy, and there are many more examples. Essentially, I'm just unconvinced that 'it's bad politics' is supported by the case studies most relevant to AI.

Nonetheless, I think there are useful points here about what concrete demands could look like, who useful allies could be, and what more diversified tactics could look like. Certainly, a call for a moratorium is not the only thing that could be useful in pushing towards a pause. I also think you make a fair point that a 'pause' might not be the best message for people to rally behind, although I reject the opposition to a pause itself. In a similar way to @charlieh943, I think that emphasising injustice may be one good message to rally around. A more general message that 'this technology is dangerous and allowing companies to make it is dangerous' may also be a useful rallying point, which I have argued for in the past: https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different

I feel that in a number of areas this post relies on AI being constructed/securitised in ways that seem contradictory to me. (By constructed, I am referring to the way the technology is understood, perceived and anticipated, what narratives it fits into and how we understand it as a social object. By securitised, I mean brought into a limited policy discourse centred on national security, one that justifies the use of extraordinary measures (e.g. mass surveillance or conflict) and is concerned narrowly with combatting the existential threat to the state, which is roughly equal to the government, the state's territory and its society.)

For example, you claim that hardware would be unlikely to be part of any pause effort, which would imply that AI is constructed as important, but not necessarily exceptional (perhaps akin to climate change). This is also what would likely allow companies to relocate without major issues. You then claim that international tensions and conflict would likely occur over the pause, which would imply securitisation so thorough that breaching the pause would be considered enough of a threat to national security that conflict could be countenanced; in that case, exceptional measures to combat the existential threat are entirely justified (perhaps akin to nuclear weapons, or even more severe). Many of your claims of what is 'likely' seem to oscillate between these two conditions, which in a single jurisdiction seem unlikely to hold simultaneously. You then need a third construction of AI as a technology powerful and important enough to your country that you would risk conflict with the country that has thoroughly securitised it. Similarly, there must be powerful elements in the paused country that also believe it is a supremely important and useful technology, despite its thorough securitisation (or because of it; I don't wish to portray securitisation as necessarily safe or good! Indeed, the links to military development, which could be facilitated by a pause, may be very dangerous indeed).

You may argue back on two points. First, that whilst all the points couldn't occur simultaneously, they are all plausible; here I agree, but then the confidence in your language would need to be toned down. Second, that these constructions of AI may differ across jurisdictions, meaning that all of these outcomes are likely. This also seems unlikely, as countries influence each other; narratives do spread, particularly in an interconnected world and particularly when they are held by powerful actors. Moreover, if powerful states were anywhere close to risking conflict over this, other economic or diplomatic measures would be utilised first, likely meaning the only countries that would continue to develop AI would be those who construct it as supremely important (those who didn't would likely give in to the pressure). In a world where the US or China construct an AI pause as a vital matter of national security, middle-ground countries in their orbit allowing its development would not be countenanced.

I'm not saying a variety of constructions are not plausible. Nor am I saying that we would necessarily reach the extreme painted in the above paragraph (honestly, this seems unlikely to me, but if we don't, then a pause by global cooperation seems more plausible). Rather, I am suggesting that, as it stands, your 'likely outcomes' are, taken together, very unlikely to happen, as they rely on different worlds from one another.

I think the most likely thing is that on a post like this the downvote vs disagreevote distinction isn't very strong. It's a thread of suggestions, so one would upvote the suggestions one likes most and downvote those one likes least (to influence visibility). If this is the case, I think that's pretty fair, to be honest.

If not, then I can only posit a few potential reasons, but these all seem bad enough to me that I would assume the above is true:

  • People think 80K platforming people who think climate change could contribute to XRisk would be actively harmful (eg by distracting people from more important problems)
  • People think 80K platforming Luke (due to his criticism of EA, which I assume they think is wrong or made in bad faith) would be actively harmful, so it shouldn't be considered
  • People think having a podcast specifically talking about what EA gets wrong about XRisk would be actively harmful (perhaps it would turn newbies off, so we shouldn't have it)
  • People think the suggestion of Luke is trolling because they think there is no chance that 80K would platform him (this would feel very uncharitable towards 80K imo)
Answer by GideonF · Sep 13, 2023

Christine Korsgaard on Kantian approaches to animal welfare / her recent-ish book 'Fellow Creatures'

Answer by GideonF · Sep 13, 2023

Some of the scholars who've worked on insect or decapod pain/sentience (Jonathan Birch, Meghan Barrett, Lars Chittka, etc.)

Bob Fischer on comparing interspecies welfare

Answer by GideonF · Sep 13, 2023

Luke Kemp on:

  • Climate Change and Existential Risk
  • The role of Horizon Scans in Existential Risk Studies
  • His views on what EA gets wrong about XRisk
  • Deep Systems Thinking and XRisk.

Alternatively, for another climate change and XRisk episode that would be narrower and less controversial/critical of EA than Luke's, Constantin Arnscheidt would be a good guest.

I think another discussion presenting SRM in the context of GCR might be good; there has now been a decent amount of research on this which probably proposes actions rather different from what SilverLining presents.

SilverLining is also fairly controversial in the SRM community, so some alternative perspectives would probably be better than Kelly's.

Send me a DM if you're interested; I'd be happy to provide a bunch of resources and to put you in contact with some people who could help.

Hi John,

Sorry to revisit this, and I understand if you don't want to. I must apologise if my previous comments felt a bit defensive on my side; I do feel your statements towards me were untrue, but I think I now have more clarity on the perspective you've come from and some of the possible baggage brought to this conversation, and I'm truly sorry if I've been ignorant of relevant context.

I think this comment is more going to address the overall conversation between the two of us on here, and where I perceive it to have gone, although I may be wrong, and I am open to corrections.

Firstly, I think you have assumed this statement is essentially a product of CSER, perhaps because it has come from me, who was a visitor at CSER and has been critical of your work in a way that I know some at CSER have. [I should say, for the record, that I do think your work is of high quality, and I hope you've never got the impression that I don't. Perhaps some of my criticisms last year of the review process your report went through were poorly made (I can't remember exactly what they were and may not stand by them today), and if so, I am sorry.] Nonetheless, I think it's really important to keep in mind that this statement is absolutely not a 'CSER' statement; I'd like to remind you of the signatories, and whilst not every signatory agrees with everything, I hope you can see why I got so defensive when you claimed that the signatories weren't being transparent and were actually attempting to just make EA another left-wing movement. I tried really hard to get a plurality of voices in this document, which is why such an accusation offended me, but ultimately I shouldn't have got defensive over it, and I must apologise.

Secondly, on that point, I think we may have been talking about different things when you said 'heterodox CSER approaches to EA.' Certainly, I think Ehrlich and much of what he has called for is deeply morally reprehensible, and the capacity for ideas like his to gain ground is a genuine danger of pluralistic XRisk, because it is harder to police which ideas are acceptable (similarly, I have received criticism because this letter fails to call out eugenics explicitly, another danger). Nonetheless, I think we can trust that as a more pluralistic community develops it will better navigate where the bounds of acceptable and unacceptable views and behaviours lie, and that this would be better than us simply stipulating them now. Maybe this is a crux that we/the signatories and much of the comments section disagree on. I think we can push for more pluralism and diversity in response to our situation whilst trusting that the more pluralistic ERS community will police how far this can go. You disagree and think we need to lay this out now, otherwise it will either a) end up with anything goes, including views we find morally reprehensible, or b) mean EA is hijacked by the left. I think the second argument is weaker, particularly because this statement is not about EA but about building a broader field of Existential Risk Studies, although perhaps you see this as a bit of a trojan horse. I understand I am missing some of the historical context that makes you think it is, but I hope that the signatories list may be enough to show you that I really do mean what I say when I call for pluralism.

I also must apologise if the call to retract certain parts of your comment seemed uncollegial or disrespectful to you; this was certainly not my intention. I felt, however, that your painting of my views was incorrect, and thought you might, in light of this, be happy to change it; given you are not happy to retract, I assume you are arguing that these are in fact my underlying beliefs (or that I am being dishonest, although I have no reason to suspect you would say that!).

I think there are a few more substantive points we disagree on, but to me this seems like the crux of the more heated discussion, and I must apologise that it got so heated.
