Independent researcher on SRM and GCR/XRisk and on pluralisms in existential risk studies
Reach out to me if you have questions about SRM/Solar geoengineering
It seems that the successful opposition to previous technologies was indeed explicitly against those technologies, so I'm not sure the softening of the message you suggest is actually a good idea. @charlieh943's recent case study into GM crops highlighted some of this (https://forum.effectivealtruism.org/posts/6jxrzk99eEjsBxoMA/go-mobilize-lessons-from-gm-protests-for-pausing-ai - he suggests emphasising the injustice of the technology might be good); anti-SRM activists have been explicitly against SRM (https://www.saamicouncil.net/news-archive/support-the-indigenous-voices-call-on-harvard-to-shut-down-the-scopex-project), anti-nuclear activists are explicitly against nuclear energy, and so on. Essentially, I'm just unconvinced that 'it's bad politics' is supported by the case studies most relevant to AI.
Nonetheless, I think there are useful points here about what concrete demands could look like, who useful allies could be, and what more diversified tactics could look like. Certainly, a call for a moratorium is not the only thing that could be useful in pushing towards a pause. I also take your point that a 'pause' might not be the best message for people to rally behind, although I reject the opposition. Like @charlieh943, I think emphasising injustice may be one good message to rally around. A more general message that 'this technology is dangerous, and the companies making it are dangerous' may also be useful, which I have argued for in the past: https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different
I feel that in a number of areas this post relies on AI being constructed/securitised in ways that seem contradictory to me. (By 'constructed', I refer to how the technology is understood, perceived and anticipated: what narratives it fits into and how we understand it as a social object. By 'securitised', I mean brought into a limited policy discourse centred on national security, one narrowly concerned with combatting an existential threat to the state (roughly, the government, the state's territory and its society), and which justifies extraordinary measures, e.g. mass surveillance or conflict, to do so.)
For example, you claim that hardware would be unlikely to be part of any pause effort, which would imply that AI is constructed as important but not exceptional (perhaps akin to climate change). This construction is also what would allow companies to relocate easily without major issues. You then claim that international tensions and conflict over the pause would be likely, which would imply securitisation thorough enough that breaching the pause is considered such a threat to national security that conflict could be countenanced; exceptional measures to combat the existential threat would then be entirely justified (perhaps akin to nuclear weapons, or even more severe). Many of your claims about what is 'likely' seem to oscillate between these two conditions, which within a single jurisdiction seem unlikely to hold simultaneously. You then need a third construction: AI as a technology powerful and important enough to your country that you would risk conflict with the country that has thoroughly securitised it. Similarly, there must be powerful actors in the paused country who also believe it is a supremely important and useful technology, despite its thorough securitisation (or because of it; I don't wish to portray securitisation as necessarily safe or good! Indeed, the links to military development, which a pause could facilitate, may be very dangerous indeed.)
You may argue back on two points. First, that whilst all these conditions couldn't occur simultaneously, each is plausible. Here I agree, but then the confidence of your language would need to be toned down. Second, that these constructions of AI may differ across jurisdictions, making all of these outcomes likely. This also seems unlikely, as countries influence each other; narratives spread, particularly in an interconnected world and particularly when held by powerful actors. Moreover, if powerful states were anywhere close to risking conflict over this, economic or diplomatic measures would be used first, likely meaning the only countries that would continue development would be those who construct AI as supremely important (the rest would likely give in to the pressure). In a world where the US or China construct the AI pause as a vital matter of national security, middle-ground countries in their orbit allowing its development would not be countenanced.
I'm not saying that a variety of constructions is implausible. Nor am I saying that we necessarily reach the extreme painted in the paragraph above (honestly, that seems unlikely to me; but if we don't, then a pause by global cooperation seems more plausible). Rather, I am suggesting that, as it stands, your 'likely outcomes' are, taken together, very unlikely to happen, as they rely on worlds incompatible with one another.
I think the most likely explanation is that on a post like this the downvote vs disagreevote distinction isn't very strong. It's a list of suggestions, so one would upvote the suggestions one likes most and downvote those one likes least (to influence visibility). If that's the case, I think it's pretty fair, to be honest.
If not, then I can only posit a few potential reasons, but these all seem bad enough to me that I would assume the above is true:
Christine Korsgaard on Kantian approaches to animal welfare / her recent-ish book 'Fellow Creatures'
Some of the scholars who've worked on insect or decapod pain/sentience (Jonathan Birch, Meghan Barrett, Lars Chittka, etc.)
Bob Fischer on comparing interspecies welfare
I think another discussion presenting SRM in the context of GCR might be good; there has now been a decent amount of research on this, which probably proposes actions rather different from what SilverLining presents.
SilverLining is also decently controversial in the SRM community, so some alternative perspectives beyond Kelly's would probably be valuable.
Send me a DM if you're interested; I'd be happy to provide a bunch of resources and to put you in contact with some people who could help.
Sorry to revisit this, and I understand if you don't want to. I must apologise if my previous comments felt a bit defensive on my side; I did feel your statements towards me were untrue, but I now have more clarity on the perspective you've come from and some of the possible baggage brought to this conversation, and I'm truly sorry if I've been ignorant of relevant context.
I think this comment is going to address more the overall conversation between the two of us here and where I perceive it to have gone, although I may be wrong, and I am open to corrections.
Firstly, I think you have assumed this statement is essentially a product of CSER, perhaps because it has come from me, who was a visitor at CSER and has been critical of your work in a way that I know some at CSER have been. [I should say, for the record, that I do think your work is of high quality, and I hope you've never got the impression that I don't. Perhaps some of my criticisms last year of the review process your report went through were of poor quality (I can't remember what they were and may not stand by them today), and if so, I am sorry.] Nonetheless, I think it's really important to keep in mind that this statement is absolutely not a 'CSER' statement. I'd like to remind you of the signatories; whilst not every signatory agrees with everything, I hope you can see why I got so defensive when you claimed that the signatories weren't being transparent and were actually attempting to make EA just another left-wing movement. I tried really hard to get a plurality of voices into this document, which is why such an accusation offended me; but ultimately I shouldn't have got defensive over this, and I must apologise.
Secondly, on that point, I think we may have been talking about different things when you said 'heterodox CSER approaches to EA.' Certainly, I think Ehrlich and much of what he has called for is deeply morally reprehensible, and the capacity for ideas like his to gain ground is a genuine danger of pluralistic x-risk, because it is harder to police which ideas are acceptable (similarly, I have received criticism because this letter fails to call out eugenics explicitly, another danger). Nonetheless, I think we can trust that, as a more pluralistic community develops, it would better navigate where the bounds of acceptable and unacceptable views and behaviours lie, and that this would be better than us simply stipulating them now. Maybe this is a crux on which we/the signatories and much of the comments section disagree. I think we can push for more pluralism and diversity in response to our situation whilst trusting the more pluralistic ERS community to police how far this can go. You disagree, and think we need to lay this out now, as otherwise it will either a) end up with anything goes, including views we find morally reprehensible, or b) mean EA is hijacked by the left. I think the second argument is weaker, particularly because this statement is not about EA but about building a broader field of Existential Risk Studies, although perhaps you see this as a bit of a Trojan horse. I understand I am missing some of the historical context that makes you think it is, but I hope that the signatories list may be enough to show you that I really do mean what I say when I call for pluralism.
I also must apologise if the call to retract certain parts of your comment seemed uncollegial or disrespectful to you; this was certainly not my intention. However, I felt that your painting of my views was incorrect and thought you might, in light of this, be happy to change it. Given that you are not happy to retract, I assume you are trying to make the argument that these are in fact my underlying beliefs (or that I am being dishonest, although I have no reason to suspect you would say that!).
I think there are a few more substantive points we disagree on, but to me this seems like the crux of the more heated discussion, and I must apologise that it got so heated.