A critical failure mode in many discussions of technological risk is the assumption that maintaining the status quo for technology would mean maintaining the status quo for society. Lewis Anslow suggests that this "propensity to treat technological stagnation as safer than technological acceleration" is a fallacy. I agree that it is an important failure of reasoning among some EAs, and want to call it out clearly.
One obvious example of this, flagged by Anslow, is the anti-nuclear movement. It was not an explicitly pro-coal position, but because the pressure for economic growth continued, the result of delaying nuclear technology wasn't less power usage; it was more coal. To the extent that the movement succeeded in its narrow aims, it damaged the environment.
The risk from artificial intelligence systems today is arguably significant, but stopping future progress won't reduce the impact of extant innovations. Stopping where we are today would still leave us with continued problems of mass disinformation assisted by generative AI, and we'll see continued progress towards automating huge parts of modern work even without more capable systems, as the systems which already exist are deployed in new ways. It also seems vanishingly unlikely that the pressure on middle-class jobs, artists, and writers would decrease even if we rolled back the last five years of progress in AI - but we wouldn't have the accompanying productivity gains which could be used to pay for UBI or other programs.
All of that said, the perils of stasis aren't necessarily greater than those of progress. This is not a question of safety versus risk; it is a risk-risk tradeoff. And the balance of that tradeoff is debatable - it is entirely possible to agree that there are significant risks to technological stasis, and even that AI would solve problems, and still debate which strategy to promote: whether it is safer to accelerate through the time of perils, to promote differential technological development, or to shut it all down.
This is a really interesting take, and I agree with many elements. There is one element I want to explore more, and one I'd like to contest.
Firstly, I find a lot of the acceleration vs. deceleration debate to be mostly theoretical and academic - not unlike debating whether it is better to have tides or to stop them and have a still ocean. At the end of the day (four times a day in most places, if we're being pedantic), the tide is still going to do its thing. It's the same with technological progress. Could you make it harder to innovate and improve technology? Yes. But realistically speaking, imposing a pause or a freeze of the status quo in anything approaching an effective manner is just not possible. It's the same issue I had with signing an open letter declaring a freeze: you can get everyone in the nation to sign an open letter saying "Don't commit crimes", but that isn't going to solve the crime problem. But that's a bit of a tangent, and I don't want to hijack your post or your comments with unrelated debate.
Secondly, I think the nuclear and AI debates are quite poor comparisons. Much of this is anecdotal, as I have worked in both industries in a regulatory role. For one, the very high levels of anti-nuclear campaigning and risk aversion have resulted in nuclear energy being a very heavily (and effectively) regulated industry. If it were not for the amount of anti-nuclear sentiment, I don't think we'd have that level of security today, and I think that's partly what makes it so safe. I agree with your point that the risk tradeoff between coal and nuclear is not as clear-cut as may be imagined, but I don't think it supports the core argument very well. Nuclear energy and AI are also very different industries in which to undertake risk reduction, mostly because of the levers of control available through licensing, resources, and capital. However, those levers may themselves be a product of the aforementioned lobbying and the burdensome regulation it produced, so perhaps AI will be similarly easy to regulate in future.
It's also very possible that I'm misinterpreting your point, so please do let me know if that's the case.
Ultimately I agree with your core point that this is a fallacy seen in much AI Safety reasoning, and that even stopping now would be shutting the stable door after the horse has bolted. But I think there is a middle ground, where slowing the pace of improvement and adding safeguards is a good way to lessen risk - and I actually think nuclear energy is a good example of this, rather than a poor one.
No worries, there was always a chance I was misinterpreting the claim in that section. Happy for us to skip that.
For my second section, I was talking more about stasis in the fuller sense, i.e. a pause on innovation in certain areas. Some are asking for full stasis for a period of time in the name of safety, others for a slow-down. I agree that safe stasis is a fallacy for the reasons I outlined, and agree with most of your points - particularly that everything is a risk-risk tradeoff. I'm not entirely sold on the plausibility of slowdowns or pauses from a logistical deployment perspective, which is where I think I got bogged down in the weeds in my response there.