I'm a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.
I want to emphasize that this just sets a lower bound on the importance.
E.g. there's a theory that fungal infections are the primary cause of cancer.
How much of chronic fatigue is due to undiagnosed fungal infections? Nobody knows. I know someone with chronic fatigue who can't tell whether it's due in part to a fungal infection. He has elevated mycotoxin levels in his urine, but that might be due to past exposure to a moldy environment. He's trying antifungals, but so far the side effects have prevented him from taking more than a few doses of the two that he has tried.
It feels like we need something more novel than slightly better versions of existing approaches to fungal infections. Maybe something as radical as nanomedicine, but that's not very tractable yet.
It's not obvious that unions or workers will care as much about safety as management. See this post for some historical evidence.
There are many ways to slow AI development, but I'm concerned that it's misleading to label any of them as pauses. I doubt that the best policies will be able to delay superhuman AI by more than a couple of years.
A strictly enforced compute threshold seems like it would slow AI development by a factor of roughly 2 to 4. AI capability progress would continue via distributed training and via improvements in implementation efficiency.
Slowing AI development is likely good if the rules can be enforced well enough. My biggest concern is that laws will be carelessly written, with the result that the most responsible AI labs obey their spirit while the least responsible labs find loopholes to exploit.
That means proposals should focus carefully on imagining ways in which AI labs could evade compliance with the regulations.