Yeah, I think there’s a big difference between how Republican voters feel about it and how their elites do. Romney is, uhh, not representative of most elite Republicans, so I’d be cautious there
Do we have any idea how Republican elites feel about AI regulation?
This seems like the biggest remaining question mark that will determine how much AI regulation we get. It's basically guaranteed that Republicans will have to agree to any AI regulation legislation, and Biden can't do too much without funding from legislation. Also, there's a very good chance Trump wins next year and will control executive AI safety regulation.
Politics is really important, so thank you for recognizing that and adding to the discussion about Pause.
But this post confuses me. You start by talking about how protests are stronger when they are centered on something people care about rather than simply on policy advocacy. Which, I don't know if I agree with, but it's an argument you can make. But then you shift toward advocating for regulation rather than a Pause. Which is also just policy advocacy, right? And I don't understand why you'd expect it to have better politics than a Pause. Your point about needing companies to prove they are safe is pretty much the same point that Holly Elmore has been making, and I don't know why it applies better to regulation than to a Pause.
Reading this great thread on SBF's bio, it seems like his main problem was stimulants wrecking his brain. He was absurdly overconfident in everything he did, did not think things through, didn't sleep, and admitted to being deficient in empathy ("I don't have a soul"). Much has been written about deeper topics like naive utilitarianism and trust in response to SBF, but I wonder if the main problem might just be the drug culture that exists in certain parts of EA. Stimulants should be used with caution, and a guy like SBF probably should never have been using them, or at least nowhere near the amount he was taking.
I think the judgement calls involved in coming up with moral weights have less to do with caring about animals and more to do with how much you think attributes like intelligence and self-awareness bear on sentience. They're applied to animals, but I think they're really more neuroscience/philosophy intuitions. The people with the strongest/most out-of-the-ordinary intuitions here are MIRI folk, not animal lovers.
Yeah, I guess that makes sense. But uh... have other institutions actually made large efforts to preserve such info? Which institutions? Which info?
This might be a dumb question, but shouldn't we be preserving more elementary resources for rebuilding a flourishing society? Current EA is really only meaningful in a society with resources abundant enough for people to go into nonprofit work. It feels like there are bigger priorities in the case of sub-x-risk catastrophes.
I don't think the points about timelines reflect an accurate model of how AI regulations and guardrails are actually developed. What we need is for Congress to pass a law ordering some department within the executive branch to regulate AI, e.g. by developing permitting requirements or creating guidelines for legal AI research or whatever. Once this is done, the specifics of how AI is regulated are mostly up to the executive branch, and they can and will change over time.
Because of this, it is never "too soon" to order the regulation of AI. We may not know exactly what the regulations should look like, but those specifics are very unlikely to be written into law anyway. What we want right now is to create mechanisms to develop and enforce safety standards. Similar arguments apply to internal safety standards at companies developing AI capabilities.
It seems really hard for us to know exactly when AGI (or ASI, or whatever you want to call it) is actually imminent. Even if that were possible, however, I just don't think last-minute panicking about AGI would actually accomplish much. It's all but impossible to quickly create societal consensus that the world is about to end before any harm has actually occurred. I feel like there's an unrealistic image of "we will panic and then everyone will agree to immediately stop AI research" implicit in this post. The smart thing to do is to develop mechanisms early and then use them when we get closer to crunch time.
If OpenPhil’s allocation is really so dependent on moral weight numbers, you should be spending significant money on research in this area, right? Are you doing this? Do you plan on doing more of this given the large divergence from Rethink’s numbers?