I'm a computational physicist, and I generally donate to global health. I'm skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
Hey, welcome to the EA forum! I hope you stick around.
I pretty much agree with this post. The argument put forward by AI risk doomers is generally flimsy and weak. Its core weaknesses are unrealistic assumptions about what AGI would actually be capable of, given the limitations of computational complexity and the physical difficulty of technological advancement, and a lack of justification for assuming AI will be a fanatical utility-function maximiser. I think the chances of human extinction from AI are extremely low, and that estimates around here are inflated by subtle groupthink, poor probabilistic treatment of speculative events, and a few straight-up wrong ideas that were made up a long time ago and have not been updated sufficiently for the latest developments in AI.
That being said, AI advancements could have a significant effect on the world. I think it's fairly likely that if AI is misused, there may be a body count, perhaps a significant one. I don't think it's a bad idea to be proactive and think ahead about how to manage the risks involved. There is a middle ground between no regulation and bombing data centers.
This might be stating the obvious, but this article is not a balanced accounting of the positive and negative effects of the effective altruism movement. It's purely a list of "the good stuff", with only FTX and the OpenAI mess mentioned as "the bad stuff".
For example, the article leaves out EA's part in helping get OpenAI off the ground, which many in AI think was a big mistake, and which I believe has already caused a notable amount of real-world harm.
It also leaves out the unhealthy, cultish experiences at Leverage, the alleged abuse of power at Nonlinear, and the various miniature cults of personality that led to extremely serious bad outcomes, as well as the recent scandals over sexual harassment and racism.
It's also worth pointing out that in a counterfactual world without EA, a lot of people would still be donating and doing good work. Perhaps a pure GiveWell-style movement would have formed, focusing on evidence-based global health alone without the extreme utilitarianism and other weird stuff, and would have saved even more lives.
This is not to say that EA has not been overall good for the world. I think EA has done a lot of good, and we should be proud of our achievements. But EA is not as good as it could be, and fixing that starts with honest and good-faith critique of its flaws. I'll admit you won't find a lot of that on Twitter, though.
I doubt we as humans can make any strong statements about what such a machine can or can't do
Yes, actually, we can. It can't move faster than the speed of light. It can't create an exact simulation of my brain with no brain scan. It can't invent working nanotechnology without a lab and a metric shit-ton of experimentation.
Intelligence is not fucking magic. Being very smart does not give you a bypass to the laws of physics, or logistics, or computational complexity.
Nuclear warheads require humans to push the button. Engineered pandemics face a tradeoff: highly deadly diseases will burn themselves out before killing everyone, and highly transmissible diseases are not as deadly. Merely killing 95% of humanity would not be enough to defeat us. The AI needs electricity; we don't.
You will not be able to shut down AI development with such incredibly weak arguments and no supporting evidence.
I am all for safety and research. But if you want to advocate for drastic action, you need to actually make a case for it. And that means not handwaving away the obvious questions, like "how on earth could an AI kill everyone, when everyone has a pretty high interest in not being killed and is willing to take drastic action to prevent it?"
An unsafe AGI can kill far, far more than even the worst air accident. It can kill more conscious beings than train crashes, shipwrecks, terror attacks, pandemics, and even nuclear wars combined. It can kill every sentient being on Earth and render the planet permanently uninhabitable by any biological lifeforms. AI (and more specifically AGI/ASI) could also find a way to leave planet Earth, eventually consuming other sentient beings in different star systems, even in the absence of superluminal travel.
A lot of people say this, but I have never seen any compelling evidence to back this claim up. To be clear, I'm referring to the claim that an AI could achieve this in a short amount of time without being noticed and stopped.
As far as I know, not a single big-name AI researcher, not even among the AI-safety concerned, believes in FOOM (a nigh-unbounded intelligence explosion). I have looked extensively at molecular nanotech research, and I do not believe it can be invented in a short amount of time by a non-godlike AI.
Without molecular nanotech, I do not see a reliable path for an AGI to defeat humanity. Every other method appears to me to be heavily luck-based.
One point that hasn't been mentioned: GCRs may be many, many orders of magnitude more likely than extinction events. For example, it's not hard to imagine a super-deadly virus that kills 50% of the world's population, but a virus that manages to kill literally everyone, including people hiding out in bunkers, remote villages, and Antarctica, doesn't make much sense: if it were that lethal, it would probably burn itself out before reaching everyone.
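To make the "burns itself out" intuition concrete, here's a minimal toy SIR-style sketch (every parameter here is invented purely for illustration, not a real epidemiological model): once fatal cases drop out of circulation quickly enough, pushing the fatality rate toward 100% actually shrinks both the share of people ever infected and the total death toll.

```python
# Toy daily-step SIR model with a case fatality rate. All numbers are made up;
# the point is only the qualitative tradeoff between lethality and spread.

def epidemic_outcome(r0: float, fatality: float, fatal_cutoff: float = 4.0):
    """Return (fraction ever infected, fraction killed) from a crude simulation.

    r0           -- secondary infections per case if nobody died early
    fatality     -- probability an infected person dies
    fatal_cutoff -- fatal cases transmit for 1/fatal_cutoff as long as survivors
    """
    n = 1_000_000.0
    s, i, removed = n - 1.0, 1.0, 0.0
    # Fatal cases infect fewer people because they drop out of circulation sooner.
    effective_r0 = r0 * ((1 - fatality) + fatality / fatal_cutoff)
    gamma = 0.2                      # cases stop transmitting after ~5 days on average
    beta = effective_r0 * gamma      # daily transmission rate
    for _ in range(3650):            # step one day at a time, up to ten years
        new_inf = beta * i * s / n
        new_rem = gamma * i
        s, i, removed = s - new_inf, i + new_inf - new_rem, removed + new_rem
        if i < 1e-6:                 # outbreak has burned out
            break
    attack_rate = removed / n
    return attack_rate, attack_rate * fatality

for cfr in (0.1, 0.5, 0.9, 0.99):
    infected, dead = epidemic_outcome(3.0, cfr)
    print(f"fatality {cfr:.0%}: ~{infected:.0%} ever infected, ~{dead:.0%} of population killed")
```

With these toy numbers, a moderately lethal virus kills a large fraction of the population, while a near-100%-lethal one never takes off at all, which is the asymmetry between "GCR" and "literal extinction" I'm gesturing at.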
I don't think it would be very surprising if a giant pile of linear algebra calculations figured out how to do linear algebra calculations.
As another published academic, I'll add another downside and another upside:
My downside is that the tone of academic articles tends to be incredibly dry. Humour and a conversational tone are not unheard of, but they are generally frowned upon, and that can make papers a bummer to read and write. Casual audiences will be less likely to read your work as a result. To remedy this, it might be worth writing a summary post of your work that is more accessible to general audiences.
My upside is that peer review really forces you to engage with the existing literature on a subject. Yes, this is often time-consuming and painful (which is why most people wouldn't do it otherwise), but it a) forces you to back up your claims, and b) forces you to check what's actually been done before. EA (and especially Rationalists) can have a bad habit of not-invented-here syndrome, reinventing the wheel when very smart people have already spent years working on a subject.
It gets paid back as well: the next time an academic looks at the same subject, they are forced to consider your research and perspective, and may add to or expand on it in a way you never thought to.
The vote system is explained here. Theoretically a strong upvote from a power user could be worth +16, although I think the maximum anyone has actually reached is +10.
I think the system is kinda weird (although it benefits me), but it's better now that the agreevotes are counted equally.
Mousing over the original comment, it currently has 69 votes, which have somehow managed to average out to a karma of 1. Seems to have split the crowd exactly evenly.
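For intuition, here's one purely hypothetical breakdown (made-up vote weights, not the actual data, which isn't visible) showing how 69 weighted votes could net out to a karma of exactly 1:

```python
# Hypothetical illustration only: with weighted votes, a large crowd of
# voters can cancel out almost exactly.
upvotes   = [2] * 30 + [5] * 4               # 34 voters contributing +80 karma
downvotes = [-2] * 30 + [-1] * 3 + [-8] * 2  # 35 voters contributing -79 karma
all_votes = upvotes + downvotes
print(len(all_votes), sum(all_votes))        # -> 69 voters, net karma of 1
```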
Yeah, I think you ended up asking "would it be good for a lot of people to share our values", instead of "should we try to actively recruit tons of people to our specific community"
I'm allowing for the possibility that we hit another AI winter, and the new powerful technology just doesn't arrive in our lifetime. Or that the technology is powerful for some things, but remains too unreliable for use in life-critical situations and is kept out of them.
I think it's likely that AI will have at least an order of magnitude or two greater body count than it has now, but I don't know how high it will be.