I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
Hey, great post. I mostly agree with your points here, and agree that an intelligence explosion is incredibly unlikely, especially anytime soon.
I'm not too sure about the limits-of-algorithms point: my impression is that current AI architectures are incredibly data-inefficient compared to humans. So even if we hit the limit with current architectures, there's still room to invent new algorithms that do better.
I'm interested in your expertise as a computational neuroscientist: do you think there are any particular insights from that field that are applicable to these discussions?
Thanks for the comments! I didn't want to put estimates on the likelihood of each scenario, just to point out that they make more sense than a traditional paperclipper scenario. The chance of EA ending the world is extremely low, but if you consider who might have the means, motive, and opportunity to carry out such a task, I think EAers are surprisingly high up the list, after national governments and greedy corporations.
I don't feel qualified to speculate too much about future AI models or LLMs vs RL. None of the current models have shown any indication of fanaticism, so there doesn't seem to be much reason for that to change just by pumping more computing power into them.
My read on this so far is that low estimates for P(doom|AGI) are either borne of ignorance of what the true difficulties in AI alignment are; stem from wishful thinking / a lack of security mindset; or are a social phenomenon where people want to sound respectable and non-alarmist; as opposed to being based on any sound technical argument.
After spending a significant amount of my own free time writing up technical arguments that AI risk is overestimated, I find it quite annoying to be told that my reasons must be secretly based on social pressure. No, I just legitimately think you're wrong, as do a huge number of other people who have been turned away from EA by dismissive attitudes like this.
If I had to state only one argument (there are very many) that P(doom|AGI) is low, it'd be the following.
Conquering the world is really really really hard.
Conquering the world starting from nothing is really, really, really, ridiculously hard.
Conquering the world, starting from nothing, when your brain is fully accessible to your enemy for your entire lifetime of plotting, is stupidly, ridiculously, insanely hard.
Every time I point this basic fact out, the response is a speculative science fiction story, or an assertion that "a superintelligence will figure something out". But nobody actually knows the capabilities of this invention that doesn't exist yet. I have seen zero convincing arguments that such a takeover would actually be feasible.
Why is "it will be borderline omnipotent" being treated as the default scenario? No invention in the history of humanity has been that perfect, especially early on. No intelligence in the history of the universe has been that flawless. Can you really be 90% sure that
I think that just as the risks of AGI are overstated, so too are the potential benefits. Don't get me wrong, I expect it would still be revolutionary and incredible, just not magical.
Going from tens of thousands of biomedical researchers to hundreds of millions would definitely greatly speed up medical research... but I think you would run into diminishing returns, as the limiting bottleneck is often not the number of researchers. For example, coming up with the COVID vaccine took barely any time at all, but it took years to get it out, due to the need for human trials and for actually manufacturing and distributing the thing.
I still think there would be a massive boost, but perhaps not a "jump in forward a century" one. It's hard to predict exactly what the shortcomings of AGI will be, but there has never been a technology that lacked shortcomings, and I don't think AGI will be the exception.
I'm seeing an argument against dogmatically enforcing particular sub-branches of feminism, but that is not at all what the OP has suggested. Being open to feminism means being open to a variety of opinions within feminism.
What stance do you actually want EA to take when it comes to this issue? Do you want to shun feminist scholars, or declare their opinions to be unworthy of serious thought?
There might be differences between identifying with feminism and 'being open to scholars of feminism, queer studies and gender studies', though. Most Americans probably aren't familiar enough with academia to know of its latest thinking.
We are either open to feminist scholarship or we are not. Do you think that if EA openly declared itself hostile to scholars of feminism, most self-described feminists would not be annoyed or alienated, at least a little bit? This seems rather unlikely.
There's a similarly large gap between scholars of conservatism and the average conservative. If EA declared that conservative scholars were not welcome, do you think the average conservative would be fine with it?
I definitely agree with your point that EA should avoid becoming an elite haven, and should be checking to ensure that it is not needlessly exclusionary.
However, I'm not sure that your equation of feminism with the professional managerial class actually holds up. According to this poll, 61% of American women identify with feminism "very well" or "somewhat well", including 54% of women without college degrees and 42% of women who lean Republican. This is very far from being a solely elite thing!
If you want to be welcoming to women, you have to be welcoming to feminists. That doesn't mean cancelling or excluding people over terminology disputes or minor opinions. It means listening to people, and treating their viewpoints as valid and acceptable.
I agree with this: there are definitely two definitions at play. I think a failure to distinguish between them is actually a big problem with the AI doom argument, where proponents end up performing an unintentional motte-and-bailey between the two definitions.
David Thorstad explains it pretty well here. The "people want money" definition is trivial and obviously true, but does not lead to the "doom is inevitable" conclusion. I have a goal of eating food, and money is useful for that purpose, but that doesn't mean I automatically try to accumulate all the wealth on the planet in order to tile the universe with food.