This post will be direct because I think directness on important topics is valuable. I sincerely hope that my directness is not read as mockery or disdain towards any group, such as people who care about AI risk or religious people; that is not at all my intent. Rather, my goal is to create space for discussion of the overlap between religion and EA.
–
A man walks up to you and says “God is coming to earth. I don’t know when exactly, maybe in 100 or 200 years, maybe more, but maybe in 20. We need to be ready, because if we are not ready, then when God comes we will all die, or worse, we could have hell on earth. However, if we have prepared adequately, then we will experience heaven on earth. Our descendants might even spread out over the galaxy, and our civilization could last until the end of time.”
My claim is that the form of this argument is the same as the form of most arguments for large investments in AI alignment research. I would appreciate hearing if I am wrong about this. I realize that, presented this way, it might seem glib, but I do think it accurately captures the form of the main claims.
Personally, I put very close to zero weight on arguments of this form. This is mostly due to simple base rate reasoning: humanity has seen many claims of this form, and so far all of them have been wrong. I definitely would not update much based on surveys of experts or elites within the community making the claim, or within adjacent communities. To me that seems pretty circular, and in the case of past claims of this form, I think deferring to such people would have led you astray. Regardless, I understand that other people either pick different reference classes or have inside-view arguments they find compelling. My goal here is not to argue about the content of these arguments; it is to highlight these similarities in form, which I believe have not been much discussed here.
I’ve always found it interesting how EA recapitulates religious tendencies. Many of us literally pledge our devotion; we tithe; many of us eat special diets; we attend mass gatherings of believers to discuss our community’s ethical concerns; we have clear elites who produce key texts that we discuss in small groups; and so on. Seen this way, maybe it is not so surprising that a segment of us wants to prepare for a messiah. It is fairly common for religious communities to produce ideas of this form.
–
I would like to thank Nathan Young for feedback on this. He is responsible for the parts of the post that you liked and not responsible for the parts that you did not like.
Thanks for writing this.
I thought about why I buy the AI risk arguments despite the low base rate, and I think the reason touches on some pretty important and nontrivial concepts.
When most people encounter a complicated argument like the ones for working on AI risk, they are in a state of epistemic learned helplessness: that is, they have heard many convincing arguments of a similar form turn out to be wrong, or heard convincing arguments for both sides of a question. The fact that an argument sounds convincing is therefore not much evidence that it's true.
Epistemic learned helplessness is often good, because in real life arguments are tricky and people are taken in by false ones. But when taken to an extreme, it becomes overly modest epistemology: the idea that you shouldn't trust your own models or reasoning simply because other people whose beliefs are similar on a surface level disagree. Modest epistemology would lead you to believe that there's a 1/3 chance you're currently asleep, or that the correct religion is 31.1% likely to be Christianity, 24.9% likely to be Islam, and 15.2% likely to be Hinduism.
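(A minimal sketch of where figures like those might come from, assuming the modest-epistemology move here is simply to weight each religion by its share of believers worldwide; the specific percentages are illustrative, not derived or sourced here:)

$$
P(\text{religion } i \text{ is correct}) \;\approx\; \frac{\text{adherents of religion } i}{\text{total adherents of all religions}}
$$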
I think that EA does have something in common with religious fundamentalists: an orientation away from modest epistemology and towards taking weird ideas seriously. (I think the proportion of senior EAs who used to be religious fundamentalists, or who take other weird ideas seriously, is well above the base rate.) So why do I think I'm justified in spending my career doing either AI safety research or field-building? Because I think the community has better epistemic processes than average.
Whether it's through calibration, smarter people, people thinking for longer or more carefully, or more encouragement of skepticism, you have to have a thinking process that arrives at the truth more often than average if you want to reject modest epistemology and still believe true things. From the inside, the EA/rationalist subcommunity working on AI risk is clearly better than most millenarians (you should be well-calibrated about this claim, but you can't just say "but what about from the outside?"; that's modest epistemology). If I think about something for long enough, talk about it with my colleagues, post it on the EA Forum, invite red-teaming, and so on, I expect to reach the correct conclusion eventually, or at least decide that the argument is too tricky and remain unsure (rather than end up irreversibly convinced of the wrong conclusion). I'm very worried about this ceasing to be the case.
Taking weird ideas seriously is crucial for our impact: I picture a train to crazy town, where each successive stop multiplies our impact by more than 2x and has increasingly weird ideas, and at some point the weird ideas cease to be correct. Thus, good epistemics are also crucial for our impact.
I'm not sure how likely this is, but probably over 10%? I've heard that social movements generally get unwieldier as they get more mainstream. Also, some people say this has already happened to EA, and they now identify as rationalists or longtermists or something. It's hard to form a reference class because I don't know how much EA benefits from advantages like better o...