This post will be direct because I think directness on important topics is valuable. I sincerely hope that my directness is not read as mockery or disdain towards any group, such as people who care about AI risk or religious people, as that is not at all my intent. Rather my goal is to create space for discussion about the overlap between religion and EA.

A man walks up to you and says “God is coming to earth. I don’t know when exactly, maybe in 100 or 200 years, maybe more, but maybe in 20. We need to be ready, because if we are not ready then when god comes we will all die, or worse, we could have hell on earth. However, if we have prepared adequately then we will experience heaven on earth. Our descendants might even spread out over the galaxy and our civilization could last until the end of time.”

My claim is that the form of this argument is the same as the form of most arguments for large investments in AI alignment research. I would appreciate hearing if I am wrong about this. I realize when it’s presented as above it might seem glib, but I do think it accurately captures the form of the main claims.

Personally, I put very close to zero weight on arguments of this form. This is mostly due to simple base rate reasoning: humanity has seen many claims of this form and so far all of them have been wrong. I definitely would not update much based on surveys of experts or elites making the claim within the community or within adjacent communities. To me that seems pretty circular, and in the case of past claims of this form I think deferring to such people would have led you astray. Regardless, I understand other people either pick different reference classes or have inside view arguments they find compelling. My goal here is not to argue about the content of these arguments; it's to highlight these similarities in form, which I believe have not been much discussed here.

I’ve always found it interesting how EA recapitulates religious tendencies. Many of us literally pledge our devotion, we tithe, many of us eat special diets, we attend mass gatherings of believers to discuss our community’s ethical concerns, we have clear elites who produce key texts that we discuss in small groups, etc. Seen this way, maybe it is not so surprising that a segment of us wants to prepare for a messiah. It is fairly common for religious communities to produce ideas of this form.

I would like to thank Nathan Young for feedback on this. He is responsible for the parts of the post that you liked and not responsible for the parts that you did not like.

 


"Humanity has seen many claims of this form." What exactly is your reference class here? Are you referring just to religious claims of impending apocalypse (plus EA claims about AI technology? Or are you referring more broadly to any claim of transformative near-term change?

I agree with you that claims of supernatural apocalypse have a bad track record, but such a narrow reference class doesn't (IMO) include the pretty technically-grounded concerns about AI. Meanwhile, I think that a wider reference class including other seemingly-unbelievable claims of impending transformation would include a couple of important hits. Consider:

  • It's 1942. A physicist tells you, "Listen, this is a really technical subject that most people don't know about, but atomic weapons are really coming. I don't know when -- could be 10 years or 100 -- but if we don't prepare now, humanity might go extinct."

  • It's January 2020 (or the beginning of any pandemic in history). A random doctor tells you "Hey, I don't know if this new disease will have 1% mortality or 10% or 0.1%. But if we don't lock down this entire province today, it could spread to the entire world and cause millions of deaths."

  • It's 1519. One of your empire's scouts tells you that a bunch of white-skinned people have arrived on the eastern coast in giant boats, and a few priests think maybe it's the return of Quetzalcoatl or something. You decide that this is obviously crazy -- religious-based forecasting has a terrible track record, I mean these priests have LITERALLY been telling you for years that maybe the sun won't come up tomorrow, and they've been wrong every single time. But sure enough, soon the European invaders have slaughtered their way to your capital and destroyed your civilization.

Although the Aztec case is particularly dramatic, many non-European cultures have had the experience of suddenly being invaded by a technologically superior foe powered by an exponentially self-improving economic engine -- a story at least as similar to AI worries as the Christian one you claim is in the same class. There might even be more stories of sudden European invasion than predictions of religious apocalypse, which would tilt your base-rate prediction decisively towards believing that transformational changes do sometimes happen.

I appreciate the pushback. I'm thinking of all claims that go roughly like this: "a god-like creature is coming, possibly quite soon. If we do the right things before it arrives, we will experience heaven on earth. If we do not, we will perish." This is narrower than "all transformative change" but broader than something that conditions on a specific kind of technology. To me personally, this feels like the natural opening position when considering concerns about AGI.

I think we probably agree that claims of this type are rarely correct, and I understand that some people have inside view evidence that sways them towards still believing the claim. That's totally okay. My goal was not to try to dissuade people from believing that AGI poses a possibly large risk to humanity, it was to point to the degree to which this kind of claim is messianic. I find that interesting. At minimum, people who care a lot about AGI risk might benefit from realizing that at least some people view them as making messianic claims.

I appreciate the pushback. I'm thinking of all claims that go roughly like this: "a god-like creature is coming, possibly quite soon. If we do the right things before it arrives, we will experience heaven on earth. If we do not, we will perish."

I do think Jackson's example of what it might have felt like for non-European cultures with lower military tech to have white conquerors arrive with overwhelming force is a surprisingly fitting case study of this paragraph.

I can think of far more examples of terrible things happening than of realized examples analogous to "If we do the right things before it arrives, we will experience heaven on earth" (the Perry Expedition is perhaps the closest example that comes to mind). But I think it was not wrong to believe ex ante that technology could be used for lots of good, and that the foreigners could at least in theory be negotiated with.

I wasn't really trying to say "See, messianic stories about arriving gods really work!", so much as to say "Look, there are a lot of stories about huge dramatic changes; AI is no more similar to the story of Christianity than it is to stories about new technologies or plagues or a foreign invasion."  I think the story of European world conquest is particularly appropriate not because it resembles anyone's religious prophecies, but because it is an example where large societies were overwhelmed and destroyed by the tech+knowledge advantages of tiny groups.  This is similar to AI, which would start out outnumbered by all of humanity but might have a huge intelligence + technological advantage.

Responding to your request for times when knowledge of European invasion was actionable for natives:  The "Musket Wars" in New Zealand were "a series of as many as 3,000 battles and raids fought among Māori between 1807 and 1837, after Māori first obtained muskets and then engaged in an intertribal arms race in order to gain territory or seek revenge for past defeats".  The bloodshed was hugely net-negative for the Māori as a whole, but individual tribes who were ahead in the arms race could expand their territory at the expense of enemy groups.

Obviously this is not a very inspiring story if we are thinking about potential arms races in AI capabilities:

Māori began acquiring European muskets in the early 19th century from Sydney-based flax and timber merchants. Because they had never had projectile weapons, they initially sought guns for hunting.  Ngāpuhi chief Hongi Hika in 1818 used newly acquired muskets to launch devastating raids from his Northland base into the Bay of Plenty, where local Māori were still relying on traditional weapons of wood and stone. In the following years he launched equally successful raids on iwi in Auckland, Thames, Waikato and Lake Rotorua, taking large numbers of his enemies as slaves, who were put to work cultivating and dressing flax to trade with Europeans for more muskets. His success prompted other iwi to procure firearms in order to mount effective methods of defence and deterrence and the spiral of violence peaked in 1832 and 1833, by which time it had spread to all parts of the country except the inland area of the North Island later known as the King Country and remote bays and valleys of Fiordland in the South Island. In 1835 the fighting went offshore as Ngāti Mutunga and Ngāti Tama launched devastating raids on the pacifist Moriori in the Chatham Islands.

I am commenting here and upvoting this specifically because you wrote "I appreciate the pushback." I really like seeing people disagree while being friendly/civil, and I want to encourage us to do even more of that. I like how you are exploring and elaborating ideas while being polite and respectful.

Here are a couple thoughts on messianic-ness specifically:

  • With the classic messiah story, the whole point is that you know the god's intentions and values.  Versus of course the whole point of the AI worry is that we ourselves might create a godlike being (rather than a preexisting being arriving), and its values might be unknown or bizarre/incomprehensible.   This is an important narrative difference (it makes the AI worry more like stories of sorcerers summoning demons or explorers awakening mad Lovecraftian forces), even though the EA community still thinks it can predict some things about the AI and suggest some actions we can take now to prepare.
  • How many independent messianic claims are there, really?  Christianity is the big, obvious example.  Judaism (but not Islam?) is another.  Most religions (especially when you count all the little tribal/animistic ones) are not actually super-messianic -- they might have Hero's Journey figures (like Rama from the Ramayana) but that's different from the epic Christian story about a hidden god about to return and transform the world.

I am interpreting you as saying:
"Messianic stories are a human cultural universal, humans just always fall for this messianic crap, so we should be on guard against suspiciously persuasive neo-messianic stories, like that radio astronomy might be on the verge of contacting an advanced alien race, or that we might be on the verge of discovering that we live in a simulation."  (Why are we worried about AI and not about those other equally messianic possibilities?  Presumably AI is the most plausible messianic story around?  Or maybe it's just more tractable since we're designing the AI vs there's nothing we can do about aliens or simulation overlords.)

But per my second bullet point, I don't think that Messianic stories are a huge human universal.  I would prefer a story where we recognize that Christianity is by far the biggest messianic story out there, and it is probably influencing/causing the perceived abundance of other messianic stories in culture (like all the messianic tropes in literature like Dune, or when people see political types like Trump or Obama or Elon as "savior figures").  This leads to a different interpretation:

"AI might or might not be a real worry, but it's suspicious that people are ramming it into the Christian-influenced narrative format of the messianic prophecy.  Maybe people are misinterpreting the true AI risk in order to fit it into this classic narrative format; I should think twice about anthropomorphizing the danger and instead try to see this as a more abstract technological/economic trend."

This take is interesting to me, as some people (Robin Hanson, slow takeoff people like Paul Christiano) have predicted a more decentralized version of the AI x-risk story where there is a lot of talk about economic doubling times and whether humans will still complement AI economically in the far future, instead of talking about individual superintelligent systems making treacherous turns and being highly agentic.  It's plausible to me that the decentralized-AI-capabilities story is underrated because it is more complicated / less viral / less familiar a narrative.  These kinds of biases are definitely at work when people, eg, bizarrely misinterpret AI worry as part of a political fight about "capitalism".  It seems like almost any highly-technical worry is vulnerable to being outcompeted by a message that's more based around familiar narrative tropes like human conflict and good-vs-evil morality plays.

But ultimately, while interesting to think about, I'm not sure how far this kind of "base-rate tennis" gets us.  Maybe we decide to be a little more skeptical of the AI story, or lean a little towards the slow-takeoff camp.  But this is a pretty tiny update compared to just learning about different cause areas and forming an inside view based on the actual details of each cause.

Thanks for writing this.

I thought about why I buy the AI risk arguments despite the low base rate, and I think the reason touches on some pretty important and nontrivial concepts.

When most people encounter a complicated argument like the ones for working on AI risk, they are in a state of epistemic learned helplessness: that is, they have heard many convincing arguments of a similar form be wrong, or many convincing arguments for both sides. The fact that an argument sounds convincing fails to be much evidence that it's true.

Epistemic learned helplessness is often good, because in real life arguments are tricky and people are taken in by false arguments. But when taken to an extreme, it becomes overly modest epistemology: the idea that you shouldn't trust your models or reasoning just because other people whose beliefs are similar on the surface level disagree. Modest epistemology would lead you to believe that there's a 1/3 chance you're currently asleep, or that the correct religion is 31.1% likely to be Christianity, 24.9% to be Islam, and 15.2% to be Hinduism.

I think that EA does have something in common with religious fundamentalists: an orientation away from modest epistemology and towards taking weird ideas seriously. (I think the number of senior EAs who used to be religious fundamentalists or take other weird ideas seriously is well above the base rate.) So why do I think I'm justified in spending my career either doing AI safety research or field-building? Because I think the community has better epistemic processes than average.

Whether it's through calibration, smarter people, people thinking for longer or more carefully, or more encouragement of skepticism, you have to have a thinking process that results in truth more often than average, if you want to reject modest epistemology and still believe true things. From the inside, the EA/rationalist subcommunity working on AI risk is clearly better than most millenarians (you should be well-calibrated about this claim, but you can't just say "but what about from the outside?"-- that's modest epistemology). If I think about something for long enough, talk about it with my colleagues, post it on the EA forum, invite red-teaming, and so on, I expect to reach the correct conclusion eventually, or at least decide that the argument is too tricky and remain unsure (rather than end up being irreversibly convinced of the wrong conclusion). I'm very worried about this ceasing to be the case.

Taking weird ideas seriously is crucial for our impact: I think of there being a train to crazy town which multiplies our impact by >2x at every successive stop, has increasingly weird ideas at every stop, and at some point the weird ideas cease to be correct. Thus, good epistemics are also crucial for our impact.

I really appreciate this response, which I think understands me well. I also think it expresses some of my ideas better than I did. Kudos Thomas. I have a better appreciation of where we differ after reading it.

I’m curious where exactly you see your opinions as differing here. Is it just how much to trust inside vs outside view, or something else?

I'm not sure that it's purely "how much to trust inside vs outside view," but I think that is at least a very large share of it. I also think the point on what I would call humility ("epistemic learned helplessness") is basically correct. All of this is by degrees, but I think I fall more to the epistemically humble end of the spectrum when compared to Thomas (judging by his reasoning). I also appreciate any time that someone brings up the train to crazy town, which I think is an excellent turn of phrase that captures an important idea.

I really enjoyed this comment, thanks for writing it Thomas!

Thanks a lot for this comment! I think delving into the topic of epistemic learned helplessness will help me  learn how to form proper inside views, which is something I've been struggling with.

 

I'm very worried about this ceasing to be the case.

Are you worried just because it would be really bad if EA in the future (say 5 years)  was much worse at coming to correct conclusions, or also because you think it's likely that will happen?

Are you worried just because it would be really bad if EA in the future (say 5 years)  was much worse at coming to correct conclusions, or also because you think it's likely that will happen?

I'm not sure how likely this is but probably over 10%? I've heard that social movements generally get unwieldier as they get more mainstream. Also some people say this has already happened to EA, and now identify as rationalists or longtermists or something. It's hard to form a reference class because I don't know how much EA benefits from advantages like better organization and currently better culture.

To form proper inside views I'd also recommend reading this post, which (in addition to other things) sketches out a method for healthy deference:

I think that something like this might be a good metaphor for how you should relate to doing good in the world, or to questions like "is it good to work on AI safety". You try to write down the structure of an argument, and then fill out the steps of the argument, breaking them into more and more fine-grained assumptions. I am enthusiastic about people knowing where the sorrys are--that is, knowing what assumptions about the world they're making. Once you've written down in your argument "I believe this because Nick Bostrom says so", you're perfectly free to continue believing the same things as before, but at least now you'll know more precisely what kinds of external information could change your mind.

The key event which I think does good here is when you realize that you had an additional assumption than you realized, or when you realized that you'd thought that you understood the argument for X but actually you don't know how to persuade yourself of X given only the arguments you already have.
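For anyone who hasn't seen the proof-assistant metaphor before, here is a minimal, purely illustrative sketch in Lean of what "knowing where the sorrys are" could look like. The proposition names and the shape of the argument are hypothetical placeholders of my own, not anything from the post:

  -- A toy argument skeleton, not anyone's actual case for AI safety work.
  -- All proposition names here are hypothetical placeholders.
  axiom TransformativeAISoon : Prop      -- "transformative AI is coming fairly soon"
  axiom AlignmentHardByDefault : Prop    -- "aligning it is hard by default"
  axiom SafetyWorkIsValuable : Prop      -- the conclusion we care about

  -- The structure of the argument: if the assumptions hold, the conclusion follows.
  axiom main_argument :
    TransformativeAISoon → AlignmentHardByDefault → SafetyWorkIsValuable

  -- Filling out the steps. Each `sorry` marks an assumption I haven't justified
  -- myself -- e.g. "I believe this because Nick Bostrom says so".
  theorem ai_soon : TransformativeAISoon := sorry
  theorem alignment_hard : AlignmentHardByDefault := sorry

  -- The conclusion goes through, but only as securely as the sorrys above allow.
  theorem safety_work_is_valuable : SafetyWorkIsValuable :=
    main_argument ai_soon alignment_hard

The point of the exercise is just that the sorrys are now visible: you can keep believing the conclusion, but you know exactly which external information could change your mind.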

Hey! I liked certain parts of this post and not other parts. I appreciate the thoughtfulness with which you critique EA here.

On your first point about the AI messiah: 

I think the key distinction is that there are many reasons to believe this argument about the dangers of an AGI is correct, though. Even if many claims with a similar form are wrong, that doesn't exclude this specific claim from being right. 

"Climate scientists keep telling us about how climate change is going to be so disastrous and we need to be prepared. But humanity has seen so many claims of this form and they've all been so wrong!"

The key distinction is that there is a lot of reason to believe that AGI will be dangerous. There is also a lot of reason to support the claim that we are not prepared currently. Without addressing that chain of logic directly, I don't think I'm convinced by this argument.

On your second point about the EA religious tendencies:

Because religious communities are among the most common communities we see, there are obviously going to be parallels between religious communities and EA. 

Some of these analogies hold, others not so much. We, too, want to community build, network, and learn from each other. I'd love for you to point at specific examples of things EAs do, from conferences to holding EA university groups, that are ineffective or unnecessary.

On the perhaps greater point of EA becoming too groupthink-y, which I think may be warranted:

I think a key distinction is that EA has a healthy level of debate, disagreement, and skepticism - while religions tend to demand blind faith in believing something unprovable. This ongoing debate on how to do the most good is what I personally find most valuable in the community - and I hope this spirit never dies.

Keep on critiquing EA; I think such critiques are extremely valuable. Thanks for writing this.

Maintaining that healthy level of debate, disagreement, and skepticism is critical, but harder to do when an idea becomes more popular. I believe most of the early "converts" to AI Safety have carefully weighed the arguments and made a decision based on analysis of the evidence. But as AI Safety becomes a larger portion of EA, the idea will begin to spread for other, more "religious" reasons (e.g., social conformity, $'s, institutionalized recruiting/evangelization, leadership authority). 

As an example, I'd put the belief in prediction markets as an EA idea that tends towards the religious. Prediction markets may well be a beneficial innovation, but I personally don't think we have good evidence one way or the other yet. But due to the idea's connection to rationality and EA community leaders, it has gained many adherents who probably haven't closely evaluated the supporting data. Again, maybe the idea is correct and this is a good thing. But I think it is better if EA had fewer of these canonized, insider signals, because it makes reevaluation of the ideas difficult.

This point has helped me understand the original post more.

I feel that too often, many EAs take current EA frameworks and ways of thinking for granted instead of questioning those frameworks and actively trying to identify flaws and in-built assumptions. Thinking through and questioning those perspectives is a good exercise in general, and also an extremely helpful contribution to the motivating worldview of the community.

Still don't believe that this necessarily means EAs "tend toward the religious" - there are probably several layers of nuance that are missing in that statement.

All in all, I'd love to see more people critique EA frameworks and conventional EA ideas in this forum - I believe there are plenty of flaws to be found.

I'm pretty confused here. On the one hand, I think it's probably good to have less epistemic deference and more independent thinking in EA. On the other, if I take your statements literally and extend them, I think they're probably drawing the boundaries of "religious" way too broadly, in mostly-unhelpful ways.

As an example, I'd put the belief in prediction markets as an EA idea that tends towards the religious. Prediction markets may well be a beneficial innovation, but I personally don't think we have good evidence one way or the other yet. But due to the idea's connection to rationality and EA community leaders, it has gained many adherents who probably haven't closely evaluated the supporting data. Again, maybe the idea is correct and this is a good thing. But I think it is better if EA had fewer of these canonized, insider signals, because it makes reevaluation of the ideas difficult.

I think people who study forecasting are usually aware of the potential limitations of prediction markets. See e.g. here, here, and here. And to the extent they aren't, this is because "more research is needed", not because of an unhealthy deference to authority.

People who don't study forecasting may well overestimate the value of prediction markets, and some of this might be due to deference. But I don't know, this just seems unavoidable as part of a healthy collective epistemic process, and categorizing it as "tends towards the religious" just seems to stretch the definition of "religious" way too far. 

Analogously, many non-EAs also believe that a) handwashing stops covid-19, and b) the Earth orbits the Sun. In both cases, the epistemic process probably looks much more like some combination of "people I respect believe this", "this seems to make sense", and "the authorities believe this" rather than a deep principled understanding of the science. And this just seems...broadly not-religious to me? Of course, the main salient difference between a) and b) is that one of them is probably false. But I don't think it'd be appropriate to frame "having a mistaken belief because the apparent scientific consensus is incorrect" as "religious".

Thanks for the kind words Richard.

Re: your first point: I agree people have inside view reasons for believing in risk from AGI. My point was just that it's quite remarkable to believe that, sure, all those other times the god-like figure didn't show up, but that this time we're right. I realize this argument will probably sound unsatisfactory to many people. My main goal was not to try to persuade people away from focusing on AI risks, it was to point out that the claims being made are very messianic and that that is kind of interesting sociologically.

Re: your second point: I  should perhaps have been clearer: I am not making a parallel to religion as a way of criticizing EA. I think religions are kind of amazing. They're one of the few human institutions that have been able to reproduce themselves and shape human behaviour in fairly consistent ways over thousands of years. That's an incredible accomplishment. We could learn from them.

I have come to see the term 'religion' (as well as 'ideology') as unhelpful in these discussions. It might be helpful to taboo these words and start talking in terms of 'motivating world-views' instead.

To avoid lowering the quality of discussion by just posting snarky memes, I'll explain my actual position:
"People may have bad reasons to believe X" is a good counter against the argument "People believe X, therefore X".  So for anyone whose thought process is "These EAs are very worried about AI so I am too", I agree that there's a worthwhile discussion to be had about why those EAs believe what they do, what their thought process is, and the track record both of similar claims and of claims made by people using similar thought processes. This is because you're using their reasoning as a proxy - the causal chain to reality routes through those people's reasoning methods. Like, "My doctor says this medicine will help me, so it will" is an argument that works because it routes through the doctor having access to evidence and arguments that you don't know about, and you have a reason to believe that the doctor's reasoning connects with reality well enough to be a useful proxy.

However, the fact that some EAs believe certain things about AI is not the only, nor the main, nor even a major component of the evidence and argument available. You can look at the arguments those people make, and the real world evidence they point to. This is so much stronger than just looking at their final beliefs that it basically makes it irrelevant. Say you go outside and the sun is out, the sky is clear, and there's a pleasantly cool breeze, and you bump into the world's most upbeat and optimistic man, who says "We're having lovely weather today". If you reason that this man's thought process is heavily biased and he's the kind of person who's likely to say the weather is nicer than it is, and therefore you're suspicious of his claim that the weather is nice, I'd say you're making some kind of mistake.

I'm not so sure about the religious tendencies, at least not in comparison to other communities or social movements. Especially if the people who seem to be most interested in AI alignment are the ones least interested in tithing/special diets.

Also, roughly half the people who are seen as leaders don't identify as effective altruists. It would be hard to imagine leaders in the environmentalism movement not describing themselves as environmentalists.

What if the man you’re talking to then shows that praying to the approaching god actually works?

What if the man shows that anyone who knows the appropriate rituals can pray to the god and get real benefits from doing so?

What if people have already made billions of dollars in profit from industrialized prayer?

What if the man conclusively shows that prayers to the god have been getting easier and more effective over time?

In such a case, you should treat the god’s approach very seriously. Without such proof, you should dismiss his claims. The difference is that gods are imaginary, and AI is real.

Many environmentalists make claims that sound something like this:

“Because of the mistakes and greed of mankind, a terrible disaster is coming to the earth. I don’t know when exactly, but we are heading in the wrong direction and things are getting worse; maybe in 100 or 200 years, maybe more, but maybe in 20. We need to change our behaviour and prepare for the consequences, because if not there will be many terrible disasters and we could all die. However, if we change our ways and prepare adequately then we can live a harmonious existence; not only will disaster be averted but many of our other problems will be resolved.”

Clearly this is not exactly the same as the religious story or the AI story, but it definitely has a lot of similarities. And there are definitely some environmentalists who seem very ideological (e.g. those who oppose nuclear and hydro power, or claim the world will end in 12 years). But I don't think this is a very strong argument overall for skepticism about climate change. Just because there are some bad reasons to believe something doesn't mean we should ignore the good ones, nor ignore relevant qualified experts if we lack the ability to evaluate the arguments ourselves.

In brief: writing that is kind, relevant to the discussion at hand, and honest.

https://forum.effectivealtruism.org/posts/yND9aGJgobm5dEXqF/guide-to-norms-on-the-forum

This comment looks strange because the entire thread above it has been deleted by the moderators, including the question I was responding to. I wasn't chastising Ryan.

Obvious point, but you could assign significant credence to this being the right take, and still think working on A.I. risk is very good in expectation, given exceptional neglectedness and how bad an A.I. takeover could be. Something feels sleazy and motivated about this line of defence to me, but I find it hard to see where it goes wrong. 

Something feels sleazy and motivated about this line of defence to me, but I find it hard to see where it goes wrong. 

I'm not sure if we're picking up on the same notion of sleaziness, and I guess it depends on what you mean by "significant credence" and "working on A.I. risk", but I think it's hard to imagine someone doing really good mission-critical research work if they come into it from a perspective of "oh I don't think AI risk is at all an issue but smart people disagree and there's a small chance that I'm wrong and the EV is higher than working on other issues." Though I think it's plausible my psychology is less well-suited to "grim determination" than most people's in EA.

(Donations or engineering work seem comparatively much more reasonable.)


Just my anecdotal experience, but when I ask a lot of EAs working in or interested in AGI risk why they think it's a hugely important x-risk, one of the first arguments that comes to people's minds is some variation on "a lot of smart people [working on AGI risk] are very worried about it". My model of many people in EA interested in AI safety is that they use this heuristic as a dominant factor in their reasoning — which is perfectly understandable! After all, formulating a view of the magnitude of risk from transformative AI without relying on any such heuristics is extremely hard. But I think this post is a valuable reminder that it's not particularly good epistemics for lots of people to think like this.

when I ask a lot of EAs working in or interested in AGI risk

Can I ask roughly what work they're doing? Again I think it makes more sense if you're earning-to-give or doing engineering work, and less if you're doing conceptual or strategic research. It also makes sense if you're interested in it as an avenue to learn more.

Thank you. This brings together nicely some vague concerns I had, that I didn't really know how to formulate beyond "Why are people going around with Will MacAskill's face on their shirts?".

Great points, very much liked your directness.

I think this is worth flagging, but I also wonder how useful we should expect this kind of relatively shallow pattern matching to be when it comes to making decisions.

For example, sure we have gatherings, but all community organisations have gatherings. How similar are our gatherings to religious gatherings? Most EA meetups I've been to tend to either be social or have a talk/discussion, like all other kinds of groups interested in a particular topic. There doesn't seem to be anything specifically religious about how we go about these.

I imagine that if you went through some of the other similarities you identified, then I suspect you mightn't find the similarity to run very deep. I think this is especially likely if you also look at other causes generally considered non-religious, such as environmentalism or political parties, in order to establish a baseline.

Hmm, do you have close friends who grew up in non-Abrahamic cultures? I feel like a lot of the stuff in this post seems to presume, though does not state outright, that something like religion (or even a very specific form of religion??) is approximately a human universal. This feels very alien/surprising compared to my upbringing (which is not particularly atheist by Chinese standards, like my mom literally converted to Christianity), so I'm wondering if one of us is just confused here.

(Looping back here because the post never made sense to me but it did to a lot of people. At first I attributed it to just pro- vs anti- LT tribalism, but wanted to advance the alternative hypothesis that we just have different expectations for the appeal of religiosity, or for how plausibly different ideas* map onto Christianity in particular).

*I also thought the mapping of "simulation hypothesis -> Christianity" that seems popular in some circles recently was kind of silly pattern-matching.

To answer your question directly: yes, but I did not when I was young. I'm pretty steeped in Abrahamic cultural influences. That said, I do not think the post presumes anything about universal religious experiences or anything like that.

However, I'd probably express these ideas a little bit differently if I had to do it again now. Mainly, I'd try harder to separate two ideas. While I'm as convinced as ever that "messianic AI"-type claims are very likely wrong, I think the fact that lots of people make claims of that form may just show that they're from cultures that are Abrahamic or otherwise strongly influenced by that thinking, and so when they grasp for ways to express their hopes and fears around AI they latch onto that framing. So to the extent that people are offering those kinds of claims about AI, I remain very skeptical of those specific claims. However, I do not think that one should jump from that to complacency around AI. Hopefully that helps to clear things up.

One can imagine, say, a Christian charitable organization where non-Christians see its work for the poor, the sick, the hungry, and those in prison as good, and don't really mind that some of the money also goes to funding theologians and building cathedrals.

Although Christianity kind of has it built in that you have to help the poor even if the Second Coming is tomorrow. The risk in EA is if people were to become erroneously convinced of a short AI timeline and conclude all the normal charitable stuff is now pointless.


Moynihan's book X-Risk makes a key subtle distinction between religious apocalypticism where God is in control and the secular notion of a risk that destroys all value that only humanity can stop. I'm uncertain how much the distinction is maintained in practice but I recommend the book.

It does seem like an optimistic expectation that there will be an arrival of entities that are amazingly superior to us. This is not far-fetched though. Computers already surpass humans' capacities on several thought processes, and therefore have already demonstrated that they are better in some aspects of intelligence. And we've created robots that can outperform humans in virtually all physical tasks. So, the expectation is backed by evidence.

Expecting super AGI differs from expecting the arrival of a messiah-like figure in that instead of expecting a future in which an entity will come on its own and end all our suffering and improve our lives immeasurably, we are doing the work to make AI improve our lives. Also, the expectations differ in how we prepare for them. In the case of the messiah, it seems like acting morally so we can get into heaven might be vague, unchanging, and random. On the other hand, in the case of super AGI, AI safety work is constantly changing as we learn new things. However, it is still interesting that the two expectations bear a resemblance. 

And we've created robots that can outperform humans in virtually all physical tasks.

Not that this is at all central to your point, but I don't think this is true. We're capable of building robots that move with more force and precision than humans, but mostly only in environments that are pretty simple or heavily customised for them. The cutting edge in robots moving over long distances or over rough terrain (for example) seems pretty far behind where humans are. Similarly, I believe fruit-picking is very hard to automate, in ways that seem likely to generalise to lots of similar tasks.

I also don't think we're very close to artificial smell, although possibly people aren't working on it very much?

Yes, I think you are right. Sorry, I made too broad of a statement when I only had things like strength and speed in mind.

Thanks for writing this. A similar observation led me to write this post.

Reality has no requirement to conform to your heuristics of what is ‘normal’, but I think that we could use some more outside-view work on just how bizarre and unsettling this world-view is, even if it is true.

If anything, this is a claim that people have been bringing up on Twitter recently: the parallels between EA and religion. It's certainly something we should be aware of since, while having "blind faith" in religion can be good, we don't seem to actually want to do this within EA. I could explain why I think AI risk is different from the messiah thing, but Rob Miles explains it well here: 

Given limited information (but information nonetheless), I think AI risk could potentially lead to serious harm or to none at all, and it's worth hedging our bets on this cause area (among others). This feels different from choosing to have blind faith in a religion, but I can see why outsiders think this. Though we can be victims of post-rationalization, I think religious folks have reasons to believe in a religion. I think some people might gravitate towards AI risk as a way to feel more meaning in their lives (or something like that), but my impression is that this is not the norm. 

At least in my case,  it’s like, “damn we have so many serious problems in the world and I want to help with them all, but I can’t. So, I’ll focus on areas of personal fit and hedge my bets even though I’m not so sure about this AI thing and donate what I can to these other serious issues.”


The major difference, at least for me, is that with religion you're expected to believe in an oncoming superpower based on little more than the supposed credibility of those giving testimony, whereas with superintelligent AI we can witness a god being incubated through our labor. 

A person does not need a great amount of technical knowledge or a willingness to defer to experts to get the gist of the AI threat. A machine that can continually and autonomously improve its general problem-solving ability will eventually render us potentially powerless. AI computers look like they could be those machines, given their extremely rapid development as well as inherent advantages over humans (e.g. faster information transfer, compared to humans who rely on substances like electrolytes and neurotransmitters physically travelling). It will not necessarily take a string of programming Einsteins to develop the necessary code for superintelligence, as an evolutionary trial-and-error approach will eventually accomplish this. 
