Crossposted from https://kirstensnotebook.blogspot.com/2021/04/biblical-advice-for-people-with-short.html?m=1

I have a surprising number of friends, or friends of friends, who believe the world as we know it will likely end in the next 20 or 30 years.

They believe that transformative artificial intelligence will eventually either: a) solve most human problems, allowing humans to live forever, or b) kill/enslave everyone.

A lot of people honestly aren't sure of the timelines, but they're sure that this is the future. People who believe there's a good chance of transformative AI in the next 20-30 years are called people with "short timelines."

There are a lot of parallels between people with short AI timelines and the early Christian church. Early Christians believed that Jesus was going to come back within their lifetimes. A lot of early Christians were quitting their jobs and selling their property to devote more to the church, in part because they thought they wouldn't be on earth for much longer! Both early Christians and people with short AI timelines believe(d):

-you're on the brink of eternal life,

-you've got a short window of opportunity to make things better before you lock in to some kind of end state, and

-everything's going to change in the next 20 or 30 years, so you don't need a pension!

So what advice did early church leaders give to Christians living with these beliefs?

Boldly tell the truth: Early church leaders were routinely beaten, imprisoned or killed for their controversial beliefs. They never told early Christians to attempt to blend in. They did, however, instruct early Christians to...

Follow common sense morality: The Apostle Paul writes to the Romans that they should "Respect what is right in the sight of all people." Even though early Christians had a radically different worldview from others at the time, they're encouraged to remain married to their unbelieving spouses, be good neighbours, and generally act in a way that would be above reproach. As part of that, church leaders also advised early Christians...

Don't quit your day job: In Paul's second letter to the Thessalonians, he had to specifically tell them to go get jobs again, because so many of them had quit their jobs and become busybodies in preparation for the apocalypse. Even Paul himself, while preaching the Gospel, sometimes worked as a tentmaker. Early Christians were advised to work. A few of them worked full time on the mission of spreading the good news of Christ with the support and blessing of their community. Most of them kept working the normal, boring jobs they had before. In the modern day, this would likely also include making sure you have a pension and do other normal life admin.

I am uncertain how much relevance Christian teachings have for people with short AI timelines. I don't know if it's comforting or disturbing to know that you're not the first community to experience life that you believe to be at the hinge of history.


The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 10,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing Bay Area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

The Christians in this story did not live relatively normal lives. I tried to make that clear under the point about speaking the truth boldly, but it's been a point of confusion in a couple of the comments, so perhaps I should update the post.

I was specifically pushing back on the "don't quit your day job" part of the post, since I think that for talented people who are thinking seriously and planning ahead, it's often not as risky (financially, socially, etc) as it seems to do even pretty crazy-seeming stuff in pursuit of an ambitious goal. I think on the margin we should be encouraging people to dream big and take on more risk. (But also, my personal life feels very normie and risk-averse and I often have to pump myself up to make necessary life changes... maybe we hang around two very different social environments!) I definitely think that people should have prudent financial plans -- indeed, I think it's good to have a very high savings rate, like 50% -- but I think that's complementary with being willing to make big life pivots when the opportunity arises (since it gives you the financial freedom to bear higher risk).

I think EA and early Christianity are in 100% agreement with the idea that you should "follow common sense morality" even if you are a believer in total hedonic utilitarianism or the ten commandments or whatever, since doing underhanded stuff that goes against common-sense morality would destroy the reputation of the wider movement.

If anything, Christianity goes a lot harder on "speak the truth boldly" than EA, which is often concerned with appearing respectable, avoiding politicization, and gaining influence within existing institutions. I'm torn on this because there's a lot to be said for EA's nuanced utilitarian approach, but I also think that sometimes the movement can be a bit too timid and focused on working within existing institutions. I think EAs should stick to our guns more often in several areas, but we probably don't want to be heroic, "early Christian martyrs" levels of outspoken.

The spectrum from "live a totally normal life" to "optimize your life around a very important set of rare/unpopular ideas" is a pretty high-dimensional space, so there are a lot of different factors here. For example, I was trying to push back on "don't quit your day job" insofar as it means "don't take big career risks out of idealism". But one could also translate Paul's advice as "stop trying to join this growing popular movement by getting meta jobs at EA organizations where you can feel good hanging out with a bunch of like-minded folks -- instead, the movement as a whole would benefit if more people tried to spread/apply Christianity independently in their own preexisting careers." And that advice I might agree with, idk!

Relatedly, a behaviour I dislike is being repeatedly and publicly wrong without changing course or acknowledging fault. Mainstream Christianity is guilty of this, though so are many other social movements.

I think if it turns out that short AI timelines are wrong, those with short timelines should acknowledge it, and EA as a whole should seek to understand why we got it so wrong. I will think it odd if those who make repeatedly wrong predictions continue to be taken seriously.

Also, I'd like to see more concrete testable short term predictions from those we trust with AI predictions. Are they good forecasters in general? Are they well calibrated or insightful in ways we can test?

I think if it turns out that short AI timelines are wrong, those with short timelines should acknowledge it, and EA as a whole should seek to understand why we got it so wrong. I will think it odd if those who make repeatedly wrong predictions continue to be taken seriously.

I think this only applies to people who are VERY confident in short timelines. Say you have a distribution over possible timelines that puts 50% probability on <20 years, and 20% probability on >60 years. This would be a really big deal! It's a 50% chance of the world wildly changing in 20 years. But having no AGI within 60 years is only a 5x update against this model, hardly a major sin of bad prediction.
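To spell out that arithmetic (a quick sketch using the example numbers above, and assuming the comparison is against a skeptical view that predicted "no AGI within 60 years" with near-certainty): the distribution assigned 20% to that outcome, so observing it yields a Bayes factor of roughly

$$\frac{P(\text{no AGI within 60 years} \mid \text{skeptical view})}{P(\text{no AGI within 60 years} \mid \text{this distribution})} \approx \frac{1}{0.2} = 5$$

against the model, i.e. the 5x update mentioned above.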

Though if someone is eg quitting their job and not getting a pension they probably have a much more extreme distribution, so your point is pretty valid there.

Though if someone is eg quitting their job and not getting a pension they probably have a much more extreme distribution, so your point is pretty valid there.

I'm confused at that implication. I would make bets of that magnitude at substantially lower probabilities than 50%, and in fact have done so historically. 

Though maybe "quitting their job and not getting a pension" is meant as a metaphor for "take very big life risks," whereas to me e.g. quitting Google to join a crypto startup even though I had <20% credence in crypto booming, or explicitly not setting aside retirement monies in my early twenties, both  seemed liked pretty comfortable risks at the time, and almost not worth writing about from a risk-taking angle.

Though maybe "quitting their job and not getting a pension" is meant as a metaphor for "take very big life risks,"

That's fair pushback - a lot of that really doesn't seem that risky if you're young and have a very employable skillset. I endorse this rephrasing of my view, thanks

I guess you're still exposed to SOME increased risk, eg that the tech industry in general becomes much smaller/harder to get into/less well paying, but you're still exposed to risks like "the US pension system collapses" anyway, so this seems reasonable to mostly ignore. (Unless there's a good way of buying insurance against this?)

Mainstream Christianity is guilty of this, though so are many other social movements.

All sects of any organized religion ultimately originate from what's likely to have been a singular, unified version from when the religion began. Unless any sect has acknowledged what original prophecies in the religion were wrong, they've all made the same mistakes. As far as I'm aware, almost no minor sects of any organized religion acknowledge those mistakes any more than the mainstream sects.


EA as a whole should seek to understand why we got it so wrong

There isn't anything like a consensus; it's not even evident that a majority of the EA/x-risk community has short timelines for artificial general intelligence (AGI). There have been one or more surveys of the AI safety/alignment community on this subject, but I'm not aware of any sets of data cataloguing the timelines of specific agencies in the field.


Also, I'd like to see more concrete testable short term predictions from those we trust with AI predictions. Are they good forecasters in general? Are they well calibrated or insightful in ways we can test?

Improving forecasting has become relevant to multiple focus areas in EA, so it's become something of a focus area in itself.  There are multiple forecasting organizations that specifically focus on existential risks (x-risks) in general and also AI timelines. 

As far as I'm aware, "short timelines" for such predictions range from a few months to a few years out. I'm not aware either if whole organizations making AI timeline predictions are logging their predictions the way individual forecasters are. The relevant data may not yet be organized in a way that directly provides a summary track record for the different forecasters in question. Yet much of that data does exist and should be accessible. It wouldn't be too hard to track and catalogue it to get those answers. 


or b) kill/enslave everyone

Tangent: did you mean this literally? I know some folks who are worried about people being killed, but I haven't heard of anyone worrying about human enslavement (and that distinction seems like a point in favor of "people worried about this stuff aren't picking scary-sounding scenarios at random," since automated labor would presumably be way more efficient than human labor in these scenarios).

I have heard of some people who are concerned about human extinction vis-à-vis AI. Re: "enslave," that wasn't a great wording choice. I was trying to gesture at S-risks like a stable dictatorship underpinned by AI or other scenarios where humanity loses its autonomy.

A significant minority of utilitarians and fellow travelers in EA, mostly negative(-leaning) utilitarians but others as well, are concerned that machine superintelligence (MSI) may be programmed wrong and, for indefinite/arbitrary periods of time, potentially either:

a. retain humans, their descendants or simulations of them as hostages and subject them to endless torture in a mistaken conception that it's helping instead of harming them.

b. generate artificial (sub-)agents with morally relevant sentience/experiences but program those agents to act in ways that conflict with their own well-being.

Yeah--is your sense that "enslave everyone" (in the context of what humans do to humans) feels like an especially good handle on either of those scenarios? (That's all I initially meant to nitpick--not whether such scenarios are plausible.)

Another nitpick: actually, I haven't heard about (a) as described here--anything you'd suggest I look at? (I'm initially skeptical, since having such a mistaken conception for a long time doesn't seem all that superintelligent to me. Is what you had in mind scenarios in which torture is motivated by strategic extortion or maybe sadism, since these don't seem to require a mistaken conception that it's helping?)

Summary: Slavery is only used as a rough analogy for either of these scenarios because there aren't real precedents for these kinds of scenarios in human history. To understand how a machine superintelligence could do something like torturing everyone until the end of time while still being superintelligent, check out the resources discussed below (complexity of value, Yudkowsky's chapter in Global Catastrophic Risks, and Bostrom's Superintelligence).

"Enslavement" is a rough analogy for the first scenario only because there isn't a simple, singular concept that characterizes such a course of events without precedent in human history. The second scenario is closer to enslavement but the context is different than human slavery (or even the human 'enslavement' of non-human animals, such as in industrial farming). It's more similar to the MSI being like an ant queen, but as an exponentially more rational agent, and the sub-agents are drones. 

Another nitpick: actually, I haven't heard about (a) as described here--anything you'd suggest I look at?

A classic example from the rationality community is of an AGI programmed to maximize human happiness and trained to recognize happiness on a dataset of smiling human faces. In theory, a failure mode could be the AGI producing endless copies of humans and stimulating their facial muscles so that they are always smiling for their entire lives.

That's an example so reductive it's maybe too absurd for anyone to expect something like it would actually happen. Yet it was meant to establish a proof of concept. In terms of who's making a "mistake," it's hard to describe without going further into the theory of AI alignment. To clarify, what I should have said is that while such an outcome could appear to be an error on the part of the AGI, it would really be a human error for having programmed it wrong, and the AGI would be properly executing its goal as it was programmed to do.

Complexity of value is a concept that gets at part of that kind of problem. Eliezer Yudkowsky of the Machine Intelligence Research Institute (MIRI) expanded on it in a paper he authored called "Artificial Intelligence as a Positive and Negative Factor in Global Risk" for the Global Catastrophic Risks handbook, edited by Nick Bostrom of the Future of Humanity Institute (FHI) and originally published by Oxford University Press in 2008. Bostrom's own book from 2014, Superintelligence, comprehensively reviewed anticipated failure modes for AI alignment. Bostrom also covered this kind of failure mode extensively, though I forget in which part of the book.

I'm guessing there have been updates to these concepts in the several years since those works were published but I haven't kept up to date with that research literature in the last few years. Reading one or more of those works should give you the basics/fundamentals for understanding the subject. You could use those as a jumping-off point to ask further questions on the EA Forum, LessWrong or the Alignment Forum if you want to learn more after. 

Thanks for the detailed response / sharing the resources! I'm familiar with them (I had been wondering if there was a version of (a) that didn't involve the following modification, although it seems like we're on a similar page)

To clarify, what I should have said is that while such an outcome could appear to be an error on the part of the AGI, it would really be a human error

You're welcome :)

A cynical atheist would say that early Christians on some level were not certain of their beliefs, which was an important factor in the recommendations. 

People who believe in transformative AI can openly acknowledge that there is uncertainty about the future but maybe that will amount to the same thing. 

It seems like these observations could be equally explained by Paul correctly having high credence in long timelines, and giving advice that is appropriate in worlds where long timelines are true, without explicitly trying to persuade people of his views on timelines. Given that, I'm not sure there's any strong evidence that this is good advice to keep in mind when you actually do have short timelines, regardless of your views on the Bible.

I think an important difference is the explicitness of credences. I expect most of the short-timeline AI people to have explicit probability distributions and I expect them to behave accordingly. This would then definitely entail retirement savings etc. as (from my personal encounters) many have non-negligible probability mass on AGI after their lifetime.

I'm sure there are also the "99.9% within the next 20 years" people, but they're doing better within their subculture than early Christians did, and usually don't risk unemployment, starvation, or ostracism.

Summary

  1. While some narratives about AI alignment bear a conspicuous resemblance to the apocalyptic thinking and eschatology of some Christians in history, there isn't much that fundamentally distinguishes that mindset towards AI alignment from similar mindsets towards other ostensible existential risks.
  2. This has been true at times during the last century and remains true today. It was at times crucial, if not necessary, for some of those involved in other communities similar to long-termist effective altruism to make decisions and take actions that contradicted much of this advice.

---

This advice is in one way applicable to other potential global catastrophic or existential risks as well, but in another way may not be applicable to any of them. Even before the advent of nuclear weapons, World War II (WWII) was feared to potentially destroy civilization. Between the Cold War that began a few years later and different kinds of global ecological catastrophe, for more than half a century hundreds of millions of people across several generations have lived in a way that had them convinced they were living at the hinge of history. While such concerns may have been alleged to be too similar to religious eschatology, almost all of them were rooted in secular phenomena examined from a naturalistic and materialist perspective.

This isn't limited to generic populations and includes communities so similar to the existential risk (x-risk) reduction community of today that they serve as a direct inspiration for our present efforts. After the Manhattan Project, Albert Einstein and other scientists who contributed to the effort but weren't aware of the full intentions of the government of the United States for nuclear weapons wanted to do something about their complicity in such destruction. For the record, while they weren't certain either way, at the time many of those scientists feared a sufficiently large-scale nuclear war could indeed cause human extinction. Among others, those scientists founded the Bulletin of the Atomic Scientists, likely the first ever 'x-risk reduction' organization in history.

In both the United States and the Soviet Union, scientists and others well-placed to warn the public about the cataclysmic threat posed by the struggle for more power by both superpowers took personal and professional risks. Some of those who did so were censured, fired and/or permanently lost their careers. Some were even criminally convicted or jailed. Had they not, perhaps none of us would have ever been born to try reducing x-risks or talk about how to think about that today.

To some extent, the same likely remains true in multiple countries today. The same is also true for the climate crisis. Employees of Amazon who have made tweets advocating for greater efforts to combat the climate crisis have been fired because their affiliation with Amazon in that way risks bringing too much attention to how Amazon itself contributes to the crisis. There are also more and more people who have been arrested for participating in civil disobedience to combat the climate crisis or other global catastrophic risks.

I've known many in effective altruism who've changed their careers to focus on x-risk reduction, not limited to AI alignment. There are millions of young people around the world pursuing careers intended to do the same, because they believe both that it's more important than anything else they could do and that it's futile to pursue anything else in the face of looming catastrophe. All of this is anticipated to be critical in their lifetimes, often in the next 20-30 years. All of those people have also been presumed to be delusional in a way akin to the apocalyptic delusions of religious fanatics in history.

While for the other risks there isn't the same expected potential for transhumanism, indefinite life extension and utopian conditions, the future of humankind and perhaps all life is considered to be under threat. Beyond effective altruism, I've got more and more friends, and friends of friends, who are embracing a mindset entailing much of the above. Perhaps what should surprise us is that more people outside of effective altruism aren't doing the same.

I am confused by this comment because I think you're suggesting the Bulletin of the Atomic Scientists didn't follow the advice above, but it sounds like they followed it to the letter.

I recognize there is some ambiguity in my comment. I also read your article again and I noticed some ambiguity I perceived on my part. That seems to be the source of confusion.

To clarify, it was not only the Bulletin of the Atomic Scientists (BAS) who took the personal and professional risks in question. Other scientists and individuals who were not 'leaders' took those risks too. Albert Einstein did so personally outside the BAS, but he called on any and all scientists to be willing to blow the whistle if necessary, even if they risked going to jail.

For such leading scientists to call on others to also be (tentatively) willing to take such risks if necessary contradicts the advice of early church leaders to the laity to "not quit their day jobs."

Nobody was advising scientists in positions to reduce x-risks or whatnot to embrace a value system so different they'd personally spurn those who didn't share it. Yet my impression is that during the Cold War, "common sense morality" would be loyalty to the authorities in the United States or Soviet Union, including to not challenge their Cold War policies. In that case, scientists and other whistleblowers would have been defying commonly accepted public morality.

I think I've addressed this under the "Boldly tell the truth" bullet. Early Christians were encouraged to share their beliefs even if it would result in their deaths, which seems much more extreme than potentially losing a job.

If you're interested in how they balanced these two seemingly contradictory topics, I could write more about that later, but I thought it would be pretty straightforward (speak boldly and honestly about your beliefs, and in all other respects be a good citizen).

Summary: The difference between early Christianity and modern movements focused on reducing prospective existential risks is that to publicly and boldly speak one's beliefs against the ruling ideology was considered contrary to common sense morality during the Cold War. Modern x-risk movements can't defend themselves from suppression as well because their small communities are subject to severe conditions in modern police/surveillance states.

---

Some scientists and whistleblowers in the Soviet Union and the United States not only lost their jobs but were imprisoned for a number of years, or were otherwise legally punished or politically persecuted in ways that had severe consequences beyond the professional. As far as I'm aware, none of them were killed and I'd be very surprised if any of them were. 

Please don't trouble yourself to write more on this subject on my behalf. I'm satisfied with the conclusion that the difference between early Christians and the modern whistleblowers in question is that for the whistleblowers to publicly and boldly express their honest beliefs was perceived as a betrayal of good citizenship. The two major conditions that come to mind that determined these different outcomes are:

1. The Authoritarianism on Both Sides of the Iron Curtain During the Cold War. 

Stalinist Russia is of course recognized as having been totalitarian, but history has been mythologized to downplay how much liberal democracy in the United States was at risk of failing during the same period. I watched a couple of documentaries on that subject produced to clarify the record about the facts of the matter during the McCarthy era. The anti-communism of the time was becoming extreme in a way well characterized in a speech Harry S. Truman addressed to Congress. I forget the exact quote, but to paraphrase, it went something like: "we didn't finish beating fascism only for us to descend into fascism ourselves."

2. The Absence of an Authoritative Organization on the Part of the Defectors

(Note: In this case, I don't mean "defector" to be pejorative but only to indicate that members of the respective communities took actions defying rules established by the reigning political authority.)

As I understand it, Christianity began dramatically expanding even within a few years of Jesus' crucifixion. Over the next few decades, it became a social/religious organization that grew enough that it became harder and harder for the Roman Empire to simply quash. There was not really an organization for Cold War whistleblowers that had enough resources to meaningfully defend its members from being suppressed or persecuted.
