Crossposted from https://kirstensnotebook.blogspot.com/2021/04/biblical-advice-for-people-with-short.html?m=1
I have a surprising number of friends, or friends of friends, who believe the world as we know it will likely end in the next 20 or 30 years.
They believe that transformative artificial intelligence will eventually either: a) solve most human problems, allowing humans to live forever, or b) kill/enslave everyone.
A lot of people honestly aren't sure of the timelines, but they're sure that this is the future. People who believe there's a good chance of transformative AI in the next 20-30 years are called people with "short timelines."
There are a lot of parallels between people with short AI timelines and the early Christian church. Early Christians believed that Jesus was going to come back within their lifetimes. A lot of early Christians were quitting their jobs and selling their property to devote more to the church, in part because they thought they wouldn't be on earth for much longer! Both early Christians and people with short AI timelines believe(d):
-you're on the brink of eternal life,
-you've got a short window of opportunity to make things better before you lock in to some kind of end state, and
-everything's going to change in the next 20 or 30 years, so you don't need a pension!
So what advice did early church leaders give to Christians living with these beliefs?
Boldly tell the truth: Early church leaders were routinely beaten, imprisoned or killed for their controversial beliefs. They never told early Christians to attempt to blend in. They did, however, instruct early Christians to...
Follow common sense morality: The Apostle Paul writes to the Romans that they should "Respect what is right in the sight of all people." Even though early Christians had a radically different worldview from others at the time, they were encouraged to remain married to their unbelieving spouses, be good neighbours, and generally act in a way that would be above reproach. As part of that, church leaders also advised early Christians...
Don't quit your day job: In Paul's second letter to the Thessalonians, he had to specifically tell them to go get jobs again, because so many of them had quit their jobs and become busybodies in preparation for the apocalypse. Even Paul himself, while preaching the Gospel, sometimes worked as a tentmaker. Early Christians were advised to work. A few of them worked full time on the mission of spreading the good news of Christ with the support and blessing of their community. Most of them kept working the normal, boring jobs they had before. In the modern day, this would likely also include making sure you have a pension and doing other normal life admin.
I am uncertain how much relevance Christian teachings have for people with short AI timelines. I don't know if it's comforting or disturbing to learn that you're not the first community to believe it is living at the hinge of history.
Summary: Slavery is only used as a rough analogy for either of these scenarios because there are no real precedents for them in human history. To understand how a machine superintelligence (MSI) could do something like torture everyone until the end of time while still being superintelligent, check out the references described below.
"Enslavement" is a rough analogy for the first scenario only because there isn't a simple, singular concept that characterizes such a course of events without precedent in human history. The second scenario is closer to enslavement but the context is different than human slavery (or even the human 'enslavement' of non-human animals, such as in industrial farming). It's more similar to the MSI being like an ant queen, but as an exponentially more rational agent, and the sub-agents are drones.
A classic example from the rationality community is an AGI programmed to maximize human happiness and trained to recognize happiness from a dataset of smiling human faces. In theory, one failure mode would be the AGI producing endless copies of humans and stimulating their facial muscles so that they smile for their entire lives.
That example is so reductive it's probably too absurd for anyone to expect it to happen literally, but it was meant to establish a proof of concept. As for who is making a "mistake," that's hard to describe without knowing more about the theory of AI alignment. To clarify, what I should have said is that while such an outcome could appear to be an error on the part of the AGI, it would really be a human error for having specified the goal wrong; the AGI would be properly executing the goal it was programmed with.
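To make the "correctly optimizing a mis-specified goal" point concrete, here is a minimal toy sketch of my own (not from the post or any real system; all names are hypothetical): a greedy optimizer that is scored only on a proxy signal, detected smiles, rather than on the well-being we actually care about.

```python
# Toy illustration (hypothetical): an optimizer scored only on a proxy signal
# (detected smiles) instead of the thing we actually care about (well-being).
# It executes its objective perfectly; the objective is what was mis-specified.

def true_wellbeing(world):
    """What we actually care about -- never shown to the optimizer."""
    return world["happiness"]

def proxy_reward(world):
    """What the system is actually scored on: number of detected smiles."""
    return world["smiles_detected"]

def candidate_actions(world):
    """Two ways to raise the proxy: genuinely improve lives, or force smiles."""
    improve = dict(world,
                   happiness=world["happiness"] + 1,
                   smiles_detected=world["smiles_detected"] + 1)
    force_smiles = dict(world,
                        happiness=world["happiness"] - 1,
                        smiles_detected=world["smiles_detected"] + 10)
    return [improve, force_smiles]

world = {"happiness": 0, "smiles_detected": 0}
for _ in range(20):
    # Greedily pick whichever action maximizes the proxy, exactly as specified.
    world = max(candidate_actions(world), key=proxy_reward)

print("proxy reward (smiles):", proxy_reward(world))    # high and climbing
print("true well-being:      ", true_wellbeing(world))  # steadily falling
```

The point of the sketch is only that the optimizer isn't malfunctioning; the bad outcome comes entirely from the gap between the proxy reward it was given and the value we intended.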
Complexity of value is a concept that gets at part of that kind of problem. Eliezer Yudkowsky of the Machine Intelligence Research Institute (MIRI) expanded on it in his paper "Artificial Intelligence as a Positive and Negative Factor in Global Risk," written for the Global Catastrophic Risks handbook edited by Nick Bostrom of the Future of Humanity Institute (FHI) and originally published by Oxford University Press in 2008. Bostrom's own 2014 book, Superintelligence, comprehensively reviewed anticipated failure modes for AI alignment. He also covered this kind of failure mode extensively, though I forget in which part of the book.
I'm guessing there have been updates to these concepts in the several years since those works were published, but I haven't kept up with the research literature lately. Reading one or more of those works should give you the fundamentals for understanding the subject, and you could use them as a jumping-off point for further questions on the EA Forum, LessWrong, or the Alignment Forum if you want to learn more afterward.