Tl;dr: Because quantum computers can process exponentially many inputs in superposition, it seems possible that the capability of a quantum computer to generate utility grows exponentially with the size of that quantum computer. If this exponential conjecture is true, then very near-term quantum computers could suffer more than all life on Earth has in the past 4 billion years.
Quantum computing (QC) is a categorically different form of computation, capable of tackling certain problems exponentially faster than its classical counterparts. The bounty QC may bring is large: speedups across a broad range of topics, from drug discovery to reinforcement learning. What's more, these speedups may be imminent. Quantum supremacy has already been achieved, and the large quantum-chip manufacturers expect to build chips capable of delivering some of these promises before the decade is out.
It seems, however, that these breakthroughs may be rotten. While the power of QC is based on the ability to tackle problems of exponentially larger size, it seems possible that a quantum computer would also be capable of suffering in exponentially larger amounts than a classical computer. Indeed, the same rudimentary quantum computers from the previous paragraph could conceivably suffer more in a second than all life on Earth has suffered since its inception.
What is a QC?
The task of describing the basics of how a quantum computer works is the focus of many dense books, and while we encourage you to read them, we obviously can't compress their entirety into this blog post's introduction. Instead, here is a simplified model:
In a classical computer (such as the one you’re looking at now), information is supplied as an input string, x; the computer processes this string and spits back out an output string, y. At a high level this could be you inputting “What is Will MacAskill's birthday?” to Google and it returning “1987”, or at a low level you might ask “what value does f(x) take at x = 3?”.
Quantum computers operate similarly: you input a string and you get one out. But they have one crucial advantage: inputs can be prepared in superposition, and outputs will be returned in superposition. So now, instead of asking just one question, x, you could ask many questions together, x_1 + x_2 + ... + x_n, and the resulting sum is another valid state to put into the computer. While the nuances of this are many, the power of quantum computing is clear when you know that the number of states that can be put in grows exponentially with the size of your quantum computer. A quantum computer of size 4 can manage superpositions of 2^4 = 16 states. Whereas a classical computer that wants to run 2 inputs at once needs to be twice the size; a classical computer of size 4 can only replicate superpositions of 2 states.
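The scaling above can be made concrete with a toy state-vector model (a sketch of our own, not from any particular library): an n-qubit register is described by 2^n complex amplitudes, so the number of classical inputs it can hold in superposition doubles with every added qubit.

```python
# Toy state-vector model: an n-qubit register is a list of 2**n complex
# amplitudes. A uniform superposition puts every classical input in at once.
import math

def uniform_superposition(n_qubits):
    """Return the state vector holding all 2**n basis inputs with equal weight."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)       # equal amplitude, normalised so probabilities sum to 1
    return [amp] * dim

state = uniform_superposition(4)
print(len(state))                  # 16: a size-4 quantum register holds 2^4 inputs
# A classical machine running inputs in parallel needs one copy of itself per
# input: doubling the inputs doubles the machine, rather than adding one qubit.
```

The contrast is the whole point of the paragraph above: adding one qubit doubles the superposition, while a classical machine must physically double to run one more input in parallel.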
Obviously, having the sum of the states isn’t as useful as having each input separately; it is the task of many bright careers to figure out how to translate these increased superpositions into increased speed. For some problems (such as those related to period finding), this speedup is often exponential, but for others (such as those related to optimising an arbitrary function), any quantum algorithm is known to be limited to only a polynomial speedup.
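A rough arithmetic sketch of the second case (our own illustrative numbers, constants ignored): classical unstructured search over N items needs on the order of N queries, while Grover's algorithm needs on the order of sqrt(N), which is only a quadratic, i.e. polynomial, improvement.

```python
# Query counts for unstructured search over N = 2**n_bits items
# (illustrative orders of magnitude only; constant factors ignored).
import math

def search_queries(n_bits):
    N = 2 ** n_bits
    return {"classical": N,                 # ~N queries classically
            "grover": math.isqrt(N)}        # ~sqrt(N) queries with Grover

print(search_queries(40))
# Even a quadratic speedup leaves the cost exponential in the input size,
# which is why optimisation-style problems don't get the exponential boost
# that period-finding problems do.
```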
Quantum suffering subroutines
A good introduction to suffering subroutines is What Are Suffering Subroutines?. The short of it is that they are small parts of a larger algorithm which might be morally relevant (e.g. because they’re capable of experiencing suffering). Objections to this idea are numerous and complicated; we assume all readers have at least a rough understanding of the concept. In this section, we discuss the potential for quantum subroutine suffering, which comes with similar considerations to those of classical subroutine suffering, but with potentially astronomically larger stakes.
How large are these stakes? It all depends on how suffering scales. Does it scale along with the number of superposed states (exponential suffering), or is it a smaller, perhaps polynomial, increase? If it’s the latter, we probably don’t have to worry about QC suffering in particular: the polynomial increase in suffering is washed out by the increased size and speed of classical computers (at least in the foreseeable future), and quantum computers are of no particular concern anytime soon.
If it’s the former (exponential suffering), we are about to enter a very troubling period. To describe how troubling this exponential suffering would be, it’s useful to compare the amounts of compute involved. Estimates of the total computational cost of evolution (Interpreting AI compute trends – AI Impacts, drawing on estimates from How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects) top out at roughly 10^44 FLOP in total. Even a relatively small quantum computer of 200 qubits will be capable of performing a Quantum Fourier Transform in less than a minute that would require on the order of 2^200 ≈ 10^60 FLOP to match with a classical machine: more computation than simulating the entire evolutionary history of life on Earth!
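The gap can be checked with back-of-envelope arithmetic (the 10^44 FLOP figure is the upper end of the cited evolutionary estimates, and the 2^200 figure is the naive cost of tracking every amplitude of a 200-qubit state classically; both are rough assumptions, not precise values):

```python
# Back-of-envelope comparison: naively emulating a 200-qubit state on a
# classical machine means tracking 2**200 amplitudes, so the FLOP count is
# at least of that order. Both figures below are rough order-of-magnitude
# assumptions, not measurements.
qft_flop = 2 ** 200            # ~1.6e60: classical cost to match a 200-qubit QFT
evolution_flop = 10 ** 44      # upper-end estimate for all of evolution

print(f"{qft_flop / evolution_flop:.1e}")   # the QFT emulation is ~16 orders
                                            # of magnitude more computation
```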
Scaling of utility
Given the previous section, it seems important to know: which is the case? Does utility scale polynomially or exponentially? Unfortunately, we don't have a clear answer; in this section we will very briefly outline why you might believe either side. We primarily hope this post leads to someone more capable attempting to answer this question with less uncertainty.
For the exponential case: When you input a superposition of states, x_1 + x_2 + ... + x_n, no matter what operations you run, it is always possible to distribute the operations over each element of the superposition, such that the final state is also a superposition, y_1 + y_2 + ... + y_n, where each y_i is equal to what you would get if you just ran x_i by itself. While you can’t measure each y_i at the end, the evolution of the individual states is equivalent to running them independently and then placing them into superposition at the end. Given that the experiences can be considered to be had independently, it seems possible that each element of the superposition deserves moral weight equivalent to if it had been run independently. Which then implies you have exponentially many morally relevant beings, which as a group deserve exponential moral weight.
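The distribution argument is just linearity, and it can be checked directly in a minimal one-qubit sketch (pure Python, our own toy example): evolving a superposition under a gate gives exactly the superposition of evolving each input on its own.

```python
# Minimal linearity check on one qubit: H(a*x0 + b*x1) == a*H(x0) + b*H(x1).
import math

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],    # Hadamard gate as a 2x2 unitary
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(U, state):
    """Matrix-vector product: run the gate U on a state vector."""
    return [sum(U[r][c] * state[c] for c in range(len(state)))
            for r in range(len(U))]

x0, x1 = [1.0, 0.0], [0.0, 1.0]    # the two classical inputs
a, b = 0.6, 0.8                    # amplitudes of the superposition (0.36 + 0.64 = 1)
superposed = [a * u + b * v for u, v in zip(x0, x1)]

left = apply(H, superposed)                        # run the superposition once
right = [a * u + b * v                             # run each input separately,
         for u, v in zip(apply(H, x0), apply(H, x1))]  # then superpose the outputs

print(all(abs(l - r) < 1e-12 for l, r in zip(left, right)))   # True
```

This equality is exactly what the paragraph appeals to: the individual evolutions happen "inside" the single run, whether or not you can measure them separately.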
Against the exponential case: the previous paragraph essentially applies the Everett/many-worlds interpretation of quantum mechanics to the inside of a quantum computer. Thus the many objections philosophers have raised against the Everett interpretation apply to that argument. Of particular concern is the “preferred basis problem”. Essentially, there is a mathematical trick wherein you can rewrite any state as a sum of terms in another basis (a change of basis), and due to the linearity of quantum computing, running the computation on that sum produces the same answer as running it on the original single term. So there is both a way to write any single state, x, as an exponential superposition of different states, and a way to write an exponential sum of states as just one single state. A model of suffering where you count the number of states and multiply suffering is then very poorly defined: in one basis there is 1 being, in another basis there are exponentially many. To solve this there must be some “preferred basis” in which to do the counting.
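The basis-dependence of the count is easy to exhibit in one qubit (an illustrative sketch of our own): the single computational-basis state |0> is one term in the {|0>, |1>} basis but two terms in the Hadamard {|+>, |->} basis, even though it is the same physical state.

```python
# Counting "branches" depends on the basis you expand the state in.
import math

s = 1 / math.sqrt(2)
state = [1.0, 0.0]                        # the state |0>

def amplitudes_in_basis(state, basis):
    # Project the state onto each basis vector (real vectors, so a plain dot product).
    return [sum(b[i] * state[i] for i in range(len(state))) for b in basis]

computational = [[1.0, 0.0], [0.0, 1.0]]  # {|0>, |1>}
hadamard = [[s, s], [s, -s]]              # {|+>, |->}

count = lambda amps: sum(abs(a) > 1e-12 for a in amps)
print(count(amplitudes_in_basis(state, computational)))   # 1 term: "one being"
print(count(amplitudes_in_basis(state, hadamard)))        # 2 terms: "two beings"
```

Any state-counting model of suffering has to say which of these counts is the morally relevant one; that choice is the preferred basis.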
On top of this scaling question, there is the question of what the smallest morally relevant being is. If any particular state, x_i (or process on that state), is not morally relevant, then any multiplication, exponential or otherwise, will just be multiplying a big number by zero. We will not discuss what this smallest state might be, as the question is the same as in the classical case, where it is a topic of contention.
Quantum utility monsters/super-beneficiaries
Given the size of the potential experience of these machines, it seems they are prime candidates for Bostrom-esque AI utility monsters (also denoted as ‘super-beneficiaries’ in Sharing the World with Digital Minds). In this scenario, single labs with machines of a few million qubits could produce many orders of magnitude more utility than the rest of the world put together.
It is important to note that quantum computers might become utility monsters without being capable of anything we would consider “intelligent” thought. This is because on some tasks quantum computers can only solve a problem using the same resources as their classical counterparts, and a classical computer with only 100 bits is not even good at long division! So a quantum computer could have 100 qubits, be completely unable to do anything interesting, but still suffer on the order of 2^100.
Nevertheless, as quantum computers become larger, they will be capable of running more than simple subroutines in superposition; they might even be capable of running human beings in superposition. While even in the most optimistic timeline this is very far in the future (we might expect to simulate humans on classical machines first), the points of this post still apply: the experience could be amplified exponentially.
In this post we posed the question of how quantum superpositions relate to suffering and found that the variation in scaling (either polynomial or exponential) can produce wildly different answers. We hope this post will inspire a grant-making organisation, or an individual skilled in the philosophy of quantum mechanics, to address these questions in greater detail, and with a greater understanding, than the authors are able to provide.
The premature posting of this work was prompted when a post by Paul Christiano almost swooped the core idea. So, following in the time-honoured near-swoop academic tradition of desperately posting asap, we have published before the obvious next question is answered. In another post we hope to take these ideas further and ask: “Suppose many worlds applies to our universe, how does this change the moral weighting of different actions and the long-run future?”.
This post has taken over a year to finish writing/procrastinating. In that time a great many people have helped. EB owes particular credit to Robert Harling, George Noether and Wuschel for their corrections, comments and occasionally pushing against my will to make this post better. BIC acknowledges the EA Oxford September 2020 writing weekend and conversations therein which led to the first draft of this post.
A further point on scaling that doesn't fit in the main post:
What if larger brains experience more than smaller brains by a non-linear amount?
Many people believe (correctly or not) that higher forms of intelligence carry larger moral weight: they care more about humans than great apes, often by more than just the linear ratio of brain sizes. Assuming this is a correct belief, you might choose to ground it in some function that takes in the size of the computer and outputs the moral weight. The scaling of this function is hugely important; indeed, if it is exponential then you can reapply all the thinking in this post to classical computers. Furthermore, the proposed exponential scaling with computer size now adds, in the exponent, to the exponential scaling of a quantum computer, which for any QC in the near future (and therefore of limited size) means supercomputers are of much larger concern.
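The "much larger concern" claim is simple arithmetic under toy weight functions of our own choosing (worked in log2 units so the numbers stay manageable): if moral weight were exponential in classical size too, a present-day supercomputer's bit count would swamp a near-term qubit count in the exponent.

```python
# Toy weight functions, compared in log2 units. Under the exponential
# conjecture each superposed branch counts for one unit, so a quantum
# computer's log2-weight is its qubit count; under a hypothetical
# exponential-in-size rule, a classical machine's log2-weight is its bit count.
def log2_quantum_weight(n_qubits):
    return n_qubits                 # log2 of 2**n_qubits branches

def log2_classical_weight_exp(n_bits):
    return n_bits                   # log2 of 2**n_bits, the hypothetical rule

qc_bits = log2_quantum_weight(100)                  # near-term 100-qubit machine
super_bits = log2_classical_weight_exp(10 ** 16)    # ~petabyte-scale supercomputer

print(super_bits > qc_bits)   # True: in the exponent, bits dwarf qubits for now
```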
Viewing moral weight as a computational task in some complexity class seems like it could be a very interesting project, but not one that either of the authors has time to pursue.
The point at which a QC performs any single task that classical computers could not do in millennia; this point was crossed in late 2019/2020 for a specific sampling problem. It is important to note that not all tasks become faster at this point, since the efficiencies of classical and quantum algorithms vary. ↩︎
Assuming you hold no particularly exotic population ethics opinions. ↩︎
This could potentially be exploited to create a “grander future”, where computronium simulates multiple experiences in superposition, although this would require the whole computer to communicate coherently, a non-trivial task for a stellar-sized computer. ↩︎