Matthew_Barnett

This is a paper about the technological singularity, not the economic singularity. 

I don't see the distinction here. William Nordhaus used the term "economic singularity" in the same sense as the technological singularity. Economists generally believe that technological innovation is the main cause of long-term economic growth, making these two topics inherently interconnected.

This post contains many claims that you interpret OpenAI to be making. However, unless I'm missing something, I don't see citations for any of the claims you attribute to them. Moreover, several of the claims read as possible misinterpretations of what OpenAI is saying, or merely poorly communicated ideas on their part.

I acknowledge that this post was hastily written, and it's not necessary to rigorously justify every claim, but your thesis also seems like the type of thing that should be demonstrated rather than asserted. It would indeed be damning if OpenAI were taking contradictory positions on numerous important issues, but I don't think you've shown in this post that they are. The post would be stronger if you gave concrete examples.

For example, you say that OpenAI is simultaneously claiming,

  1. OpenAI cares a lot about safety (good for public PR and government regulations).
  2. OpenAI isn’t making anything dangerous and is unlikely to do so in the future (good for public PR and government regulations).

Is it true that OpenAI has claimed that they aren't making anything dangerous and aren't likely to do so in the future? Where have they said this?

In my latest post I talked about whether unaligned AIs would produce more or less utilitarian value than aligned AIs. To be honest, I'm still quite confused about why many people seem to disagree with the view I expressed, and I'm interested in engaging more to get a better understanding of their perspective.

At the least, I thought I'd write a bit more about my thoughts here, and clarify my own views on the matter, in case anyone is interested in trying to understand my perspective.

The core thesis I was trying to defend is the following view:

My view: It is likely that by default, unaligned AIs—AIs that humans are likely to actually build if we do not completely solve key technical alignment problems—will produce utilitarian value comparable to humans, both directly (by being conscious themselves) and indirectly (via their impacts on the world). This is because unaligned AIs will likely both be conscious in a morally relevant sense and share human moral concepts, since they will be trained on human data.

Some people seem to merely disagree with my view that unaligned AIs are likely to be conscious in a morally relevant sense. And a few others have a semantic disagreement with me, in which they define AI alignment in moral terms rather than as the ability to make an AI share the preferences of the AI's operator.

But beyond these two objections, which I feel I understand fairly well, there's also significant disagreement about other questions. Based on my discussions, I've attempted to distill the following counterargument to my thesis, which I fully acknowledge does not capture everyone's views on this subject:

Perceived counter-argument: The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives. At present, only a small proportion of humanity holds partly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity's modest (but non-negligible) utilitarian tendencies. As a result, it is plausible that almost all value would be lost, from a utilitarian perspective, if AIs were unaligned with human preferences.

Again, I'm not sure if this summary accurately represents what people believe. However, it's what some seem to be saying. I personally think this argument is weak. But I feel I've had trouble making my views very clear on this subject, so I thought I'd try one more time to explain where I'm coming from here. Let me respond to the two main parts of the argument in some amount of detail:

(i) "The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives."

My response:

I am skeptical of the notion that the bulk of future utilitarian value will originate from agents with explicitly utilitarian preferences. This clearly does not reflect our current world, where the primary sources of happiness and suffering are not the result of deliberate utilitarian planning. Moreover, I do not see compelling theoretical grounds to anticipate a major shift in this regard.

I think the intuition behind the argument here is something like this:

In the future, it will become possible to create "hedonium"—matter that is optimized to generate the maximum amount of utility or well-being. If hedonium can be created, it would likely be vastly more important than anything else in the universe in terms of its capacity to generate positive utilitarian value.

The key assumption is that hedonium would primarily be created by agents who have at least some explicit utilitarian goals, even if those goals are fairly weak. Given the astronomical value that hedonium could potentially generate, even a tiny fraction of the universe's resources being dedicated to hedonium production could outweigh all other sources of happiness and suffering.

Therefore, if unaligned AIs would be less likely to produce hedonium than aligned AIs (due to not having explicitly utilitarian goals), this would be a major reason to prefer aligned AI, even if unaligned AIs would otherwise generate comparable levels of value to aligned AIs in all other respects.

If this is indeed the intuition driving the argument, I think it falls short for a straightforward reason. The creation of matter-optimized-for-happiness is more likely to be driven by the far more common motives of self-interest and concern for one's inner circle (friends, family, tribe, etc.) than by explicit utilitarian goals. If unaligned AIs are conscious, they would presumably have ample motives to optimize for positive states of consciousness, even if not for explicitly utilitarian reasons.

In other words, agents optimizing for their own happiness, or the happiness of those they care about, seem likely to be the primary force behind the creation of hedonium-like structures. They may not frame it in utilitarian terms, but they will still be striving to maximize happiness and well-being for themselves and others they care about regardless. And it seems natural to assume that, with advanced technology, they would optimize pretty hard for their own happiness and well-being, just as a utilitarian might optimize hard for happiness when creating hedonium.

In contrast to the number of agents optimizing for their own happiness, the number of agents explicitly motivated by utilitarian concerns is likely to be much smaller. Yet both forms of happiness will presumably be heavily optimized. So even if explicit utilitarians are more likely to pursue hedonium per se, their impact would likely be dwarfed by the efforts of the much larger group of agents driven by more personal motives for happiness-optimization. Since both groups would be optimizing for happiness, the fact that hedonium is similarly optimized for happiness doesn't seem to provide much reason to think that it would outweigh the utilitarian value of more mundane, and far more common, forms of utility-optimization.

To be clear, it's totally possible that there's something about this argument I'm missing, and there are many potential objections I'm skipping over. But on a basic level, I mostly just lack the intuition that the thing we should care about, from a utilitarian perspective, is the existence of explicit utilitarians in the future, for the aforementioned reasons. The fact that our current world isn't well described by the idea that what matters most is the number of explicit utilitarians strengthens my point here.

(ii) "At present, only a small proportion of humanity holds partly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity's modest (but non-negligible) utilitarian tendencies."

My response:

Since only a small portion of humanity is explicitly utilitarian, the argument's own logic suggests that there is significant potential for AIs to be even more utilitarian than humans, given the relatively low bar set by humanity's limited utilitarian impulses. While I agree we shouldn't assume AIs will be more utilitarian than humans without specific reasons to believe so, it seems entirely plausible that factors like selection pressures for altruism could lead to this outcome. Indeed, commercial AIs seem to be selected to be nice and helpful to users, which (at least superficially) seems "more utilitarian" than the default (primarily selfish-oriented) impulses of most humans. The fact that humans are only slightly utilitarian should mean that even small forces could cause AIs to exceed human levels of utilitarianism.

Moreover, as I've said previously, it's probable that unaligned AIs will possess morally relevant consciousness, at least in part due to the sophistication of their cognitive processes. They are also likely to absorb and reflect human moral concepts as a result of being trained on human-generated data. Crucially, I expect these traits to emerge even if the AIs do not share human preferences. 

To see where I'm coming from, consider how humans are routinely "misaligned" with each other, in the sense of not sharing each other's preferences, and yet still share moral concepts and a common culture. For example, an employee can share moral concepts with their employer while having very different consumption preferences. This picture is pretty much how I think we should primarily think about unaligned AIs that are trained on human data and shaped heavily by techniques like RLHF or DPO.

Given these considerations, I find it unlikely that unaligned AIs would completely lack any utilitarian impulses whatsoever. However, I do agree that even a small risk of this outcome is worth taking seriously. I'm simply skeptical that such low-probability scenarios should be the primary factor in assessing the value of AI alignment research.

Intuitively, I would expect the arguments for prioritizing alignment to be more clear-cut and compelling than "if we fail to align AIs, then there's a small chance that these unaligned AIs might have zero utilitarian value, so we should make sure AIs are aligned instead". If low probability scenarios are the strongest considerations in favor of alignment, that seems to undermine the robustness of the case for prioritizing this work.

While it's appropriate to consider even low-probability risks when the stakes are high, I'm doubtful that small probabilities should be the dominant consideration in this context. I think the core reasons for focusing on alignment should probably be more straightforward and less reliant on complicated chains of logic than this type of argument suggests. In particular, as I've said before, I think it's quite reasonable to think that we should align AIs to humans for the sake of humans. In other words, I think it's perfectly reasonable to admit that solving AI alignment might be a great thing to ensure human flourishing in particular.

But if you're a utilitarian, and not particularly attached to human preferences per se (i.e., you're non-speciesist), I don't think you should be highly confident that an unaligned AI-driven future would be much worse than an aligned one, from that perspective.

I agree, but I think very few people want to acquire e.g. 10 T$ of resources without broad consent of others.

I think I simply disagree with the claim here; I think it's not true. Many people would want to acquire $10T without the broad consent of others, if they had the ability to obtain such wealth (and could actually spend it; here I'm assuming they genuinely control this quantity of resources and aren't penalized for having acquired it without broad consent, since that would change the scenario). It may be that fewer than 50% of people have such a desire, but I'd be very surprised if it were <1%, and I'd even be surprised if it were <10%.

I agree biological humans will likely become an increasingly small fraction of the world, but it does not follow that AI carries a great risk to humans[1]. I would not say people born after 1960 carry a great risk to people born before 1960, even though the fraction of the global resources controlled by the latter is becoming increasingly small.

I think humans born after 1960 do pose a risk to humans born before 1960 in some ordinary senses. For example, the younger humans could vote to decrease medical spending, which could lead to early death for the older humans. They could also vote to increase taxes on people who have accumulated a lot of wealth, which very disproportionately hurts old people. This is not an implausible risk either; I think these things have broadly happened many times in the past.

That said, I suspect part of the disagreement here is about time scales. In the short and medium term, I agree: I'm not so much worried about AI posing a risk to humanity. I was really only talking about long-term scenarios in my above comment.

In my mind, very few humans would want to pursue capabilities which are conducive to gaining control over humanity.

This seems false. Plenty of people want wealth and power, which are "conducive to gaining control over [parts of] humanity". It is true that no single person has ever gotten enough power to actually get control over ALL of humanity, but that's presumably because of the difficulty of obtaining such a high level of power, rather than because few humans have ever pursued the capabilities that would be conducive towards that goal. Again, this distinction is quite important.

There are diminishing returns to having more resources. For example, if you give 10 M$ (0.01 % of global resources) to a random human, they will not have much of a desire to take risks to increase their wealth to 10 T$ (10 % of global resources), which would be helpful to gain control over humanity. To increase their own happiness and that of their close family and friends, they would do well by investing their newly acquired wealth in exchange-traded funds (ETFs). A good imitator AI would share our disposition of not gaining capabilities beyond a certain point, and therefore (like humans) never get close to having a chance of gaining control over humanity.

I agree that a good imitator AI would likely share our disposition towards diminishing marginal returns to resource accumulation. This makes it likely that such AIs would not take very large risks. However, I still think the main reason why no human has ever taken control over humanity is because there was no feasible strategy that any human in the past could have taken to obtain such a high degree of control, rather than because all humans in the past have voluntarily refrained from taking the risks necessary to obtain that degree of control.

In fact, risk-neutral agents that don't experience diminishing returns to resource consumption will almost surely lose all their wealth eventually in high-risk bets. Therefore, even without this human-imitation argument, we shouldn't be much concerned about risk-neutral agents in most scenarios (including risks from reinforcement learners), since they're very likely to go bankrupt before they ever reach the point at which they could take over the world. Such agents are importantly relevant only in a very small fraction of worlds.
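A toy simulation (my own illustration with made-up numbers, not anything from the original discussion) makes this gambler's-ruin point concrete: an agent that repeatedly stakes its entire bankroll on a fair double-or-nothing bet keeps a constant expected wealth, yet after n rounds it has gone bankrupt with probability 1 − 0.5^n.

```python
import random

def simulate_all_in(rounds=10, trials=10_000, seed=0):
    """Each trial starts with wealth 1.0 and repeatedly bets everything
    on a fair double-or-nothing flip.  Expected wealth is 1.0 at every
    round, yet per-trial survival probability is only 0.5 ** rounds."""
    rng = random.Random(seed)
    survivors = 0
    total_wealth = 0.0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rounds):
            if rng.random() < 0.5:
                wealth *= 2.0       # win: double the stake
            else:
                wealth = 0.0        # lose: bankrupt, stop betting
                break
        if wealth > 0.0:
            survivors += 1
        total_wealth += wealth
    return survivors, total_wealth / trials

survivors, mean_wealth = simulate_all_in()
print(f"survivors: {survivors}/10000, mean wealth: {mean_wealth:.2f}")
```

Roughly 1 in 1024 trials survives all ten rounds (ending with wealth 2^10 = 1024), so average wealth stays near the starting 1.0 even though about 99.9% of agents end bankrupt, which is the sense in which a maximally risk-neutral agent almost surely ruins itself before amassing decisive resources.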

I think humans usually acquire power fairly gradually. A good imitator AI would be mindful that acquiring power too fast (suddenly fooming) would go very much against what humans usually do.

Again, the fact that humans acquire power gradually is more of a function of our abilities than it is a function of our desires. I repeat myself but this is important: these are critical facts to distinguish from each other. "Ability to" and "desire to" are very different features of the situation.

It is very plausible to me that some existing humans would "foom" if they had the ability. But in fact, no human has such an ability, so we don't see anyone fooming in the real world. This is mainly a reflection of the fact that humans cannot foom, not that they don't want to foom.

No human has ever had control over all humanity, so I agree there is a sense in which we have "zero data" about what humans would do under such conditions. Yet, I am still pretty confident that the vast majority of humans would not want to cause human extinction.

I am also "pretty confident" about that, but "pretty confident" is a relatively weak statement here. When evaluating this scenario, we are extrapolating into a regime in which we have no direct experience. It is one thing to say that we can be "pretty confident" in our extrapolations (and I agree with that); it is another thing entirely to imply that we have tons of data points directly backing up our prediction, based on thousands of years of historical evidence. We simply do not have that type of (strong) evidence.

I do not think this is the best comparison. There would arguably be many imitator AIs, and these would not gain near-omnipotent abilities overnight. I would say both of these greatly constrain the level of subjugation. Historically, innovations and new inventions have spread out across the broader economy, so I think there should be a strong prior against a single imitator AI suddenly gaining control over all the other AIs and humans.

I agree, but this supports my point: I think imitator AIs are safe precisely because they will not have godlike powers. I am simply making the point that this is different from saying they are safe because they have human-like motives. Plenty of things in the world are safe because they are not very powerful. It is completely different if something is safe because its motives are benevolent and pure (even if it's extremely powerful).

How long-run are you talking about here? Humans 500 years ago arguably had little control over current humans, but this alone does not imply a high existential risk 500 years ago.

I agree with Robin Hanson on this question. However, I think humans will likely become an increasingly small fraction of the world over time, as AIs become a larger part of it. Just as hunter-gatherers are threatened by industrial societies, so too may biological humans one day become threatened by future AIs. Such a situation may not be very morally bad (or deserving the title "existential risk"), because humans are not the only morally important beings in the world. Yet, it is still true that AI carries a great risk to humanity.

What is the risk level above which you'd be OK with pausing AI?

My loose off-the-cuff response to this question is that I'd be OK with pausing if there was a greater than 1/3 chance of doom from AI, with the caveats that:

  • I don't think p(doom) is necessarily the relevant quantity. What matters is the relative benefit of pausing vs. unpausing, rather than the absolute level of risk.
  • "doom" lumps together a bunch of different types of risks, some of which I'm much more OK with compared to others. For example, if humans become a gradually weaker force in the world over time, and then eventually die off in some crazy accident in the far future, that might count as "humans died because of AI" but it's a lot different than a scenario in which some early AIs overthrow our institutions in a coup and then commit genocide against humans.
  • I think it would likely be more valuable to pause later in time during AI takeoff, rather than before AI takeoff.

Under what conditions would you be happy to attend a protest? (LMK if you have already attended one!)

I attended the protest against Meta because I thought their approach to AI safety wasn't very thoughtful, although I'm still not sure it was a good decision to attend. I'm not sure what would make me happy to attend a protest, but these scenarios might qualify:

  • A company or government is being extremely careless about deploying systems that pose great risks to the world. (This doesn't count situations in which the system poses negligible risks but some future system could pose a greater risk.)
  • The protesters have clear, reasonable demands that I broadly agree with (e.g. they don't complain much about AI taking people's jobs, or AI being trained on copyrighted data, but are instead focused on real catastrophic risks that are directly addressed by the protest).