
CarlShulman

5611 karma

Comments (379)

I have two views in the vicinity. First, there's a general issue that human moral practice generally isn't just axiology, but also includes a number of elements that are built around interacting with other people with different axiologies, e.g. different ideologies coexisting in a liberal society, or different partially selfish people or family groups coexisting fairly while preferring different outcomes. Most flavors of utilitarianism ignore those elements, and ceteris paribus would, given untrammeled power, call for outcomes that would be ruinous for ~all currently existing beings, and in particular existing societies. That could be classical hedonistic utilitarianism diverting the means of subsistence from all living things as we know them to fuel more hedonium, negative-leaning views wanting to be rid of all living things with any prospect of having or causing pain or dissatisfaction, or views that endorse playing double-or-nothing with the universe until it is destroyed with probability 1.
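To make the double-or-nothing point concrete, here is a minimal sketch of the arithmetic behind "destroyed with probability 1" (the per-round win probability $p$ is an illustrative symbol, not something specified above):

$$
\Pr(\text{universe survives } n \text{ rounds}) = p^{n} \;\longrightarrow\; 0 \quad \text{as } n \to \infty \text{ for any } p < 1,
$$

while the expected payoff after $n$ rounds is $(2p)^{n}$ times the initial stake, which never decreases so long as $p \ge \tfrac{1}{2}$, so a naive expected-value maximizer keeps betting even though ruin is certain in the limit.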

So most people have reason to oppose any form of utilitarianism getting absolute power (and many utilitarianisms would have reason to self-efface into something less scary, dangerous, and prone to using power in such ways, since that would have a better chance of realizing more of what they value while endangering other concerns less). I touch on this in an article with Elliott Thornley.

I have an additional objection to hedonic-only views in particular: they don't even take many of people's concerns as inputs, and so more easily wind up hostile to particular individuals supposedly for those individuals' sake. E.g. I would prefer to retain my memories, personal identity, knowledge, and autonomy rather than be coerced into forced administration of pleasure drugs. I also would like to achieve various things in the world in reality, and would prefer that to an experience machine. A normative scheme that doesn't even take those concerns as inputs is almost certainly going to run roughshod over them, even if some theories that do take them as inputs might do so too.

Physicalists and illusionists mostly don't agree with the identification of 'consciousness' with magical stuff or properties bolted onto the psychological or cognitive science picture of minds. All the real feelings and psychology that drive our thinking, speech and action exist. I care about people's welfare, including experiences they like, but also other concerns they have (the welfare of their children, being remembered after they die), and that doesn't hinge on magical consciousness that we, the physical organisms having this conversation, would have no access to. The illusion is of the magical part.

Re desires, I think the main upshot of non-dualist views of consciousness is in responding to arguments that invoke special properties of conscious states to say that those matter but other concerns of people do not. It's still possible to be a physicalist and think that only selfish preferences focused on your own sense impressions or introspection matter; it just looks more arbitrary.

I think this is important because it's plausible that many AI minds will have concerns mainly focused on the external world rather than their own internal states, and running roughshod over those values because they aren't narrowly mentally-self-focused seems bad to me.

> Here’s a fairly safe prediction: most of the potential harm from AI is potential harm to nonhuman animals.

I would think that for someone who attended an AI, Animals, and Digital Minds conference it should look like an extremely precarious prediction: AIs will likely immensely outnumber nonhuman animals, and could have much more of most features we could use in measuring 'harm'.

Rapid fire:
 

  • Near-term extinction risk from AI is wildly closer to total AI x-risk than the nuclear analog
  • My guess is that nuclear war interventions powerful enough to be world-beating for future generations would look tremendous in averting current human deaths, and most of the willingness to pay (WTP) should come from that if one has a lot of WTP related to each of those worldviews
  • Re suspicious convergence, what do you want to argue with here? In the past I've favored allocating less than 1% of my marginal AI allocation to value of information (VOI) and low-hanging fruit on nuclear risk not leveraging AI-related things (because of larger, more likely near-term risks from AI with more tractability and neglectedness); recent AI developments tend to push that down, but might surface something in the future that is really leveraged on avoiding nuclear war
  • I agree not much has been published in journals on the impact of AI being developed in dictatorships
  • Re lock-in, I do not think it's remote for a CCP-led AGI future (my views are different from what that paper limited itself to)

I agree that people should not focus on nuclear risk as a direct extinction risk (and have long argued this), see Toby's nuke extinction estimates as too high, and would assess measures to reduce damage from nuclear winter to developing neutral countries mainly in GiveWell-style or ordinary cost-benefit terms, while considerations about future generations would favor a focus on AI and, to a lesser extent, bio.

However, I do think this wrongly downplays the effects on our civilization, beyond casualties and local damage, of a nuclear war that wrecks the current nuclear powers: e.g. disrupting international cooperation, rerolling contingent nice aspects of modern liberal democracy, or leading to the release of additional WMD arsenals (such as bioweapons) while disrupting defenses against those weapons. So the 'can nuclear war with current arsenals cause extinction' question misses most of the existential risk from nuclear weapons, which is indirect, contributing to other risks that could cause extinction or lock-in of permanent awful regimes. I think marginal philanthropic dollars can save more current lives and help the overall trajectory of civilization more on other risks, but I think your direct extinction numbers above do greatly underestimate how much worse the future should be expected to be given a nuclear war that laid waste to, e.g., NATO+allies and the Russian Federation.

You dismiss that here:

> Then discussions move to more poorly understood aspects of the risk (e.g. how the distribution of values after a nuclear war affects the longterm values of transformative AI).

But I don't think it's a huge stretch to say that a war with Russia largely destroying the NATO economies (and their semiconductor supply chains), leaving the PRC to dominate the world system and the onrushing creation of powerful AGI, makes a big difference to the chance of locked-in permanent totalitarianism, with the values of one dictator running roughshod over the low-hanging fruit of many others' values. That effect is very large compared to these extinction effects. It also doesn't require bets on extreme and plausibly exaggerated nuclear winter magnitudes.

Similarly, the chance of a huge hidden state bioweapons program having its full arsenal released simultaneously (including doomsday pandemic weapons) skyrockets in an all-out WMD war in obvious ways.

So if one were to find super-leveraged ways to reduce the chance of nuclear war (this applies less to measures to reduce damage to nonbelligerent states), then in addition to beating GiveWell at saving current lives, they could have big impacts on future generations. Such opportunities are extremely scarce, but the bar for looking good on future-generation impacts is lower than I think this post suggests.

Thank you for the comment, Bob.

I agree that I am also disagreeing on the object level, as Michael made clear with his comments (I do not think I am talking about a tiny chance, although I do not think the RP discussions characterized my views as I would), and on some other methodological issues besides two-envelopes (related to the object-level ones). E.g. I would not want to treat a highly networked AI mind (with billions of bodies and computation directing them in a unified way, on the scale of humanity) as having a millionth or a billionth of the welfare of the same set of robots and computations with less integration (and overlap of shared features, or top-level control), ceteris paribus.

Indeed, I would be wary of treating the integrated mind as though the welfare stakes for it were half or a tenth as great, seeing that as a potential source of moral catastrophe, like ignoring the welfare of minds not based on proteins: e.g. having tasks involving suffering and frustration done by large integrated minds, and pleasant ones done by tiny minds, while increasing the amount of mental activity in the former. It sounds like the combination of object-level and methodological takes attached to these reports would favor almost completely ignoring the integrated mind.

Incidentally, in a world where small animals are numerous and treated extremely badly, I can see a temptation to err in their favor, since even overestimates of their importance could shift things in the right marginal policy direction. But thinking about the potential moral catastrophes on the other side helps sharpen the motivation to get it right.

In practice, I don't prioritize moral weights issues in my work, because I think the most important decisions hinging on them will come in an era with AI-aided mature sciences of mind, philosophy, and epistemology. And as I have written, regardless of your views about small minds and large minds, it won't be the case that, e.g., humans are the utility monsters of impartial hedonism (rather than something bigger, smaller, or otherwise different), and the grounds for focusing on helping humans won't be terminally impartial-hedonistic in nature. But from my viewpoint, baking in the assumption that integration (and unified top-level control, or mental overlap of some parts of computation) close to eliminates mentality or welfare (vs. less integrated collections of computations) seems bad in a non-Pascalian fashion.

Lots of progress on AI, alignment, and governance. This sets up a position where it is likely that a few years later there's an AI capabilities explosion and among other things:
 

  • Mean human wealth skyrockets, while AI+robots make cultured meat and substitutes, as well as high-welfare systems (and reengineering biology), cheap relative to consumers' wealth; human use of superintelligent AI advisors leads to global bans on farming with miserable animals and/or all farming
  • Perfect neuroscientific and psychological knowledge of humans and animals, combined with superintelligent advisors, leads to concern for wild animals; robots with biology-like abilities and greater numbers and capacities can safely adjust wild-animal ecologies to ensure high welfare at negligible material cost to humanity, and this is done

If it were 2028, it would be more like 'the above has already happened' rather than conditions being well set up for it.

Not much new on that front besides continuing to back the donor lottery in recent years, for the same sorts of reasons as in the link, and focusing on research and advising rather than sourcing grants.

A bit, but more on the willingness of AI experts and some companies to sign the CAIS letter and lend their voices to the view 'we should go forward very fast with AI, but keep an eye out for better evidence of danger and have the ability to control things later.'

My model has always been that the public is technophobic, but that 'this will be constrained like peaceful nuclear power or GMO crops' isn't enough to prevent a technology that enables decisive strategic advantage (DSA) and orders of magnitude (OOMs) of growth (and nuclear power and GMO crops do exist; if AGI exists somewhere, that place outgrows the rest of the world if the rest of the world sits on the sidelines). If leaders' understanding of the situation is that public fears are erroneous, and going forward with AI means a hugely better economy (and thus popularity for incumbents) and avoiding a situation where abhorred international rivals can safely disarm their military, then I don't expect it to be stopped. So the expert views, as defined by who the governments view as experts, are central in my picture.

Visible AI progress like ChatGPT strengthens 'fear AI disaster' arguments, but at the same time strengthens 'fear being behind in AI / others having AI' arguments. The kinds of actions that have been taken so far are mostly of the latter type (export controls, etc.), plus measures to monitor the situation and perhaps do something later if the evidential situation changes. I.e. they reflect the spirit of the CAIS letter, which companies like OpenAI were willing to sign, and not the pause letter, which many CAIS letter signatories oppose.

The evals and monitoring agenda is an example of going for value of information rather than banning/stopping AI advances, as I discussed in the comment, and that's a reason it has had an easier time advancing.

I don't want to convey that there was no discussion, thus my linking the discussion and saying I found it inadequate and largely missing the point from my perspective. I made an edit for clarity, but would accept suggestions for another.

 
