RobBensinger

'Beneficentrism', by Richard Yetter Chappell

I think beneficentrism is a good word and works fine. Feels well-optimized for its target audience, which I gather is philosophers and philosophy-fans who object to EA because they think EA commits you to utilitarianism.

"Long-Termism" vs. "Existential Risk"

S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren't captured by the short-term x-risk view.

Why not?

Shah and Yudkowsky on alignment failures

Yeah, a story this complicated isn't good for introducing people to AI risk (because they'll assume the added details are necessary for the outcome), but it's great for making the story more interesting and real-feeling.

The real world is less cute and funny, but is typically even more derpy / inelegant / garden-pathy / full of bizarre details.

AI views and disagreements AMA: Christiano, Ngo, Shah, Soares, Yudkowsky

Thanks for the question! I cross-posted it here; Nate Soares replies:

For sure. It's tricky to wipe out humanity entirely without optimizing for that in particular -- nuclear war, climate change, and extremely bad natural pandemics look to me like they're at most global catastrophes, rather than existential threats. It might in fact be easier to wipe out humanity by engineering a pandemic that's specifically optimized for this task (than it is to develop AGI), but we don't see vast resources flowing into humanity-killing-virus projects, the way that we see vast resources flowing into AGI projects. By my accounting, most other x-risks look like wild tail risks (what if there's a large, competent, state-funded successfully-secretive death-cult???), whereas the AI x-risk is what happens by default, on the mainline (humanity is storming ahead towards AGI as fast as they can, pouring billions of dollars into it per year, and by default what happens when they succeed is that they accidentally unleash an optimizer that optimizes for our extinction, as a convergent instrumental subgoal of whatever rando thing it's optimizing).

Visible Thoughts Project and Bounty Announcement

We have now received the first partial run that meets our quality bar. The run was submitted by LessWrong user Vanilla_cabs. Vanilla's team is still expanding the run (and will probably fix some typos, etc. later), but I'm providing a copy of it here with Vanilla's permission, to give others an example of the kind of thing we're looking for:

https://docs.google.com/document/d/1Wsh8L--jtJ6y9ZB35mEbzVZ8lJN6UDd6oiF0_Bta8vM/edit

Vanilla's run is currently 266 steps long. Per the Visible Thoughts Project FAQ, we're willing to pay authors $20 / step for partial runs that meet our quality bar (up to at least the first 5,000 total steps we're sent), so the partial run here will receive $5,320 from the prize pool (though the final version will presumably be much longer and receive more; we expect a completed run to be about 1,000 steps).
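
A quick sanity check on the payout arithmetic, as a minimal sketch (the $20/step rate, the 266-step count, and the ~1,000-step target are all from the text above; the projected full-run payout is my extrapolation, assuming the same per-step rate holds throughout):

```python
# Payout arithmetic for the Visible Thoughts bounty, using the figures above.
rate_per_step = 20        # dollars per step, per the project FAQ
steps_submitted = 266     # current length of the partial run

partial_payout = steps_submitted * rate_per_step
print(partial_payout)     # 5320

# Rough projection if the run grows to the expected ~1,000 steps
# (my extrapolation, assuming the per-step rate applies throughout):
expected_full_run = 1_000
print(expected_full_run * rate_per_step)  # 20000
```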

Vanilla_cabs is open to doing paid consultation for anyone who's working on this project. So if you want feedback from someone who understands our quality bar and can demonstrably pass it, contact Vanilla_cabs via their LessWrong profile.

Visible Thoughts Project and Bounty Announcement

In case you missed it: we now have an FAQ for this project, last updated Jan. 7.

The Bioethicists are (Mostly) Alright

I'm guessing a lot of the disagreement comes from looking at different time-slices of 'bioethics', and different parts of the field. From Luke Muehlhauser:

[...] For many decades, doctors were the presumed authorities on medical ethics, and their approach was fairly pragmatic and utilitarian, i.e. focused on competently and professionally doing what is best for the patient.

Starting in the 1960s, new medical capabilities (e.g. heart transplants) and some medical ethics scandals (e.g. the Tuskegee syphilis experiment) seemed to demand ethical analysis, but for the most part, the professional medical community generally didn’t want to spend its time with such “distractions” from the practice of medicine.

A mix of scholars, often theologians or philosophers, began to fill this void by devoting themselves full-time to studying and writing about questions of medical ethics. These people began to call themselves “bioethicists.”

Then, when some key government commissions and court cases came about in the 70s and 80s, the bioethicists had done enough work to establish themselves as “the experts” on these topics that they had a large and lasting influence on some important early laws and court decisions concerning various issues in medical ethics. Since the medical community had also neglected to develop curricular materials for teaching medical ethics, this void was also filled by texts written by bioethicists rather than by medical professionals, and thus whole generations of medical professionals were trained in the bioethicists’ early approach to medical ethics rather than (say) an approach developed by doctors.

These developments annoyed many medical professionals. In part, this was because they felt that professional medical expertise was necessary (and perhaps sufficient) for thinking through the ethical issues that arise in the practice of medicine. Another source of annoyance may have been that bioethicists of the time tended to be more theological and deontological (i.e. less utilitarian), and more cautious about developing and deploying new medical capabilities, compared to doctors.

The early laws and court decisions related to bioethics continue to have an outsized effect, though bioethicists today are probably more diverse than they were in the earliest years of bioethics, and (e.g.) many of them are explicitly utilitarian.

Luke quotes Baker's Before Bioethics:

[...] In Europe, by contrast, organized medicine neither abandoned medical ethics nor abdicated moral authority. Consequently, just as alcoholic and caffeinated beverages retained jurisdiction over social life in European pubs and cafes, rendering soft drinks to the status of second-class beverages, so, too, organized medical and scientific societies (e.g., the British and Dutch medical societies and specialty colleges) retained jurisdiction over medical ethics — relegating aspiring European bioethicists to the status of second-tier authorities.

Thus, the Royal Dutch Medical Association… was able to negotiate physician-initiated euthanasia practices with Dutch legal authorities without involving “bioethicists” in any major decision.

Similarly, the British National Health Service… was also able to initiate a covert rationing scheme limiting use of dialysis and other expensive technologies to younger patients — effectively resolving the rationing problem created by the Scribner shunt by denying access to the elderly — without annoying discussions or protests from “bioethicists.”

Having retained jurisdiction and moral authority over medical ethics, organized medicine in Europe had the prerogative of negotiating with governments to determine the appropriate nature of end-of-life care (euthanasia) or the allocation of scarce resources (age rationing).

In America, by contrast, laissez-faire ethics rendered medicine unwilling to express authoritative moral positions and thus unable to negotiate them with the U.S. government. Thus, these issues were negotiated with “outsiders” invited into the once exclusively medical jurisdiction of “medical” ethics; that is, they were negotiated with “bioethicists.” …to deal with American medicine’s abdication from moral authority, American bureaucrats joined with government and private foundations to empower a hodgepodge of ex-theologians, lawyers, philosophers, social scientists, and humanistic nurses, physicians, and researchers to address issues raised by research ethics scandals and by morally disruptive technologies…

Animal welfare EA and personal dietary options

Note that there might be other crucial factors in assessing whether 'more factory farming' or 'less factory farming' is good on net — e.g., the effect on wild animals, including indirect effects like 'factory farming changes the global climate, which changes various ecosystems around the world, which increases/decreases the population of various species (or changes what their lives are like)'.

It then matters a lot how likely various wild animal species are to be moral patients, whether their lives tend to be 'worse than death' vs. 'better than death', etc.

And regarding:

The number would be much higher than 60% on strictly utilitarian grounds, but humans aren't strict utilitarians and it makes sense for people working hard on improving animal lives to develop strong feelings about their own personal relationship to factory farming, or to want to self-signal their commitment in some fashion.

I do think that most of EA's distinctive moral views are best understood as 'moves in the direction of utilitarianism' relative to the typical layperson's moral intuitions. This is interesting because utilitarianism seems false as a general theory of human value (e.g., I don't reflectively endorse being perfectly morally impartial between my family and a stranger). But utilitarianism seems to get one important core thing right, which is 'when the stakes are sufficiently high and there aren't complicating factors, you should definitely be impartial, consequentialist, scope-sensitive, etc. in your high-impact decisions'; the weird features of EA morality seem to mostly be about emulating impartial benevolent maximization in this specific way, without endorsing utilitarianism as a whole.

Like, an interest in human challenge trials is a very recognizably ‘EA-moral-orientation’ thing to do, even though it’s not a thing EAs have traditionally cared about — and that’s because it’s thinking seriously, quantitatively, and consistently about costs and benefits, it’s consequentialist, it’s impartially trying to improve welfare, etc.

There’s a general, very simple and unified thread running through all of these moral divergences AFAICT, and it’s something like ‘when choices are simultaneously low-effort enough and high-impact enough, and don’t involve severe obvious violations of ordinary interpersonal ethics like "don’t murder", utilitarianism gets the right answer’. And I think this is because ‘impartially maximize welfare’ is itself a simple idea, and an incredibly crucial part of human morality.
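
To make the 'scope-sensitive, quantitative about costs and benefits' part concrete, here is a minimal toy sketch (all numbers and the helper function are mine, purely for illustration, not from the post): an impartial welfare maximizer scores options by expected welfare gained minus cost, so scope enters linearly rather than being flattened the way untutored intuition tends to flatten it.

```python
# Toy scope-sensitivity comparison (illustrative numbers only).
# An impartial, consequentialist decision rule scores interventions by
# (people helped) * (welfare gain per person) - cost, so a 100x-larger
# intervention is worth roughly 100x more, not "about the same".

def net_welfare(people_helped: int, gain_per_person: float, cost: float) -> float:
    """Expected net welfare of an intervention, in arbitrary welfare units."""
    return people_helped * gain_per_person - cost

small = net_welfare(people_helped=10, gain_per_person=1.0, cost=5.0)
large = net_welfare(people_helped=1_000, gain_per_person=1.0, cost=5.0)
print(small, large)  # 5.0 995.0, the larger-scope option dominates
```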

Animal welfare EA and personal dietary options

I'd guess the most controversial part of this post will be the claim 'it's not incredibly obvious that factory-farmed animals (if conscious) have lives that are worse than nonexistence'?

But I don't see why. It's hard to be confident of any view on this, when we understand so little about consciousness, animal cognition, or morality. Combining three different mysteries doesn't tend to create an environment for extreme confidence — rather, you end up even more uncertain in the combination than in each individual component.
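
To put rough numbers on that, a minimal sketch (the 80% figures are mine, chosen purely for illustration, and I'm assuming the three uncertainties are independent): even generous confidence in each separate mystery leaves you near a coin flip on their conjunction.

```python
# Toy conjunction-of-uncertainties calculation (illustrative numbers only).
# If a conclusion leans on independent models of consciousness, animal
# cognition, AND morality, confidence in the conclusion is bounded by
# the product of the three:
p_consciousness_model = 0.8
p_animal_cognition_model = 0.8
p_morality_model = 0.8

joint_confidence = (p_consciousness_model
                    * p_animal_cognition_model
                    * p_morality_model)
print(f"{joint_confidence:.2f}")  # 0.51, barely better than a coin flip
```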

And there are obvious (speciesist) reasons people would tend to put too much confidence in 'factory-farmed animals have net-negative lives'.

E.g., when we imagine the Holocaust, we imagine relatively rich and diverse experiences, rather than reducing concentration camp victims to a very simple thing like 'pain in the void'.

I would guess that humans' nightmarish experience in concentration camps was usually better than nonexistence; and even if you suspect this is false, it seems easy to imagine how it could be true, because there's a lot more to human experience than 'pain, and beyond that pain, darkness'. It feels like a very open question in the human case.

But just because chickens lack some of the specific faculties humans have doesn't mean that (if conscious) chicken minds are 'simple', or simple in the particular ways people tend to assume. In particular, it's far from obvious (and depends on contingent theories about consciousness and cognition) that you need human-style language or abstraction in order to have 'rich' experience that just has a lot of morally important stuff going on. A blank map doesn't correspond to a blank territory; it corresponds to a thing we know very little about.

(For similar reasons, I think EAs in general worry far too little about whether chickens and other animals are utility monsters — this seems like a very live hypothesis to me, whether factory-farmed chickens have net-positive lives or net-negative ones.)

More Christiano, Cotra, and Yudkowsky on AI progress

I've written up short summaries of the Discord logs so far (and collected audio versions, where available) on https://intelligence.org/late-2021-miri-conversations/
