
Epistemic status: squishy values stuff, written quickly, maybe unoriginal

Background

Let's talk about bad situations for a moment. If you find yourself in a bad situation, you do what you can, and that's that. Even if all your available options are bad in an absolute sense, there has got to be a relatively least bad option, and the best you can do is seek it with clear eyes and execute it with a steady hand. I'm reminded of a passage from J.R.R. Tolkien's The Fellowship of the Ring:

"I wish it need not have happened in my time," said Frodo.
"So do I," said Gandalf, "and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us."

[Source: https://www.goodreads.com/quotes/12357-i-wish-it-need-not-have-happened-in-my-time]

But when you find yourself in a mental model of a possible bad situation, as when thinking or talking about it, you can see the situation on two levels.

Level 1: You try to see the situation as if you were in it, evaluating your available options relative to each other. The potential value of doing this is front-loading some of the work of relative evaluation, so that if you were actually in the situation, you would have more time to execute and be less likely to make a mistake in evaluation.

Level 2: You evaluate the situation in question relative to other situations, which may be present, remembered, imagined, etc. You evaluate the option-set of a situation relative to the option-sets of other situations. This guides the evaluation of interventions to affect the probabilities of actualizing the situations considered.

Example: The situation where you have no money or social support and must choose between stealing food and letting your family go hungry obviously has a worse option-set than either the situation where you have money and can comfortably buy food or the situation where you have social support and can use it to obtain food licitly. Level 1 says that if you find yourself in that situation, you should probably grit your teeth and steal the food (you may have different values). Level 2 says that sounds awful, so we should look for ways to give people money and/or social support and thereby avoid actualizing that situation.

Sometimes the distinction isn't so clear. You can think of Level 2 as just Level 1 with a vastly increased set of “available” options, but with less confidence in the expanded options. I read the Yudkowsky post “The Third Alternative” as suggesting keeping the Level 2 perspective in mind even as you find yourself in bad situations [https://www.lesswrong.com/posts/erGipespbbzdG5zYb/the-third-alternative]. But this takes time, which can be precious in certain bad situations. The person in the example could look for a way to get money and/or social support, but they'd better get it in a few weeks, because their family is hungry.

Triage

Consider literal, medical triage. It is basically Level 1 thought, formalized.

This post was inspired in part by Holly Elmore's defense of triage [https://mhollyelmoreblog.wordpress.com/2016/08/26/we-are-in-triage-every-second-of-every-day/]. I read their post as addressing the failure mode of neglecting serious Level 1 thinking in favor of little more than wishful thinking. However, this failure mode primarily appears when there is an approximate Level 2 consensus that the situation is bad and that interventions to avoid it in the future should be sought.

I'm worried about areas where apparently compassionate people are stuck in Level 1 thinking without considering the expanded, speculative Level 2 perspective.

Justice, meritocracy, and triage

Let me clarify these terms as I'll be using them:

Triage: “Not everyone is going to be healthy in this situation. But we can distribute what medical resources we have so as to maximize health.”

Justice: “Not everyone is going to be safe and free in this situation. But we can distribute power so as to maximize safety and freedom.”

Meritocracy: “Not everyone is going to have the things and lifestyle they want in this situation. But we can distribute what economic resources we have so as to maximize people having the things and lifestyle they want.”

Look similar? I think so. They're all responses to different faces of scarcity. But while I see people neglecting Level 1 thinking in the context of triage, where there is approximate consensus on Level 2, I barely see people seriously think on Level 2 at all in the contexts of justice and meritocracy. It's clear that triage, justice, and meritocracy are better than un-triage, in-justice, and un-meritocracy. But better still would be non-triage, non-justice, and non-meritocracy, defined as follows:

Non-Triage: “We have the medical resources to ensure that everyone in this situation is healthy.”

Non-Justice: “We have the resources to ensure that everyone in this situation is safe and free.”

Non-Meritocracy: “We have the economic resources and distribution mechanisms to ensure that everyone in this situation can have the things and lifestyles they want.”

We don't currently have the resources for non-triage. But that doesn't stop people from seeking interventions to approach it. Aren't non-justice and non-meritocracy similarly desirable to approach? We have strong instincts (perhaps with selfish evolutionary origins) and cultural intuitions about avoiding in-justice and un-meritocracy, and these can keep us from immediately feeling the costs of denying people what they value the way we feel such costs in the context of triage. But the costs are still there.

Point being?

So would merely agreeing that certain kinds of abundance are desirable make them happen? Of course not. But it might be worth thinking about them now in case they arrive suddenly. I fear an otherwise good future in which our rigid Level 1 systems are massively amplified, as-is, by technological and economic progress, without sufficient mechanisms for gracefully winding them down once they become unnecessary.

If we were to find ourselves in a future with a true abundance of economic wealth, would the rigid economic systems we had developed to serve meritocracy pointlessly deny the lazy, the unintelligent, and the untrustworthy a share of it? This seems theoretically “easy” to solve with something like an unconditional basic income that grows with the overall economy.

If we were to find ourselves in a future with technology sufficient that people are not made unsafe and do not die unless they want to, then there would be no compassionate reason to keep people serving life sentences. What are compassionate people to do in such a situation?

And so on.

Comments

I like the use of the "non-X" concept (which is new to me) to explore post-scarcity, a topic that has been talked about a lot within EA. Something like a universal basic income has a lot of popular support among members of this community, and there's a lot of writing on "how good the world could be, if we do things right and don't experience a catastrophe from which we can't recover".

Some resources you might like, if you haven't seen them yet:

Thanks for the cite :)

You're right, I generally think of Level 2 thinking as fighting the hypothetical. For the purposes of our philosophical games, it's really annoying when people can't answer the question and deal with fundamental tradeoffs. It's like fighting the setup to a math problem-- "Does Jane really have to divide up her apples?" They are refusing to engage with their values, which is the point. BUT, irl, it is pretty important not to get locked into a falsely narrow idea of what the situation is and leap to bite bullets. You aren't given the ironclad certainty of the hypo. If you're not sure this is a triage situation, then devote time to figuring that out.

The fear I was addressing in my Triage essay was that people get locked onto "finding another way" as their Level 1 answer. Because there are situations where a creative solution eliminates a hard choice, there must not be any hard choices! They won't take decisive triage action, because that's sub-theoretically-optimal, but they will let the worst outcomes come about (e.g., waiting so long that everyone dies) so long as they didn't have to get their hands dirty. I think the fear that people will rush to drastic action when there are alternatives is just as valid.

I do get a bit annoyed by the fear that we'll get so good at triage that reasoning developed under conditions of emergency and scarcity will get locked in. It's not just you. People seem really afraid of giving in to the logic of triage even if they understand it, like they'll lose some important moral or intellectual faculty if they do so. They especially fear that they shouldn't adopt triage ideas if they won't always have to think that way. It's like they are worried about taking the utilitarianism red pill and not being able to unsee that way of thinking even if they know it's unnecessary. It would be interesting to study why this is.

Be that as it may, though, triage thinking is the best thing we have in emergency medical situations under conditions of scarcity, which still exists. Acknowledging tradeoffs and scarcity more broadly still seems pretty important to maximizing utility today as well. I don't think "we may have abundance one day, and then we wouldn't have to think about tradeoffs" is a reason not to employ triage and lose all those QALYs in the meantime. I also think it's very unlikely that triage/tradeoffs, if they were embraced where applicable today, would be much harder to unlearn in conditions of abundance than the deeper, instinctive scarcity thinking we'd have to deal with anyway.