
Based on a couple of personal experiences, I have observed that when I choose to, and have to, take care to save a life, it causes me to value the being I save.

Experiences

1. At some point, I stopped killing house flies and now instead catch them and release them outside. I have found that I'm now more emotionally responsive to insects and their well-being. If I see a fly gravitating toward a window (as they do when they seem to want to get out of a house), I project (or accurately intuit?) emotions in the fly of "I want freedom", and I can relate to that. (But I still kill mosquitoes, out of a sense of "they are too injurious to humans", I guess, and I don't have any emotions about them.)

Once I choose to catch and release a fly, I have to do certain things to care for it, to successfully catch it and release it. So it's both something I will and something I have to submit to.

2. Some time ago, I made a patch for Angband. Angband is an RPG in which you tend to have to kill a lot of monsters: you're trying to kill Morgoth, the source of all evil, who lives 100 levels down a long, grindy dungeon. In my patch, killing a monster incurs sorrow, both in ways you understand ("apparent sorrow") and in ways you don't ("hidden sorrow", which accumulates when you have apparent sorrow you haven't worked through). If you accumulate too much sorrow, you burn out and have to quit -- from a gameplay standpoint, the same as death.

You can avoid monsters, or if necessary injure them enough to terrify them, but when you injure them, you risk killing them. In the standard version of Angband, you sometimes have to think carefully about how you approach combat, because monsters can be very dangerous. In the patch, even ordinary combat becomes risky, both to the monster and to you. Terrifying a monster (without killing it) takes more player skill than just killing it.
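To make the mechanic concrete, here's a rough sketch of how something like it could be wired up in C (the language Angband is written in). The names, formulas, and thresholds below are invented for illustration -- this is not the actual patch code, just the shape of the idea: killing adds apparent sorrow, unprocessed apparent sorrow bleeds into hidden sorrow, and too much total sorrow means burnout.

```c
/* Illustrative sketch of a "sorrow" mechanic. Names and numbers are made up;
 * this is not the actual Angband patch code. */
#include <stdbool.h>
#include <stdio.h>

#define SORROW_BURNOUT_LIMIT 100  /* total sorrow at which the character burns out */

struct sorrow_state {
    int apparent;  /* sorrow the character understands and can work through */
    int hidden;    /* sorrow that accumulates when apparent sorrow goes unprocessed */
};

/* Called whenever the player kills a monster. */
static void on_monster_killed(struct sorrow_state *s, int monster_depth)
{
    /* Deeper (more formidable) monsters weigh more heavily. */
    int grief = 1 + monster_depth / 10;
    s->apparent += grief;

    /* Whatever apparent sorrow is still unprocessed bleeds into hidden sorrow. */
    s->hidden += s->apparent / 4;
}

/* Called when the player rests or reflects; only apparent sorrow can be worked through. */
static void work_through_sorrow(struct sorrow_state *s, int amount)
{
    s->apparent -= amount;
    if (s->apparent < 0)
        s->apparent = 0;
}

/* Burnout is, from a gameplay standpoint, the same as death. */
static bool is_burned_out(const struct sorrow_state *s)
{
    return s->apparent + s->hidden >= SORROW_BURNOUT_LIMIT;
}

int main(void)
{
    struct sorrow_state s = {0, 0};

    on_monster_killed(&s, 50);   /* kill a depth-50 monster */
    work_through_sorrow(&s, 3);  /* rest and process some of it */
    on_monster_killed(&s, 50);   /* kill another before fully recovering */

    printf("apparent=%d hidden=%d burned out=%s\n",
           s.apparent, s.hidden, is_burned_out(&s) ? "yes" : "no");
    return 0;
}
```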

So while these monsters are in some sense unreal (a few bytes of data magnified by my imagination), I have invested in them as though they are real. And I found, as I playtested the patch, that I came to genuinely value monster lives, and felt sad, like I'd messed up, when I killed one by accident.

I'm not much of a gamer, and for a while I didn't play much of anything with the (very common) monster-killing mechanic, but recently I tried a game that had it and found it hard to kill the monsters. But then I got over that and killed them, because the game was too hard for me otherwise, and I didn't feel like the point of that game was to avoid killing monsters.

EA Applications

1. Distant and future lives are about as real to me as Angband monsters. So if I can do something practical to try to save them, they may become more real to me, which may in turn motivate me to do more to try to save them.

I think my actual motivation in EA or EA-adjacent intentions and work is "I need to do something, so what can I work on? Where are the most interesting and important problems that I have some ability to work on?" rather than "I care about the well-being of distant or future people in some direct or emotional way." Whatever altruistic inclination I have is in my taste in things to work on, or is baked into how I define "interesting and important" as it guides practical action.

Sometimes I think about my ancestors (Gwern's narrowing moral circle article reminds me of this). They have names, and I know a little about some of them. Some ancestors are literal (my grandparents), while others are "spiritual" (artists, philosophers, political leaders, etc. who seem to be predecessors on the path I feel I'm on). Maybe it's possible to think about literal or spiritual descendants, predicting how they might be (to try to know a little about them), in order to feel connected to them the way one might to one's ancestors.

So you can do practical things just to do practical things, or you can do them for something or someone. Maybe there's a useful difference between the two for some people.

2. People care about what is contested, what has to be fought for or cared for. This might mean that, as EAs (or other world-improvers) make the world easier to live in,

  • People will stop caring, because there's nothing left to fight for or care for.
  • Or, people will instinctively know to care, and find ever finer things to fight for or care for.
  • Or, they might push back against easiness.
  • Or, someone might re-engineer humans to feel caring feelings regardless of what they do or experience.
  • Maybe other possibilities.

3. I can undo empathy if it becomes too costly for me personally. It seems like everybody does this, and since it seems that way, we are understanding of it. But refusing to be understanding of it, to some extent, is how we get moral progress.

Further Reading

One book that comes to mind with regard to the above is Leadership and Self-Deception by the Arbinger Institute. A basic idea in the book is that when we don't do the thing we naturally feel is right for someone else, we enter an alternate reality that justifies our inaction (for instance, in the alternate reality, the other person is actually a bad person and deserves not to be treated well).
