[ Question ]

Has your "EA worldview" changed over time? How and why?

by Ben_Kuhn, 23rd Feb 2019, 13 comments

If you're Open Phil, you can hedge yourself against the risk that your worldview might be wrong by diversifying. But the rest of us are just going to have to figure out which worldview is actually right. Do you feel like you've made progress on this? Tell me the story!

PS:

In this post, I’ll use “worldview” to refer to a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving.

PPS: Don't anchor yourself just to Open Phil's examples (long-term vs. short-term, or animals vs. humans). There are lots of other worldview axes. (Random examples: trust in inside vs. outside view, how much weight you put on different types of arguments, general level of skepticism, moral realism, which morals you're realist about, ...)

2 Answers

I feel like I've made progress on this. (With the usual caveats about confirmation bias, Dunning-Kruger, etc.)

Areas where I feel like I've made progress:

  • Used to not think very much about cluelessness when doing cause area comparison; now it's one of my main cause-comparison frameworks.
  • Have become more solidly longtermist, after reading some of Reasons & Persons and Nick Beckstead's thesis.
  • Have gotten clearer on the fuzzies vs. utilons distinction, and now weight purchasing fuzzies much more highly than I used to. (See Giving more won't make you happier.)
  • Have reduced my self-deception around fuzzies & utilons. I used to do a lot more "altruistic" stuff where my actual motivations were about fulfilling some internal narrative, even though I thought I was acting altruistically (i.e., I thought I was purchasing utilons, whereas on reflection I see that I was purchasing fuzzies). I do this less now.
  • Now believe that it's very important to pay good salaries to people who have developed a track record of doing high-quality, altruistic work. (I used to think this wasn't a leveraged use of funds, because these people would probably continue doing their good work in the counterfactual. My former view wasn't thinking clearly about incentive effects.)
  • Have become less confident in how we construct life satisfaction metrics. (I was naïvely overconfident before.)
  • Now believe that training up one's attention & focus is super important; I was previously treating those as fixed quantities / biological endowments.

Some areas that seem important, where I don't feel like I've made much progress yet:

  • Whether to focus more on satisfying preferences or providing (hedonic) utility.
  • Stuff about consciousness & where to draw the line for extending moral patienthood. (Rabbit hole 1, rabbit hole 2)
  • What being a physicalist / monist implies about morality.
  • What believing that I live in a deterministic system (wherein the current state is entirely the result of the preceding state) implies about morality.
  • Whether to be a moral realist or antirealist. (And if antirealist, how to reconcile that with some notion of objectivity such that it's not just "whatever I want" / "everything is permitted," which seem to impoverish what we mean by being moral.)
  • How contemplative Eastern practices (mainly Soto Zen, Tibetan, and Theravada practices) mesh with Western analytic frameworks.
  • Really zoomed-out questions like "Why is there something rather than nothing?"
  • What complexity theory implies about effective action.

I used to think that things have independent positive value and that this value would aggregate and intercompare over time and space.

In other words, I used to believe in some sort of Cosmic Scoreboard where I could weigh, for example, someone’s lifetime happiness against their, or someone else’s, moment of suffering.
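
To make that picture concrete: the implicit model was something like a single sum (a minimal sketch in standard total-utilitarian notation; the symbols below are only illustrative, not anything I actually wrote down at the time):

W = \sum_{m \in M} v(m)

where M is the set of all experiential moments across time and space, each moment m contributes an independent value v(m), and only the total W matters. On that picture, a moment of suffering (a negative v(m)) counts as "outweighed" whenever enough positive terms show up elsewhere in the sum, whether or not that suffering was ever prevented.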

I now think that this clinging to things as independently valuable and aggregable contributes to theoretical problems like utility monsters, wireheading, the repugnant conclusion (and other intuitively grotesque outweighing scenarios), infinitarian paralysis, and disagreements in cause prioritization.

I now feel that my previous beliefs in independent positive value, intercomparable aggregates of experiences, and the Cosmic Scoreboard more generally were convenient fictions that helped me avoid ‘EA guilt’ from not helping to prevent the suffering I could—by believing that there could be more important things, or that the suffering could be outweighed instead of prevented.

I’d now say that no kind of outweighing helps the suffering, because the suffering is separate in spacetime; outweighing is a tool of thought we use to prioritize our decisions so as not to regret them later, not a physical process like mixing red and green liquids to see which color wins. We have limited attention, and each moment of suffering is worth preventing for its own sake.

We don’t minimize statistics of aggregate suffering on the Cosmic Scoreboard except as a tool of thought; in actuality we arrange ourselves and the world so as to prevent as many moments of suffering as we can. Suffering is not a property of lives, populations, or worlds, but of phenomenally bound moments, and those moments are what (I think) we ultimately care about, are moved by, and want to equalize and minimize in the long term, from behind the Veil of Ignorance.

For more on the internal process that is leading me to let go of independent positive values, replacing them with their interdependent value (in terms of their causal relationships to preventing suffering), here is my comment on the recent post, You have more than one goal, and that's fine:

I don’t see a way to ultimately resolve conflicts between an (infinite) optimizing (i.e., maximizing or minimizing) goal and other goals if they’re conceptualized as independent from the optimizing goal. Even if we consider the independent goals as something to only suffice (i.e., take care of “well enough”) rather than optimize as much as possible, it’ll be the case that our optimizing goal, by its infinite nature, wants to negotiate as many resources as possible for itself, and its reasons for earning its living within me are independently convincing (that’s why it’s an infinite goal of mine in the first place).
So an infinite goal of preventing suffering wants to understand why my conflicting other goals require a certain amount of resources (time, attention, energy, money) for them to be sufficed, and in practice this feels to me like an irreconcilable conflict unless they can negotiate by speaking a common language, i.e., one which the infinite goal can understand.
In the case of my other goals wanting resources from an {infinite, universal, all-encompassing, impartial, uncompromising} compassion, my so-called other goals start to be conceptualized through the language of self-compassion, which the larger, universal compassion understands as a practical limitation worth spending resources on – not for the other goals’ independent sake, but because they play a necessary and valuable role in the context of self-compassion aligned with omnicompassion. In practice, it also feels most sustainable and wise in the long term to usually, if not always, err on the side of self-compassion, and to only gradually attempt moving resources from self-compassionate sub-goals and mini-games towards the infinite goal. Eventually, omnicompassion may expect less and less attachment to the other goals as independent values, acknowledging only their interdependent value in terms of serving the infinite goal, but it is patient and understands human limitations, growing pains, and the counterproductive nature of pushing its infinite agenda too much too quickly.
If others have found ways to reconcile infinite optimizing goals with sufficing goals without a common language to mediate negotiations between them, I’d be very interested in hearing about them. This approach already works for me, though, and I’m working on becoming able to write more about it, because it has felt like an all-around unified “operating system” replacing utilitarianism. :)