If you're Open Phil, you can hedge yourself against the risk that your worldview might be wrong by diversifying. But the rest of us are just going to have to figure out which worldview is actually right. Do you feel like you've made progress on this? Tell me the story!


In this post, I’ll use “worldview” to refer to a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving.

PPS: don't anchor yourself just to Open Phil's examples (long-term vs short-term, or animals vs humans). There are lots of other worldview axes. (Random examples: trust in inside vs outside view, how much weight you put on different types of arguments, general level of skepticism, moral realism, which morals you're realist about, ...)


2 Answers

I feel like I've made progress on this. (Caveats: confirmation bias, Dunning-Kruger, etc.)

Areas where I feel like I've made progress:

  • Used to not think very much about cluelessness when doing cause area comparison; now it's one of my main cause-comparison frameworks.
  • Have become more solidly longtermist, after reading some of Reasons & Persons and Nick Beckstead's thesis.
  • Have gotten clearer on the fuzzies vs. utilons distinction, and now weight purchasing fuzzies much more highly than I used to. (See Giving more won't make you happier.)
  • Have reduced my self-deception around fuzzies & utilons. I used to do a lot more "altruistic" stuff where my actual motivations were about fulfilling some internal narrative but I thought I was acting altruistically (i.e. I thought I was purchasing utilons, whereas on reflection I see that I was purchasing fuzzies). I do this less now.
  • Now believe that it's very important to pay good salaries to people who have developed a track record of doing high-quality, altruistic work. (I used to think this wasn't a leveraged use of funds, because these people would probably continue doing their good work in the counterfactual. My former view wasn't thinking clearly about incentive effects.)
  • Have become less confident in how we construct life satisfaction metrics. (I was naïvely overconfident before.)
  • Now believe that training up one's attention & focus is super important; I was previously treating those as fixed quantities / biological endowments.

Some areas that seem important, where I don't feel like I've made much progress yet:

  • Whether to focus more on satisfying preferences or provisioning (hedonic) utility.
  • Stuff about consciousness & where to draw the line for extending moral patienthood. (Rabbit hole 1, rabbit hole 2)
  • What being a physicalist / monist implies about morality.
  • What believing that I live in a deterministic system (wherein the current state is entirely the result of the preceding state) implies about morality.
  • Whether to be a moral realist or antirealist. (And if antirealist, how to reconcile that with some notion of objectivity such that it's not just "whatever I want" / "everything is permitted," which seem to impoverish what we mean by being moral.)
  • How contemplative Eastern practices (mainly Soto Zen, Tibetan, and Theravada practices) mesh with Western analytic frameworks.
  • Really zoomed-out questions like "Why is there something rather than nothing?"
  • What complexity theory implies about effective action.

Wow, thanks for the great in-depth reply!

now weight purchasing fuzzies much more highly than I used to.

Do you mean charitable fuzzies specifically? What kinds of fuzzies do you purchase more of? Do you think this generalizes to more EAs?

What believing that I live in a deterministic system (wherein the current state is entirely the result of the preceding state) implies about morality.

Once upon a time, I read a Douglas Hofstadter book that convinced me that the answer was "nothing" (basically because determinism works at the level of basic physics...

Makes sense... this line of reasoning is part of why it feels like an open question for me. On the other side – I feel like if, at root, I'm the composite of deterministic systems, then the concept of being morally obligated to do things loses force. (An example of how this sort of thing could inform views about morality.)
Yeah, I've updated towards focusing more on doing things that are helpful to people around me & in the communities I operate in. In part motivated by complexity & cluelessness considerations, and in part by feeling good about helping my friends, family, and community. I think doing stuff like this is much more in the direction of purchasing fuzzies, though it has a utilon component.
Also I'm reminded of how Stephanie Wykstra (GiveWell alum) started donating to bail reform [https://ssir.org/articles/entry/a_case_for_giving_locally].

I used to think that things have independent positive value and that this value would aggregate and intercompare over time and space.

In other words, I used to believe in some sort of Cosmic Scoreboard where I could weigh, for example, someone’s lifetime happiness against their, or someone else’s, moment of suffering.

I now think that this clinging to things as independently valuable and aggregable contributes to theoretical problems like utility monsters, wireheading, the repugnant conclusion (and other intuitively grotesque outweighing scenarios), infinitarian paralysis, and disagreements in cause prioritization.

I now feel that my previous beliefs in independent positive value, intercomparable aggregates of experiences, and the Cosmic Scoreboard more generally were convenient fictions that helped me avoid ‘EA guilt’ from not helping to prevent the suffering I could—by believing that there could be more important things, or that the suffering could be outweighed instead of prevented.

I’d now say that no kind of outweighing helps the suffering, because the suffering is separate in spacetime; outweighing is a tool of thought we use to prioritize our decisions so as to not regret them later, not a physical process like mixing red and green liquids to see which color wins. We have limited attention, and each moment of suffering is worth preventing for its own sake.

We don’t minimize statistics of aggregate suffering on the Cosmic Scoreboard except as a tool of thought, while in actuality we arrange ourselves and the world so as to prevent as many moments of suffering as we can. Suffering is not a property of lives, populations, or worlds, but of phenomenally bound moments, and those moments are what (I think) we ultimately care about, are moved by, and want to long-term equalize and minimize, from behind the Veil of ignorance.

For more on the internal process that is leading me to let go of independent positive values, replacing them with their interdependent value (in terms of their causal relationships to preventing suffering), here is my comment on the recent post, You have more than one goal, and that's fine:

I don’t see a way to ultimately resolve conflicts between an (infinite) optimizing (i.e., maximizing or minimizing) goal and other goals if they’re conceptualized as independent from the optimizing goal. Even if we consider the independent goals as something to only suffice (i.e., take care of “well enough”) instead of optimize as much as possible, it’ll be the case that our optimizing goal, by its infinite nature, wants to negotiate to itself as many resources as possible, and its reasons for earning its living within me are independently convincing (that’s why it’s an infinite goal of mine in the first place).
So an infinite goal of preventing suffering wants to understand why my conflicting other goals require a certain amount of resources (time, attention, energy, money) for them to be sufficed, and in practice this feels to me like an irreconcilable conflict unless they can negotiate by speaking a common language, i.e., one which the infinite goal can understand.
In the case of my other goals wanting resources from an {infinite, universal, all-encompassing, impartial, uncompromising} compassion, my so-called other goals start to be conceptualized through the language of self-compassion, which the larger, universal compassion understands as a practical limitation worth spending resources on – not for the other goals’ independent sake, but because they play a necessary and valuable role in the context of self-compassion aligned with omnicompassion. In practice, it also feels most sustainable and long-term wise to usually if not always err on the side of self-compassion, and to only gradually attempt moving resources from self-compassionate sub-goals and mini-games towards the infinite goal. Eventually, omnicompassion may expect less and less attachment to the other goals as independent values, acknowledging only their interdependent value in terms of serving the infinite goal, but it is patient and understands human limitations and growing pains and the counterproductive nature of pushing its infinite agenda too much too quickly.
If others have found ways to reconcile infinite optimizing goals with sufficing goals without a common language to mediate negotiations between them, I’d be very interested in hearing about them, although this already works for me, and I’m working on becoming able to write more about this, because it has felt like an all-around unified “operating system”, replacing utilitarianism. :)

Really cool thought; this is persuasive to me.

If I can try to rephrase your beliefs: economic rationality tells us that tradeoffs do in fact exist, and therefore rational agents must be able to make a comparison in every case. There has to be some amount of every value that you'd trade for another amount of every other value, otherwise you'll end up paralyzed and decisionless.

You're saying that, although we'd like to have this coherent total utility function, realistically it's impossible to construct one. We run into the theoretical prob...

Teo Ajantaival · 3y
I’m curious, what more specifically do you find it persuasive of? I generally feel that people do not easily want to bite the bullets that experiences have no independent positive value (only interdependent value), and that value doesn’t aggregate, and that outweighing “does not compute” in a physical balancing kind of sense. (I haven’t yet read the 2016 Oxford Handbook of Hypo-egoic Phenomena [https://global.oup.com/academic/product/the-oxford-handbook-of-hypo-egoic-phenomena-9780199328079?cc=fi&lang=en&#], but I expect that hypo-egoic systems, like many or most kinds of Buddhism, may be an angle from which it’d be easier to bite these bullets, though I don’t know much about the current intersection of Buddhism and EA.)

Yes, we’ll unavoidably face quantitative resource splits between goals, none of which we can fully satisfy as long as we have even one infinitely hungry goal like “minimize suffering”, “maximize happiness”, or “maximize survival”. In practice, we can resolve conflicts between these goals by coming up with a common language to mediate trade between them, but how could they settle on an agreement if they were all independent and infinite goals? (My currently preferred solution is that happiness and survival are not such goals, compared to compassion.) Alternatively, they could split from being a unified agent into being three agents, each independently unified, but they’d eventually run into conflicts again down the line (if they’re competing over control, space, energy, etc.).

I’m interested in internal unity from the “minimize suffering” perspective, because violent competition from non-negotiating splitting causes suffering. In other words, I suppose my self-compassion wants my goals to play in harmony, and “self-compassion aligned with omnicompassion” is the unification that results in that harmony in the most robust way I can imagine. More biological needs are more clearly in the domain of self-compassion, while omnicompassion is the theoreti…
Teo Ajantaival · 3y
As a parallel comment, here is more (from a previous discussion) of why I am gravitating towards suffering as the only independent (dis)value and everything else as interdependently valuable in terms of preventing suffering:

––––

I experience all of the things quoted in Complexity of value [https://wiki.lesswrong.com/wiki/Complexity_of_value], but I don’t know how to ultimately prioritize between them unless they are commensurable. I make them commensurable by weighing their interdependent value in terms of the one thing we all(?) agree is an independent motivation: preventable suffering. (If preventable suffering is not worth preventing for its own sake, what is it worth preventing for, and is this other thing agreeable to someone undergoing the suffering as the reason for its motivating power?) This does not mean that I constantly think of them in these terms (that would be counterproductive), but in conflict resolution I do not assign them independent positive numerical values, which pluralism would imply one way or another. Any pluralist theory begs the question of outweighing suffering with enough of any independently positive value. If you think about it for five minutes, aggregate happiness (or any other experience) does not exist.

If our first priority is to prevent preventable suffering, that alone is an infinite game; it does not help to make a detour to boost/copy positive states unless this is causally connected to preventing suffering. (Aggregates of suffering do not exist either, but each moment of suffering is terminally worth preventing, and we have limited attention, so aggregates and chain-reactions of suffering are useful tools of thought for preventing as many as we can. So are many other things without requiring our attaching them independent positive value, or else we would be tiling Mars with them whenever it outweighed helping suffering on Earth according to some formula.)

My experience so far with this kind of unification is that it avo…
If you're Open Phil, you can hedge yourself against the risk that your worldview might be wrong by diversifying. But the rest of us are just going to have to figure out which worldview is actually right.

Minor/Meta aside: I don't think 'hedging' or diversification is the best way to look at this, whether one is an individual or a mega-funder.

On standard consequentialist doctrine, one wants to weigh things up 'from the point of view of the universe', and be indifferent as to 'who is doing the work'. Given this, it looks better to act in the way which best rebalances the humanity-wide portfolio of moral effort, rather than a more narrow optimisation of 'the EA community', 'OP's grants', or one's own effort.

This rephrases the 'neglectedness' consideration. Yet I think people don't often think enough about conditioning on the current humanity-wide portfolio, or see their effort as being a part of this wider whole, and this can mislead one into moral paralysis (and, perhaps, insufficient extremising). If I have to 'decide what worldview is actually right', I'm screwed: many of my uncertainties I'd expect to be resilient to a lifetime of careful study. Yet I have better prospects of reasonably believing that "This issue is credibly important enough that (all things considered, pace all relevant uncertainties) in an ideal world humankind would direct X people to work on this - given in fact there are Y, Y << X, perhaps I should be amongst them."

This is a better sketch for why I work on longtermism, rather than overall confidence in my 'longtermist worldview'. This doesn't make worldview questions irrelevant (there are a lot of issues where the sketch above applies, and relative importance will be one of the ingredients that goes in the mix of divining which one to take), but it means I'm fairly sanguine about perennial uncertainty. My work is a minuscule part of the already-highly-diversified corporate effort of humankind, and the tacit coordination strategy of people like me acting on our best guess of the optimal portfolio looks robustly good (a community like EA may allow better ones), even if (as I hope and somewhat expect) my own efforts transpire to have little value.

The reason I shouldn't 'hedge' but Open Phil should is not so much because they can afford to (given they play with much larger stakes, better resolution on 'worldview questions' has much higher value to them than to me), but because the returns to specialisation are plausibly sigmoid over the 'me to OP' range. For individuals, there's increasing marginal returns to specialisation: in the same way we lean against 'donation splitting' with money, so too with time (it seems misguided for me to spend - say - 30% on bio, 10% on AI, 20% on global health, etc.) A large funder (even though it still represents a minuscule fraction of the humanity-wide portfolio) may have overlapping marginal return curves between its top picks of (all things considered) most promising things to work on, and it is better placed to realise other 'portfolio benefits'.
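The sigmoid-returns argument can be made concrete with a toy calculation. The curve shape and its parameters (`k`, `c`) below are illustrative assumptions, not anything from the comment: with a logistic returns-to-effort curve, a small budget sits in the convex (increasing-marginal-returns) region where concentrating on one cause beats splitting, while a large budget reaches the concave region where splitting across two causes beats concentrating.

```python
import math

def impact(x, k=10.0, c=0.5):
    """Hypothetical sigmoid returns-to-effort curve: convex (increasing
    marginal returns) below the inflection point c, concave above it."""
    return 1.0 / (1.0 + math.exp(-k * (x - c)))

def concentrated(budget):
    # Put the whole budget into a single cause.
    return impact(budget)

def split(budget):
    # Split the budget evenly across two causes.
    return 2 * impact(budget / 2)

# A small budget (an individual) stays in the convex region,
# so concentrating beats splitting:
print(concentrated(0.4) > split(0.4))  # True (0.269 vs 0.095)

# A large budget (a mega-funder) pushes each cause past the
# inflection point, so splitting beats concentrating:
print(split(2.0) > concentrated(2.0))  # True (1.987 vs ~1.0)
```

This is only a sketch of the qualitative claim; real marginal-return curves for causes are unknown, and the point is just that the same S-shaped curve recommends specialisation at individual scale and diversification at funder scale.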

I have a paper I've been kicking around in the back of my head for a couple years to formalize essentially this idea via economic/financial modern portfolio theory and economic collective action problem theory - but *someone* has me working on more important problems and papers instead... Gregory. ;)