Lukas_Gloor

The case to abolish the biology of suffering as a longtermist action

The total amount of global suffering is cut in half. However, nothing else about the algorithm changes, and nobody’s behavior changes.

This probably doesn't apply to Pearce's qualia realist view, but it's possible to have a functionalist notion of suffering where eliminating suffering would change people's behavior. 

For instance, I think of suffering as an experienced need to change something about one's current experience, something that by definition carries urgency to bring about change. If you get rid of that, it has behavioral consequences. If a person has pain asymbolia, so that they don't consider their "pain" bothersome in any way, I would no longer call it suffering. 

Against immortality?

One question is whether coalitions of pro-social people are better at deferring power to good successors than dictators are at ensuring that they have equally bad/dictatorial successors. If you believe that democracies are unlikely to “turn bad,” shouldn’t you be in favor of reducing the variance to the lifetime of dictatorships?

The discussion here is very abstract, so I’m unsure whether I disagree because I picture a different pathway to giving people extreme longevity, or whether I disagree with your general world model and reasoning. In any case, here are some additional related thoughts:

  • You point to a trendline with the share of democracies increasing, but that’s not the same as seeing improvements in leaders’ quality (some democracies may be becoming increasingly dysfunctional). I’m open to the idea that world leaders are getting better, but if I had to make an intuition-driven judgment based on the last few years, I wouldn’t say so.
  • It’s inherently easier to attain and keep power by any means necessary with zero ethics vs. gaining it to do something complicated and altruistic (and staying ethical along the way and keeping people alive, etc.).
  • There’s another asymmetry where it’s often easier to destroy/attack/kill than to build something. The brilliant people coordinating to keep potential dictators in check may not be enough. If having no ethics means you get to use superpowers, then the people with ethics are in trouble (as Owen points out, they're the ones who will die first or have their families imprisoned). (Related: I think it’s ambiguous whether Putin supports your point. The world is in a very precarious situation now because of one tyrant. Lots of people will starve even if nuclear escalation can be avoided.)
  • Some personality pathologies like narcissism and psychopathy seem to be increasing lately, tracking urbanization rates and probably other factors. Evolutionarily, higher death rates at the hands of upset others seem to be “worth it” for these life-history strategies.
  • People can be “brilliant” on some cognitive dimensions but fail at defense against dark personality types. For instance, some otherwise brilliant people may be socially naive.
  • Outside of our EA bubble, it doesn’t look like the world is particularly sane or stable. Great/brilliant people cannot easily do much in a broken system. And maybe the few brilliant people who take heroic responsibility are outnumbered by too many merely mediocre people who are easily corrupted and easily self-deceive.

That said, I see some important points in favor of your more optimistic picture: 

  • There are highly influential EA orgs whose leadership and general culture I’m really impressed by. (This doesn’t go for all highly influential EA orgs.)
  • I expect EA to continue to gain more influence over time.
Against immortality?

I like the point that we should not only consider the individual case. I guess people's hope is that, in a future where everything goes maximally well, bad externalities like "old people's beliefs become ossified" can be addressed by some really clever governance structure. 

On the individual case:

In my post "The Life-Goals Framework: How I Reason About Morality as an Anti-Realist," I mention longevity/not wanting to die a couple of times as an example of a "life goal." ("Life goal" is a term I introduce that means something like "an objective you care about terminally, so much that you have formed [or want to form] an optimizing mindset around it.") In the post, I argue that it's a personal choice which (if any) life goals we adopt.

One point I make there is that by deciding that not wanting to die is immensely important to you, you adopt a new metric for scoring how well you're doing in life. That particular metric (not wanting to die) places a lot of demands on you. I think this point is related to your example where you dislike telling people (implicitly or explicitly) that they're failing if they've had a happy routine, watched their grandkids grow up and have kids of their own, and feel like they can let go rather than needing to do more in the world. 

Here are some relevant quotes from my article (one theme is that the way we form life goals isn't too dissimilar from how we choose between leisure activities and adopt lifestyles or careers): 

In the same way different people feel the most satisfied with different lifestyles or careers, people’s intuitions may differ concerning how they’d feel with the type of identity (or mindset) implied by a given life goal.

[...]

For the objective “valuing longevity,” it’s worth noting how life-altering it would be to adopt the corresponding optimizing mindset. Instead of trusting your gut about how well life is going, you’d have to regularly remind yourself that perceived happiness over the next decades is entirely irrelevant in the grand scheme of things. What matters most is that you do your best to optimize your probability of survival. People with naturally high degrees of foresight and agency (or those with somewhat of a “prepper mentality”) may actively enjoy that type of mindset – even though it conflicts with common sense notions of living a fulfilled life. By contrast, the people who are happiest when they enjoy their lives moment-by-moment may find the future-focused optimizing mindset off-putting.

[...]

Earlier on, I wrote the following about how we choose leisure activities [this was in the context of discussing whether to go skiing or spend the weekend cozily at home]:

> [...] [W]e tend to have a lot of freedom in how we frame our decision options. We use this freedom, this reframing capacity, to become comfortable with the choices we are about to make. In case skiing wins out, then “warm and cozy” becomes “lazy and boring,” and “cold and tired” becomes “an opportunity to train resilience / apply Stoicism.” This reframing ability is a double-edged sword: it enables rationalizing, but it also allows us to stick to our beliefs and values when we’re facing temptations and other difficulties.

The same applies to how we choose self-oriented life goals. On one side, there’s the appeal of the potential life-goal objective (e.g., “how good it would be to live forever” or “how meaningful it would be to have children”). On the other side, there are all the ways in which the corresponding optimizing mindset would make our lives more complicated and demanding. Human psychology seems somewhat dynamic here because the reflective equilibrium can end up on opposite sides depending on each side’s momentum. Option one – by committing to the life goal in question, “complicated and demanding” can become “difficult but meaningful.” Alternatively, there’s option two. By deciding that we don’t care about the particular life-goal objective, we can focus on how much we value the freedom that comes with it. In turn, that freedom can become part of our terminal values. (For example, adopting a Buddhist/Epicurean stance toward personal death can feel liberating, and the same goes for some other major life choices, such as not wanting children.)

These quotes describe how people form their objectives, the standards by which they measure their lives. Of course, someone can now say, "An objective being 'demanding' isn't necessarily a good reason to give up on it. What about the possibility that some people form ill-inspired life goals because they don't know/don't fully realize what they're giving up?"

I talk about this concern ("ill-inspired life goals") in this section of the post. 

How much current animal suffering does longtermism let us ignore?

Point D. sounds like a real concern, but it can be avoided just by thinking carefully at each step (it only applies to very naive implementations). And you mention other counter considerations yourself. Some more thoughts in reply:  

  • If we don't get longtermism right, we'll no longer be in a position to deliberately affect the course of the future (accordingly, "future neartermists" won't be in a position to do any good, either)
    • Even worse, if we get things especially wrong, we might accidentally lock in unusually bad futures
  • If we get longtermism right, we'd use the transition to TAI to gain better control over the future, so that we no longer live in a state where the world is metaphorically burning (in other words, future neartermists won't be as important anymore)
  • Intermediate states where things continue as they are now (people can affect things but don't have sufficient control to get the world they want) seem unstable.

The last bullet point seems right to me because technological progress increases the damage of things going wrong and it "accelerates history" – the combination of these factors leads to massive instability. Technological progress also improves our potential reach for attaining control over things and making them stable, up to the point where someone messes it up irreversibly.

I'm pessimistic about attaining the high degrees of control it would require to make the future go really well. In my view, one argument for focusing on ongoing animal suffering is "maybe the long-term future will be out of our control eventually no matter what we do." (This point applies especially to people whose comparative advantage might be near-term suffering reduction.) However, other people are more optimistic.

Point E. seems true and important, but some of the texts you cite seem one-sided to me. (Here's a counter consideration I rarely see mentioned in these texts; it relates to what I said in reply to your point D.)

The other arguments/points you make sound like "longtermists might be biased/rationalizing/speciesists." 

I wonder where that's coming from. I think it might be more persuasive to focus on direct reasons why reducing animal suffering is a good opportunity for impact. We all might be biased in various ways, so appeals to biases/irrationality rarely do much. (Also, I don't think there's "one best cause," so different people will care about different causes depending on their moral views.) 

An uncomfortable thought experiment for anti-speciesist non-vegans

I'm just saying that you then also have to say "if I imagine myself in a world where it is mentally-challenged humans instead of animals, I would not stop eating the humans for the same reason X."

I agree with that. Some of your earlier comments seemed like they were setting up a slightly different argument.

Someone can have the following position: 
(1) They would continue to eat humans in the thought experiment world where their psychological dispositions treat it as not a big deal (e.g., because it's normalized in that world and has become a habit)
(2) They wouldn't eat humans in the thought experiment world if they retained their psychological dispositions / reactive attitudes from the actual world – in that case, they'd find the scenario abhorrent
(3) When they think about (1) and (2), they don't feel compelled to modify their dispositions / reactive attitudes toward not eating non-human animals (because of opportunity costs and because consequentialism doesn't have the concept of "appropriate reactions" – or, at least, the consequentialist concept for "appropriate reactions" is more nuanced)

I think you were arguing against (3) at one point, while I and other commenters were arguing in favor of (3).

An uncomfortable thought experiment for anti-speciesist non-vegans

Yes. Isn't it true that most people who go vegan at one point in their life revert back to eating animal products? I remember this was the case based on data discussed in 2014 or so, when I last looked into it. Is it any different now? Those findings would strongly suggest that veganism isn't cost-free. Since the way you ask makes me think you believe the costs to be low, consider the possibility that you're committing the typical mind fallacy. (Similar to how a naturally skinny person might say, "I don't understand obese people; isn't it easy to eat healthy?" Well, no, most Americans are overweight and probably not thrilled about it, so if they could change it at low cost, they would. So, for some people, it isn't easy to stay skinny.)

Maybe we disagree on what to count as "low costs." If their lives depended on it, I'd say almost everyone would be capable of going vegan. But many people also prefer prison to suicide, and that doesn't mean it's "low cost" to go to prison. Maybe you're thinking the cost of going vegan is low compared to the suffering at stake for animals. And I basically agree with that – the suffering is horrible, and our culinary pleasures or potential health benefits appear trivial by comparison. However, this applies only if we think about it as a direct comparison in an "all else equal" situation.

If you compare the animal suffering you can reduce via personal veganism vs. the good you can do by focusing your daily work on having the biggest positive impact, it's often the suffering from your food consumption that pales in comparison (though it may depend on a person's situation). People have made estimates of this (e.g., here)! Again, the previous point relates to the same disagreement we discussed in the comment thread above. If someone does important altruistic work, everything that increases their productivity or prioritization by 1% is vastly more important than going vegan. You might say, "Okay, but why not go vegan in addition to those things?" Sure, that would be the ideal, in theory. But in practice, there are dozens of things that a person isn't currently doing that could improve their productivity or prioritization by 1%, and those 1% improvements would be a bigger deal in terms of reducing suffering (or doing good in other ways). So, unless one first implements all those other things, it doesn't make sense, on consequentialist morality, to prioritize personal veganism. 

An uncomfortable thought experiment for anti-speciesist non-vegans

My understanding is that it does have such a concept in that we should react similarly to different acts that are equally good/bad to each other in terms of their consequences.

This is only the case in an "all else equal" situation! It is very much not the case when changing one's reactive attitudes comes at some cost and where that cost competes with other, bigger opportunities to do good. 

It's similar in flavour to Singer's drowning child thought experiment - he draws parallels between walking past a drowning child and not donating to help those in severe poverty. If you think to yourself "I would save the child", then you should probably donate more. If you think to yourself "I would walk past the child but would feel extreme guilt" then you should probably feel that same guilt not donating. Does that make sense?

Same reply here: Singer's thought experiment only works in an "all else equal" situation. Depending on their circumstances, maybe someone should do EA direct work and not donate at all. Or maybe donate somewhere other than poverty reduction. 

An uncomfortable thought experiment for anti-speciesist non-vegans

On a consequentialist morality, feelings of moral outrage, horror, or disgust are not what matters. (Instead, what matters is how to allocate attention/willpower/dedication to reduce the most suffering, given one's psychology, opportunity costs, etc.) In the original post, you say "These are just biases though, and all they show is that we don’t react badly enough to animal farming." Consequentialist morality doesn't have a concept of "reacting appropriately." (This is why, in Thomas Kwa's answer, he talks about what he'd do conditional on having a disgust response vs. what he'd do without the disgust response. Because the animal suffering in question isn't quite bad enough to compete with alternative ways of using attention or willpower, going vegan isn't thought to be worth it under all social and psychological circumstances – e.g., it isn't thought to be worth it if it's costly convenience-wise and/or health-wise, if there's no disgust reaction, and if the social environment tolerates it.) 

Since you're primarily addressing consequentialists here, I recommend explaining why "reacting badly enough"/"reacting appropriately to moral horrors" is an important tenet of the morality that should matter to us (important enough that it can compete with things like optimizing one's impact-oriented career). 

Without those missing arguments, I think it'll seem to people like you're operating under some rigid framework and can't understand it when other people don't share your assumptions (prompting downvotes). 

For what it's worth, I do feel the force of your intuition pump (though I doubt it's new to most people), and I think it's true that consequentialist morality is uncanny here; maybe that speaks in favor of going (more) vegan. Personally, I've been vegan in the past but am currently at the stage where I mostly buy the consequentialist arguments against it (provided I am really trying to reduce a lot of suffering). Still, I feel like there's some dissonance – a feeling like I'm doing something I don't want to do. I don't really endorse that on reflection, but the feeling doesn't go away, either. 

Ben Jamin's Shortform

Superforecasters can predict more accurately if they make predictions at 1% increments rather than 2% increments. Whether they can predict accurately at even finer increments either hasn't been studied, or the evidence has been negative. 0.01% increments are way below anything that people regularly predict on; there's no way to develop the calibration for that. In my comment, I meant to point out that anyone who thinks they're calibrated enough to talk about 0.01% differences, or even just things close to that, is clearly not a fantastic researcher, and we probably shouldn't give them lots of money.

A separate point that makes me uneasy about your specific example (but not about generally spending more money on some people with the rationale that impact is likely extremely heavy-tailed) is the following. I think even people with comparatively low dark personality traits are susceptible to corruption by power. Therefore, I'd want people to have mental inhibitions against developing taste that's too extravagant. It's a fuzzy argument because one could say the same thing about spending $50 on an Uber Eats order, and on that sort of example, my intuition is "Obviously it's easy to develop this sort of taste, and if it saves people time, they should do it rather than spend willpower on changing their food habits." But on a scale from $50 Uber Eats orders to spending $150,000 on a sports car, there's probably a point somewhere where someone's conduct becomes too dissimilar to the archetype of a "person on a world-saving mission." I think someone you can trust with a lot of money and power would be wise enough that, if they ever form the thought "I should get a sports car because I'd be more productive if I had one," they'd sound a mental alarm and start worrying they got corrupted. (And maybe they'll end up buying the sports car anyway, but they certainly won't be thinking "this is good for impact.") 

Ben Jamin's Shortform

I agree with the sentiment, but I wouldn't put it quite as drastically. (If someone actually talked about things that make them 0.01% more productive, that suggests they have lost the plot.) Also, "(and I trust their judgment and value alignment)" does a lot of work. I assume you wouldn't say this about just any researcher who self-describes as working on longtermism. If some grantmakers have poor judgment and give away large sums of money for regranting to other grantmakers who have even worse judgment or could be corrupt, then you get a pretty bad ecosystem where it's easy for the wrong people to attain more influence within EA. 
