All of Jack Malde's Comments + Replies

Longtermist slogans that need to be retired

I think the existence of investing for the future as a meta option to improve the far future essentially invalidates both of your points. Investing money in a long-term fund won’t hit diminishing returns anytime soon. I think of it as the “GiveDirectly of longtermism”.

3Michael_Wiebe9d
I'd be interested to see the details. What's the expected value of a rainy day fund, and what factors does it depend on?
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Certainly agree there is something weird there! 

Anyway I don't really think there was too much disagreement between us, but it was an interesting exchange nonetheless!

Should we buy coal mines?

I’ve read your overview and skimmed the rest. You say there will probably be better ways to limit coal production or consumption, but I was under the impression this wasn’t the main motivation for buying a coal mine. I thought the main motivation was to ensure we have the energy resources to be able to rebuild society in case we hit some sort of catastrophe. Limiting coal production and consumption was just an added bonus. Am I wrong?

EDIT: appreciate you do argue the coal may stay in the ground even if we don’t buy the mine which is very relevant to my question

EDIT2: just realised limiting consumption is important to preserve energy stores, but limiting production perhaps not

1Max Clarke19d
Buying coal mines to secure energy production post-global-catastrophe is a much more interesting question. Seems to me that buying coal, rather than mines, is a better idea in that case.
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?

A longtermist career spans decades, as would going vegan for life or donating regularly for decades. So it was mostly a temporal thing, trying to somewhat equalise the commitment associated with different altruistic choices.

but why should the locus of agency be the individual? Seems pretty arbitrary.

Hmm well aren't we all individuals making individual choices? So ultimately what is relevant to me is whether my actions are fanatical.

If you agree that voting i

... (read more)
2Rohin Shah19d
We're all particular brain cognitions that only exist for ephemeral moments before our brains change and become a new cognition that is similar but not the same. (See also "What counts as death?" [https://www.cold-takes.com/what-counts-as-death/].) I coordinate both with the temporally-distant (i.e. future) brain cognitions that we typically call "me in the past/future" and with the spatially-distant brain cognitions that we typically call "other people". The temporally-distant cognitions are more similar to current-brain-cognition than the spatially-distant cognitions but it's fundamentally a quantitative difference, not a qualitative one. By "fanatical" I want to talk about the thing that seems weird about Pascal's mugging and the thing that seems weird about spending your career searching for ways to create infinitely large baby universes, on the principle that it slightly increases the chance of infinite utility. If you agree there's something weird there and that longtermists don't generally reason using that weird thing and typically do some other thing instead, that's sufficient for my claim (b).
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

That's fair enough, although when it comes to voting I mainly do it for personal pleasure / so that I don't have to lie to people about having voted!

When it comes to something like donating to GiveWell charities on a regular basis / going vegan for life I think one can probably have greater than 50% belief they will genuinely save lives / avert suffering. Any single donation or choice to avoid meat will have far lower probability, but it seems fair to consider doing these things over a longer period of time as that is typically what people do (and what someone who chooses a longtermist career essentially does).

5Rohin Shah20d
Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate? Given that you seem to agree voting is fanatical, I'm guessing you want to consider the probability that an individual's actions are impactful, but why should the locus of agency be the individual? Seems pretty arbitrary. If you agree that voting is fanatical, do you also agree that activism is fanatical? The addition of a single activist is very unlikely to change the end result of the activism.
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Probabilities are on a continuum. It’s subjective at what point fanaticism starts. You can call those examples fanatical if you want to, but the probabilities of success in those examples are probably considerably higher than in the case of averting an existential catastrophe.

I think the probability that my personal actions avert an existential catastrophe is higher than the probability that my personal vote in the next US presidential election would change its outcome.

I think I'd plausibly say the same thing for my other examples; I'd have to think a bit more about the actual probabilities involved.

Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Hmm I do think it's fairly fanatical. To quote this summary:

For example, it might seem fanatical to spend $1 billion on ASI-alignment for the sake of a 1-in-100,000 chance of preventing a catastrophe, when one could instead use that money to help many people with near-certainty in the near-term.

The probability that any one longtermist's actions will actually prevent a catastrophe is very small. So I do think longtermist EAs are acting fairly fanatically.

Another way of thinking about it is that, whilst the probability of x-risk may be fairly high, the x-ris... (read more)
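To make the quoted trade-off explicit, here is a minimal sketch of the arithmetic (V_catastrophe and V_near-term are my own placeholder symbols for the value of averting the catastrophe and the near-certain near-term good the same $1 billion could buy; they are not from the paper or the summary):

```latex
% The $1B, 1-in-100,000 gamble beats the near-certain option only if averting
% the catastrophe is worth at least 100,000 times the near-term alternative.
10^{-5} \cdot V_{\text{catastrophe}} > V_{\text{near-term}}
\iff V_{\text{catastrophe}} > 10^{5} \cdot V_{\text{near-term}}
```

Whether that inequality looks like fanaticism or just ordinary expected-value reasoning is what the exchange below turns on.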

9Rohin Shah20d
By this logic it seems like all sorts of ordinary things are fanatical:
1. Buying less chicken from the grocery store is fanatical (this only reduces the number of suffering chickens if your buying less chicken was the tipping point that caused the grocery store to order one less shipment of chicken, and that one fewer order was the tipping point that caused the factory farm to reduce the number of chickens it aimed to produce; this seems very low probability).
2. Donating small amounts to AMF is fanatical (it's very unlikely that your $25 causes AMF to do another distribution beyond what it would have otherwise done).
3. Voting is fanatical (the probability of any one vote swinging the outcome is very small).
4. Attending a particular lecture of a college course is fanatical (it's highly unlikely that missing that particular lecture will make a difference to e.g. your chance of getting the job you want).
Generally I think it's a bad move to take a collection of very similar actions and require that each individual action within the collection be reasonably likely to have an impact. I don't know of anyone who (a) is actively working on reducing the probability of catastrophe and (b) thinks we only reduce the probability of catastrophe by 1-in-100,000 if we spend $1 billion on it. Maybe Eliezer Yudkowsky and Nate Soares, but probably not even them. The summary is speaking theoretically; I'm talking about what happens in practice.
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Yeah that's fair. As I said I'm not entirely sure on the motivation point. 

I think in practice EAs are quite fanatical, but only to a certain point. So they probably wouldn't give in to a Pascal's mugging, but many of them are willing to give to a long-term future fund over GiveWell charities - which is quite a bit of fanaticism! So justifying fanaticism still seems useful to me, even if EAs put their fingers in their ears with regard to the most extreme conclusion...

2Rohin Shah21d
It really doesn't seem fanatical to me to try to reduce the chance of everyone dying, when you have a specific mechanism by which everyone might die that doesn't seem all that unlikely! That's the right action according to all sorts of belief systems, not just longtermism! (See also these [https://forum.effectivealtruism.org/posts/rFpfW2ndHSX7ERWLH/simplify-ea-pitches-to-holy-shit-x-risk] posts [https://forum.effectivealtruism.org/posts/KDjEogAqWNTdddF9g/long-termism-vs-existential-risk] .)
To fund research, or not to fund research, that is the question

Hi Michael, thanks for your reply! I apologise I didn’t check with you before saying that you have ruled out research a priori. I will put a note to say that this is inaccurate. Prioritising based on self-reports of wellbeing does preclude funding research, but I’m glad to hear that you may be open to assessing research in the future.

Sorry to hear you struggled to follow my analysis. I think I may have overcomplicated things, but it did help me to work through things in my own head! I haven’t really looked at the value of information (VOI) literature.

In a nutshell my model... (read more)

Consider Changing Your Forum Username to Your Real Name

FYI you can contact the EA Forum team to get your profile hidden from search engines (see here).

Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Yes I disagree with b) although it's a nuanced disagreement.

I think the EA longtermist movement is currently choosing the actions that most increase probability of infinite utility, by reducing existential risk.

What I'm less sure of is that achieving infinite utility is the motivation for reducing existential risk. It might just be that achieving "incredibly high utility" is the motivation for reducing existential risk. I'm not too sure on this.

My point about the long reflection was that when we reach this period it will be easier to tell the fanatics from the non-fanatics.

2Rohin Shah21d
This is not in conflict with my claim (b). My claim (b) is about the motivation or reasoning by which actions are chosen. That's all I rely on for the inferences in claims (c) and (d). I think we're mostly in agreement here, except that perhaps I'm more confident that most longtermists are not (currently) motivated by "highest probability of infinite utility".
Consider Changing Your Forum Username to Your Real Name

I’ve reversed an earlier decision and have settled on using my real name. Wish me luck!

Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

I'm super excited for you to continue making these research summaries! I have previously written about how I want to see more accessible ways to understand important foundational research - you've definitely got a reader in me.

I also enjoy the video summaries. It would be great if GPI video and written summaries were made as standard. I appreciate it's a time commitment, but in theory there's quite a wide pool of people who could do the written summaries and I'm sure you could get funding to pay people to do them.

As a non-academic I don't think I can assis... (read more)

Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

you should be striving to increase the probability of making something like this happen. (Which, to be clear, could be the right thing to do! But it's not how longtermists tend to reason in practice.)

As you said in your previous comment we essentially are increasing the probability of these things happening by reducing x-risk. I'm not convinced we don't tend to reason fanatically in practice - after all Bostrom's astronomical waste argument motivates reducing x-risk by raising the possibility of achieving incredibly high levels of utility (in a footnote he... (read more)

2Rohin Shah21d
I'm not sure whether you are disagreeing with me or not. My claims are (a) accepting fanaticism implies choosing actions that most increase probability of infinite utility, (b) we are not currently choosing actions based on how much they increase probability of infinite utility, (c) therefore we do not currently accept fanaticism (though we might in the future), (d) given we don't accept fanaticism we should not use "fanaticism is fine" as an argument to persuade people of longtermism. Is there a specific claim there you disagree with? Or were you riffing off what I said to make other points?
Effective altruism’s odd attitude to mental health

I think the point Caleb is making is that your EAG London story doesn't necessarily show the tension that you think it does. And for what it's worth I'm sceptical this tension is very widespread.

Effective altruism’s odd attitude to mental health

I don't know for sure that we have prioritised mental health over other productivity interventions, although we may have. Effective Altruism Coaching doesn't have a sole mental health focus (also see here for 2020 annual review) but I think that is just one person doing the coaching so may not be representative of wider productivity work in EA.

It's worth noting that it's plausible that mental health may be proportionally more of a problem within EA than outside, as EAs may worry more about the state of the world and whether they're having an impact etc. - which ma... (read more)

Effective altruism’s odd attitude to mental health

Pretty much this. I don’t think discussions on improving mental health in the EA community are motivated by improving wellbeing, but instead by allowing us to be as effective as a community as possible. Poor mental health is a huge drain on productivity.

If the focus on EA community mental health was based on direct wellbeing benefits I would be quite shocked. We’re a fairly small community and it’s likely to be far more cost-effective to improve the mental health of people living in lower income countries (as HLI’s StrongMinds recommendation suggests).

8BarryGrimes24d
Has anyone done the analysis to determine the most cost-effective ways to increase the productivity of the EA community? It's not obvious to me that focussing on mental health would be the best option. If that is the case, I feel confused about the rationale for prioritising the mental health of EAs over other productivity interventions.
1Fai18d
Wow thank you! Very relevant!
My GWWC donations: Switching from long- to near-termist opportunities?

Sorry it’s not entirely clear to me if you think good longtermist giving opportunities have dried up, or if you think good opportunities remain but your concern is solely about the optics of giving to them.

On the optics point, I would note that you don’t have to give all of your donations to the same thing. If you’re worried about having to tell people about your giving to LTFF, you can also give a portion of your donations to global health (even if small), allowing you to tell them about that instead, or tell them about both.

You could even just give every... (read more)

3Tom Gardiner1mo
To clarify, my position could be condensed to "I'm not convinced small scale longtermist donations are presently more impactful than neartermist ones, nor am I convinced of the reverse. Given this uncertainty, I am tempted to opt for neartermist donations to achieve better optics." The point you make seems very sensible. If I update strongly back towards longtermist giving I will likely do as you suggest.
How much current animal suffering does longtermism let us ignore?

I'm just making an observation that longtermists tend to be total utilitarians in which case they will want loads of beings in the future. They will want to use AI to help fulfill this purpose. 

Of course maybe in the long reflection we will think more about population ethics and decide total utilitarianism isn't right, or AI will decide this for us, in which case we may not work towards a huge future. But I happen to think total utilitarianism will win out, so I'm sceptical of this.

How much current animal suffering does longtermism let us ignore?

Am I missing something basic here?

No you're not missing anything that I can see. When OP says:

Does longtermism mean ignoring current suffering until the heat death of the universe?

I think they're really asking:

Does longtermism mean ignoring current suffering until near the heat death of the universe?

Certainly the closer an impartial altruist is to heat death the less forward-looking the altruist needs to be.

How much current animal suffering does longtermism let us ignore?

I agree with all of that. I was objecting to the implication that longtermists will necessarily reduce suffering. Also (although I'm unsure about this), I think that the EA longtermist community will increase expected suffering in the future, as it looks like they will look to maximise the number of beings in the universe.

1Matthew_Barnett1mo
What I view as the Standard Model of Longtermism is something like the following:
* At some point we will develop advanced AI capable of "running the show" for civilization on a high level.
* The values in our AI will determine, to a large extent, the shape of our future cosmic civilization.
* One possibility is that AI values will be alien. From a human perspective, this will either cause extinction or something equally bad.
* To avoid that last possibility, we ought to figure out how to instill human-centered values in our machines.
This model doesn't predict that longtermists will make the future much larger than it otherwise would be. It just predicts that they'll make it look a bit different than it otherwise would. Of course, there are other existential risks that longtermists care about. Avoiding those will have the effect of making the future larger in expectation, but most longtermists seem to agree that non-AI x-risks are small by comparison to AI.
How much current animal suffering does longtermism let us ignore?

I upvoted OP because I think comparison to humans is a useful intuition pump, although I agree with most of your criticism here. One thing that surprised me was:

Obviously not? That means you never reduced suffering? What the heck was the point of all your longtermism?

Surprised to hear you say this. It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this as they expect this suffering to be swamped by increases in total welfare. Remember one of the founding texts of longtermism says we ... (read more)

4Rohin Shah1mo
But at the time of the heat death of the universe, the future is not vast in expectation? Am I missing something basic here? (I'm ignoring weird stuff which I assume the OP was ignoring like acausal trade / multiverse cooperation, or infinitesimal probabilities of the universe suddenly turning infinite, or already being infinite such that there's never a true full heat death and there's always some pocket of low entropy somewhere, or believing that the universe's initial state was selected such that at heat death you'll transition to a new low-entropy state from which the universe starts again.) Oh, yes, that's plausible; just making a larger future will tend to increase the total amount of suffering (and the total amount of happiness), and this would be a bad trade in the eyes of a negative utilitarian. In the context of the OP, I think that section was supposed to mean that longtermism would mean ignoring current utility until the heat death of the universe -- the obvious axis of difference is long-term vs current, not happiness vs suffering (for example, you can have longtermist negative utilitarians). I was responding to that interpretation of the point, and accidentally said a technically false thing in response. Will edit.
3Matthew_Barnett1mo
I have an issue with your statement that longtermists neglect suffering, because they just maximize total (symmetric) welfare. I think this statement isn't actually true, though I agree that, pragmatically, most longtermists aren't suffering-focused. Hilary Greaves and William MacAskill loosely define [https://globalprioritiesinstitute.org/wp-content/uploads/The-Case-for-Strong-Longtermism-GPI-Working-Paper-June-2021-2-2.pdf] strong longtermism as "the view that impact on the far future is the most important feature of our actions today." Longtermism is therefore completely agnostic about whether you're a suffering-focused altruist or a traditional welfarist in line with Jeremy Bentham. It's entirely consistent to prefer to minimize suffering over the long-run future and be a longtermist. Or put another way, there are no major axiological commitments involved with being a longtermist, other than the view that we should treat value in the far future similarly to the way we treat value in the near future. Of course, in practice, longtermists are more likely to advocate a Benthamite utility function than a standard negative utilitarian one. But it's still completely consistent to be a negative utilitarian and a longtermist, and in fact I consider myself one.
How much current animal suffering does longtermism let us ignore?

However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal suffering opportunity cost almost entirely.

I'm not sure how you come to this conclusion, or even what it would mean to "disregard the opportunity cost". 

Longtermist EAs generally know their money could go towards reducing animal suffering and do good. They know and generally acknowledge that there is an opportunity cost of giving to longtermist causes. They simply think their money could do the most good if given to longtermist causes.

How much current animal suffering does longtermism let us ignore?

even though I just about entirely buy the longtermist thesis

If you buy into the longtermist thesis why are you privileging the opportunity cost of giving to longtermist causes and not the opportunity cost of giving to animal welfare?

Are you simply saying you think the marginal value of more money to animal welfare is greater than to longtermist causes?

1aaronb501mo
I'm not intending to, although it's possible I'm using the term "opportunity cost" incorrectly or in a different way than you. The opportunity cost of giving a dollar to animal welfare is indeed whatever that dollar could have bought in the longtermist space (or whatever else you think is the next best option). However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal suffering opportunity cost almost entirely. Surely the same error is committed in the opposite direction by hardcore animal advocates, but the asymmetry comes from the fact that this latter group controls a way smaller share of the financial pie.
1Michael_Wiebe1mo
Note that with diminishing returns, marginal utility per dollar (MU/$) is a function of the level of spending. So it could be the case that the MU/$ for the next $1M to Faunalytics is really high, but drops off above $1M. So I would rephrase your question as: "do you think the marginal value of more money to animal welfare right now is greater than to longtermist causes?"
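To make the diminishing-returns point concrete, here is a minimal sketch assuming a hypothetical logarithmic utility-of-spending function (the functional form and the dollar figures are illustrative assumptions, not anything from the comment):

```latex
% Assume (illustratively) utility from total spending S in a cause area is
% logarithmic, so marginal utility per dollar falls as spending rises.
U(S) = \ln(S), \qquad \mathrm{MU}/\$ = U'(S) = \frac{1}{S}
% e.g. MU/$ at S = 10^6 is ten times MU/$ at S = 10^7:
\frac{1/10^{6}}{1/10^{7}} = 10
```

On this toy picture, whether animal welfare or longtermist causes win on MU/$ depends on how much has already been spent in each area, which is why the rephrased question adds "right now".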
How much current animal suffering does longtermism let us ignore?

Thanks for writing this! I like the analogy to humans. I did something like this recently with respect to dietary choice. My thought experiment specified that these humans had to be mentally-challenged so that they have similar capacities for welfare to non-human animals, which isn’t something you have done here, but I think is probably important. I do note that you have been conservative in terms of the number of humans, however.

Your analogy has given me pause for thought!

How much current animal suffering does longtermism let us ignore?

There's a crude inductive argument that the future will always outweigh the present, in which case we could end up like Aesop's miser, always saving for the future until eventually we die.

I would just note that, if this happens, we’ve done longtermism very badly. Remember longtermism is (usually) motivated by maximising expected undiscounted welfare over the rest of time.

Right now, longtermists think they are improving the far future in expectation. When we actually get to this far future it should (in expectation) be better than it otherwise would ha... (read more)

1Jacob Eliosoff1mo
Yeah, this wasn't my strongest/most serious argument here. See my response to @Lukas_Gloor.
Can we agree on a better name than 'near-termist'? "Not-longtermist"? "Not-full-longtermist"?

Yeah I think that’s true if you only have the term “longtermist”. If you have both “longtermist” and “non-longtermist” I’m not so sure.

4david_reinstein1mo
maybe we just say "not longtermist" rather than trying to make "non-longtermist" a label? Either way, I think we can agree to get rid of 'neartermist'.
Can we agree on a better name than 'near-termist'? "Not-longtermist"? "Not-full-longtermist"?

I don’t think it’s negative either, although, as has been pointed out, many interpret it as meaning that one has a high discount rate, which can be misleading.

Can we agree on a better name than 'near-termist'? "Not-longtermist"? "Not-full-longtermist"?

I believe the majority of "neartermist" EAs don't have a high discount rate. They usually prioritise near-term effects because they don't think we can tractably influence the far future (i.e. cannot improve the far future in expectation). You might find the 80,000 Hours podcast episode with Alexander Berger interesting.

EDIT: neartermists may also be concerned about longtermist fanatical thinking, or may be driven by a certain population axiology, e.g. a person-affecting view. In the EA movement, though, high discount rates are virtually unheard of.

4david_reinstein1mo
I agree with JackM. As somewhat of an aside, I think one might only justify discount rates over well-being as an indirect and probably inadequate proxy for something else, such as:
* a belief that 'intervention $'s go less far to help people in the future because we don't know much about how to help them'
* a belief that the future may not exist, and if it's lower probability it enters less into our welfare function.
There is very little direct justification of the claim that 'people in the future' or 'welfare in the future' itself matters less.
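As a rough illustration of this "indirect proxy" framing (the decomposition and symbols below are my own assumption, not something stated in the comment), one can discount future welfare through survival probability and intervention effectiveness while keeping no intrinsic pure-time-preference term:

```latex
% Hedged sketch: w_t is welfare at time t, p(t) the probability the future
% exists at t, and e(t) how far intervention dollars go at t. The proxy
% justifications enter via p(t) and e(t), not via an intrinsic (1+rho)^{-t}.
PV = \sum_{t=0}^{\infty} p(t)\, e(t)\, w_t
\quad\text{rather than}\quad
PV = \sum_{t=0}^{\infty} (1+\rho)^{-t}\, w_t, \qquad \rho > 0
```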
Can we agree on a better name than 'near-termist'? "Not-longtermist"? "Not-full-longtermist"?

"Not longtermist" doesn't seem great to me. It implies being longtermist is the default EA position. I'd say I'm a longtermist, but I don't think we should normalise longtermism as the default EA position. This could be harmful for growth of the movement.

Maybe as Berger says "Global Health and Wellbeing" is the best term.

FWIW my intuition is that if you have a name for a thing, it means the opposite of that is the default. If there's a special term for "longtermist", that means people are not longtermists by default (which I think is basically true—most people are not longtermists, and longtermism is kind of a weird position (although I do happen to agree with it)). Sort of like how EAs are called EAs, but there's no word for people who aren't EAs, because being not-EA is the default.

7Linch1mo
As a soft counterpoint, I usually find [https://www.facebook.com/linchuan.zhang/posts/2470586746365428] "definition by exclusion" in other areas to be weirder when the more "natural" position is presented as a standalone in an implicit binary, as opposed to adding a "non" in front of it. Sorry if that's confusing. Here are some examples:
3david_reinstein1mo
I don't see how it necessarily implies that. Maybe "long-termist-EA" and "non-long-termist-EA"? Global Health and Wellbeing is not too bad (even considering my quibbles with this). The "wellbeing" part could encompass animal welfare ... and even encompass avoiding global disasters 'for the sake of the people who would suffer', rather than 'for the sake of extinction and potential future large civilizations' etc. (Added): But I guess GH&W may be too broad in the other direction. Don't LT-ists also prioritize global well-being?
An uncomfortable thought experiment for anti-speciesist non-vegans

I admit I'm getting confused. I think you've moved into arguing that going vegan has low relative value or may not even make sense for a maximising consequentialist. In my thought experiment I was trying to be agnostic on these points and simply draw a parallel between eating mentally-challenged humans and animals. 

If you want to say that going vegan doesn't make consequentialist sense for 'reason X' that is fine. I'm just saying that you then also have to say "if I imagine myself in a world where it is mentally-challenged humans instead of animals, I... (read more)

2Lukas_Gloor1mo
I agree with that. Some of your earlier comments seemed like they were setting up a slightly different argument. Someone can have the following position:
(1) They would continue to eat humans in the thought experiment world where one's psychological dispositions treat it as not a big deal (e.g., because it's normalized in that world and has become a habit).
(2) They wouldn't eat humans in the thought experiment world if they retained their psychological dispositions / reactive attitudes from the actual world – in that case, they'd find the scenario abhorrent.
(3) When they think about (1) and (2), they don't feel compelled to modify their dispositions / reactive attitudes toward not eating non-human animals (because of opportunity costs and because consequentialism doesn't have the concept of "appropriate reactions" – or, at least, the consequentialist concept for "appropriate reactions" is more nuanced).
I think you were arguing against (3) at one point, while I and other commenters were arguing in favor of (3).
An uncomfortable thought experiment for anti-speciesist non-vegans

I’m not sure what the cost of changing one’s reactive attitude is. Do you mean the cost of going vegan? If so what do you see as the main costs?

2Lukas_Gloor1mo
Yes. Isn't it true that people who go vegan at one point in their life revert back to eating animal products? I remember this was the case based on data discussed in 2014 or so, when I last looked into it. Is it any different now? Those findings would strongly suggest that veganism isn't cost-free. Since the way you ask makes me think you believe the costs to be low, consider the possibility that you're committing the typical mind fallacy. [https://www.lesswrong.com/tag/typical-mind-fallacy#:~:text=The%20typical%20mind%20fallacy%20is,unusually%20specific%20to%20a%20few.] (Similar to how a naturally skinny person might say "I don't understand obese people; isn't it easy to eat healthy." Well, no, most Americans are overweight and probably not thrilled about it, so if they could change it at low cost, they would. So, for some people, it isn't easy to stay skinny.) Maybe we disagree on what to count as "low costs." If their lives depended on it, I'd say almost everyone would be capable of going vegan. However, many people prefer prison to suicide, but that doesn't mean it's "low cost" to go to prison. Maybe you're thinking the cost of going vegan is low compared to the suffering at stake for animals. And I basically agree with that – the suffering is horrible and our culinary pleasures or potential health benefits appear trivial by comparison. However, this applies only if we think about it as a direct comparison in an "all else equal" situation. If you compare the animal suffering you can reduce via personal veganism vs. the good you can do from focusing your daily work on having the biggest positive impact, it's often the suffering from your food consumption that pales in comparison (though it may depend on a person's situation). People have made estimates of this (e.g., here [https://www.jefftk.com/p/why-im-not-vegan])! Again, the previous point relates to the same disagreement we discussed in the comment thread above. If someone does important altruistic work,
An uncomfortable thought experiment for anti-speciesist non-vegans

Consequentialist morality doesn't have a concept for "reacting appropriately."

My understanding is that it does have such a concept in that we should react similarly to different acts that are equally good/bad to each other in terms of their consequences. My thought experiment was simply designed to remind anti-speciesists that there is no clear moral difference between eating mentally-challenged humans and eating animals. So however you react to one (whether it be with indifference, moral disgust causing you to abstain, or moral disgust that doesn't cause ... (read more)

2Lukas_Gloor1mo
This is only the case in an "all else equal" situation! It is very much not the case when changing one's reactive attitudes comes at some cost and where that cost competes with other, bigger opportunities to do good. Same reply here: Singer's thought experiment only works in an "all else equal" situation. Depending on their circumstances, maybe someone should do EA direct work and not donate at all. Or maybe donate somewhere other than poverty reduction.
An uncomfortable thought experiment for anti-speciesist non-vegans

All my thought experiment is designed to do is to remind anti-speciesists that there is no clear moral difference between eating mentally-challenged humans and eating animals. If we feel differently about the two that is likely to be due to various biases that are not morally relevant.

This might cause some people to rethink eating animals, as they wouldn’t eat the humans. If you would eat the humans, however, then this thought experiment is unlikely to have an effect on you - I wasn’t intending for this thought experiment to be relevant to everyone anyway.

An uncomfortable thought experiment for anti-speciesist non-vegans

Realistically, I might eat the humans in this thought experiment, if this were as widely accepted as eating pigs and I'd been raised with the custom.

I'm sure you would, but this isn't actually relevant. The point is that from your current standpoint - where you haven't been raised to think eating humans is OK - you think the act is beyond the pale. This implies that when you are thinking clearly and without bias, you think eating other sentient beings is abhorrent. This in turn implies the only reason you eat meat now is that you're not thinking clearly and without bias!

5Lukas_Gloor1mo
On a consequentialist morality, feelings of moral outrage, horror or disgust are not what matters. (Instead, what matters on it is how to allocate attention/willpower/dedication to reduce the most suffering given one's psychology, opportunity costs, etc.) In the original post, you say "These are just biases though, and all they show is that we don’t react badly enough to animal farming." Consequentialist morality doesn't have a concept for "reacting appropriately." (This is why, in Thomas Kwa's answer, he talks about what he'd do conditional on having a disgust response vs. what he'd do without the disgust response. Because the animal suffering in question isn't quite bad enough to compete with alternative ways of using attention or willpower, going vegan isn't thought to be worth it under all social and psychological circumstances – e.g., it isn't thought to be worth it if it's costly convenience-wise and/or health-wise, if there's no disgust reaction, and if the social environment tolerates it.) Since you're primarily addressing consequentialists here, I recommend explaining why "reacting badly enough"/"reacting appropriately to moral horrors" is an important tenet of the morality that should matter to us (important enough that it can compete with things like optimizing one's impact-oriented career). Without those missing arguments, I think it'll seem to people like you're operating under some rigid framework and can't understand it when other people don't share your assumptions (prompting downvotes). For what it's worth, I do feel the force of your intuition pump (though I doubt it's new to most people) and I think it's true that consequentialist morality is uncanny here, and maybe that speaks in favor of going (more) vegan. Personally, I've been vegan in the past but currently at the stage where I mostly buy the consequentialist arguments against it (provided I am really trying to reduce a lot of suffering), but still feel like there's some dissonance/a feeli
8Thomas Kwa1mo
I don't think eating human flesh is beyond the pale or abhorrent. Eating human flesh that was produced with, say, 10 hours of suffering seems basically morally equivalent to eating flesh from humans who consent and are treated well, plus buying clothes that took 10 hours of slave labor to produce. And doing these separately seems morally okay as long as the clothes allow you to have more positive impact with your career. Current-me just wouldn't do the first one because it's disgusting and becomes more disgusting when associated with suffering. It seems like there's a taboo on eating human flesh, and also a harm, and the argument is conflating the disgust response from the taboo with the immorality of the harm. Disgust should not always be extended to general moral principles!
What is a neutral life like?

I can imagine it being the case that cardinal hedonistic intensity assessments are created by a part of the brain that isn't responsible for the hedonistic component of experience, rather than "read off", and judgements would differ between people who differ only in the parts of the brain responsible just for the cardinal assessments.

Would love to see more research on this!

What is a neutral life like?

We rarely will sample people in the last months of their lives, or who are deeply ill and suffering but incapacitated.

I'd like us to do this more! We could also do small children. It won't work for babies of course.

What is a neutral life like?

Thanks this is useful pushback. I didn't want to go into this detail in the blog post to stop it being too long, but perhaps a separate post could go into this as it is important!

My main response is that understanding the neutral level might be useful to determine how many people currently do / will live above/below the neutral level. For example this paper sets a critical (neutral) level in terms of per capita yearly consumption and then uses this level to judge if global social welfare is increasing, by understanding if additional people have been/are li... (read more)
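For readers unfamiliar with critical-level criteria, here is a hedged sketch of the kind of test the comment is gesturing at (the symbols are mine, not the cited paper's): with a neutral (critical) consumption level c*, adding a person raises social welfare only if their well-being exceeds the well-being delivered by c*.

```latex
% Sketch of a critical-level criterion; u(.) maps consumption to well-being
% and c^{*} is the neutral (critical) consumption level.
W = \sum_{i=1}^{N} \big( u(c_i) - u(c^{*}) \big),
\qquad
\Delta W > 0 \text{ from adding person } N{+}1 \iff u(c_{N+1}) > u(c^{*})
```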

What is a neutral life like?

Don't buy that this is important? Will MacAskill raises it as an important research question in his EA Global Fireside chat at the 31-minute mark. Can't say I think his proposal of asking people how good their lives are is the best approach though...

2anonymous_ea1mo
I don't buy that what a neutral life is like is an important question. I listened to a few minutes of the timestamp you linked but unless I missed something, Will is talking about his interest in finding out what proportion of people have lives above and below zero, not what a neutral life is like. I don't see any tight connections between the value of finding out more about neutral lives and what implications that might have for efforts to reduce existential risk or other longtermist efforts. It's more related to the important question of saving lives vs reducing suffering, but I don't see any clear implications here either. If you spell out what connections you see I might be more convinced. It seems to me that the ethics of having children and the question of antinatalism are swamped by many considerations besides what a neutral life is like. Again, if you spell out the connections you see here I might be more interested. I hope this is useful feedback!
Free-spending EA might be a big problem for optics and epistemics

Congrats on having the most upvoted EA Forum post of all time!

"Long-Termism" vs. "Existential Risk"

Hypothetically, if I have time preference and other people don't then I would agree to coordinate on a compromise. In practice, I suspect that everyone has time preference.

Most people do indeed have pure time preference in the sense that they are impatient and want things earlier rather than later. However, this says nothing about their attitude to future generations.

Being impatient means you place more importance on your present self than your future self, but it doesn't mean you care more about the wellbeing of some random dude alive now than another ra... (read more)

"Long-Termism" vs. "Existential Risk"

Because, ceteris paribus, I care about things that happen sooner more than about things that happen later.

This is separate to the normative question of whether or not people should have zero pure time preference when it comes to evaluating the ethics of policies that will affect future generations. Surely the fact that I'd rather have some cake today than tomorrow cannot be relevant when I'm considering whether or not I should abate carbon emissions so my great grandchildren can live in a nice world - these simply seem separate considerations with n... (read more)

3Vanessa1mo
I am a moral anti-realist. I don't believe in ethics the way utilitarians (for example) use the word. I believe there are certain things I want, and certain things other people want, and we can coordinate on that. And coordinating on that requires establishing social norms, including what we colloquially refer to as "ethics". Hypothetically, if I have time preference and other people don't then I would agree to coordinate on a compromise. In practice, I suspect that everyone has time preference. You can avoid this kind of conclusion if you accept my decision rule of minimax regret over all discount timescales from some finite value to infinity.
New: use The Nonlinear Library to listen to the top EA Forum posts of all time

Kind of cool to see that two of my posts made it into the top EA Forum posts playlist. A small point - the audio says for both of them that they were posted to the AI Alignment Forum which is a mistake. I don’t really care, but thought you might like to know.

Next week I'm interviewing Will MacAskill — what should I ask?
  • What do you think is the best approach to achieving existential security and how confident are you on this?
  • Which chapter/part of "What We Owe The Future" do you think most deviates from the EA mainstream?
  • In what way(s) would you change the focus of the EA longtermist community if you could?
  • Do you think more EAs should be choosing careers focused on boosting economic growth/tech progress?
  • Would you rather see marginal EA resources go towards reducing specific existential risks or boosting economic growth/tech progress?
  • The Future Fund website highlights immig
... (read more)
1Baptiste Roucau1mo
Great set of questions! I'm personally very interested in the question about educational interventions.
"Long-Termism" vs. "Existential Risk"

I think there is a key difference between longtermists and thoughtful shorttermists which is surprisingly under-discussed.

Longtermists don’t just want to reduce x-risk, they want to permanently reduce x-risk to a low level, i.e. achieve existential security. Without existential security the longtermist argument just doesn’t go through. A thoughtful shorttermist who is concerned about x-risk probably won’t care about this existential security; they probably just want to reduce x-risk to the lowest level possible in their lifetime.

Achieving existential securit... (read more)

3timunderwood1mo
Maybe, I mean I've been thinking about this a lot lately in the context of Phil Torres' argument about messianic tendencies in longtermism, and I think he's basically right that it can push people towards ideas that don't have any guard rails. A total utilitarian longtermist would prefer a 99 percent chance of human extinction with a 1 percent chance of a glorious transhuman future stretching across the lightcone to a 100 percent chance of humanity surviving for 5 billion years on earth. That, after all, is what shutting up and multiplying tells you -- so the idea that longtermism makes Luddite solutions to x-risk (which, to be clear, would also be incredibly difficult to implement and maintain) extra unappealing, relative to how a short-termist might feel about them, seems right to me. Of course there is also the other direction: if there was a 1/1 trillion chance that activating this AI would kill us all, and a 999 billion/1 trillion chance it would be awesome, but if you wait a hundred years you can have an AI that has only a 1/1 quadrillion chance of killing us all, a short-termist pulls the switch, while the longtermist waits. Also, of course, there's model error: any estimate where someone actually uses numbers like '1/1 trillion' for whether something in the slightest bit interesting will happen in the real world is a nonsensical, bad calculation.
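To spell out the expected-value arithmetic behind the "99 percent vs. 1 percent" claim (the V's below are hypothetical placeholders for the total value of each outcome, and extinction is treated as zero value; none of these are the reply's own figures beyond the probabilities):

```latex
% Using the reply's probabilities; the V's are hypothetical value placeholders.
EV_{\text{gamble}} = 0.01 \cdot V_{\text{transhuman}} + 0.99 \cdot 0,
\qquad
EV_{\text{safe}} = 1.00 \cdot V_{\text{earth}}
% The gamble wins whenever V_transhuman > 100 * V_earth, which a total
% utilitarian expecting astronomically many future lives will typically grant.
```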
"Long-Termism" vs. "Existential Risk"

I never actually said we should switch, but if we knew from the start “oh wow we live at the most influential time ever because x-risk is so high” we probably would have created an x-risk community not an EA one.

And to be clear I’m not sure where I personally come out on the hinginess debate. In fact I would say I’m probably more sympathetic to Will’s view that we currently aren’t at the most influential time than most others are.

4timunderwood1mo
My feeling is that it was a bit that people who wanted to attack global poverty efficiently decided to call themselves effective altruists, and then a bunch of Less Wrongers came over and convinced (a lot of) them that 'hey, going extinct is an even bigger deal', but the name still stuck, because names are sticky things.
"Long-Termism" vs. "Existential Risk"

My point is that you could engage in "x-risk community building" which may more effectively get people working on reducing x-risk than "EA community building" would.

2Stefan_Schubert2mo
There are a bunch of considerations affecting that, including that we already do EA community building and that big switches tend to be costly. However that pans out in aggregate, I think "doesn't make much sense" is an overstatement.
"Long-Termism" vs. "Existential Risk"

I think if we’re at the most influential point in history “EA community building” doesn’t make much sense. As others have said it would probably make more sense to be shouting about why we’re at the most influential point in history i.e. do “x-risk community building” or of course do more direct x-risk work.

I suspect we’d also do less global priorities research (although perhaps we don’t do that much as it is). If you think we’re at the most influential time you probably have a good reason for thinking that (x-risk abnormally high) which then informs wha... (read more)

7Stefan_Schubert2mo
I'm not sure I agree with that. It seems to me that EA community building is channelling quite a few people to direct existential risk reduction work.
7Jay Bailey2mo
That also depends on how wide you consider a "point". A lot of longtermists talk of this as the "most important century", not the most important year, or even decade. Considering EA as a whole is less than twenty years old, investing in EA and global priorities research might still make sense, even under a simplified model where 100% of the impact EA will ever have occurs by 2100, and then we don't care any more. Given a standard explore/exploit algorithm, we should spend around 37% of the space exploring, so if we assume EA started around 2005, we should still be exploring until 2040 or so before pivoting and going all-in on the best things we've found.
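For reference, the "37%" figure is the 1/e cutoff from the classic optimal-stopping ("secretary problem") result, and applying it to the comment's own simplified dates reproduces the ~2040 estimate:

```latex
% 1/e ~ 0.368: explore the first ~37% of the 2005-2100 window, then exploit.
t_{\text{switch}} \approx 2005 + \tfrac{1}{e}\,(2100 - 2005)
\approx 2005 + 0.368 \times 95 \approx 2040
```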