I think the existence of investing for the future as a meta option to improve the far future essentially invalidates both of your points. Investing money in a long-term fund won't hit diminishing returns anytime soon. I think of it as the "GiveDirectly of longtermism".
Certainly agree there is something weird there!
Anyway I don't really think there was too much disagreement between us, but it was an interesting exchange nonetheless!
I’ve read your overview and skimmed the rest. You say there will probably be better ways to limit coal production or consumption, but I was under the impression this wasn’t the main motivation for buying a coal mine. I thought the main motivation was to ensure we have the energy resources to be able to rebuild society in case we hit some sort of catastrophe. Limiting coal production and consumption was just an added bonus. Am I wrong?
EDIT: I appreciate you do argue the coal may stay in the ground even if we don't buy the mine, which is very relevant to my question.
EDIT2: I've just realised that limiting consumption is important for preserving energy stores, but limiting production perhaps isn't.
Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?
A longtermist career spans decades, as would going vegan for life or donating regularly for decades. So it was mostly a temporal thing, trying to somewhat equalise the commitment associated with different altruistic choices.
but why should the locus of agency be the individual? Seems pretty arbitrary.
Hmm, well aren't we all individuals making individual choices? So ultimately what is relevant to me is whether my actions are fanatical?
If you agree that voting i...
That's fair enough, although when it comes to voting I mainly do it for personal pleasure / so that I don't have to lie to people about having voted!
When it comes to something like donating to GiveWell charities on a regular basis / going vegan for life I think one can probably have greater than 50% belief they will genuinely save lives / avert suffering. Any single donation or choice to avoid meat will have far lower probability, but it seems fair to consider doing these things over a longer period of time as that is typically what people do (and what someone who chooses a longtermist career essentially does).
Probabilities are on a continuum. It’s subjective at what point fanaticism starts. You can call those examples fanatical if you want to, but the probabilities of success in those examples are probably considerably higher than in the case of averting an existential catastrophe.
I think the probability that my personal actions avert an existential catastrophe is higher than the probability that my personal vote in the next US presidential election would change its outcome.
I think I'd plausibly say the same thing for my other examples; I'd have to think a bit more about the actual probabilities involved.
Hmm I do think it's fairly fanatical. To quote this summary:
For example, it might seem fanatical to spend $1 billion on ASI-alignment for the sake of a 1-in-100,000 chance of preventing a catastrophe, when one could instead use that money to help many people with near-certainty in the near-term.
The probability that any one longtermist's actions will actually prevent a catastrophe is very small. So I do think longtermist EAs are acting fairly fanatically.
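To make the quoted trade-off concrete, here is a rough back-of-the-envelope comparison. The $1 billion and 1-in-100,000 figures come from the summary above; the 8 billion present lives at stake and the ~$5,000-per-life-saved figure for near-term giving are purely illustrative assumptions of mine, not numbers from the paper:

$$
\mathbb{E}[\text{lives saved} \mid \text{x-risk spending}] \approx 10^{-5} \times 8\times10^{9} = 80{,}000
$$

$$
\mathbb{E}[\text{lives saved} \mid \text{near-term spending}] \approx \frac{\$10^{9}}{\$5{,}000\ \text{per life}} = 200{,}000
$$

On present lives alone the near-term option comes out ahead under these assumptions; the longtermist option only dominates once you count the potential future lives at stake, which is exactly where the worry about fanaticism bites.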
Another way of thinking about it is that, whilst the probability of x-risk may be fairly high, the x-ris...
Yeah that's fair. As I said I'm not entirely sure on the motivation point.
I think in practice EAs are quite fanatical, but only to a certain point. So they probably wouldn't give in to a Pascal's mugging, but many of them are willing to give to a long-term future fund over GiveWell charities - which is quite a bit of fanaticism! So justifying fanaticism still seems useful to me, even if EAs put their fingers in their ears with regard to the most extreme conclusion...
Hi Michael, thanks for your reply! I apologise I didn’t check with you before saying that you have ruled out research a priori. I will put a note to say that this is inaccurate. Prioritising based on self-reports of wellbeing does preclude funding research, but I’m glad to hear that you may be open to assessing research in the future.
Sorry to hear you struggled to follow my analysis. I think I may have overcomplicated things, but it did help me to work through things in my own head! I haven't really looked at the literature on value of information (VOI).
In a nutshell my model...
FYI you can contact the EA Forum team to get your profile hidden from search engines (see here).
Yes, I disagree with b), although it's a nuanced disagreement.
I think the EA longtermist movement is currently choosing the actions that most increase the probability of infinite utility, by reducing existential risk.
What I'm less sure of is that achieving infinite utility is the motivation for reducing existential risk. It might just be that achieving "incredibly high utility" is the motivation for reducing existential risk. I'm not too sure on this.
My point about the long reflection was that when we reach this period it will be easier to tell the fanatics from the non-fanatics.
I’ve reversed an earlier decision and have settled on using my real name. Wish me luck!
I'm super excited for you to continue making these research summaries! I have previously written about how I want to see more accessible ways to understand important foundational research - you've definitely got a reader in me.
I also enjoy the video summaries. It would be great if GPI video and written summaries were made as standard. I appreciate it's a time commitment, but in theory there's quite a wide pool of people who could do the written summaries and I'm sure you could get funding to pay people to do them.
As a non-academic I don't think I can assis...
you should be striving to increase the probability of making something like this happen. (Which, to be clear, could be the right thing to do! But it's not how longtermists tend to reason in practice.)
As you said in your previous comment, we essentially are increasing the probability of these things happening by reducing x-risk. I'm not convinced we don't tend to reason fanatically in practice - after all, Bostrom's astronomical waste argument motivates reducing x-risk by raising the possibility of achieving incredibly high levels of utility (in a footnote he...
I think the point Caleb is making is that your EAG London story doesn't necessarily show the tension that you think it does. And for what it's worth I'm sceptical this tension is very widespread.
I don't know for sure that we have prioritised mental health over other productivity interventions, although we may have. Effective Altruism Coaching doesn't have a sole mental health focus (also see here for the 2020 annual review), but I think that is just one person doing the coaching, so it may not be representative of wider productivity work in EA.
It's worth noting that it's plausible that mental health may be proportionally more of a problem within EA than outside it, as EAs may worry more about the state of the world and whether they're having an impact etc. - which ma...
Pretty much this. I don't think discussions on improving mental health in the EA community are motivated by improving wellbeing, but instead by allowing us to be as effective a community as possible. Poor mental health is a huge drain on productivity.
If the focus on EA community mental health was based on direct wellbeing benefits I would be quite shocked. We’re a fairly small community and it’s likely to be far more cost-effective to improve the mental health of people living in lower income countries (as HLI’s StrongMinds recommendation suggests).
Seems relevant: SpaceX: Can meat be grown in space?
A test to see if we can grow cultivated meat in space.
Sorry it’s not entirely clear to me if you think good longtermist giving opportunities have dried up, or if you think good opportunities remain but your concern is solely about the optics of giving to them.
On the optics point, I would note that you don’t have to give all of your donations to the same thing. If you’re worried about having to tell people about your giving to LTFF, you can also give a portion of your donations to global health (even if small), allowing you to tell them about that instead, or tell them about both.
You could even just give every...
I'm just making an observation that longtermists tend to be total utilitarians, in which case they will want loads of beings in the future. They will want to use AI to help fulfil this purpose.
Of course maybe in the long reflection we will think more about population ethics and decide total utilitarianism isn't right, or AI will decide this for us, in which case we may not work towards a huge future. But I happen to think total utilitarianism will win out, so I'm sceptical of this.
Am I missing something basic here?
No you're not missing anything that I can see. When OP says:
Does longtermism mean ignoring current suffering until the heat death of the universe?
I think they're really asking:
Does longtermism mean ignoring current suffering until near the heat death of the universe?
Certainly the closer an impartial altruist is to heat death the less forward-looking the altruist needs to be.
I agree with all of that. I was objecting to the implication that longtermists will necessarily reduce suffering. Also (although I'm unsure about this), I think that the EA longtermist community will increase expected suffering in the future, as it looks like they will look to maximise the number of beings in the universe.
I upvoted OP because I think comparison to humans is a useful intuition pump, although I agree with most of your criticism here. One thing that surprised me was:
Obviously not? That means you never reduced suffering? What the heck was the point of all your longtermism?
Surprised to hear you say this. It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this because it expects this suffering to be swamped by increases in total welfare. Remember one of the founding texts of longtermism says we...
However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal suffering opportunity cost almost entirely.
I'm not sure how you come to this conclusion, or even what it would mean to "disregard the opportunity cost".
Longtermist EAs generally know their money could go towards reducing animal suffering and do good. They know and generally acknowledge that there is an opportunity cost of giving to longtermist causes. They simply think their money could do the most good if given to longtermist causes.
even though I just about entirely buy the longtermist thesis
If you buy into the longtermist thesis why are you privileging the opportunity cost of giving to longtermist causes and not the opportunity cost of giving to animal welfare?
Are you simply saying you think the marginal value of more money to animal welfare is greater than to longtermist causes?
Thanks for writing this! I like the analogy to humans. I did something like this recently with respect to dietary choice. My thought experiment specified that these humans had to be mentally-challenged so that they have similar capacities for welfare to non-human animals, which isn't something you have done here but which I think is probably important. I do note that you have been conservative in terms of the number of humans, however.
Your analogy has given me pause for thought!
There's a crude inductive argument that the future will always outweigh the present, in which case we could end up like Aesop's miser, always saving for the future until eventually we die.
I would just note that, if this happens, we’ve done longtermism very badly. Remember longtermism is (usually) motivated by maximising expected undiscounted welfare over the rest of time.
Right now, longtermists think they are improving the far future in expectation. When we actually get to this far future it should (in expectation) be better than it otherwise would ha...
Yeah I think that’s true if you only have the term “longtermist”. If you have both “longtermist” and “non-longtermist” I’m not so sure.
I don't think it's negative either although, as has been pointed out, many interpret it as meaning that one has a high discount rate, which can be misleading.
I believe the majority of "neartermist" EAs don't have a high discount rate. They usually prioritise near-term effects because they don't think we can tractably influence the far future (i.e. cannot improve the far future in expectation). You might find the 80,000 Hours podcast episode with Alexander Berger interesting.
EDIT: neartermists may also be concerned by longtermist fanatical thinking, or may be driven by a certain population axiology, e.g. a person-affecting view. In the EA movement, though, high discount rates are virtually unheard of.
"Not longtermist" doesn't seem great to me. It implies being longtermist is the default EA position. I'd say I'm a longtermist, but I don't think we should normalise longtermism as the default EA position. This could be harmful for growth of the movement.
Maybe as Berger says "Global Health and Wellbeing" is the best term.
FWIW my intuition is that if you have a name for a thing, it means the opposite of that is the default. If there's a special term for "longtermist", that means people are not longtermists by default (which I think is basically true—most people are not longtermists, and longtermism is kind of a weird position (although I do happen to agree with it)). Sort of like how EAs are called EAs, but there's no word for people who aren't EAs, because being not-EA is the default.
I admit I'm getting confused. I think you've moved into arguing that going vegan has low relative value or may not even make sense for a maximising consequentialist. In my thought experiment I was trying to be agnostic on these points and simply draw a parallel between eating mentally-challenged humans and animals.
If you want to say that going vegan doesn't make consequentialist sense for 'reason X' that is fine. I'm just saying that you then also have to say "if I imagine myself in a world where it is mentally-challenged humans instead of animals, I...
I’m not sure what the cost of changing one’s reactive attitude is. Do you mean the cost of going vegan? If so what do you see as the main costs?
Consequentialist morality doesn't have a concept for "reacting appropriately."
My understanding is that it does have such a concept, in that we should react similarly to different acts that are equally good/bad in terms of their consequences. My thought experiment was simply designed to remind anti-speciesists that there is no clear moral difference between eating mentally-challenged humans and eating animals. So however you react to one (whether it be with indifference, moral disgust causing you to abstain, or moral disgust that doesn't cause...
All my thought experiment is designed to do is to remind anti-speciesists that there is no clear moral difference between eating mentally-challenged humans and eating animals. If we feel differently about the two that is likely to be due to various biases that are not morally relevant.
This might cause some people to rethink eating animals, as they wouldn't eat the humans. If you would eat the humans, however, then this thought experiment is unlikely to have an effect on you - I wasn't intending for this thought experiment to be relevant to everyone anyway.
Realistically, I might eat the humans in this thought experiment, if this were as widely accepted as eating pigs and I'd been raised with the custom.
I'm sure you would, but this isn't actually relevant. The point is that from your current standpoint - where you haven't been raised to think eating humans is OK - you think the act is beyond the pale. This implies that when you are thinking clearly and without bias, you think eating other sentient beings is abhorrent. This in turn implies the only reason you eat meat now is that you're not thinking clearly and without bias!
I can imagine it being the case that cardinal hedonistic intensity assessments are created by a part of the brain that isn't responsible for the hedonistic component of experience, rather than "read off", and that judgements would differ between people who differ only in the parts of the brain responsible just for the cardinal assessments.
Would love to see more research on this!
We rarely will sample people in the last months of their lives, or who are deeply ill and suffering but incapacitated.
I'd like us to do this more! We could also sample small children. It won't work for babies, of course.
Thanks this is useful pushback. I didn't want to go into this detail in the blog post to stop it being too long, but perhaps a separate post could go into this as it is important!
My main response is that understanding the neutral level might be useful to determine how many people currently do / will live above/below the neutral level. For example this paper sets a critical (neutral) level in terms of per capita yearly consumption and then uses this level to judge if global social welfare is increasing, by understanding if additional people have been/are li...
Don't buy that this is important? Will MacAskill raises it as an important research question in his EA Global Fireside chat at the 31-minute mark. Can't say I think his proposal of asking people how good their lives are is the best approach though...
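For anyone unfamiliar with the critical-level framework that paper uses, a rough sketch (in my own notation, not the paper's) is:

$$
W = \sum_{i} (u_i - c)
$$

where $u_i$ is person $i$'s lifetime wellbeing and $c$ is the critical (neutral) level. Adding a person increases social welfare only if their wellbeing exceeds $c$, which is why pinning down the neutral level matters for judging whether additional people make the world better or worse.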
Congrats on having the most upvoted EA Forum post of all time!
Hypothetically, if I have time preference and other people don't then I would agree to coordinate on a compromise. In practice, I suspect that everyone has time preference.
Most people do indeed have pure time preference in the sense that they are impatient and want things earlier rather than later. However, this says nothing about their attitude to future generations.
Being impatient means you place more importance on your present self than your future self, but it doesn't mean you care more about the wellbeing of some random dude alive now than another ra...
Because, ceteris paribus, I care about things that happen sooner more than about things that happen later.
This is separate from the normative question of whether or not people should have zero pure time preference when it comes to evaluating the ethics of policies that will affect future generations. Surely the fact that I'd rather have some cake today than tomorrow cannot be relevant when I'm considering whether or not I should abate carbon emissions so my great grandchildren can live in a nice world - these simply seem separate considerations with n...
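One standard way to make this distinction precise - sketched here from the textbook Ramsey framework, so the notation is mine rather than anything from this exchange - is to split the social discount rate into pure time preference and a growth component:

$$
r = \delta + \eta g
$$

where $\delta$ is the pure rate of time preference, $\eta$ is how quickly the marginal utility of consumption declines, and $g$ is the growth rate of consumption. Personal impatience corresponds to a positive $\delta$ over your own consumption; the longtermist claim is only that $\delta$ should be (close to) zero when weighing the welfare of future generations, while discounting via $\eta g$ (because richer future people gain less from an extra unit of consumption) can still be perfectly reasonable.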
Kind of cool to see that two of my posts made it into the top EA Forum posts playlist. A small point - the audio for both of them says that they were posted to the AI Alignment Forum, which is a mistake. I don't really care, but thought you might like to know.
I think there is a key difference between longtermists and thoughtful shorttermists which is surprisingly under-discussed.
Longtermists don't just want to reduce x-risk, they want to permanently reduce x-risk to a low level, i.e. achieve existential security. Without existential security the longtermist argument just doesn't go through. A thoughtful shorttermist who is concerned about x-risk probably won't care about this existential security; they probably just want to reduce x-risk to the lowest level possible in their lifetime.
Achieving existential securit...
I never actually said we should switch, but if we knew from the start "oh wow, we live at the most influential time ever because x-risk is so high" we probably would have created an x-risk community, not an EA one.
And to be clear I’m not sure where I personally come out on the hinginess debate. In fact I would say I’m probably more sympathetic to Will’s view that we currently aren’t at the most influential time than most others are.
My point is that you could engage in "x-risk community building" which may more effectively get people working on reducing x-risk than "EA community building" would.
I think if we’re at the most influential point in history “EA community building” doesn’t make much sense. As others have said it would probably make more sense to be shouting about why we’re at the most influential point in history i.e. do “x-risk community building” or of course do more direct x-risk work.
I suspect we'd also do less global priorities research (although perhaps we don't do that much as it is). If you think we're at the most influential time you probably have a good reason for thinking that (x-risk being abnormally high), which then informs wha...
Founders Pledge's Investing to Give report is an accessible resource on this.
I wrote a short overview here.