All of Jack Malde's Comments + Replies

What reason is there NOT to accept Pascal's Wager?

It does seem to me, if you think the general reasoning of the wager is sound, that the most rational thing to do is to pick one of the cards and hope for the best, as opposed to not picking any of them.

You could for example pick Christianity or Islam, but also regularly pray to the “one true god” whoever he may be, and respectfully ask for forgiveness if your faith is misplaced. This might be a way of minimising the chances of going to hell, although there could be even better ways on further reflection.

Having said all that, I’m an atheist and never pray. But I’m not necessarily sure that’s the best way to be…

On the Vulnerable World Hypothesis

I looked through your post very quickly (and wrote this very quickly) so I may have missed things, but my main critical thoughts are around the “costs probably outweigh the benefits” argument as I don’t think you have adequately considered the benefits.

Surveillance is really shit, most people would accept that, but perhaps even more shit is the destruction of humanity or humanity entering a really bad persistent state (e.g. AI torturing humans for the rest of time). If we really want to avoid these existential catastrophes a solution that limits free thoug... (read more)

Catherine · 8d
Hey, thanks for commenting! I think this is a good criticism, and despite most of my post arguing that surveillance would probably be bad, I agree that in some cases it could still be worth it. I think my crux is whether the decrease of risk from malicious actors due to surveillance is greater than the increase in totalitarianism and misuse risk (plus general harms to free speech and so on). It seems like surveillance must be global and very effective to greatly decrease the risk from malicious actors, and furthermore that it's really hard to reduce misuse risk of global and effective surveillance. I'm sceptical that we could make the risks associated with surveillance sufficiently small to make surveillance an overall less risky option, even supposing the risks surveillance helps decrease are worse than the ones it increases. (I don't think I share this intuition, but it definitely seems right from a utilitarian perspective). I agree though that in principle, despite increasing other risks, it might be sometimes better to surveil.
Longtermism as Effective Altruism

I'm not sure who is saying longtermism is an alternative to EA, but it seems a bit nonsensical to me, as longtermism is essentially the view that we should focus on positively influencing the long-term future to do the most good. It's therefore quite clearly a school of thought within EA.

Also I have a minor(ish) bone to pick with your claim that "Longtermism says to calculate expected value while treating lives as morally equal no matter when they occur. Longtermists do not discount the lives of future generations."  Will MacAskill defines longtermism as... (read more)

Confused about "making people happy" vs. "making happy people"
  • Presumably you're not neutral about creating someone who you know will live a dreadful life? If so it seems there's no fundamental barrier to comparing existence and non-existence, and it would analogously seem you should not be neutral about creating someone you know will live a great life. You can get around this by introducing an asymmetry, but this seems ad hoc.
  • I used to hold a person-affecting view but I found the transitivity argument against being neutral about making happy people quite compelling. Similar to the money pump argument I think. Worth n
... (read more)
Why Effective Altruists Should Put a Higher Priority on Funding Academic Research

Not sure how useful this is but I tried to develop a model to help us decide between carrying out our best existing interventions and carrying out research into potentially better interventions: https://forum.effectivealtruism.org/posts/jp3yaQczFWk7yiNXz/to-fund-research-or-not-to-fund-research-that-is-the

My key takeaway was that the longer the timescale we care about doing good over, the better research is relative to carrying out existing interventions. This is because there is a greater period over which we would gain from a better intervention.
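A minimal sketch of that takeaway (a toy model with made-up numbers, not the model from the linked post): a better intervention found through research pays off in every year after it is found, so the longer the horizon over which we count the good done, the more the research option gains on simply funding the best existing intervention.

```python
# Toy comparison (hypothetical numbers, purely for illustration):
# Option A funds the best existing intervention, producing 1 unit of good per year.
# Option B spends the first year on research; with probability p it finds an
# intervention that is `uplift` times better, which is then used in all later years.

def value_existing(horizon_years: float) -> float:
    return 1.0 * horizon_years

def value_research(horizon_years: float, p: float = 0.3, uplift: float = 3.0) -> float:
    if horizon_years <= 1:
        return 0.0  # the research year itself produces no direct good
    expected_quality = p * uplift + (1 - p) * 1.0  # expected units of good per year after research
    return expected_quality * (horizon_years - 1)

for years in (2, 10, 100):
    print(years, value_existing(years), value_research(years))
# Over 2 years the existing intervention wins (2.0 vs 1.6); over 10 years research
# is ahead (10.0 vs 14.4), and the gap keeps widening as the horizon grows.
```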

As someo... (read more)

What should I ask Alan Hájek, philosopher of probability, Bayesianism, expected value and counterfactuals?
  • Would he pay the mugger in a Pascal's mugging? Generally does he think acting fanatically is an issue?
  • How does he think we should set a prior for the question of whether or not we are living at the most influential time? Uniform prior or otherwise?
  • What are his key heuristics for doing good philosophy, and how does he spot bad philosophical arguments?

Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies

Thanks for writing this, Michael! I think economists are too quick to jump to the conclusion that economic growth will mean more happiness. This is a really clear and useful summary of where we currently are. I do have a few half-baked critical thoughts:

  • Easterlin’s long run view is still much too short: most people in EA, and I assume the progress studies community, don’t discount the future much, if at all. This means they will care about timescales of millions and even billions of years. The compounding nature of economic growth means that increased growt
... (read more)
Critiques of EA that I want to read

When it comes to comparisons of values between PAVs and total views I don't really see much of a problem as I'm not sure the comparison is actually inter-theoretic. Both PAVs and total views are additive, consequentialist views in which welfare is what has intrinsic value. It's just the case that some things count under a total view that don't under (many) PAVs i.e. the value of a new life. So accounting for both PAVs and a total view in a moral uncertainty framework doesn't seem too much of a problem to me.

What about genuine inter-theoretic comparisons e.... (read more)

MichaelStJules · 1mo
PAVs and total views are different theories, so the comparisons are intertheoretic, by definition. Even if they agree on many rankings (in fixed population cases, say), they do so for different reasons. The value being compared is actually of a different kind, as total utilitarian value is non-comparative, but PA value is comparative. These vague categories might be useful and they do seem kind of intuitive to me, but

1. "Astronomically bad" effectively references the size of an affected population and hints at aggregation, so I'm not sure it's a valid category at all for intertheoretic comparisons. Astronomically bad things are also not consistently worse than things that are not astronomically bad under all views, especially lexical views and some deontological views. You can have something which is astronomically bad on leximin (or another lexical view) due to an astronomically large (sub)population made worse off, but which is dominated by effects limited to a small (sub)population in another outcome that's not astronomically bad. Astronomically bad might still be okay to use for person-affecting utilitarianism (PAU) vs total utilitarianism, though.

2. "Infinitely bad" (or "infinitely bad of a certain cardinality") could be used to a similar effect, making lexical views dominate over classical utilitarianism (unless you use lexically "amplified" versions of classical utilitarianism, too). Things can break down if we have infinitely many different lexical thresholds, though, since there might not be a common scale to put them on if the thresholds' orders are incompatible, but if we allow pairwise comparisons at least where there are only finitely many thresholds, we'd still have classical utilitarianism dominated by lexical threshold utilitarian views with finitely many lexical thresholds, and when considering them all together, this (I would guess) effectively gives us leximin, anywa
Critiques of EA that I want to read

I'm looking forward to reading these critiques! A few thoughts from me on the person-affecting views critique:

  1. Most people, myself included, find existence non-comparativism a bit bonkers. This is because most people accept that if you could create someone who you knew with certainty would live a dreadful life, that you shouldn't create them, or at least that it would be better if you didn't (all other things equal). So when you say that existence non-comparativism is highly plausible, I'm not so sure that is true...
  2. Arguing that existence non-comparativism
... (read more)
MichaelStJules · 1mo
Also maximizing expected choice-worthiness with intertheoretic comparisons can lead to fanaticism focusing on quantum branching actually increasing the number of distinct moral patients (rather aggregating over the quantum measure and effectively normalizing), and that can have important consequences. See this discussion [https://forum.effectivealtruism.org/posts/sEnkD8sHP6pZztFc2#1___Quantum_Branching_4_] and my comment [https://forum.effectivealtruism.org/posts/sEnkD8sHP6pZztFc2/fanatical-eas-should-support-very-weird-projects?commentId=6yCfCco6Cz8o6PQGN] .
MichaelStJules · 1mo
On 3, I actually haven't read the paper yet, so should probably do that, but I have a few objections:

1. Intertheoretic comparisons seem pretty arbitrary and unjustified. Why should there be any fact of the matter about them? If you choose some values to identify across different theories, you have to rule out alternative choices.

2. The kind of argument they use would probably support widespread value lexicality over a continuous total view. Consider lexical threshold total utilitarianism with multiple thresholds. For any such view (including total utilitarianism without lexical thresholds), if you add a(nother) greater threshold past the others and normalize by values closer to 0 than the new threshold, then the new view and things past the threshold will dominate the previous view and things closer to 0, respectively. I think views like maximin/leximin and maximax/leximax would dominate all forms of utilitarianism, including lexical threshold utilitarianism, because they're effectively lexical threshold utilitarianism with lexical thresholds at every welfare level.

3. Unbounded utility functions, like risk-neutral expected value maximizing total utilitarianism, are vulnerable to Dutch books and money pumps, and violate the sure-thing principle, due to finite-valued lotteries with infinite or undefined expectations, like St. Petersburg lotteries. See, e.g. Paul Christiano's comment here: https://www.lesswrong.com/posts/gJxHRxnuFudzBFPuu/better-impossibility-result-for-unbounded-utilities?commentId=hrsLNxxhsXGRH9SRx So, if we think it's rationally required to avoid Dutch books or money pumps in principle, or satisfy the sure-thing principle, and finite-value but infinite expected value lotteries can't be ruled out with certainty, then
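To make "finite-valued lotteries with infinite or undefined expectations" concrete, here is a small illustration (not part of the comment above) using the classic St. Petersburg lottery: every individual payoff is finite, yet the expected value grows without bound as more of the lottery's outcomes are included.

```python
# St. Petersburg lottery: payoff 2**n with probability 2**-n, for n = 1, 2, 3, ...
# Every payoff is finite, but the expected value diverges.

def truncated_expected_value(max_n: int) -> float:
    """Expected value when only the first max_n outcomes are counted."""
    return sum((2 ** -n) * (2 ** n) for n in range(1, max_n + 1))  # each term equals 1

for max_n in (10, 100, 1000):
    print(max_n, truncated_expected_value(max_n))  # 10.0, 100.0, 1000.0: no finite limit
```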
Lukas_Gloor · 1mo
FWIW, I've comprehensively done this in my moral anti-realism sequence [https://forum.effectivealtruism.org/posts/ZysrrTzMipZJor6EW/moral-anti-realism-introduction-and-summary] . In the post Moral Realism and Moral Uncertainty Are in Tension [https://forum.effectivealtruism.org/posts/SotZAFkGbgBEFBnQX/moral-uncertainty-and-moral-realism-are-in-tension] , I argue that you cannot be morally uncertain and a confident moral realist. Then, in The "Moral Uncertainty" Rabbit Hole, Fully Excavated [https://forum.effectivealtruism.org/posts/6STzb6XBAyu3Xxxka/the-moral-uncertainty-rabbit-hole-fully-excavated] , I explain how moral uncertainty works if it comes with metaethical uncertainty and I discuss wagers in favor of moral realism and conditions where they work and where they fail. (I posted the latter post on April 1st thinking people would find it a welcome distraction to read something serious next to all the silly posts, but it got hardly any views, sadly.) The post ends with a list of pros and cons for "good vs. bad reasons for deferring to (more) moral reflection." I'll link to that section here [https://forum.effectivealtruism.org/posts/6STzb6XBAyu3Xxxka/the-moral-uncertainty-rabbit-hole-fully-excavated#Selected_takeaways__good_vs__bad_reasons_for_deferring_to__more__moral_reflection] because it summarizes under which circumstances you can place zero or virtually zero credence in some view that other sophisticated reasoners consider appealing.
Guy Raveh · 2mo
About the non-identity problem: Arden Koehler wrote a review [https://forum.effectivealtruism.org/posts/AWGwNWnMiTxPDJY39/critical-summary-of-meacham-s-person-affecting-views-and] a while ago about a paper that attempts to solve it (and other problems) for person-affecting views. I don't remember if I read the review to the end, but the idea is interesting. About the correct way to deal with moral uncertainty: Compare with Richard Ngo's comment [https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates?commentId=WictxPHdTPrrqAaTD] on a recent thread, in a very different context.
On Deference and Yudkowsky's AI Risk Estimates

I'm confused by the fact Eliezer's post was posted on April Fool's day. To what extent does that contribute to conscious exaggeration on his part?

Guy Raveh · 2mo
Right? Up to reading this post, I was convinced it was an April Fool's post.
Questions to ask Will MacAskill about 'What We Owe The Future' for 80,000 Hours Podcast (possible new audio intro to longtermism)

My comment on your previous post should have been saved for this one. I copy the questions below:

  • What do you think is the best approach to achieving existential security and how confident are you on this?
  • Which chapter/part of "What We Owe The Future" do you think most deviates from the EA mainstream?
  • In what way(s) would you change the focus of the EA longtermist community if you could?
  • Do you think more EAs should be choosing careers focused on boosting economic growth/tech progress?
  • Would you rather see marginal EA resources go towards reducing specific exi
... (read more)
Longtermist slogans that need to be retired

Well I’d say that funding lead elimination isn’t longtermist, all other things equal. It sounds as if FTX’s motivation for funding it was community health / PR reasons, in which case it may have longtermist benefits through those channels.

Whether longtermists should be patient or not is a tricky, nuanced question which I am unsure about, but I would say I’m more open to patience than most.

Critiques of EA that I want to read

Broad longtermist interventions don't seem so robustly positive to me, in case the additional future capacity is used to do things that are in expectation bad or of deeply uncertain value according to person-affecting views, which is plausible if these views have relatively low representation in the future.

Fair enough. I shouldn't really have said these broad interventions are robust to person-affecting views because that is admittedly very unclear. I do find these broad interventions to be robustly positive overall though as I think we will get closer to ... (read more)

Critiques of EA that I want to read

AI safety's focus would probably shift significantly, too, and some of it may already be of questionable value on person-affecting views today. I'm not an expert here, though.

I've heard the claim that optimal approaches to AI safety may depend on one's ethical views, but I've never really seen a clear explanation of how or why. I'd like to see a write-up of this.

Granted I'm not as read up on AI safety as many, but I've always got the impression that the AI safety problem really is "how can we make sure AI is aligned to human interests?", which seems pretty ro... (read more)

MichaelStJules · 2mo
I would recommend CLR's and CRS's writeups for what more s-risk-focused work looks like:
https://longtermrisk.org/research-agenda
https://www.alignmentforum.org/posts/EzoCZjTdWTMgacKGS/clr-s-recent-work-on-multi-agent-systems
https://centerforreducingsuffering.org/open-research-questions/ (especially the section Agential s-risks)
Critiques of EA that I want to read

And, if there was a convincing version of a person-affecting view, it probably would change a fair amount of longtermist prioritization.

This is an interesting question in itself that I would love someone to explore in more detail. I don't think it's an obviously true statement. To give a few counterpoints:

  • People have justified work on x-risk only thinking about the effects an existential catastrophe would have on people alive today (see here, here and here).
  • The EA longtermist movement has a significant focus on AI risks which I think stands up to a person
... (read more)
MichaelStJules · 2mo
I think a person-affecting approach like the following is promising, and it and the others you've cited have received little attention in the EA community, perhaps in part because of their technical nature: https://globalprioritiesinstitute.org/teruji-thomas-the-asymmetry-uncertainty-and-the-long-term/

I wrote a short summary here: https://www.lesswrong.com/posts/Btqex9wYZmtPMnq9H/debating-myself-on-whether-extra-lives-lived-are-as-good-as?commentId=yidnhcNqLmSGCsoG9

Human extinction in particular is plausibly good or not very important relative to other things on asymmetric person-affecting views, especially animal-inclusive ones, so I think we would see extinction risk reduction relatively deemphasized. Of course, extinction is also plausibly very bad on these views, but the case for this is weaker without the astronomical waste argument.

AI safety's focus would probably shift significantly, too, and some of it may already be of questionable value on person-affecting views today. I'm not an expert here, though.

Broad longtermist interventions don't seem so robustly positive to me, in case the additional future capacity is used to do things that are in expectation bad or of deeply uncertain value according to person-affecting views, which is plausible if these views have relatively low representation in the future.
abrahamrowe · 2mo
Yeah those are fair - I guess it is slightly less clear to me that adopting a person-affecting view would impact intra-longtermist questions (though I suspect it would), but it seems more clear that person-affecting views impact prioritization between longtermist approaches and other approaches. Some quick things I imagine this could impact on the intra-longtermist side:

  • Prioritization between x-risks that cause only human extinction vs extinction of all/most life on earth (e.g. wild animals).
  • EV calculations become very different in general, and probably global priorities research / movement building become higher priority than x-risk reduction? But it depends on the x-risk.

Yeah, I'm not actually sure that a really convincing person-affecting view can be articulated. But I'd be excited to see someone with a strong understanding of the literature really try. I also would be interested in seeing someone compare the tradeoffs of non-person-affecting views vs person-affecting views. E.g. person-affecting views might entail X weirdness, but maybe X weirdness is better to accept than the repugnant conclusion, etc.
How to dissolve moral cluelessness about donating mosquito nets

Ok, although it’s probably worth noting that climate change is generally not considered to be an existential risk, so I’m not sure considerations of emissions/net zero are all that relevant here. I think population change is more relevant in terms of impacts on economic growth / tech stagnation, which in turn should have an impact on existential risk.

ben.smith · 2mo
Ord (2020) listed climate change as an x-risk. Though, on reflection, he may have said that 1/1000 was an absolute upper bound and he thought the actual risk was lower than that.

I have a hard time understanding stories not mediated through climate change or resource shortage (which seems closely linked to climate change, in that many resource limits boil down to carbon emissions) about how population growth in Africa could lead to higher existential risk--particularly in a context where global population seems like it will hit a peak and then decline sometime in the second half of the 21st century [https://www.nature.com/articles/d41586-021-02522-6]. Most of the pathways I can imagine would point to lower existential risk. If the starting point is that bednet distribution leads to lower existential risk, there isn't really a dilemma, and so that case seemed less interesting to analyse. So that's probably one reason I saw more value in starting my analysis with the climate change angle. However, there are probably causal possibilities I've missed. I'd be interested to hear what you think they might be. I do think someone should try to examine those more closely in order to try and put reasonable probabilistic bounds around them.

I certainly don't think the analysis above is complete. As I said in the post, the intent was to demonstrate how we could "dissolve" or reduce some moral cluelessness to ordinary probabilistic uncertainty using careful reason and evidence to evaluate possible causal pathways. I think the analysis above is a start and a demonstration that we can reduce uncertainty through reasoned analysis of evidence. But we'd definitely need a more extended analysis to act. Then, we can take an expected value approach to work out the likely benefit of our actions.
How to dissolve moral cluelessness about donating mosquito nets

To a donor who would like to save lives in the present without worsening the long-term future, however, we may just have reduced moral cluelessness enough for them to feel comfortable donating bednets.

I have to admit I find this slightly bizarre. Such a person would accept that we can improve/worsen the far future in expectation and that the future has moral value. At the same time, such a person wouldn't actually care about improving the far future; they would simply not want to worsen it. I struggle to understand the logic of such a view.

MichaelStJules · 2mo
They might not be willing to commit 100% to EV maximization no matter how low the probability of making a difference, but entertain EV maximization as one of multiple views over which they have decision-theoretic (normative) uncertainty. Then they want to ensure their actions look good across views they find plausible. That being said, I think it's the entire portfolio that matters and you would want to be robustly positive over the combined portfolio, not on each individual act in it. Also, they might think no far future-targeted option looks robustly positive in expectation.
How to dissolve moral cluelessness about donating mosquito nets

I appreciate this attempt - I do think trying to understand the impact of reduced mortality on population sizes is pretty key (considering this paper and this paper together implies that population size could be quite crucial for a longtermist perspective). I'm not quite sure you've given this specific point enough attention though. You seem to acknowledge that whilst population should increase in the short term, it could cause a population decline in several generations - but you don't really discuss how to weigh these two points against each other, ... (read more)

ben.smith · 2mo
You're right that I didn't discuss it much. Perhaps I should have. I have a head model that world per capita net GHG emissions will begin to decline at some point before 2050, and reach net zero some time between 2050 and 2100. The main relevance for population here was that higher population would increase emissions. But once the world reaches net zero per capita emissions, additional people might not produce more emissions. I think it's quite plausible that population decline due to economic growth induced in 2022 won't show up for a couple of generations--potentially after we reach net zero. So I didn't include it in the model. If I had done, we'd get a result more in favour of donating bednets.
What YouTube channels do you watch?

I guess you can easily collect answers to multiple questions through a form. You can also see correlations, e.g. whether people who watch a certain YT channel are also more likely to listen to a certain podcast. Plus, upvotes on the Forum can be strong/weak, which you may not want, and people may simply upvote existing options rather than adding new ones, biasing things towards whatever was put up early.

Longtermist slogans that need to be retired

I think the existence of investing for the future as a meta option to improve the far future essentially invalidates both of your points. Investing money in a long-term fund won’t hit diminishing returns anytime soon. I think of it as the “GiveDirectly of longtermism”.

Michael_Wiebe · 2mo
Do you think FTX funding lead elimination [https://forum.effectivealtruism.org/posts/vnhFqfPq3bjAXdZoP/michael_wiebe-s-shortform?commentId=QKHqarKX7odtm3Cav] is a mistake, and that they should do patient philanthropy instead?
Michael_Wiebe · 3mo
I'd be interested to see the details. What's the expected value of a rainy day fund, and what factors does it depend on?
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Certainly agree there is something weird there! 

Anyway I don't really think there was too much disagreement between us, but it was an interesting exchange nonetheless!

Should we buy coal mines?

I’ve read your overview and skimmed the rest. You say there will probably be better ways to limit coal production or consumption, but I was under the impression this wasn’t the main motivation for buying a coal mine. I thought the main motivation was to ensure we have the energy resources to be able to rebuild society in case we hit some sort of catastrophe. Limiting coal production and consumption was just an added bonus. Am I wrong?

EDIT: appreciate you do argue the coal may stay in the ground even if we don’t buy the mine which is very relevant to my question

EDIT2: just realised limiting consumption is important to preserve energy stores, but limiting production perhaps not

Max Clarke · 3mo
Buying coal mines to secure energy production post-global-catastrophe is a much more interesting question. Seems to me that buying coal, rather than mines, is a better idea in that case.
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?

A longtermist career spans decades, as would going vegan for life or donating regularly for decades. So it was mostly a temporal thing, trying to somewhat equalise the commitment associated with different altruistic choices.

but why should the locus of agency be the individual? Seems pretty arbitrary.

Hmm well aren't we all individuals making individual choices? So ultimately what is relevant to me is if my actions are fanatical?

If you agree that voting i

... (read more)
Rohin Shah · 3mo
We're all particular brain cognitions that only exist for ephemeral moments before our brains change and become a new cognition that is similar but not the same. (See also "What counts as death?" [https://www.cold-takes.com/what-counts-as-death/].) I coordinate both with the temporally-distant (i.e. future) brain cognitions that we typically call "me in the past/future" and with the spatially-distant brain cognitions that we typically call "other people". The temporally-distant cognitions are more similar to current-brain-cognition than the spatially-distant cognitions but it's fundamentally a quantitative difference, not a qualitative one.

By "fanatical" I want to talk about the thing that seems weird about Pascal's mugging and the thing that seems weird about spending your career searching for ways to create infinitely large baby universes, on the principle that it slightly increases the chance of infinite utility. If you agree there's something weird there and that longtermists don't generally reason using that weird thing and typically do some other thing instead, that's sufficient for my claim (b).
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

That's fair enough, although when it comes to voting I mainly do it for personal pleasure / so that I don't have to lie to people about having voted!

When it comes to something like donating to GiveWell charities on a regular basis / going vegan for life I think one can probably have greater than 50% belief they will genuinely save lives / avert suffering. Any single donation or choice to avoid meat will have far lower probability, but it seems fair to consider doing these things over a longer period of time as that is typically what people do (and what someone who chooses a longtermist career essentially does).

Rohin Shah · 3mo
Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate? Given that you seem to agree voting is fanatical, I'm guessing you want to consider the probability that an individual's actions are impactful, but why should the locus of agency be the individual? Seems pretty arbitrary. If you agree that voting is fanatical, do you also agree that activism is fanatical? The addition of a single activist is very unlikely to change the end result of the activism.
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Probabilities are on a continuum. It’s subjective at what point fanaticism starts. You can call those examples fanatical if you want to, but the probabilities of success in those examples are probably considerably higher than in the case of averting an existential catastrophe.

I think the probability that my personal actions avert an existential catastrophe is higher than the probability that my personal vote in the next US presidential election would change its outcome.

I think I'd plausibly say the same thing for my other examples; I'd have to think a bit more about the actual probabilities involved.

Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Hmm I do think it's fairly fanatical. To quote this summary:

For example, it might seem fanatical to spend $1 billion on ASI-alignment for the sake of a 1-in-100,000 chance of preventing a catastrophe, when one could instead use that money to help many people with near-certainty in the near-term.

The probability that any one longtermist's actions will actually prevent a catastrophe is very small. So I do think longtermist EAs are acting fairly fanatically.

Another way of thinking about it is that, whilst the probability of x-risk may be fairly high, the x-ris... (read more)
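To see why the quoted example reads as fanatical, here is a rough expected-value sketch with purely illustrative numbers (they are not estimates from the paper or from this thread): the tiny-probability option only wins once very large numbers of future lives are put on the scale, which is exactly the move the astronomical waste argument makes.

```python
# Purely illustrative numbers, not estimates from the paper or the thread.
budget = 1_000_000_000  # $1 billion, as in the quoted example

# Near-term option: assume (hypothetically) about $5,000 to save a life with near-certainty.
near_term_lives = budget / 5_000          # 200,000 lives saved

# Longtermist option: a 1-in-100,000 chance of preventing an existential catastrophe.
p_prevent = 1e-5
present_lives = 8e9                       # counting only people alive today
future_lives = 1e14                       # hypothetical stand-in for potential future lives

ev_present_only = p_prevent * present_lives   # 80,000 expected lives: loses to the near-term option
ev_with_future = p_prevent * future_lives     # 1,000,000,000 expected lives: dominates it

print(near_term_lives, ev_present_only, ev_with_future)
```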

Rohin Shah · 3mo
By this logic it seems like all sorts of ordinary things are fanatical:

1. Buying less chicken from the grocery store is fanatical (this only reduces the number of suffering chickens if your buying less chicken was the tipping point that caused the grocery store to order one less shipment of chicken, and that one fewer order was the tipping point that caused the factory farm to reduce the number of chickens it aimed to produce; this seems very low probability)
2. Donating small amounts to AMF is fanatical (it's very unlikely that your $25 causes AMF to do another distribution beyond what it would have otherwise done)
3. Voting is fanatical (the probability of any one vote swinging the outcome is very small)
4. Attending a particular lecture of a college course is fanatical (it's highly unlikely that missing that particular lecture will make a difference to e.g. your chance of getting the job you want).

Generally I think it's a bad move to take a collection of very similar actions and require that each individual action within the collection be reasonably likely to have an impact.

I don't know of anyone who (a) is actively working on reducing the probability of catastrophe and (b) thinks we only reduce the probability of catastrophe by 1-in-100,000 if we spend $1 billion on it. Maybe Eliezer Yudkowsky and Nate Soares, but probably not even them. The summary is speaking theoretically; I'm talking about what happens in practice.
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Yeah that's fair. As I said I'm not entirely sure on the motivation point. 

I think in practice EAs are quite fanatical, but only to a certain point. So they probably wouldn't give in to a Pascal's mugging, but many of them are willing to give to a long-term future fund over GiveWell charities - which is quite a bit of fanaticism! So justifying fanaticism still seems useful to me, even if EAs put their fingers in their ears with regards to the most extreme conclusion...

Rohin Shah · 3mo
It really doesn't seem fanatical to me to try to reduce the chance of everyone dying, when you have a specific mechanism by which everyone might die that doesn't seem all that unlikely! That's the right action according to all sorts of belief systems, not just longtermism! (See also these [https://forum.effectivealtruism.org/posts/rFpfW2ndHSX7ERWLH/simplify-ea-pitches-to-holy-shit-x-risk] posts [https://forum.effectivealtruism.org/posts/KDjEogAqWNTdddF9g/long-termism-vs-existential-risk] .)
To fund research, or not to fund research, that is the question

Hi Michael, thanks for your reply! I apologise I didn’t check with you before saying that you have ruled out research a priori. I will put a note to say that this is inaccurate. Prioritising based on self-reports of wellbeing does preclude funding research, but I’m glad to hear that you may be open to assessing research in the future.

Sorry to hear you struggled to follow my analysis. I think I may have overcomplicated things, but it did help me to work through things in my own head! I haven’t really looked at the literature on VOI.

In a nutshell my model... (read more)

Consider Changing Your Forum Username to Your Real Name

FYI you can contact the EA Forum team to get your profile hidden from search engines (see here).

Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Yes I disagree with b) although it's a nuanced disagreement.

I think the EA longtermist movement is currently choosing the actions that most increase probability of infinite utility, by reducing existential risk.

What I'm less sure of is that achieving infinite utility is the motivation for reducing existential risk. It might just be that achieving "incredibly high utility" is the motivation for reducing existential risk. I'm not too sure on this.

My point about the long reflection was that when we reach this period it will be easier to tell the fanatics from the non-fanatics.

Rohin Shah · 3mo
This is not in conflict with my claim (b). My claim (b) is about the motivation or reasoning by which actions are chosen. That's all I rely on for the inferences in claims (c) and (d). I think we're mostly in agreement here, except that perhaps I'm more confident that most longtermists are not (currently) motivated by "highest probability of infinite utility".
Consider Changing Your Forum Username to Your Real Name

I’ve reversed an earlier decision and have settled on using my real name. Wish me luck!

Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

I'm super excited for you to continue making these research summaries! I have previously written about how I want to see more accessible ways to understand important foundational research - you've definitely got a reader in me.

I also enjoy the video summaries. It would be great if GPI video and written summaries were made as standard. I appreciate it's a time commitment, but in theory there's quite a wide pool of people who could do the written summaries and I'm sure you could get funding to pay people to do them.

As a non-academic I don't think I can assis... (read more)

Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

you should be striving to increase the probability of making something like this happen. (Which, to be clear, could be the right thing to do! But it's not how longtermists tend to reason in practice.)

As you said in your previous comment we essentially are increasing the probability of these things happening by reducing x-risk. I'm not convinced we don't tend to reason fanatically in practice - after all Bostrom's astronomical waste argument motivates reducing x-risk by raising the possibility of achieving incredibly high levels of utility (in a footnote he... (read more)

Rohin Shah · 3mo
I'm not sure whether you are disagreeing with me or not. My claims are (a) accepting fanaticism implies choosing actions that most increase probability of infinite utility, (b) we are not currently choosing actions based on how much they increase probability of infinite utility, (c) therefore we do not currently accept fanaticism (though we might in the future), (d) given we don't accept fanaticism we should not use "fanaticism is fine" as an argument to persuade people of longtermism. Is there a specific claim there you disagree with? Or were you riffing off what I said to make other points?
Effective altruism’s odd attitude to mental health

I think the point Caleb is making is that your EAG London story doesn't necessarily show the tension that you think it does. And for what it's worth I'm sceptical this tension is very widespread.

Effective altruism’s odd attitude to mental health

I don't know for sure that we have prioritised mental health over other productivity interventions, although we may have. Effective Altruism Coaching doesn't have a sole mental health focus (also see here for the 2020 annual review), but I think that is just one person doing the coaching, so it may not be representative of wider productivity work in EA.

It's worth noting that it's plausible that mental health may be proportionally more of a problem within EA than outside, as EAs may worry more about the state of the world and if they're having impact etc. - which ma... (read more)

Effective altruism’s odd attitude to mental health

Pretty much this. I don’t think discussions on improving mental health in the EA community are motivated by improving wellbeing, but instead by allowing us to be as effective as a community as possible. Poor mental health is a huge drain on productivity.

If the focus on EA community mental health were based on direct wellbeing benefits I would be quite shocked. We’re a fairly small community and it’s likely to be far more cost-effective to improve the mental health of people living in lower-income countries (as HLI’s StrongMinds recommendation suggests).

BarryGrimes · 3mo
Has anyone done the analysis to determine the most cost-effective ways to increase the productivity of the EA community? It's not obvious to me that focussing on mental health would be the best option. If that is the case, I feel confused about the rationale for prioritising the mental health of EAs over other productivity interventions.
Fai · 3mo
Wow thank you! Very relevant!
My GWWC donations: Switching from long- to near-termist opportunities?

Sorry it’s not entirely clear to me if you think good longtermist giving opportunities have dried up, or if you think good opportunities remain but your concern is solely about the optics of giving to them.

On the optics point, I would note that you don’t have to give all of your donations to the same thing. If you’re worried about having to tell people about your giving to LTFF, you can also give a portion of your donations to global health (even if small), allowing you to tell them about that instead, or tell them about both.

You could even just give every... (read more)

Tom Gardiner · 4mo
To clarify, my position could be condensed to "I'm not convinced small scale longtermist donations are presently more impactful than neartermist ones, nor am I convinced of the reverse. Given this uncertainty, I am tempted to opt for neartermist donations to achieve better optics." The point you make seems very sensible. If I update strongly back towards longtermist giving I will likely do as you suggest.
How much current animal suffering does longtermism let us ignore?

I'm just making an observation that longtermists tend to be total utilitarians in which case they will want loads of beings in the future. They will want to use AI to help fulfill this purpose. 

Of course maybe in the long reflection we will think more about population ethics and decide total utilitarianism isn't right, or AI will decide this for us, in which case we may not work towards a huge future. But I happen to think total utilitarianism will win out, so I'm sceptical of this.

How much current animal suffering does longtermism let us ignore?

Am I missing something basic here?

No you're not missing anything that I can see. When OP says:

Does longtermism mean ignoring current suffering until the heat death of the universe?

I think they're really asking:

Does longtermism mean ignoring current suffering until near the heat death of the universe?

Certainly the closer an impartial altruist is to heat death the less forward-looking the altruist needs to be.

How much current animal suffering does longtermism let us ignore?

I agree with all of that. I was objecting to the implication that longtermists will necessarily reduce suffering. Also (although I'm unsure about this), I think that the EA longtermist community will increase expected suffering in the future, as it looks like they will look to maximise the number of beings in the universe.

Matthew_Barnett · 4mo
What I view as the Standard Model of Longtermism is something like the following:

  • At some point we will develop advanced AI capable of "running the show" for civilization on a high level
  • The values in our AI will determine, to a large extent, the shape of our future cosmic civilization
  • One possibility is that AI values will be alien. From a human perspective, this will either cause extinction or something equally bad.
  • To avoid that last possibility, we ought to figure out how to instill human-centered values in our machines.

This model doesn't predict that longtermists will make the future much larger than it otherwise would be. It just predicts that they'll make it look a bit different than it otherwise would. Of course, there are other existential risks that longtermists care about. Avoiding those will have the effect of making the future larger in expectation, but most longtermists seem to agree that non-AI x-risks are small by comparison to AI.
How much current animal suffering does longtermism let us ignore?

I upvoted OP because I think comparison to humans is a useful intuition pump, although I agree with most of your criticism here. One thing that surprised me was:

Obviously not? That means you never reduced suffering? What the heck was the point of all your longtermism?

Surprised to hear you say this. It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this as they expect this suffering to be swamped by increases in total welfare. Remember one of the founding texts of longtermism says we ... (read more)

Rohin Shah · 4mo
But at the time of the heat death of the universe, the future is not vast in expectation? Am I missing something basic here? (I'm ignoring weird stuff which I assume the OP was ignoring like acausal trade / multiverse cooperation, or infinitesimal probabilities of the universe suddenly turning infinite, or already being infinite such that there's never a true full heat death and there's always some pocket of low entropy somewhere, or believing that the universe's initial state was selected such that at heat death you'll transition to a new low-entropy state from which the universe starts again.)

Oh, yes, that's plausible; just making a larger future will tend to increase the total amount of suffering (and the total amount of happiness), and this would be a bad trade in the eyes of a negative utilitarian.

In the context of the OP, I think that section was supposed to mean that longtermism would mean ignoring current utility until the heat death of the universe -- the obvious axis of difference is long-term vs current, not happiness vs suffering (for example, you can have longtermist negative utilitarians). I was responding to that interpretation of the point, and accidentally said a technically false thing in response. Will edit.
Matthew_Barnett · 4mo
I have an issue with your statement that longtermists neglect suffering, because they just maximize total (symmetric) welfare. I think this statement isn't actually true, though I agree if you just mean pragmatically, most longtermists aren't suffering focused.

Hilary Greaves and William MacAskill loosely define [https://globalprioritiesinstitute.org/wp-content/uploads/The-Case-for-Strong-Longtermism-GPI-Working-Paper-June-2021-2-2.pdf] strong longtermism as, "the view that impact on the far future is the most important feature of our actions today." Longtermism is therefore completely agnostic about whether you're a suffering-focused altruist, or a traditional welfarist in line with Jeremy Bentham. It's entirely consistent to prefer to minimize suffering over the long-run future, and be a longtermist. Or put another way, there are no major axiological commitments involved with being a longtermist, other than the view that we should treat value in the far future similarly to the way we treat value in the near future.

Of course, in practice, longtermists are more likely to advocate a Benthamite utility function than a standard negative utilitarian. But it's still completely consistent to be a negative utilitarian and a longtermist, and in fact I consider myself one.
How much current animal suffering does longtermism let us ignore?

However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal suffering opportunity cost almost entirely.

I'm not sure how you come to this conclusion, or even what it would mean to "disregard the opportunity cost". 

Longtermist EAs generally know their money could go towards reducing animal suffering and do good. They know and generally acknowledge that there is an opportunity cost of giving to longtermist causes. They simply think their money could do the most good if given to longtermist causes.

How much current animal suffering does longtermism let us ignore?

even though I just about entirely buy the longtermist thesis

If you buy into the longtermist thesis why are you privileging the opportunity cost of giving to longtermist causes and not the opportunity cost of giving to animal welfare?

Are you simply saying you think the marginal value of more money to animal welfare is greater than to longtermist causes?

Aaron Bergman · 4mo
I'm not intending to, although it's possible I'm using the term "opportunity cost" incorrectly or in a different way than you. The opportunity cost of giving a dollar to animal welfare is indeed whatever that dollar could have bought in the longtermist space (or whatever else you think is the next best option). However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal suffering opportunity cost almost entirely. Surely the same error is committed in the opposite direction by hardcore animal advocates, but the asymmetry comes from the fact that this latter group controls a way smaller share of the financial pie.
Michael_Wiebe · 4mo
Note that with diminishing returns, marginal utility per dollar (MU/$) is a function of the level of spending. So it could be the case that the MU/$ for the next $1M to Faunalytics is really high, but drops off above $1M. So I would rephrase your question as:

> do you think the marginal value of more money to animal welfare right now is greater than to longtermist causes?
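A small sketch of the diminishing-returns point (a toy model with made-up numbers, not Michael_Wiebe's): if total utility grows logarithmically with spending, the marginal utility of the next dollar falls as cumulative spending rises, which is why MU/$ has to be evaluated at a given funding level.

```python
# Toy diminishing-returns model (hypothetical functional form and numbers):
# total utility from spending s dollars is log(1 + s / scale),
# so MU/$ at spending level s is its derivative, 1 / (scale + s).

def marginal_utility_per_dollar(spending: float, scale: float = 1e6) -> float:
    return 1.0 / (scale + spending)

for s in (0, 1e6, 1e7):
    print(f"MU/$ after ${s:,.0f} already spent: {marginal_utility_per_dollar(s):.2e}")
# The next dollar is worth ~2x less once $1M has been spent, and ~11x less after $10M.
```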
How much current animal suffering does longtermism let us ignore?

Thanks for writing this! I like the analogy to humans. I did something like this recently with respect to dietary choice. My thought experiment specified that these humans had to be mentally-challenged so that they have similar capacities for welfare to non-human animals, which isn’t something you have done here, but I think it is probably important. I do note that you have been conservative in terms of the number of humans, however.

Your analogy has given me pause for thought!

How much current animal suffering does longtermism let us ignore?

There's a crude inductive argument that the future will always outweigh the present, in which case we could end up like Aesop's miser, always saving for the future until eventually we die.

I would just note that, if this happens, we’ve done longtermism very badly. Remember longtermism is (usually) motivated by maximising expected undiscounted welfare over the rest of time.

Right now, longtermists think they are improving the far future in expectation. When we actually get to this far future it should (in expectation) be better than it otherwise would ha... (read more)

Jacob Eliosoff · 3mo
Yeah, this wasn't my strongest/most serious argument here. See my response to @Lukas_Gloor.
Can we agree on a better name than 'near-termist'? "Not-longtermist"? "Not-full-longtermist"?

Yeah I think that’s true if you only have the term “longtermist”. If you have both “longtermist” and “non-longtermist” I’m not so sure.

david_reinstein · 4mo
maybe we just say "not longtermist" rather than trying to make "non-longtermist" a label? Either way, I think we can agree to get rid of 'neartermist'.
Can we agree on a better name than 'near-termist'? "Not-longtermist"? "Not-full-longtermist"?

I don’t think it’s negative either although, as has been pointed out, many interpret it as meaning that one has a high discount rate, which can be misleading.
