
These are some initial reflections, written quickly, mostly to elicit discussion (before potentially diving deeper into the topic).

Abstract/Introduction

Longtermists aim to have an overall positive impact on the long-term future, which requires them to form beliefs about what actions would do more good than harm in this regard. One could question the reliability of such beliefs by defending the following evolutionary argument for cluelessness:

  • P1: One’s beliefs about what actions positively influence the long-term future (in expectation) decisively rely on judgment calls (by which I mean intuitions that are neither clearly supported nor clearly undermined by the available evidence).
  • P2: There has been no evolutionary pressure (not even indirect) against individuals unable to make the correct judgment calls regarding what actions do more good than harm (in expectation) considering their far future effects.
  • Conclusion: Having non-agnostic beliefs about what actions longtermism recommends is unwarranted.

Assuming no misunderstanding of what I mean, I expect everyone to agree that the conclusion logically follows from these two premises.[1] However, I expect skepticism towards P1 and especially P2, and I am interested in people giving me compelling objections to these in the comments. My aim in the rest of the post is to help commenters do this by first addressing objections that I think are bad, or unconvincing unless accompanied by further justification that I couldn't (yet) really come up with myself.

Objections to P1

Recall P1: One’s beliefs about what actions positively influence the long-term future (in expectation) decisively rely on judgment calls (by which I mean intuitions that are neither clearly supported nor clearly undermined by the available evidence).

  “Surely there’s at least the longtermist argument for X-risk reduction that is robustly justified without judgment call.”

Here’s a list of crucial factors you have to consider when assessing whether X-risk reduction is good from a (full)[2] longtermist perspective:

  • what is (dis)valuable and what kind of value ethically compensates for what kind of disvalue
  • what animals are moral subjects and to what extent
  • whether and/or when humanity will stop farming (some of) the animals with miserable lives we would not wish to live
  • whether we will spread wildlife to other celestial bodies and how much
  • the overall welfare of animals living in the wild
  • whether digital sentience is possible and practically feasible and how its development goes if it is
  • whether humanity colonizes outer space, how much, and the actors involved
  • how multipolar the development of transformative artificial intelligence will be and the values of the expected stakeholders
  • the likelihood of a conflict or malevolent actors causing astronomical amounts of suffering
  • the probability of another civilization ever taking over humanity if it goes extinct or gets disempowered and when this would happen
  • the overall impact of potential inter-civilizational conflict
  • the gains from potential peaceful inter-civilizational trade
  • how all the above factors interact with one another
  • what to make of unknown unknowns
  • how to weigh all these factors against one another to reach a final verdict.

I believe it is uncontroversial that you cannot estimate how these crucial factors should influence your overall credence without making judgment calls. You can very well argue that we should trust our judgment calls on these questions or those of whatever longtermist(s) you think make the correct calls. But you can’t say they’re not judgment calls. You can’t say that any rational agent with similar intellectual capacities to yours, and exposed to the same evidence as you, should end up with the same verdict on whether X-risk reduction does more good than harm in the long run. Unless you make the move I address next?

  “What about the option value argument for X-risk reduction? It seems more robust.”

Here are the assumptions you need to make for the argument to pan out:

  • Whoever the stakeholders in the future are, they will be more confident than we are in their beliefs regarding whether X-risk reduction in their time is overall good OR overall bad.
  • Their overall verdict will be the correct one.
  • If these same descendants correctly find that X-risk reduction is overall good, we would have had (by reducing X-risks in our times) a positive impact greater than the negative one we would have had in the scenario where they correctly find that X-risk reduction is overall bad.[3]

I think it’s clear there are at least a few judgment calls needed to back these three assumptions. Unless you make the following assumption?

  “The good and bad consequences of an action we can’t estimate without judgment calls cancel each other out, such that judgment calls are unnecessary”

This sort of assumption is so common that it is known as “the cancelation postulate” in the cluelessness literature. However, it has been rejected by every single academic paper assessing this postulate that I’m aware of (see the references cited in the rest of this section). There are two different ways some authors have tried to justify it.

The first is to appeal straightforwardly to a principle of indifference (POI from here on), which “states that in the absence of any relevant evidence, a rational agent will distribute their credence (or `degrees of belief') equally amongst all the possible outcomes under consideration” (Eva 2019). In the context of the cancelation postulate, such an appeal has been made and endorsed implicitly by Kagan (1998, p. 64), Dorsey (2012), and Cowen (2006), as well as more explicitly by Burch-Brown (2014) and Greaves (2016, §III), although only in situations Greaves estimates to be cases of what she calls “simple cluelessness”. However, at least in cases of what Greaves calls “complex cluelessness”, POI seems to lose its potential appeal entirely (Burch-Brown 2014; Greaves 2016, §V; Yim 2019; Thorstad & Mogensen 2020; Mogensen 2021; Williamson 2022, Chapter 1; Tarsney et al. 2024, §3).

For instance, when wondering whether the civilization that might take over ours would affect overall welfare more or less positively than humanity, we are not in the total absence of relevant evidence. There are systematic reasons to believe such a new civilization would do better[4] and systematic reasons to believe it would do worse[5]. And it seems exceedingly likely that some of these systematic reasons have not even occurred to us, compounding our cluelessness problem with that of unawareness (Roussos 2021; Tarsney et al. 2024, §3; Bostrom 2007, pp. 23–24). The problem is not that reasons pointing one way or the other do not exist, but that we are incapable of sensibly weighing these reasons against one another.[6] Applying POI here would require us to have a determinate 50% credence in the proposition that the civilization that might take over ours would affect overall welfare more positively than humanity, the same way we hold a 50% credence in a fair coin landing on heads after being tossed. Such an application of POI seems misguided.[7] In the fair-coin-toss case, if we were to bet money, we could genuinely and rationally expect to win half of the time. In the counterfactual-civilization case, we have no clue whether we would win in expected value. The expectation we genuinely (should) have is not 50%. Rather, it is indeterminate.
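
To make this contrast concrete, here is a minimal toy sketch in Python (my own illustration with made-up numbers, not an example from the cluelessness literature). Under a determinate 50% credence, the expected value of a symmetric bet is pinned down at zero; under an imprecise credence represented by an interval of probabilities, the "expected value" is itself an interval whose sign is indeterminate.

```python
def expected_value(p_win: float, payoff_win: float, payoff_lose: float) -> float:
    """Expected value of a bet under a single, precise credence p_win."""
    return p_win * payoff_win + (1 - p_win) * payoff_lose

# Fair coin: a determinate 50% credence pins the expectation down.
print(expected_value(0.5, +1.0, -1.0))  # 0.0 -- we genuinely expect to break even

# Counterfactual-civilization "bet": suppose all we can honestly commit to is
# that our credence lies somewhere in a wide interval (made-up bounds, purely
# for illustration).
lower, upper = 0.2, 0.8
ev_bounds = (expected_value(lower, +1.0, -1.0), expected_value(upper, +1.0, -1.0))
print(ev_bounds)  # roughly (-0.6, 0.6) -- the sign of the expectation is indeterminate
```

The point of the sketch is only that, when the credence itself is indeterminate, there is no single expected value to act on; that is what the imprecise-credence reading of complex cluelessness amounts to.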

The second way some have tried to justify the cancelation postulate is by appealing to an extrapolation from the parameters that are estimable without judgment calls. While these authors might concede that POI cannot straightforwardly justify this postulate, they claim that the parameters we can easily estimate are meaningfully representative of those we can’t, such that neglecting the latter makes no practical difference. For example, Brian Tomasik (2015) writes:

That we can anticipate something about UUs [(unknown unknowns)] despite not knowing what they will be can be seen more clearly in a case where the current UUs are more lopsided. For example, suppose the action under consideration is "start fights with random people on the street". While this probably has a few considerations in its favor, almost all of the crucial considerations that one could think of argue against the idea, suggesting that most new UUs will point against it as well.

In Tomasik’s example, the problem lies in the gap between (a) “almost all of the crucial considerations that one could think of argue against the idea” and (b) the claim that this “suggest[s] that most new UUs will point against it as well.” How does (a) suggest (b)? Why should we assume that the considerations that happen to have occurred to us, and that we happen to be able to estimate without judgment calls, are representative of those we can’t? This would be an incredibly fortunate coincidence. Surely there are systematic biases in the crucial considerations we uncover and can estimate. There probably is something unusual about the parameters we can easily estimate compared to those we can’t. And while we do not know in which direction the bias goes, applying POI still seems unwarranted here. There surely are systematic reasons why the bias would go one way and systematic reasons why it would go the other.[8] We have no idea how to weigh such reasons against one another, just as earlier when we were wondering whether to straightforwardly apply POI to our beliefs regarding the civilization that might take over humanity if it goes extinct or gets disempowered. This does not mean that one must be fine with “starting fights with random people on the street". It does mean, however, that if this is not fine, it cannot be because the good and bad inestimable indirect consequences of doing so “cancel out”. The commonsense intuition that starting such fights would be bad has, in particular, nothing to do with estimates of the overall consequences this would have from now until the end of time. Hence, such common sense cannot be directly used as evidence that not starting fights with random people on the street must be good from a purely impartial consequentialist perspective.

Objections to P2

Recall P2: There has been no evolutionary pressure (not even indirect) against individuals unable to make the correct judgment calls regarding what actions do more good than harm (in expectation) considering their far future effects.

  “The correct judgment calls on how to predictably influence the future have an evolutionary advantage since they lead to lasting influence.”

Sure, but P2 doesn’t imply that we cannot predictably influence the far future. Rather, it implies that we cannot predictably influence the far future in a way that makes its trajectory morally more desirable. Of course, those who know how to influence the far future will influence it more. But this doesn’t mean they will influence it positively.

  “Correct longtermist judgment calls are a systematic by-product of evolutionarily advantageous traits that were directly selected for.”[9]

For this objection to hold, we need to suppose that there is a systematic correlation between the traits that favor survival and reproductive success and the quality of one’s judgment calls on how to positively influence the long-term future. This appears dubious to me. That would be quite a lucky coincidence.

One may say: “Well, our intuitions regarding which longtermist judgment calls are correct must come from somewhere. Why would we have them if they don’t track the truth? Why wouldn’t our intuitions be silent on tricky questions relevant to how to positively influence the long-term future?” I have two responses:

  1. I don’t need to know where longtermists’ judgment calls come from to tentatively suppose that they don’t come from a selection pressure towards correct longtermist judgment calls. This is for much the same reason that I don’t need to know why people have historically pictured aliens as small green creatures in order to tentatively suppose that correct intuitions about what aliens look like are not a trait that has been selected for in humans.
  2. There are many alternative explanations as to why we have intuitions about which longtermist judgment calls are correct that don’t require assuming evolution selected for the correct ones. Examples include:
    1. Misgeneralization due to a distribution shift in our environment.
    2. Some random glitch of evolution.
    3. An evolutionary pressure towards longtermist judgment calls that simply made people more likely to want to survive and reproduce in the present and near future (e.g., optimism about the long-term future of humanity).[10]

  “Assessing P2 itself unavoidably requires questionable judgment calls, making P2 self-defeating”

Assessing P2 requires the ability to understand some basic logical implications of the theory of evolution. To the extent that there are “judgment calls” involved in one’s evaluation of P2, these are the judgment calls behind the very laws of logic. It makes perfect sense that evolution favored (whether directly or not) the ability to discover this kind of thing (FitzPatrick 2021, §4.1). Historically, you die if you’re not able to reason logically at a basic level. You don’t die if you don’t have the correct beliefs about whether your actions do more good than harm considering all their overall effects on the far future.
 

But well, what do you think?

References

Bostrom, Nick. 2007. “Technological Revolutions: Ethics and Policy in the Dark.” In Nanoscale, 129–52. John Wiley & Sons, Ltd. https://doi.org/10.1002/9780470165874.ch10.

Bradley, Richard. 2017. Decision Theory with a Human Face. Cambridge: Cambridge University Press. https://doi.org/10.1017/9780511760105.

Burch-Brown, Joanna M. 2014. “Clues for Consequentialists.” Utilitas 26 (1): 105–19. https://doi.org/10.1017/S0953820813000289.

Cowen, Tyler. 2006. “The Epistemic Problem Does Not Refute Consequentialism.” Utilitas 18 (4): 383–99. https://doi.org/10.1017/S0953820806002172.

Dorsey, Dale. 2012. “Consequentialism, Metaphysical Realism and the Argument From Cluelessness.” Philosophical Quarterly 62 (246): 48–70. https://doi.org/10.1111/j.1467-9213.2011.713.x.

Eva, Benjamin. 2019. “Principles of Indifference.” Journal of Philosophy 116 (7): 390–411. https://doi.org/10.5840/jphil2019116724.

FitzPatrick, William. 2021. “Morality and Evolutionary Biology.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Spring 2021. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2021/entries/morality-biology/.

Greaves, Hilary. 2016. “XIV—Cluelessness.” Proceedings of the Aristotelian Society 116 (3): 311–39. https://doi.org/10.1093/arisoc/aow018.

Kagan, Shelly. 1998. Normative Ethics. Routledge.

Mogensen, Andreas L. 2021. “Maximal Cluelessness.” The Philosophical Quarterly 71 (1): 141–62. https://doi.org/10.1093/pq/pqaa021.

Roussos, Joe. 2021. “Unawareness for Longtermists.” https://joeroussos.org/wp-content/uploads/2021/11/210624-Roussos-GPI-Unawareness-and-longtermism.pdf.

Schwitzgebel, Eric. 2024. “The Washout Argument Against Longtermism.” Accessed March 3, 2025. https://faculty.ucr.edu/~eschwitz/SchwitzAbs/WashoutLongtermism.htm.

Tarsney, Christian. 2023. “The Epistemic Challenge to Longtermism.” Synthese 201 (6): 195. https://doi.org/10.1007/s11229-023-04153-y.

Tarsney, Christian, Teruji Thomas, and William MacAskill. 2024. “Moral Decision-Making Under Uncertainty.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman, Spring 2024. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2024/entries/moral-decision-uncertainty/.

Thorstad, David. Forthcoming. “The Scope of Longtermism.” Australasian Journal of Philosophy.

Thorstad, David, and Andreas Mogensen. 2020. “Heuristics for Clueless Agents: How to Get Away with Ignoring What Matters Most in Ordinary Decision-Making.” https://globalprioritiesinstitute.org/david-thorstad-and-andreas-mogensen-heuristics-for-clueless-agents-how-to-get-away-with-ignoring-what-matters-most-in-ordinary-decision-making/.

Tomasik, Brian. 2015. “Charity Cost-Effectiveness in an Uncertain World.” Center on Long-Term Risk (blog). August 29, 2015. https://longtermrisk.org/charity-cost-effectiveness-in-an-uncertain-world/.

Williamson, Patrick. 2022. “On Cluelessness.” https://doi.org/10.25911/ZWK2-T508.

Yim, Lok Lam. 2019. “The Cluelessness Objection Revisited.” Proceedings of the Aristotelian Society 119 (3): 321–24. https://doi.org/10.1093/arisoc/aoz016.

  1. ^

    [Edit a few hours after posting] This implicitly assumes a few other (uncontroversial) premises, some of which are fleshed out by Michael St. Jules in this comment. Thanks to him for that.

  2. ^

     Most of them apply even if you endorse a bounded version of longtermism where you ignore what happens beyond a certain number of years (see, e.g., Tarsney 2023; Schwitzgebel 2024).

  3. ^

     This requires, among other things, taking into account their capacity to try to “make up for our mistake” by increasing X-risks.

  4. ^

     E.g., the alien civilization that might eventually take over ours is surely more advanced and “wiser” in many respects, which is – all else equal – one reason to believe it would be a better civilization when it comes to its impact on overall welfare, under some additional assumptions.

  5. ^

     E.g., the alien civilization that might eventually take over ours is likely to hold values unusually conducive to effective space colonization, for obvious selection pressure reasons (see my post The Grabby Values Selection Thesis). Concern for the suffering one might incidentally and/or indirectly cause when colonizing is not evolutionarily adaptive in such a context, all else equal (see my post Why we may expect our successors not to care about suffering), which, under some additional assumptions, is one reason to believe such an alien civilization would be worse than ours, welfare-wise.

  6. ^

     Interestingly, Thorstad & Mogensen (2020) write: “The problem is not that we can say nothing about the potential future effects of our actions. Quite the opposite. There is often simply too much that we can say. Even the simplest among us can list a great number of potential future effects of our actions and produce some considerations bearing on their likelihoods[.]”. Relatedly, David Thorstad (Forthcoming) suggests that “[in long-range forecasting territory,] we are often in a situation of evidential paucity: although we have some new evidence bearing on long-term values, often our evidence is quite weak and undiagnostic.”

  7. ^

     Whether it would be similarly misguided to have a 50% credence that an unfair coin toss with unknown bias will land on heads is controversial (see, e.g., Bradley 2017, §13). Such a case, however, seems to match Greaves’ descriptions of examples of “simple cluelessness” and is irrelevant to the subject at hand. Greaves (2016, §V) writes: “There is an obvious and natural symmetry between the thoughts that (i) it’s possible that moving my hand to the left might disturb air molecules in a way that sets off a chain reaction leading to an additional hurricane in Bangladesh, which in turn renders many people homeless, which in turn sparks a political uprising, which in turn leads to widespread and beneficial democratic reforms, ..., and (ii) it’s possible that refraining from moving my hand to the left has all those effects. But there is no such natural symmetry between, for instance, the arguments for the claim that the world is overpopulated and those for the claim that it’s underpopulated [...] And, in contrast to the above relatively optimistic verdict on the Principle of Indifference, clearly there is no remotely plausible epistemic principle mandating equal credences in p and not-p whenever arguments for and against p are inconclusive.” Joanna Burch-Brown (2014) also makes an appealing case endorsing a similar distinction.

  8. ^

     For example, our commonsensical intuitions precluding starting fights for no reason may make us more likely to discover reasons why this would be bad from an impartial consequentialist perspective than reasons why this would be good. On the other hand, this commonsensical intuition may be due to our becoming aware of these reasons rather than the other way around, and it may be that we are actually biased, due to tribal instincts, towards uncovering reasons to believe starting fights for no reason would have good overall consequences. Also, here again, it would be unwise to assume we are aware of all the reasons we must weigh against each other (Roussos 2021; Tarsney et al. 2024, §3; Bostrom 2007, pp. 23–24).

  9. ^

     This resembles some of the most popular objections to the general evolutionary debunking argument in metaethics (see, e.g., FitzPatrick 2021, §4.1).

  10. ^

     Indeed, maybe there is a selective evolutionary debunking argument (see FitzPatrick 2021, §4.2) to be made against some common beliefs among longtermists? I might propose such an argument and assess whether it pans out, and how strong/weak it is if it does, in a future essay.

Comments



Interesting, I think I would expect more objections to P1 than to P2. P2 seems pretty solid to me.

For P1, I agree X-risk being bad is not as trivial to show as most of us might naively think. But maybe there are other interventions that are more robustly good in expectation (or at least robustly slightly more than 50% likely to be good). E.g., what about these sorts of interventions, which do not try to make claims about what the long-term future should be like, but rather try to improve civilisational wisdom:

  • Find the most altruistic person you know, and direct their attention towards crucial considerations about moral patienthood, population ethics, decision theory etc.
    • The effect size is probably pretty small, but having altruistic people learn more about longtermism seems good in expectation.
  • Find the most competent/powerful/intelligent person you know and try to make them more altruistic (especially regarding the far future).
    • Again, maybe not very tractable, but all else equal it seems better for agents to value reducing suffering and so forth.
  • Something to do with improved institutional decision-making or making humans more cooperative and pro-social generally?
    • Has fuzzy consequences, but seems positive in most worlds.

Nice, thanks for bringing this up! Let's take this example of yours

> Find the most altruistic person you know, and direct their attention towards crucial considerations about moral patienthood, population ethics, decision theory etc. The effect size is probably pretty small, but having altruistic people learn more about longtermism seems good in expectation.

Say Alice (someone with similar intellectual capacities to yours and equivalent GPR knowledge) comes and tells you she thinks that "having altruistic people learn more about longtermism seems (slightly) bad in expectation" (and she gives plausible reasons -- I'll intentionally not give examples to avoid anchoring us on a particular one). The two of you talk and agree on all the relevant factors to consider here but reach different verdicts when weighing the reasons to believe preaching longtermism to altruists is good and those to believe the opposite. How would you explain your disagreement if not by the fact that the two of you have different intuitions and make different judgment calls?

I think a fair bit might come down to what we mean by 'judgement calls'.

Let's take an example of predicting who would win the US 2024 presidential election. Reasonable, well-informed people can and did disagree about what the fair market price for such prediction contracts was. There are many important reasons on either side. If two people were perfect rationalist Bayesians, they would pool their collective evidence (including hard-to-explain intuitions) and both end up with the same joint probability estimate.

So to take it back to your example, maybe Alice and I are both reasonable people and, after discussing thoroughly, both update towards each other. But I don't see why we would need to end up at 50%. I suppose if by judgement call we mean 'there is room for reasonable disagreement' then I agree with you, but if we mean the far stronger 'rational predictors should be at 50% on the question', that seems unwarranted. And it seems to me that for cluelessness to bind, we need the strong 50% version, as otherwise we can just act on the balance of probabilities, while also trying to gain more relevant information.

Let's imprecisely interpret judgment calls as "hard-to-explain intuitions" as you wrote, for simplicity. I think that's enough, here.

For the US 2024 presidential election, there are definitely such judgment calls involved. If one tries to make an evolutionary argument undermining our ability to predict the US 2024 presidential election, P1 holds. P2 visibly doesn't, however, at least for some good predictors: there is empirical evidence against P2. And presumably, the reason P2 doesn't hold is that people with decent hard-to-explain intuitions vis-a-vis "where the wind blows" in such socio-political contexts survived better. The same can't be said (at least, not obviously) for forecasting whether making altruistic people more longtermist does more good than harm, considering all the consequences on everything from now until the end of time.

> But I don't see why we would need to end up at 50%

Say you say 53% and Alice says 45%. The two of you can give me all the arguments you want. At the end of the day, you both undeniably made judgment calls when weighing the reasons to believe making altruistic people more longtermist does more good than harm, all things considered, against the reasons to believe the opposite (including reasons, in both cases, that have to do with aliens, acausal reasoning, and how to deal with crucial unknown unknowns). I don't see why I should trust either of your two different judgment-cally "best guesses" more than the other.

In fact, if I can't find a good objection to P2, I have no good reason to trust either of your best guesses more than a dart-throwing chimp's. If I had an opinion on the (dis)value of making altruistic people more longtermist without having a good reason to reject P2, I'd be blatantly inconsistent. [1]

Do you agree now that we've hopefully clarified what is a judgment call and what isn't, here? (I think P2 is definitely the crux for whether we should be clueless. Defending that we can identify positive longtermist causes without resorting to any sort of hard-to-explain intuitions seems really untenable. And I think there may be better objections to P2 than the ones I address in the post.)


[1] Btw, a bit tangential, but a key popular assumption/finding in the literature on decision-making under deep uncertainty is that "not having an opinion" or "suspending judgment" =/= 50% credence (see this post from DiGiovanni for a nice overview).

So if we take as given that I am at 53% and Alice is at 45% that gives me some reason to do longtermist outreach, and gives Alice some reason to try to stop me, perhaps by making moral trades with me that get more of what we both value. In this case, cluelessness doesn't bite as Alice and I are still taking action towards our longtermist ends.

However, I think what you are claiming, or at least the version of your position that makes most sense to me, is that both Alice and I would be committing a failure of reasoning if we assigned these specific credences, and that we should both be 'suspending judgement'. And if I grant that, then yes, it seems cluelessness bites, as neither Alice nor I know at all what to do now.

So it seems to come down to whether we should be precise Bayesians.

Re judgment calls, yes, I think that makes sense, though I'm not sure it is such a useful category. I would think there is just a spectrum of arguments/pieces of evidence, from 'very well empirically grounded and justified' through 'we have some moderate reason to think so' to 'we have roughly no idea', and what we are labeling judgement calls sits towards the far right of this spectrum. But surely there isn't a clear cut-off point.

FWIW, I don't think P1 and P2 together logically imply the conclusion as stated. I think you're probably leaving out some unstated premises (that might be uncontroversial, but should be checked).

For example, could anything other than evolutionary pressures (direct or indirect) work "against individuals unable to make the correct judgment calls regarding what actions do more good than harm (in expectation) considering how these impact the far future"?

Now you might say no, because our judgement calls are outputs of systems built up through evolution.

But I think an additional premise should capture that. It is not a tautology, but (possibly) an empirical fact.

 

Another: does making correct enough judgement calls to have warranted beliefs (in humans) about something require any (past) pressure against incorrect judgement calls (about those things in particular, or in domains from which there is enough generalization to the particular things)?

I think you'd say yes, but this is also an empirical claim, not a tautology.

> could anything other than evolutionary pressures (direct or indirect) work "against individuals unable to make the correct judgment calls regarding what actions do more good than harm (in expectation) considering how these impact the far future"?

Fair! One could say it's not evolution but God or something that gave us such ability (or the ability to know we have such ability although for unknown reasons).

> Another: does making correct enough judgement calls to have warranted beliefs (in humans) about something require any (past) pressure against incorrect judgement calls (about those things in particular, or in domains from which there is generalization)?

I don't understand how this differs from your first example. Can you think of a way one could argue for the negative on this? That'd probably help me spot the difference.

The second one is more about the grounds for justification (having warranted beliefs). Maybe, for the resulting beliefs to be warranted, judgement calls don't need to tend to be correct, and there doesn't need to be the right kind of fit towards calibration. Maybe just the fact that something seems a certain way, e.g. even a direct intuition about highly speculative things like the far future effects of interventions, can justify belief.

EDIT: This could be consistent with phenomenal conservatism.

Like maybe your beliefs don't need to track the truth better than random to be warranted? Fair. I was indeed implicitly assuming otherwise.

Yes, or we don't need to have any specific reason to believe they do better than random. I think this could be consistent with phenomenal conservatism.
