Bio

Participation
4

How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there, but you guess I might be underrating them?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering and paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.
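
As a rough check on that figure, here is a minimal sketch; the global GDP per capita value and the annual working hours below are my assumptions, not stated above:

```python
# Back-of-the-envelope check of the 20 $/h figure (assumed inputs).
gdp_per_capita = 20e3       # assumed global real GDP per capita, $/year
work_hours_per_year = 2e3   # assumed full-time working hours per year

hourly = 2 * gdp_per_capita / work_hours_per_year
print(f"2 times GDP per capita as an hourly rate: {hourly:.0f} $/h")  # 20 $/h
```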

Comments
1364

Topic contributions
25

Thanks for the comment, Kaleem.

In a nutshell, both globally and in China:

  • Farmed cows and pigs (and other mammals) account for a tiny fraction of the disability of the farmed animals I analysed.
  • The annual disability of farmed animals is much larger than that of humans, even under the arguably very optimistic assumption of all farmed animals having neutral lives.
  • The annual funding helping farmed animals is much smaller than that helping humans.

I think the 1st point holds for most countries (not only China), and the 2nd and 3rd for basically all countries. I could have a title like "Farmed animals are neglected, both globally and in China". However, I think this could be read as farmed animal welfare being more neglected relative to its scale in China than in other countries. I believe this is true[1], but it is not necessarily implied by the points above.

  1. ^

    I estimate China accounts for 17.1 % of the disability of farmed animals, but only 2.20 % of the philanthropic spending. I suppose one could argue these numbers are interesting, and the title could reflect them, but I am wary of communicating that more funding should go to China without having looked into the respective cost-effectiveness (i.e. not only scale and neglectedness, but also tractability, which is arguably lower in China).
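
For concreteness, a minimal sketch of the scale-to-funding comparison implied by those two figures (my framing; as the footnote notes, it says nothing about tractability):

```python
# China's share of the disability of farmed animals vs its share of the
# philanthropic spending helping them, from the figures above.
disability_share = 0.171
spending_share = 0.0220

print(f"Scale-to-funding ratio: {disability_share / spending_share:.1f}")  # 7.8
```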

I think if we prevent an extinction event in the 21st century, the natural assumption is that probability mass is evenly distributed over all other futures, and we need to make arguments in specific cases as to why this isn't the case.

I make some specific arguments:

As far as I can tell, the (posterior) counterfactual impact of interventions whose effects can be accurately measured, like ones in global health and development, decays to 0 as time goes by, and can be modelled as increasing the value of the world for a few years or decades, far from astronomically.

[...]

Here are some intuition pumps for why reducing the nearterm risk of human extinction says practically nothing about changes to the expected value of the future. In terms of:

  • Human life expectancy:
    • I have around 1 life of value left, whereas I calculated an expected value of the future of 1.40*10^52 lives.
    • Ensuring the future survives over 1 year, i.e. over 8*10^7 lives (= 8*10^(9 - 2)) for a lifespan of 100 years, is analogous to ensuring I survive over 5.71*10^-45 lives (= 8*10^7/(1.40*10^52)), i.e. over 1.80*10^-35 seconds (= 5.71*10^-45*10^2*365.25*86400); this arithmetic is reproduced in the sketch after this list.
    • Decreasing my risk of death over such an infinitesimal period of time says basically nothing about whether I have significantly extended my life expectancy. In addition, I should be a priori very sceptical about claims that the expected value of my life will be significantly determined over that period (e.g. because my risk of death is concentrated there).
    • Similarly, I am guessing decreasing the nearterm risk of human extinction says practically nothing about changes to the expected value of the future. Additionally, I should be a priori very sceptical about claims that the expected value of the future will be significantly determined over the next few decades (e.g. because we are in a time of perils).
  • A missing pen:
    • If I leave my desk for 10 min, and a pen is missing when I come back, I should not assume the pen is equally likely to be at any 2 points inside a sphere of radius 180 M km (= 10*60*3*10^8) centred on my desk. Assuming the pen is around 180 M km away would be even less valid.
    • The probability of the pen being in my home will be much higher than outside it. The probability of it being outside Portugal will be negligible, that of it being outside Europe even lower, and that of it being on Mars lower still[5].
    • Similarly, if an intervention makes the least valuable future worlds less likely, I should not assume the missing probability mass is as likely to be in slightly more valuable worlds as in astronomically valuable worlds. Assuming the probability mass is all moved to the astronomically valuable worlds would be even less valid.
  • Moving mass:
    • For a given cost/effort, the amount of physical mass one can transfer from one point to another decreases with the distance between them. If the distance is sufficiently large, basically no mass can be transferred.
    • Similarly, the probability mass which is transferred from the least valuable worlds to more valuable ones decreases with the distance (in value) between them. If the world is sufficiently far away (valuable), basically no mass can be transferred.
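
A minimal sketch reproducing the arithmetic from the first two intuition pumps above (all the numbers come from the bullets themselves; nothing new is assumed):

```python
# Life expectancy intuition pump: 1 year of human survival as a slice of the
# expected value of the future, translated into a slice of a single life.
population = 8e9            # current world population
lifespan_years = 100        # assumed lifespan
ev_future_lives = 1.40e52   # expected value of the future, in lives

lives_per_year = population / lifespan_years             # 8*10^7 lives
fraction_of_a_life = lives_per_year / ev_future_lives    # 5.71*10^-45 lives
seconds = fraction_of_a_life * lifespan_years * 365.25 * 86400
print(f"{lives_per_year:.2e} lives/year, {fraction_of_a_life:.2e} of a life, "
      f"{seconds:.2e} s")  # 8.00e+07, 5.71e-45, 1.80e-35

# Missing pen intuition pump: radius light could travel in 10 min.
radius_m = 10 * 60 * 3e8    # time * speed of light, in m
print(f"Sphere radius: {radius_m / 1e9:.0f} M km")  # 180 M km
```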

Thanks for the context, MHR.

Their ratings (higher = better) for their recommended charities are

Is there a single page with all the scores, or did you check the cost-effectiveness sheet of each recommended charity?

My personal advice would be that I think the EA Funds Animal Welfare Fund is probably the expected value maximizing option, while The Humane League is probably the best option if you're somewhat risk-averse.

I used to prefer the Animal Welfare Fund (AWF) too, but now think THL may well be the best option. It looks like AWF pays too little attention to cost-effectiveness. From Giving What We Can's evaluation of AWF (emphasis mine):

Fourth, we saw some references to the numbers of animals that could be affected if an intervention went well, but we didn’t see any attempt at back-of-the-envelope calculations to get a rough sense of the cost-effectiveness of a grant, nor any direct comparison across grants to calibrate scoring. We appreciate it won’t be possible to come up with useful quantitative estimates and comparisons in all or even most cases, especially given the limited time fund managers have to review applications, but we think there were cases among the grants we reviewed where this was possible (both quantifying and comparing to a benchmark) — including one case in which the applicant provided a cost-effectiveness analysis themselves, but this wasn’t then considered by the PI in their main reasoning for the grant.

Thanks, Joris, and welcome to the EA Forum!

What I'm worried about is the error bars - multiplying errors can cause wild differences between the estimated and actual numbers. If the error bars of the two funds (CCF and TCF) overlap significantly, it might be too soon to judge which one is best.

Agreed:

  • I estimated the cost-effectiveness of CCF is 3.28 times that of TCF, with a plausible range of 0.175 to 30.2 times. So it is unclear to me whether donors interested in improving nearterm human welfare had better donate to GiveWell’s funds or CCF.
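
As a minimal sketch of why multiplied uncertainties yield such wide plausible ranges (the number and spread of the factors below are illustrative assumptions, not the inputs of my actual CCF and TCF estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a cost-effectiveness ratio equal to a central estimate of 3.28
# times the product of 3 independent factors, each known only to within a
# factor of ~2 (lognormal with sigma = ln 2 and median 1).
n_factors, sigma = 3, np.log(2)
factors = np.exp(rng.normal(0, sigma, size=(100_000, n_factors)))
ratio = 3.28 * factors.prod(axis=1)

low, high = np.percentile(ratio, [2.5, 97.5])
print(f"Median {np.median(ratio):.2f}, 95 % interval {low:.3f} to {high:.1f}")
```

Even 3 modestly uncertain factors already span over 2 orders of magnitude, in line with the 0.175 to 30.2 range above.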

Thanks SummaryBot!

Results are sensitive to the distribution type, but focusing on the far right tail is most relevant for extinction risk.

I guess reasonable distribution types will lead to astronomically low extinction risk as long as one focusses on the rightmost points of the tail of the distribution.
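
To illustrate, here is a minimal sketch of how strongly the far right tail depends on the distribution type; the distributions and parameters are my own illustrative choices, not the ones fitted in the post:

```python
import numpy as np
from scipy import stats

# Two distributions with the same median (10^4) but very different tails:
# a lognormal (thin tail) vs a Pareto (heavy, power-law tail).
lognormal = stats.lognorm(s=2.0, scale=1e4)            # median = scale = 10^4
pareto = stats.pareto(b=1.5, scale=1e4 / 2**(1/1.5))   # median = 10^4

for x in [1e6, 1e8, 1e10]:
    print(f"P(X > {x:.0e}): lognormal {lognormal.sf(x):.2e}, "
          f"Pareto {pareto.sf(x):.2e}")
```

At moderate thresholds the 2 tails are comparable, but the further right one goes, the faster the lognormal's probabilities collapse relative to the Pareto's, so the choice of distribution type dominates estimates of extreme outcomes.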

Extraordinary evidence would be needed to justify a meaningfully higher risk estimate.

To clarify:

  • Extraordinary evidence would be required to move up sufficiently many orders of magnitude for an AI, bio, or nuclear conflict to have a decent chance of causing human extinction. I think underweighting the outside view is a major reason for overly high risk estimates.

Hi Oscar,

I would be curious to know your thoughts on my post Reducing the nearterm risk of human extinction is not astronomically cost-effective? (feel free to comment there).

Summary

  • I believe many in the effective altruism community, including me in the past, have at some point concluded that reducing the nearterm risk of human extinction is astronomically cost-effective. For this to hold, it has to increase the chance that the future has an astronomical value, which is what drives its expected value.
  • Nevertheless, reducing the nearterm risk of human extinction only obviously makes worlds with close to 0 value less likely. It does not have to make ones with astronomical value significantly more likely. A priori, I would say the probability mass is moved to nearby worlds which are just slightly better than the ones where humans go extinct soon. Consequently, interventions reducing nearterm extinction risk need not be astronomically cost-effective.
  • I wonder whether the conclusion that reducing the nearterm risk of human extinction is astronomically cost-effective may be explained by:

Thanks for the analysis, Hannah and William! How many hours does each shrimp live?

Thanks for the context. I think both your initial comment and reply, without further context (I personally did not have more context; I have not been following these discussions), lead to an inaccurate picture of Hanania's views. The title is provocative, but my understanding based solely on skimming that post would be that Hanania is not "someone who thinks that using they/them pronouns is worse than committing genocide". Hanania thinks genocide is worse, but then focusses on pronouns due to personal fit considerations? From the post:

Hearing about what the Current Thing in South Korea was ["a man had molested a little girl, a judge gave him a light sentence, and society was outraged"] gave me an idea for an article. I would talk about how deformed liberal morality is. Deep down, leftists care about racial slurs more than genocide, misgendering more than cancer, fake gender income gaps more than factory farms and torturing children. But it didn’t take long for me to realize I’m not all that different. As Scott Alexander recently wrote,

"sometimes pundits will, for example, make fun of excessively woke people by saying something like “in a world with millions of people in poverty and thousands of heavily-armed nuclear missiles, you’re really choosing to focus on whether someone said something slightly silly about gender?” Then they do that again. Then they do that again. Then you realize these pundits’ entire brand is making fun of people who say silly things (in a woke direction) about gender, even though there are millions of people in poverty and thousands of nuclear missiles. So they ought to at least be able to appreciate how strong the temptation can be. As Horace puts it, “why do you laugh? Change the name, and the joke’s on you!”"

Deep down, I know wokeness is not the most important issue facing humanity. I would contend it’s more important than most people think, say top 5-10 depending on how you count. Twice this year, there have been stories of women’s tears bringing down male scientists of unusual ability, one who had been working at MIT, the other running the “cancer moonshot” at the White House. I suspect that there might be some correlation between unique male talent and the likelihood of inspiring a PC mob to come after you (see also Roland Fryer). Regardless, wokeness is probably not as important as, for example, advancing anti-aging research. Part of my choice to write about it is that I feel like I have something unique and original to say on the topic. That means I can be most effective when talking about it, but that’s partly by design. I’ve hated wokeness so much, and so consistently over such a long period of my life, that I’ve devoted a large amount of time and energy to reading up on its history and legal underpinnings and thinking about how to destroy it. If I’d studied anti-aging research or space travel as much, I would probably have something interesting and useful to say about those topics.

Thanks for the update. Do you ever estimate the cost-effectiveness of potential grants? If not, why not? From Giving What We Can's evaluation of AWF (emphasis mine):

Fourth, we saw some references to the numbers of animals that could be affected if an intervention went well, but we didn’t see any attempt at back-of-the-envelope calculations to get a rough sense of the cost-effectiveness of a grant, nor any direct comparison across grants to calibrate scoring. We appreciate it won’t be possible to come up with useful quantitative estimates and comparisons in all or even most cases, especially given the limited time fund managers have to review applications, but we think there were cases among the grants we reviewed where this was possible (both quantifying and comparing to a benchmark) — including one case in which the applicant provided a cost-effectiveness analysis themselves, but this wasn’t then considered by the PI in their main reasoning for the grant.