I am open to work. I see myself as a generalist quantitative researcher.
You can give me feedback here (anonymous or not).
You are welcome to answer any of the following:
Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering and paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.
Thanks for the post, Russel! Relatedly, readers may be interested in A Case for Voluntary Abortion Reduction by Ariel Simnegar.
I think the best case for prioritising helping animals over humans is that the best animal welfare interventions are way more cost-effective than the best human welfare interventions. I estimate:
Great point, Dillon! I strongly upvoted it. I very much agree that a 100 % chance of full automation by 2103 is too high. This reminds me of a few "experts" and "superforecasters" in the Existential Risk Persuasion Tournament (XPT) having predicted a probability of human extinction from 2023 to 2100 of exactly 0. "Null values" below refers to values of exactly 0.
In that case, people could be reporting an extinction risk of exactly 0 to represent a very low value. However, for the predictions about automation, it would be really strange if people replied 100 % to mean something like 90 %, so I assume they are just overconfident.
Thanks, Vojta. I made a remark about that in the post, which I bolded below (not in the post).
I think the bet would not change the impact of your donations, which is what matters if you also plan to donate the profits, if:
- Your median date of superintelligent AI as defined by Metaculus was the end of 2028. If you believe the median date is later, the bet will be worse for you.
- The probability of me paying you if you win was the same as the probability of you paying me if I win. The former will be lower than the latter if you believe the transfer is less likely given superintelligent AI, in which case the bet will be worse for you.
- The cost-effectiveness of your best donation opportunities in the month the transfer is made is the same whether you win or lose the bet. If you believe it is lower if you win the bet, this will be worse for you.
We can agree on another resolution date such that the bet is good for you accounting for the above.
The bet can still be beneficial with a later resolution date, as I propose just above, despite the higher risk of not receiving the transfer given superintelligent AI. The expected profit for the people betting on short AI timelines, in January-2025-$ as a fraction of 10 k January-2025-$, is P("winning")*P("transfer is made"|"superintelligent AI") - P("losing")*P("transfer is made"|"no superintelligent AI"). If P("winning") = 60 %, P("transfer is made"|"superintelligent AI") = 80 %, P("losing") = 40 %, and P("transfer is made"|"no superintelligent AI") = 100 % (> 80 %), that fraction would be 8 % (= 0.6*0.8 - 0.4*1). So, if the bet's resolution date was the 60th percentile date of superintelligent AI instead of the median, it would be profitable despite the chance of the transfer being made given superintelligent AI being 20 pp (= 1 - 0.8) lower than that given no superintelligent AI.
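To make the arithmetic above easy to re-run with other probabilities, here is a minimal Python sketch (not from the original comment); the function name `expected_profit_fraction` and the illustrative inputs simply restate the values quoted in the paragraph.

```python
def expected_profit_fraction(p_win, p_transfer_if_win, p_transfer_if_lose):
    """Expected profit for the short-timelines bettor, as a fraction of the stake.

    p_win: probability the short-timelines bettor wins (superintelligent AI by the
        resolution date).
    p_transfer_if_win: probability the transfer is made given superintelligent AI.
    p_transfer_if_lose: probability the transfer is made given no superintelligent AI.
    """
    p_lose = 1 - p_win
    return p_win * p_transfer_if_win - p_lose * p_transfer_if_lose

# Illustrative values from the paragraph above: 60 %, 80 %, and 100 %.
print(round(expected_profit_fraction(0.60, 0.80, 1.00), 2))  # 0.08, i.e. 8 % of the stake
```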
There is no resolution date that would make the bet profitable for someone with short AI timelines who is sufficiently pessimistic about the transfer being made given superintelligent AI, so I made a bet along the lines you suggested. However, there may be people who are not so pessimistic, for whom the bet may be worth it with a later resolution date.
Thanks, CB!
By the way, at some point, I wondered if the insecticide-soaked bednets used by the Against Malaria Foundation were causing a lot of animal suffering.
I checked, and if I understand correctly, they are using pyrethroids, which are among the fastest-to-kill insecticides. So it seems comparatively OK.
Thanks for sharing! That gave me an idea for a post. Somewhat relatedly, I estimated that the effects of GiveWell's top charities on wild animals, via changes in land use, are hundreds to thousands of times as large as their effects on humans. However, I do not know whether such effects on wild animals are beneficial or harmful, since it is super unclear whether wild animals have positive or negative lives. On the other hand, bednets painfully killing insects would be bad for insects if they would otherwise have neutral lives.
Great post, David! I strongly upvoted it. Thanks for standing up for your convictions.
To avoid compromising my convictions in an attempt to be strategic, I advocate for protecting the inviolable rights of animals. This is why IP28 seeks to ban the intentional injury, killing, and forced breeding of animals in Oregon, including for slaughter, hunting, and experimentation.
I strongly endorse expectational total hedonistic utilitarianism. So I think having factory-farmed animals with positive lives, which I believe is possible, is better than not having any factory-farmed animals. However, I suppose more incremental welfarist interventions also benefit from your initiative due to the radical flank effect, so I am happy about it.
Thanks for sharing! What is the case for people joining this instead of The Introductory EA Program?
Thanks, Erich! I found your comment funny.