I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.
Thanks for the compliment :)
When I write "skepticism of formal philosophy", I more precisely mean "skepticism that philosophical principles can capture all of what's intuitively important". Here's an example of skepticism of formal philosophy from Scott Alexander's review of What We Owe The Future:
I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity...I realize this is “anti-intellectual” and “defeating the entire point of philosophy”.
You make a good point regarding the relative niche-ness of animal welfare and AI x-risk. I agree that my post's analogy is crude and there are many reasons why people's dispositions might favor AI x-risk reduction over animal welfare.
Thanks Gage!
That's a good point I hadn't considered! I don't think that's OP's crux, but it is a coherent explanation of their neartermist cause prioritization.
This extra context makes the case much stronger.
Thanks for being charitable :)
On the percentile of a product of normal distributions: I wrote this Python script, which shows that the 5th percentile of a product of normally distributed random variables is in general the product of much higher individual percentiles (in this case, the 16th percentile):
import random

MU = 100
SIGMA = 10
N_SAMPLES = 10 ** 6
TARGET_QUANTILE = 0.05
INDIVIDUAL_QUANTILE = 83.55146375  # 5th percentile of N(100, 10), from Google Sheets NORMINV(0.05,100,10)

samples = []
for _ in range(N_SAMPLES):
    r1 = random.gauss(MU, SIGMA)
    r2 = random.gauss(MU, SIGMA)
    r3 = random.gauss(MU, SIGMA)
    sample = r1 * r2 * r3
    samples.append(sample)
samples.sort()

# The sampled 5th percentile of the product
product_quantile = samples[int(N_SAMPLES * TARGET_QUANTILE)]
# The per-factor value whose cube equals the product's 5th percentile
implied_individual_quantile = product_quantile ** (1 / 3)
print(implied_individual_quantile)  # ~90, which is the *16th* percentile by the empirical rule
# Note: naively cubing INDIVIDUAL_QUANTILE (83.55 ** 3 ≈ 583k) would understate the product's true 5th percentile
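As a cross-check (using statistics.NormalDist from Python 3.8+, which wasn't in my original snippet), the implied per-factor value of ~90 can be converted back to a percentile exactly rather than via the empirical rule:

from statistics import NormalDist

# CDF of a single N(100, 10) factor evaluated at the implied value of ~90
print(NormalDist(100, 10).cdf(90))  # ≈ 0.159, i.e. roughly the 16th percentile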
I apologize for overstating the degree to which this reversion occurs in my original reply (which claimed an individual percentile of 20+ to get a product percentile of 5), but I hope this Python snippet shows that my point stands.
I did explicitly say that my calculation wasn't correct. And with the information on hand I can't see how I could've done better.
This is completely fair, and I'm sorry if my previous reply seemed accusatory or like it was piling on. If I were you, I'd probably caveat your analysis's conclusion to something more like "Under RP's 5th percentile weights, the cost-effectiveness of cage-free campaigns would probably be lower than that of the best global health interventions".
Hi Hamish! I appreciate your critique.
Others have enumerated many reservations about this critique, which I agree with. Here I'll give several more.
why isn't the "1000x" calculation actually spelled out?
As you've seen, given Rethink's moral weights, many plausible choices for the remaining "made-up" numbers give a cost-effectiveness multiple on the order of 1000x. Vasco Grilo conducted a similar analysis which found a multiple of 1.71k. I didn't commit to a specific analysis for a few reasons:
(Although I got the 5th and 95th percentiles of the output by simply multiplying the 5th and 95th percentiles of the inputs. This is not correct, but I'm not sure there's a better approach without more information about the input distributions.)
Sadly, I don't think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles; in fact, in general, it's the product of much higher percentiles (20+).
To see this, imagine a bridge held up by 3 spokes, each independently hammered in, where each spoke has a 5% chance of breaking each year. For the bridge to fall, all 3 spokes need to break. That's not the same as the bridge having a 5% chance of falling each year; the chance is actually far lower (0.05³ ≈ 0.0125%). For the bridge to have a 5% chance of falling each year, each spoke would need a 37% chance of breaking each year (since 0.37³ ≈ 0.05).
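A two-line check of that arithmetic:

print(0.05 ** 3)        # 0.000125, i.e. a 0.0125% chance that all three spokes break
print(0.05 ** (1 / 3))  # ≈ 0.368, i.e. each spoke needs a ~37% chance of breaking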
As you stated, rigorously computing percentiles of this product requires knowing the input distributions, but it seems likely that even the 5th-percentile case would still leave the multiple several times that of GiveWell top charities.
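To illustrate what that computation could look like, here's a sketch (not my actual analysis: the lognormal shapes, the medians, and the 90% CIs each spanning a factor of 10 are all made-up assumptions):

import math
import random

N = 10 ** 6
SIGMA = math.log(10) / (2 * 1.645)  # lognormal sigma giving a 90% CI that spans a factor of 10
MEDIANS = [10.0, 10.0, 10.0]        # hypothetical medians; their product is a 1000x central estimate

products = sorted(
    math.prod(random.lognormvariate(math.log(m), SIGMA) for m in MEDIANS)
    for _ in range(N)
)
print(products[int(0.05 * N)])  # ~136x: even the 5th-percentile multiple stays far above 1x

Under these made-up assumptions, the 5th-percentile multiple still sits around two orders of magnitude above GiveWell parity.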
let's not forget second order effects
This is a good point, but the second-order effects of global health interventions on animals are likely much larger in magnitude. I think some second-order effects of many animal welfare interventions (moral circle expansion) are also positive, and I have no idea how it all shakes out.
Hi Emily,
Thanks so much for your engagement and consideration. I appreciate your openness about the need for more work in tackling these difficult questions.
our current estimates of the gap between marginal animal and human funding opportunities is very different from the one in your post – within one order of magnitude, not three.
Holden has stated that "It seems unlikely that the ratio would be in the precise, narrow range needed for these two uses of funds to have similar cost-effectiveness." As OP continues researching moral weights, OP's marginal cost-effectiveness estimates for FAW and GHW may eventually differ by several orders of magnitude. If this happens, would OP substantially update their allocations between FAW and GHW?
Our current moral weights, based in part on Luke Muehlhauser’s past work, are lower.
Along with OP's neartermist cause prioritization, your comment seems to imply that OP's moral weights are 1-2 orders of magnitude lower than Rethink's. If that's true, that is a massive difference which (depending upon the details) could have big implications for how EA should allocate resources between FAW charities (e.g. chickens vs shrimp) as well as between FAW and GHW.
Does OP plan to publish their moral weights and/or the methodology for deriving them? Opening up that conversation seems like it would substantially further OP's objective of advancing moral weight research until uncertainty is reduced enough to act upon.
I'd like to reiterate how much I appreciate your openness to feedback and your reply's clarification of OP's disagreements with my post. That said, this reply doesn't seem to directly answer this post's headline questions.
Though you have no obligation to directly answer these questions, I really wish you would. A transparent discussion could update OP, Rethink, and many others on this deeply important topic.
Thanks again for taking the time to engage, and for everything you and OP have done to help others :)
I didn't cite a single study; I cited a comment that referenced several studies and quoted one of them.
I agree with your caveat about neuron counts, though I still think people should update upon an order of magnitude difference in neuron count. Do you have a better proposal for comparing the moral worth of a human fetus and an adult chicken?
I think the argument that abortion reduction doesn't measure up to animal welfare in importance is an isolated demand for rigor. I agree that the best animal welfare interventions are orders of magnitude more cost-effective than the best abortion reduction interventions. However, you could say the same for GiveWell top charities, Charity Entrepreneurship global health charities, or any other charity in global health.
A more precise reference class would be global health charities that reduce child mortality, like AMF.
Comparing area was intended :)
If it's unclear, I can add a note which says the circles should be compared by area.