Bio: Please see my personal website or EA Hub Profile.
Thanks for the suggestion, but I'm currently in college, so it's impossible for me to move :)
Great points! I agree that the longtermist community needs to better internalize the anti-speciesist beliefs we claim to hold, and to explicitly include non-humans in our considerations.
On your specific argument that longtermist work doesn't affect non-humans:
From a consequentialist perspective, I think what matters more is how these options affect your psychology and epistemics (in particular, whether they will increase or decrease your speciesist bias, and whether they make you uncomfortable), rather than the amount of suffering they directly produce or reduce. After all, your major impact on the world comes from your words and actions, not what you eat.
That being said, I think non-consequentialist views deserve some consideration too, if only due to moral uncertainty. I'm less certain about what their implications are, though, especially when taking into account things like wild animal suffering (WAS).
A few minor notes on your points:
In terms of monetary cost, I think the cost of buying vitamin supplements is approximately offset by the savings from not buying meat.
At least where I live, vitamin supplements can be very cheap if you go for pharmaceutical-grade products instead of the health products wrapped in fancy packaging. I'm taking 5 kinds of supplements simultaneously, and in total they cost me no more than (the RMB equivalent of) several dollars per month.
Also, I wouldn't eat any meat outside the house, so you can assume that the impact of my eating on my friends is irrelevant.
It might be hard to hide that from your friends if you eat meat when you're alone. People often mindlessly say things they aren't supposed to say. Also, when your friends ask about your eating habits, you'll have to lie, which might be bad even for consequentialists.
Currently, EA resources are not gained gradually year by year; instead, they arrive in big leaps (think of Open Phil and FTX). Therefore it might not make sense to accumulate resources for several years and then deploy them all at once.
In fact, there is a call for megaprojects in EA, which echoes your points 1 and 3 (though these megaprojects are expected to be funded by directly deploying existing resources, not by accumulating resources over the years). I'm not sure I understand your second point, though.
Thanks for the reply, your points make sense! There is certainly a question of degree to each of the concerns I raised in my comment, so arguments both for and against them should be taken into account. (To be clear, I wasn't raising my points to dismiss your approach; instead, they're things I think need to be taken care of if we take such an approach.)
I have to say I'm not sure why the most influential time being in the future wouldn't imply investing for that time though - I'd be interested to hear your reasoning.
Caveat: I haven't spent much time thinking about this problem of investing vs. direct work, so please don't take my views too seriously. I should have made this clear in my original comment, my bad.
My first consideration is that we need to distinguish between "this century is more important than any given century in the future" and "this century is more important than all future centuries combined". The latter argues strongly against investing for the future, but the former doesn't seem to: by investing now (patient philanthropy, movement building, etc.) you can potentially benefit many centuries to come.
The second consideration is that there are many more factors than "how important this century is". The needs of the EA movement are one (and a particularly important consideration for movement building); personal fit is another, among others.
Interesting idea, thanks for doing this! I agree it's good to have more approachable cause prioritization models, but there are also associated risks to be careful about:
Also, I think the decision-tree-style framework used here has some inherent drawbacks:
A more powerful framework than decision trees might be preferable, though I'm not sure what a better alternative would be. One might look to ML models for candidates, but one thing to note is that there's likely a tradeoff between expressiveness and interpretability.
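To make the inherent-drawbacks point concrete, here is a toy sketch of what a decision-tree-style prioritization model amounts to. All the questions and cause areas below are hypothetical illustrations, not the actual framework from the post:

```python
# Toy sketch of a decision-tree-style cause prioritization model.
# The questions and cause areas are hypothetical, for illustration only.

def prioritize(answers):
    """Walk a fixed tree of yes/no questions to a suggested cause area."""
    if answers["longtermist"]:
        if answers["ai_risk_tractable"]:
            return "AI safety"
        return "biosecurity"
    if answers["animals_count"]:
        return "animal welfare"
    return "global health"

print(prioritize({"longtermist": False, "animals_count": True}))
# -> animal welfare
```

Note how the tree forces every input into a hard yes/no at each node and outputs a single answer with no notion of uncertainty or degree, which is part of why a decision tree can feel too rigid for cause prioritization.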
In addition, some foundational assumptions common to EA are made, including a consequentialist view of ethics in which wellbeing is what has intrinsic value.
I think there have been some worthwhile discussions about decoupling EA from consequentialism. It might be good to include non-consequentialist considerations too.
While, to my knowledge, an artificial neural network has not been used to distinguish between large numbers of species (the most I found was fourteen, by Ruff et al., 2021)
Here is one study distinguishing between 24 species using bioacoustic data. I stumbled upon it entirely by coincidence, and I don't know whether there are other, larger-scale studies.
The study was carried out by the bioacoustics lab at MSR. It seems like some of their other projects might also be relevant to what we're discussing here (low confidence, just speculating).
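For intuition, the core task in these studies is mapping an acoustic feature vector to one of many species labels. Below is a minimal toy sketch using a nearest-centroid rule on made-up two-dimensional features; the actual studies use neural networks on spectrograms, and the species names and numbers here are invented:

```python
# Toy multi-species classifier over made-up "acoustic feature" vectors.
# Illustration only: real bioacoustic pipelines use neural networks
# trained on spectrograms, not hand-picked 2-D features.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(features, centroids):
    """Return the species whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda s: math.dist(features, centroids[s]))

# Hypothetical training data: feature vectors grouped by species.
train = {
    "species_a": [[0.1, 0.9], [0.2, 0.8]],
    "species_b": [[0.9, 0.1], [0.8, 0.2]],
}
centroids = {s: centroid(vs) for s, vs in train.items()}
print(classify([0.15, 0.85], centroids))
# -> species_a
```

The same structure extends to any number of species by adding entries to `train`; the hard part in practice is the feature extraction, not the final classification rule.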
Maybe it would be better to say less about "doing good with your money" and more about "doing good with your time"? (To counter the misconception that EA is all about E2G.)
Also, agreed that the message should be short and simple.
Closely related, and also important, is the question of "which world gets precluded". Different possibilities include:
How would the four precluded worlds rank if we sort them in decreasing order of badness? I'm highly unsure here too, but I would guess something like 4 > 2 > 3 > 1 (larger means worse).
After writing this down, I'm seeing a possible response to the argument above: