Before caring about longtermism, we should probably care more about making the world a place where humans are not causing more suffering than happiness (so no factory farming)
No, I'd argue longtermism merits significant attention right now; it's just that factory farming also merits significant attention.
I agree with you that protecting the future (eg mitigating existential risks) needs to be accompanied by trying to ensure that the future is net positive rather than negative. But one argument I find pretty persuasive is: even if the present were hug...
This is great, thank you! I'm so behind...
Really pretty much everything Sam says in that section sounds reasonable to me, though I'd love to see some numbers/%s about what animal-related giving he/FTX are doing.
In general I don't think individuals should worry too much about their cause "portfolio": IMHO there are a lot of reasonable problems to work on (eg on the reliable-but-lower-EV to unreliable-higher-EV spectrum) - though also many other problems that are nowhere near that efficient frontier. But like it's fine for the deworming specialis...
As I read Bryan's point, it's that eg malaria is really unlikely to be a major problem of the future, whereas factory farming has tailwinds (though also headwinds) that could keep it going as a major problem. It is, after all, a much bigger phenomenon than a century ago, and malaria isn't.
But fwiw, although other people have addressed the future/longtermist implications of factory farming (section E), and I take some of those arguments seriously, in this post I focused on arguments for working on current animal suffering for its own sake.
I don't take point D that seriously. Aesop's miser is worth keeping in mind; the "longevity researcher eating junk every day" is maybe a more relatable analogy. I'm ambivalent on hinginess because I think the future may remain wide-open and high-stakes for centuries to come, but I'm no expert on that. Anyway, I think A, B and E are stronger.
Yeah, "Longtermists might be biased" pretty much sums it up. Do you not find examining/becoming more self-aware of biases constructive? To me it's pretty central to cause prioritization,...
My arguments B and C are both of the form "Hey, let's watch out for this bias that could lead us to misallocate our altruistic resources (away from current animal suffering)." For B, the bias (well, biases) is/are status quo bias and self-interest. For C, the bias is comfort. (Clearly "comfort" is related to "self-interest"; possibly I should have combined B and C - I did ponder this. Anyway...)
None of this implies we shouldn't do longtermist work! As I say in section F, I buy core tenets of longtermism, and "Giving future liv...
All right. Well, I know you're a good guy, just keep this stuff in mind.
Out of curiosity I ran the following question by our local EA NYC group's Slack channel and got the following six responses. In hindsight I wish I'd given your wording, not mine, but oh well, maybe it's better that way. Even if we just reasonably disagree at the object level, this response is worth considering in terms of optics. And this was an EA crowd; we can only guess how the public would react.
...Jacob: what do y'all think about the following claim: "before E
I did read it, and I agree it improves the tone of your post (helpfully reducing the strength of its claim). My criticism is partly optical, but I do think you should write what you sincerely think: perhaps not every single thing you think (that's a tall order in our society, alas: "I say 80% of what I think, a hell of a lot more than any politician I know" - Gore Vidal), but sincerely on topics you do choose to opine on.
The main thrusts of my criticism are:
I come in peace, but I want to flag that this claim will sound breathtakingly arrogant to many people not fully immersed in the EA bubble, and to me:
I'm probably not phrasing this well, but to give a sense of my priors: I guess my impression, from interactions with approximately every entity that perceives itself as directly doing good outside of EA*, is that they are not seeking truth, and this systematically corrupts them in important ways.
Do you mean:
a) They don't make truth-seeking as high a priority as they should (relative to, say, hands-on wor...
I pretty much echo everything Aaron G said, but in short it comes down to the impression left on the reader. "Effective Altruism" looks like a group one could try to join; "effective altruism" looks like a field of study or a topic of discussion. I think the latter is more the impression we want to cultivate. Remember the first rule of EA: WE ARE NOT A CULT!
Just a quick comment: I'd be wary of any answers to this that focus narrowly on the health impact (eg expected death toll) without trying to factor in other major impacts on well-being: economic (increased poverty and especially unemployment, reduced GDP, lost savings due to market drop), geopolitical (eg increased nationalism/protectionism, and even increased potential for war), and maybe more - even basic things like global anxiety! (Also some benefits, eg reduced carbon emissions, though I'd argue these are overrated.) These aren't easy to assess but I'd be very surprised if they didn't add up to more net impact than the deaths/illnesses themselves.
I gave it a shot