Jacob Eliosoff

74 karma · Joined May 2020


Before caring about longtermism, we should probably care more about making the world a place where humans are not causing more suffering than happiness (so no factory farming)

No, I'd argue longtermism merits significant attention right now.  Just that factory farming also merits significant attention.

I agree with you that protecting the future (eg mitigating existential risks) needs to be accompanied by trying to ensure that the future is net positive rather than negative.  But one argument I find pretty persuasive is: even if the present were hugely net negative, our power as a species is so great and still increasing (esp if you include AI) that it's quite plausible that in the future we could turn that balance positive - and, the future being such a big place, that could outweigh all present and near-term negativity.  Obviously there are big question marks here, but the increasing-power trend at least is convincing, and relevant.

This is great, thank you!  I'm so behind...

Really pretty much everything Sam says in that section sounds reasonable to me, though I'd love to see some numbers/%s about what animal-related giving he/FTX are doing.

In general I don't think individuals should worry too much about their cause "portfolio": IMHO there are a lot of reasonable problems to work on (eg on the reliable-but-lower-EV to unreliable-higher-EV spectrum) - though also many other problems that are nowhere near that efficient frontier.  But like it's fine for the deworming specialist (or donor) to mostly just stay focused on that rather than fret about how much to think about chickens, pandemics, AI...  100 specialists will achieve more than 100 generalists, etc.

This just becomes less true for a behemoth donor like Sam/FTX, or leaders like MacAskill & Ord.  They have such outsized influence that if they don't fine-tune their "portfolio" a bit, important issues can end up neglected.  And at the level of the EA movement, or broader society itself, the weighting of the portfolio becomes key.

My underlying thesis above is that the movement may be underweighting animals/factory farming right now, relative to longtermism, due to the biases I laid out.  I didn't explicitly argue this: my post is about "biases to be aware of," not "proof that these biases are currently resulting in misallocation" - perhaps another day.  But anyway even if this thesis is correct, it doesn't imply that a) risks like AI safety and pandemic prevention don't deserve a significant chunk of our portfolio (I think they do), or that b) broader society isn't hugely underweight those risks (I think it is).

As I read Bryan's point, it's that eg malaria is really unlikely to be a major problem of the future, but there are tailwinds to factory farming (though also headwinds) that could make it continue as a major problem.  It is after all a much bigger phenomenon than a century ago, and malaria isn't.

But fwiw, although other people have addressed future/longtermist implications of factory farming (section E), and I take some of those arguments seriously, by contrast in this post I was focused on arguments for working on current animal suffering, for its own sake.

Yeah, this wasn't my strongest/most serious argument here.  See my response to @Lukas_Gloor.

I don't take point D that seriously.  Aesop's miser is worth keeping in mind; the "longevity researcher eating junk every day" is maybe a more relatable analogy.  I'm ambivalent on hinginess because I think the future may remain wide-open and high-stakes for centuries to come, but I'm no expert on that.  But anyway I think A, B and E are stronger.

Yeah, "Longtermists might be biased" pretty much sums it up.  Do you not find examining/becoming more self-aware of biases constructive?  To me it's pretty central to cause prioritization, drowning children, rationalism, longtermism itself...  Couldn't we see cause prioritization as peeling away our biases one by one?  But yes, it would be reasonable to accompany "Here's why we might be biased against nonhumans" with "Here are some object-level arguments that animal suffering deserves attention."

My arguments B and C are both of the form "Hey, let's watch out for this bias that could lead us to misallocate our altruistic resources (away from current animal suffering)."  For B, the bias (well, biases) is/are status quo bias and self-interest.  For C, the bias is comfort.  (Clearly "comfort" is related to "self-interest" - possibly I should have combined B and C, I did ponder this.  Anyway...)

None of this implies we shouldn't do longtermist work!  As I say in section F, I buy core tenets of longtermism, and "Giving future lives proper attention requires turning our attention away from some current suffering.  It's just a question of where we draw the line."  The point is just to ensure these biases don't make us draw the line in the wrong place.

The question from A is meant as a sanity check.  If millions of humans were in conditions comparable to battery cages, and comparably tractable, how many of "our" (loosely, the EA movement's) resources should we devote to that - even insofar as that pulls away resources from longtermism?  I'd argue "A significant amount, more than we are now."  Some would probably argue "No, terrible though that is, the longtermist work is even more important" - OK, we can debate that.  The main stance I'd push back on is "The millions of humans would merit resources; the animals don't."

Btw none of this is meant as an argument for veganism (ie personal dietary/habit change), at all.  How best to help farmed animals, if we agreed to, is a whole other topic (except yes, I am assuming it's quite tractable, happy to back that up).

Well, my point wasn't to prove you wrong.  It was to see what people thought about a strong version of what you wrote: I couldn't tell if that version was what you meant, which is why I asked for clarification.  Larks seemed to think that version was plausible anyway.

All right.  Well, I know you're a good guy, just keep this stuff in mind.

Out of curiosity I ran the following question by our local EA NYC group's Slack channel and got the following six responses.  In hindsight I wish I'd given your wording, not mine, but oh well, maybe it's better that way.  Even if we just reasonably disagree at the object level, this response is worth considering in terms of optics.  And this was an EA crowd, we can only guess how the public would react.

Jacob: what do y'all think about the following claim: "before EA the intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, was negligible"

Jacob: all takes welcome!

A: I think it's false 😛 as a lot of people are interested in the truth and trying hard to make the world a better place

B: also think it's false; wasn't this basically the premise of the enlightenment?

B: Thinking e.g. legal reforms esp. french revolution and prussian state, mexican cientificos, who were comteans

B: might steelman this by specifying the entire world i.e. a globalist outlook

B: even then, modernist projects c. 1920 onwards seemed to have a pretty strong alliance between proper reasoning on best evidence and genuine charitable impulses, even where ineffective or counterproductive

B: and, of course, before all the shit and social dynamics e.g. lysenkoism, marxism had a reasonably good claim at being scientific and materialist in its revolutionary aims

C: I find it plausible that one can be very concerned about what is true without being very good finding out the truth according to rationalists' standards. Science and philosophy are hard! (And, in some cases, rationalists probably just have weird standards.)

D: Disagree. Analogy: before evidence-based medicine, physicians were still concerned with what was true and trying to make the world a better place (through medical practice). They just had terrible methodology (e.g., theorizing that led to humors and leeches).

D: Likewise, I think EA is a step up in methodology, but it's not original in its simultaneous concern for welfare and truth.

E: Sounds crazy hubristic..

F: I think this isn’t right, but not necessarily because I think the intersection is all that common, it might be, I don’t know, but more because EA is small enough that its existence doesn’t provide much evidence of a large change in the number of people in this intersection. It could be a bunch them just talk to each other more now

I did read it, and I agree it improves the tone of your post (helpfully reduces the strength of its claim).  My criticism is partly optical, but I do think you should write what you sincerely think: perhaps not every single thing you think (that's a tall order alas in our society: "I say 80% of what I think, a hell of a lot more than any politician I know" - Gore Vidal), but sincerely on topics you do choose to opine on.

The main thrusts of my criticism are:

  1. Because of the optical risk, and also just generally because criticizing others merits care, you should have clarified (and still can) which of the significantly different meanings I listed (or others) of "they are not seeking truth" you intended.
  2. If you believe one of the stronger forms, eg "before EA the intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, was negligible," then I strongly disagree, and I think this is worth discussing further for both optical and substantive reasons. We would probably get lost in definition hairsplitting at some point, but I believe many, many people (activists, volunteers, missionaries, scientists, philanthropists, community leaders, ...) for at least hundreds of years have both been trying hard to make the world a better place and trying hard to be guided by an accurate understanding of reality while doing so. We can certainly argue any one of them got a lot wrong: but that's about execution, not intent.

    This is, again, partly optical and partly substantive: but it's worth realizing that to a lot of the world who predate EA or have read a lot about the world pre-EA, the quoted claim above is just laughable. I care about EA but I see it as a refinement, a sort of technical advance. Not an amazing invention.