All of Jacob Eliosoff's Comments + Replies

Before caring about longtermism, we should probably care more about making the world a place where humans are not causing more suffering than happiness (so no factory farming)

No, I'd argue longtermism merits significant attention right now.  Just that factory farming also merits significant attention.

I agree with you that protecting the future (eg mitigating existential risks) needs to be accompanied by trying to ensure that the future is net positive rather than negative.  But one argument I find pretty persuasive is, even if the present was hug...

This is great, thank you!  I'm so behind...

Really pretty much everything Sam says in that section sounds reasonable to me, though I'd love to see some numbers/%s about what animal-related giving he/FTX are doing.

In general I don't think individuals should worry too much about their cause "portfolio": IMHO there are a lot of reasonable problems to work on (eg on the reliable-but-lower-EV to unreliable-higher-EV spectrum) - though also many other problems that are nowhere near that efficient frontier.  But like it's fine for the deworming specialis...

As I read Bryan's point, it's that eg malaria is really unlikely to be a major problem of the future, but there are tailwinds to factory farming (though also headwinds) that could make it continue as a major problem.  Factory farming is, after all, a much bigger phenomenon than it was a century ago, and malaria isn't.

But fwiw, although other people have addressed the future/longtermist implications of factory farming (section E), and I take some of those arguments seriously, in this post I was focused on arguments for working on current animal suffering for its own sake.

Yeah, this wasn't my strongest/most serious argument here.  See my response to @Lukas_Gloor.

I don't take point D that seriously.  Aesop's miser is worth keeping in mind; the "longevity researcher eating junk every day" is maybe a more relatable analogy.  I'm ambivalent on hinginess because I think the future may remain wide-open and high-stakes for centuries to come, but I'm no expert on that.  But anyway I think A, B and E are stronger.

Yeah, "Longtermists might be biased" pretty much sums it up.  Do you not find examining/becoming more self-aware of biases constructive?  To me it's pretty central to cause prioritization,... (read more)

My arguments B and C are both of the form "Hey, let's watch out for this bias that could lead us to misallocate our altruistic resources (away from current animal suffering)."  For B, the bias (well, biases) is/are status quo bias and self-interest.  For C, the bias is comfort.  (Clearly "comfort" is related to "self-interest" - possibly I should have combined B and C, I did ponder this.  Anyway...)

None of this implies we shouldn't do longtermist work!  As I say in section F, I buy core tenets of longtermism, and "Giving future liv...

3
Mau
2y
Yup, I'm mostly sympathetic to your last three paragraphs. What I meant to argue is that biases like status quo bias, self-interest, and comfort are not biases that could lead us to (majorly) misallocate careers away from current animal suffering and toward future generations, because (I claim) work focused on future generations often involves roughly as much opposition to the status quo, self-sacrifice, and discomfort as work focused on animals. (That comparison doesn't hold for dietary distinctions, of course, so the effects of the biases you mention depend on what resources we're worried about misallocating / what decisions we're worried about messing up.)

Well, my point wasn't to prove you wrong.  It was to see what people thought about a strong version of what you wrote: I couldn't tell if that version was what you meant, which is why I asked for clarification.  Larks seemed to think that version was plausible anyway.

1
Linch
2y
I probably shouldn't resurrect this thread, but I was reminded of it by yet another egregious example of bad reasoning in an EA-adjacent industry (maybe made by EAs, I'm not sure), so I'm going to have one last go.

To be clear, my issue with your phrasing isn't that you used a stronger version of what I wrote; it's that you used a weaker version of what I wrote, phrased in a misleading way that's quite manipulative. Consider the following propositions: [...] I claim that A is a strictly stronger claim than B (in the sense that an ideal Bayesian reasoner will assign lower probability to A than B), but unless it's said in an epistemically healthy and socially safe context, B will get people much more angry in non-truth-seeking ways than A. B is similar to using a phrasing like [...] instead of a more neutral (A-like) [...]. Note again that the less emotional phrasing is actually a strictly stronger claim than the more emotional one.

Similarly, your initial question [...] was very clearly (unintentionally?) optimized to really want me to answer "oh no, I just meant a)" (unwritten: since that's the socially safest thing to answer). Maybe this is unintentional, but this is how it came across to me. A better person than me would have been able to answer you accurately and directly despite that initial framing, but alas I was/am not mature enough.

(I'm not optimistic that this will update you since I'm basically saying the same thing 3 times, but occasionally this has worked in the past. I do appreciate your attempts to defuse the situation at a personal level. Also, I think it bears mentioning that I don't think this argument is particularly important, and I don't really think less of you or your work because of it; I like barely know you.)

All right.  Well, I know you're a good guy, just keep this stuff in mind.

Out of curiosity I ran the following question by our local EA NYC group's Slack channel and got the following six responses.  In hindsight I wish I'd given your wording, not mine, but oh well, maybe it's better that way.  Even if we just reasonably disagree at the object level, these responses are worth considering in terms of optics.  And this was an EA crowd - we can only guess how the public would react.

Jacob: what do y'all think about the following claim: "before E

...
3
Linch
3y
"Y" is a strictly stronger claim than "If X, then Y", but many people get more emotional with "If X, then Y." Consider "Most people around 2000 years ago had a lot of superstitions and usually believed wrong things" vs "Before Jesus Christ, people had a lot of superstitions and usually believed wrong things." Oh what an interesting coincidence.
7
Charles He
3y
I can see Jacob's perspective and how Linch's statement is very strong. For example, in development econ, at just one or two top schools the set of professors and their post-docs/staff might be larger and more impressive than the entire staff of Rethink Priorities and Open Phil combined. It's very, very far from PlayPumps. So saying that they are not truth-seeking seems at least somewhat questionable. At the same time, from another perspective I find reasonable, I think I can see how academic work can be swayed by incentives and trends and become arcane and wasteful.

Separately and additionally, the phrasing Linch used originally reduces the aggressive/pejorative tone for me, certainly viewed through "LessWrong"-style culture/norms. I think I understand and have no trouble with this statement, especially since it seems to be a personal avowal: [...]

Again, I think there are two different perspectives here, and a reasonable person could take up both or either. I think a crux is the personal meaning of the statement being made.

Unfortunately, in his last response, which I'm replying to, it now comes off as if Jacob is sort of pursuing a point. This is less useful. For example, looking at his responses, it seems like people are just responding to "EA is much more truth seeking than everyone else", which is generating responses like "Sounds crazy hubristic..". Instead, I think Jacob could have ended the discussion at Linch's comment here, or maybe asked for models and examples to get a "gears-level" sense of Linch's beliefs (e.g. what's wrong with development econ, can you explain?). I don't think impressing everyone into a rigid scout mentality is required, but it would have been useful here.

I did read it, and I agree it improves the tone of your post (helpfully reduces the strength of its claim).  My criticism is partly optical, but I do think you should write what you sincerely think: perhaps not every single thing you think (that's a tall order alas in our society: "I say 80% of what I think, a hell of a lot more than any politician I know" - Gore Vidal), but sincerely on topics you do choose to opine on.

The main thrusts of my criticism are:

  1. Because of the optical risk, and also just generally because criticizing others merits care, you...
7
Linch
3y
I tried answering your question on the object level a few times, but I notice myself either trying to be conciliatory or defensive, and I don't think I will endorse either response upon reflection.

I come in peace, but I want to flag that this claim will sound breathtakingly arrogant to many people not fully immersed in the EA bubble, and to me:

I'm probably not phrasing this well, but to give a sense of my priors: I guess my impression from my interactions with approximately every entity that perceives themself as directly doing good outside of EA* is that they are not seeking truth, and this systematically corrupts them in important ways.

Do you mean:
a) They don't make truth-seeking as high a priority as they should (relative to, say, hands-on wor...

3
Larks
3y
Seems pretty plausible to me this is true. Both categories are pretty small to start with, and their correlation isn't super high. Indeed, the fact that you think it would be bad optics to say this seems like evidence that most people are indeed not 'very concerned' about what is true.
5
Linch
3y
Hmm, did you read the asterisk in the quoted comment? (No worries if you haven't; I'm maybe too long-winded, and it's probably unreasonable to expect people to carefully read everything on a forum post with 76 comments!) If you've read it and still believe that I "sound breathtakingly arrogant", I'd be interested in whether you can clarify what "breathtakingly arrogant" means: a) that what I say is untrue, or b) that what I say is true but insufficiently diplomatic.

More broadly, I mostly endorse the current level of care and effort and caveats I put on the forum (though I want to be more concise - working on it!). I can certainly make my writing more anodyne and less likely to provoke offense, e.g. by writing defensively and pre-empting all objections I can think of, by sprinkling the article heavily with caveats throughout, by spending 3x as much time on each sentence, or just by having much less public output (the last of which is empirically what most EAs tend to do). I suspect this would make my public writing worse, however.

I pretty much echo everything Aaron G said but in short it comes down to the impression left on the reader. "Effective Altruism" looks like a group one could try to join; "effective altruism" looks like a field of study or a topic of discussion. I think the latter is more the impression we want to cultivate. Remember the first rule of EA: WE ARE NOT A CULT!

3
Luke Freeman
3y

Ah I thought maybe this was like chess boxing

Just a quick comment: I'd be wary of any answers to this that focus narrowly on the health impact (eg expected death toll) without trying to factor in other major impacts on well-being: economic (increased poverty and especially unemployment, reduced GDP, lost savings due to market drop), geopolitical (eg increased nationalism/protectionism, and even increased potential for war), and maybe more - even basic things like global anxiety! (Also some benefits, eg reduced carbon emissions, though I'd argue these are overrated.) These aren't easy to assess but I'd be very surprised if they didn't add up to more net impact than the deaths/illnesses themselves.

2
Holly_Elmore
4y
Such an answer is exactly what I am looking for!