Seth Ariel Green 🔸

Research Scientist @ Humane and Sustainable Food Lab
1481 karma · Joined · Working (6-15 years) · New York, NY, USA
setharielgreen.com

Bio


I am a Research Scientist at the Humane and Sustainable Food Lab at Stanford.

Here is my date-me doc. 

How others can help me

The lab I work at is seeking collaborators! More here.

How I can help others

If you want to write a meta-analysis, I'm happy to consult! I think I know something about what kinds of questions are good candidates, what your default assumptions should be, and how to delineate categories for comparisons.

Comments (170)

Topic contributions (1)

That's interesting, but not what I'm suggesting. I'm suggesting something that would, e.g., explain why you tell people to "ignore the signs of my estimates for the total welfare" when you share posts with them. That is a particular style, and it says something about whether one should take your work in a literal spirit or not, which falls under the meta category of why you write the way you write; and to my earlier point, you're sharing this suggestion here with me in a comment rather than in the post itself 😃 Finally, the fact that there's a lot of uncertainty about whether wild animals have positive or negative lives is exactly the point I raised about why I have trouble engaging with your work. The meta post I am suggesting would, by contrast, motivate and justify this style of reasoning as a whole, rather than provide a particular example of it. The post you've shared is a link in a broader chain. I'm suggesting you zoom out and explain what you like about this chain and why you're building it.

By all means, show us the way by doing it better 😃 I'd be happy to read more about where you are coming from, I think your work is interesting and if you are right, it has huge implications for all of us. 

Observations:

  1. Echoing Richard's comment, EA is a community with communal norms, and a different forum might be a better fit for your style. Substack, for instance, is more likely to reward a confrontational approach. There is no moral valence to this observation, and likewise there is no moral valence to the EA community implicitly shunning you for not following its norms. We're talking about fit.
  2. Pointing out "the irony of debating “AI rights” when basic human rights are still contested" is contrary to EA communal norms in several ways: e.g., it's not intended to persuade but rather to end or substantially redirect a conversation, its philosophical underpinnings have extremely broad and (to us, I think) self-evidently absurd implications (should we bombard the Game of Thrones subreddit with messages about how people shouldn't be debating fiction when people are starving?), its tone was probably out of step with how we talk, etc. Downvoting a comment like that amounts to “this is not to my tastes and I want to talk about something else.”
  3. "I started to notice a pattern — sameness in tone, sameness in structure, even sameness in thought. Ideas endlessly repackaged, reframed, and recycled. A sort of intellectual monoculture." This is a fairly standard EA criticism. Being an EA critic is a popular position. But I think you can trust that we've heard it before, responded before, etc. I am sympathetic to folks not wanting to do it again.

(Vasco asked me to take a look at this post and I am responding here.) 

Hi Vasco,

I've been taking a minute to reflect on what I want to say about this kind of project. A few different thoughts, at a few different levels of abstraction.

  1. In the realm of politics, I'm glad the ACLU and FIRE exist, even if I don't agree with them on everything, because I think they're useful poles in the ecosystem. I feel similarly about your work: I think this kind of detailed cost-benefit work on non-standard issues, or on standard issues that leads to non-standard conclusions, is a healthy contribution to EA, separately from whether I agree with or even understand it.
  2. The main barrier to my engaging deeply with your work is that your analyses hinge on strong assumptions that I have no idea how to verify even in theory. The claim that nematodes live net-negative lives, for instance, which you believe with 55% confidence: I have no clue if this is true. I'm not even sure how many hours I would need to devote to it to form any belief on this whatsoever. (Hundreds?) In general, I have about 2-3 hours of good thinking per day.
  3. I notice that the top comment on this post seems to express the "EA consensus" about your style of analysis; I believe that because it has gotten more upvotes and such than the post itself. One lesson we might draw from this is that perhaps there is some persuasion work to be done to get folks on board with some of your assumptions, stylistic choices, and modes of analysis. Perhaps a post along the lines of "why I write the way I write" -- Nietzsche did this -- or "The moral philosophical assumptions underpinning my style of analysis" would go some of the way to bridging that gap.
    1. I get the sense that you are building an elaborate intellectual edifice whose many moving parts are distributed across many posts, comments, and external philosophical texts. That's well and good; I also have a "headcanon" about my work and ideas that I haven't fully systematized, e.g. I write almost exclusively about the results of randomized controlled trials without getting into the intellectual foundations of why I do that. But I think your intellectual foundations are more abstruse and counterintuitive. I think folks might benefit from a meta post about them: a "start here to understand Vasco Grilo's writing" primer. Just an idea.
  4. I am generally on board with using the EA Forum as an extended job interview, e.g. establishing a reputation as someone who can reason clearly about an arbitrary subject. I think you're doing a fine job of that. On the other hand, the interaction with Kevin Xia about whether this work is appropriate for Hive, the downvotes that post received, and the fact that you are the only contributor to the soil animals topic here are face-value evidence that writing about this topic as much as you do is not career-optimal. Perhaps it deserves its own forum: soilanimalsmatter.substack.com or something like that? And then you can actually build up the whole intellectual edifice from foundations upwards. I do this (https://regressiontothemeat.substack.com/) and it is working for me. Just a thought.

I am amenable to this argument and generally skeptical of longtermism on practical grounds. (I have a lot of trouble thinking of someone 300-500 years ago plausibly doing anything with my interests in mind that actually makes a difference. Possible exceptions include folks associated with the Glorious Revolution.)

I think the best counterargument is that it’s easier to set things on a good course than to course-correct. Analogy: easier to found Google, capitalizing on advertisers’ complacency, than to fix advertising from within; easier to create Zoom than to get Microsoft to make Skype good.

I'm not saying this is right, but I think that is how I would try to motivate working on longtermism if I did (work on longtermism).

I bought the 3M mask on your recc 😃 

Hi Ben, I agree that there are a lot of intermediate weird outcomes that I don't consider, in large part because I see them as less likely than (I think) you do. I basically think society is going to keep chugging along as it is, in the same way that life with the internet is certainly different than life without it but we basically all still get up, go to work, seek love and community, etc.

However, I don't think I'm underestimating how transformative AI would be in the section on why my work continues to make sense to me if we assume AI is going to kill us all or usher in utopia, both of which I think could fairly be described as transformative scenarios ;)

If McDonald's becomes human-labor-free, I am not sure what effect that would have on advocating for cage-free campaigns. I could see it going many ways, or no ways. I still think persuading people that animals matter, and that they should give cruelty-free options a chance, is going to matter under basically every scenario I can think of, including that one.

I'd like to see a serious re-examination of the evidence underpinning GiveWell's core recommendations, focusing on:

  • How recent is the evidence?
  • What are the core results on the primary outcomes of interest?
  • How much add-on analysis/theorizing is GiveWell doing to boost those results, or do the results speak for themselves?
  • How reproducible/open-science-y/pre-registered/etc. are the papers under discussion?
  • Are there any working papers or in-progress studies worth adding to the evidence base?

I did this for one intervention in "GiveWell should fund an SMC replication," and @Holden Karnofsky did a version of it in "Minimal-trust investigations," but I think these investigations are worth doing multiple times over the years by multiple parties. It's a lot of work, though, so I see why it doesn't get done too often.

I wonder what the optimal protein intake is for trying to increase one's power-to-mass ratio, which is the core thing the sports I do (running, climbing, and hiking) ask for. I do not think that gaining mass is the average health/fitness goal, nor obviously the right thing for most people. I'd bet that most Americans would rank losing weight and improving aerobic capacity a fair bit higher.
