I am a Research Scientist at the Humane and Sustainable Food Lab at Stanford.
Here is my date-me doc.
The lab I work at is seeking collaborators! More here.
If you want to write a meta-analysis, I'm happy to consult! I think I know something about what kinds of questions are good candidates, what your default assumptions should be, and how to delineate categories for comparisons.
(Vasco asked me to take a look at this post and I am responding here.)
Hi Vasco,
I've been taking a minute to reflect on what I want to say about this kind of project. A few different thoughts, at a few different levels of abstraction.
I am amenable to this argument and generally skeptical of longtermism on practical grounds. (I have a lot of trouble thinking of someone 300-500 years ago plausibly doing anything with my interests in mind that actually makes a difference. Possible exceptions include folks associated with the Glorious Revolution.)
I think the best counterargument is that it's easier to set things on a good course than to course-correct later. Analogy: it was easier to found Google, capitalizing on advertisers' complacency, than to fix advertising from within; easier to create Zoom than to get Microsoft to make Skype good.
I'm not saying this is right, but I think that's how I would try to motivate working on longtermism if I did (work on longtermism).
Hi Ben, I agree that there are a lot of intermediate weird outcomes that I don't consider, in large part because I see them as less likely than (I think) you do. I basically think society is going to keep chugging along as it is, in the same way that life with the internet is certainly different than life without it but we basically all still get up, go to work, seek love and community, etc.
However, I don't think I'm underestimating how transformative AI would be in the section on why my work continues to make sense to me if we assume AI is going to kill us all or usher in utopia, which I think could fairly be described as transformative scenarios ;)
If McDonald's becomes human-labor-free, I am not sure what effect that would have on advocating for cage-free campaigns. I could see it going many ways, or no way at all. I still think persuading people that animals matter, and that they should give cruelty-free options a chance, is going to matter under basically every scenario I can think of, including that one.
I'd like to see a serious re-examination of the evidence underpinning GiveWell's core recommendations, focusing on
I did this for one intervention in GiveWell should fund an SMC replication, and @Holden Karnofsky did a version of it in Minimal-trust investigations, but I think these investigations are worth doing multiple times over the years, from multiple parties. It's a lot of work, though, so I see why it doesn't get done very often.
I wonder what the optimal protein intake is for trying to increase power-to-mass ratio, which is the core thing the sports I do (running, climbing, and hiking) ask for. I do not think that gaining mass is the average health/fitness goal, nor obviously the right thing for most people. I'd bet that most Americans would rank losing weight and improving aerobic capacity a fair bit higher.
That's interesting, but not what I'm suggesting. I'm suggesting something that would, e.g., explain why you tell people to "ignore the signs of my estimates for the total welfare" when you share posts with them. That is a particular style, and it says something about whether one should take your work in a literal spirit or not, which falls under the meta category of why you write the way you write; and to my earlier point, you're sharing this suggestion here with me in a comment rather than in the post itself 😃

Finally, the fact that there's a lot of uncertainty about whether wild animals have positive or negative lives is exactly the point I raised about why I have trouble engaging with your work. The meta post I am suggesting would, by contrast, motivate and justify this style of reasoning as a whole, rather than providing a particular example of it. The post you've shared is a link in a broader chain. I'm suggesting you zoom out and explain what you like about this chain and why you're building it.