Launching 60,000,000,000 Chickens: A GiveWell-Style CEA Spreadsheet for Animal Welfare

Yes that's a good point, as Scott argues in the linked post:

The moral of the story is: if there's some kind of weird market failure that causes galaxies to be priced at $1, normal reasoning stops working; things that do incalculable damage can be fairly described as "only doing $1 worth of damage", and you will do them even if less damaging options are available.

GiveWell notes that their analysis should really only be taken as a relative measure of cost-effectiveness. But even putting that aside, you're right that it doesn't imply human lives are cheap or of low value.

Actually, I pretty much agree with all your points. But a better analogy might be "is it okay to murder someone to prevent another murder?" That's a much fuzzier line, and you can extend this to all kinds of absurd trolley-esque scenarios. In the animal case, it's not that I'm murdering someone in cold blood and then donating some money. It's that I'm causing one animal to be produced, and then causing another animal not to be. So it is much closer to equivalent.

To be clear again, the specific question this analysis addresses is not "is it ethical to eat meat and then pay offsets". The question is "assuming you pay for offsets, is it better to eat chicken or beef?"

And of course, there are plenty of reasons murder seems especially repugnant. You wouldn't want rich people to be able to murder people effectively for free. You wouldn't want people getting revenge on their coworkers. You wouldn't want to allow a world where people have to live in fear, etc. So I don't think it's a particularly useful intuition pump.

Launching 60,000,000,000 Chickens: A GiveWell-Style CEA Spreadsheet for Animal Welfare

This is very specifically attempting to compile some existing analysis on whether it's better to eat chicken or beef, incorporating ethical and environmental costs, and assuming you choose to offset both harms through donations.

In the future, I would like to aggregate more analysis into a single model, including the one you link.

As I understand it (this might be wrong), what we currently have is a bunch of floating analyses, each mostly focused on the cost-effectiveness of a specific intervention. Donors can then compare those analyses and make a judgement about where best to give their money.

Where the GiveWell-style monolithic CEA succeeds is in ensuring that a similar approach is used to produce analysis that is genuinely comparable, and in giving readers the opportunity to adjust subjective moral weights. That's my ultimate goal with this project, but it will likely take some time.

This was maybe a premature release, but so far the feedback has already been useful.

Launching 60,000,000,000 Chickens: A GiveWell-Style CEA Spreadsheet for Animal Welfare

Yeah, I'm hopeful that this is correct, and plan to incorporate other intervention impact estimates soon.

For that particular post, Saulius is talking about "lives affected", e.g. chickens having more room, as described here:

I don't yet have a good sense of how valuable this is vs. the chicken not being produced in the first place, and I think this will end up being a major point of contention. My intuitive personal sense is that chicken lives are not "worth living" (i.e. ethically net positive) even if they are receiving the listed enrichments, but others would disagree:

But overall I'm optimistic that there are or could be much more cost-effective interventions than the one I looked at.

If true, this wouldn't change the cow/chicken analysis, but it would make me much more favorable towards eating meat + offsets as opposed to eating more expensive plant-based alternatives. As noted elsewhere, of course the optimific action is still to be vegan and also donate anyway.

Launching 60,000,000,000 Chickens: A GiveWell-Style CEA Spreadsheet for Animal Welfare

Yes good question! Cow lives are longer, and cows are probably more "conscious" (I'm using that term loosely), but their treatment is generally better than that of chickens.

For this particular calculation, the "offset" isn't just an abstract moral good; it's attempting to decrease cow/chicken production respectively. E.g. you eat one chicken and donate to a fund that reduces the number of chickens produced by one, so the net ethical impact is 0 regardless of farming conditions.
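The bookkeeping above can be written out as a minimal sketch. All numbers here are hypothetical placeholders for illustration, not estimates from the actual spreadsheet:

```python
def net_animals(animals_eaten: float, cost_per_animal_averted: float,
                donation: float) -> float:
    """Net change in animals farmed: animals your consumption causes to be
    produced, minus animals your donation causes not to be produced."""
    animals_averted = donation / cost_per_animal_averted
    return animals_eaten - animals_averted

# Hypothetical: suppose averting one chicken's production costs $2.50.
COST = 2.50

# Eat one chicken and donate exactly enough to avert one chicken:
# the net ethical impact is zero, whatever the farming conditions.
print(net_animals(animals_eaten=1, cost_per_animal_averted=COST,
                  donation=COST))  # -> 0.0
```

The same function makes the under-offsetting case explicit: eating two chickens while donating only one chicken's worth leaves a net impact of one animal produced.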

That convenience is part of the reason I chose to start with this analysis, but it's certainly something I'll have to consider for future work.

Launching 60,000,000,000 Chickens: A GiveWell-Style CEA Spreadsheet for Animal Welfare

Sorry yes, "saving a life" means some kind of intervention that leads to fewer animals going through factory farming. The estimate I'm using is from:

And yes, it is definitely better to just be vegan and not eat meat at all. This analysis is purely aimed at answering the chicken vs. cow question.

Launching 60,000,000,000 Chickens: A GiveWell-Style CEA Spreadsheet for Animal Welfare

Sorry about all that, changed the title to "GiveWell-style".

Agreed on the other title as well. I made some notes on this in the follow-up post and noted that I could have picked a better title.

Thanks for the feedback, I appreciate the note and will think more about this in the future. FWIW I typically spend a lot of time on the post, very little time on the title, even though the title is probably read by way more people. So it makes sense to re-calibrate that balance a bit.

My current impressions on career choice for longtermists

I mostly agree, though I would add: spending a couple of years at Google is not necessarily going to be super helpful for starting a project independently. There's a pretty big difference between being good at using Google tooling and making incremental improvements on existing software versus building something end-to-end and from scratch. That's not to say it's useless, but if someone's medium-term goal is doing web development for EA orgs, I would push for working at a small, high-quality startup instead. Of course, the difficulty is that those are harder to identify.

Progress studies vs. longtermist EA: some differences

Thanks! I think that's a good summary of possible views.

FWIW I personally have some speculative pro-progress anti-xr-fixation views, but haven't been quite ready to express them publicly, and I don't think they're endorsed by other members of the Progress community.

Tyler did send me some comments acknowledging that the far future is important in EV calculations. His counterargument is more or less that this still suggests prioritizing the practical work of improving institutions, rather than agonizing over the philosophical arguments. I'm heavily paraphrasing there.

He did also mention the risk of falling behind in AI development to less cautious actors. My own counterargument here is that this is a reason to both a) work very quickly on developing safe AI and b) work very hard on international cooperation. Though perhaps he would say those are both part of the Progress agenda anyway.

Ultimately, I suspect much of the disagreement comes down to there not being a real Applied Progress Studies agenda at the moment, and if one were put together, we would find it surprisingly aligned with XR aims. I won't speculate too much on what such a thing might entail, but one very low-hanging recommendation would be something like:

  • Ramp up high-skilled immigration (especially from China, especially in AI, biotech, EE, and physics) by expanding visa access and proactively recruiting scientists

My current impressions on career choice for longtermists

Thanks for the writeup Holden, I agree that this is a useful alternative to the 80k approach.

On the conceptual research track, you note "a year of full-time independent effort should be enough to mostly reach these milestones". How do you think this career evolves as the researcher becomes more senior? For example, Scott Alexander seems to be doing about the same thing now as he was doing 8 years ago. Is the endgame for this track simply that you become better at doing a similar set of things?

Constructive Criticism of Moral Uncertainty (book)

Thanks for these notes! I found the chapter on Fanaticism notable as well. The authors write:

A better response is simply to note that this problem arises under empirical uncertainty as well as under moral uncertainty. One should not give 0 credence to the idea that an infinitely good heaven exists, which one can enter only if one goes to church; or that it will be possible in the future through science to produce infinitely or astronomically good outcomes. This is a tricky issue within decision theory and, in our view, no wholly satisfactory solution has been provided. But it is not a problem that is unique to moral uncertainty. And we believe whatever is the best solution to the fanaticism problem under empirical uncertainty is likely to be the best solution to the fanaticism problem under moral uncertainty. This means that this issue is not a distinctive problem for moral uncertainty.

I agree with their meta-argument, but it is still a bit worrying. Even if you reduce the unsolvable problems of your field to unsolvable problems in another field, I'm still left feeling concerned that we're missing something important.

In the conclusion, the authors call for more work on really fundamental questions, noting:

But it’s plausible that the most important problem really lies on the meta-level: that the greatest priority for humanity, now, is to work out what matters most, in order to be able to truly know what are the most important problems we face.

Moral atrocities such as slavery, the subjection of women, the persecution of non-heterosexuals, and the Holocaust were, of course, driven in part by the self-interest of those who were in power. But they were also enabled and strengthened by the common-sense moral views of society at the time about what groups were worthy of moral concern.

Given the importance of figuring out what morality requires of us, the amount of investment by society into this question is astonishingly small. The world currently has an annual purchasing-power-adjusted gross product of about $127 trillion. Of that amount, a vanishingly small fraction—probably less than 0.05%—goes to directly addressing the question: What ought we to do?

I do wonder, given the historical examples they cite, if purely philosophical progress was the limiting factor. Mary Wollstonecraft and Jeremy Bentham made compelling arguments for women's rights in the 1700s, but it took another couple of hundred years for progress to occur in the legal and socioeconomic spheres.

Maybe it's a long march, and progress simply takes hundreds of years. The more pessimistic argument is that moral progress arises as a function of economic and technological progress, and can't occur in isolation. We didn't give up slaves until it was economically convenient to do so, and likely won't give up meat until we have cost and flavor competitive alternatives.

It's tempting to wash away our past atrocities under the guise of ignorance, but I'm worried humanity just knowingly does the wrong thing.
