I think there was a version of my piece where I referenced your excellent post. I appreciate you!
I’d like to see research on this; I’ll suggest it to some others.
But the most important paragraph in the piece, in my view, is this one:
“If there’s not enough at stake on Earth with respect to these complex moral considerations, consider that there are people who want to ‘help humanity flourish among the stars.’ They hope to colonize the galaxies, ensuring that trillions of people have the opportunity to exist. Folks like Elon Musk are already eyeing nearby planets. But Musk’s dream is my worst nightmare. Life on Earth is difficult enough—if we can’t effectively reduce the suffering that happens on Earth, why multiply it across the universe?”
I’d be more curious to see research that examines the effects of critiquing this brand of optimistic longtermism. More on that from me here: https://www.forbes.com/sites/briankateman/2022/09/06/optimistic-longtermism-is-terrible-for-animals/amp/
Thanks for your comment!
I think your overarching concern is very valid and writers on the fringe should take it seriously.
There were some constraints that made it infeasible to address your particular points.
That said, in my view, it’s actually this paragraph that made it well worth publishing:
“If there’s not enough at stake on Earth with respect to these complex moral considerations, consider that there are people who want to ‘help humanity flourish among the stars.’ They hope to colonize the galaxies, ensuring that trillions upon trillions of people have the opportunity to exist. Folks like Elon Musk are already eyeing nearby planets. But Musk’s dream is my worst nightmare. Life on Earth is difficult enough—if we can’t effectively reduce the suffering that happens on Earth, why multiply it across the universe?”
It is this that would be, as you put it, the “extreme moral catastrophe.” Not factory farming on Earth alone.
You can read more about this from me here: https://www.forbes.com/sites/briankateman/2022/09/06/optimistic-longtermism-is-terrible-for-animals/amp/.
Thank you for your comment!
Thanks for your comment. Are there any actions the EA community can take to help the AI Safety community prioritize animal welfare and take more seriously the idea that there are S-risks downstream of human values?
Interesting. Thanks for your comments.
In the meantime, I would treat the constitution component in the piece as a metaphor to illustrate the idea of lock-in for a general audience.
I’d certainly write the constitution differently (why doesn’t it mention welfare for insects, for example?), but I take it more to mean that numerous amendments were required to make it moral, and still many more are needed.
Interesting post. Just want to chime in with a comment that I think you’re overconfident in cell-cultured meat (though I don’t blame you; there’s been a lot of boosterism). It’s possible it won’t reach price parity or become a real contender in the marketplace. We have to try, and time will tell.
Where have you been all my life? We are thinking similarly, and I’m glad you are raising these topics and added nuance/wisdom/data to them. I wrote a related piece for Fast Company a while back: https://www.fastcompany.com/90599561/once-we-have-lab-grown-meat-will-we-still-need-animal-advocacy.
Link updated!