Strategy analyst at Innosight in Boston.
I graduated from Dartmouth College last year with a B.A. in philosophy and have been interested in EA for about 4 years. I currently work for a long-term strategy consultancy. I also have been serving on the Board of Directors of Positive Tracks, a national social change nonprofit, for five years. Within EA, I'm particularly interested in ethical theory and animal welfare.
Reach out to me if your organization is looking for a hard-working and passionate young person with analytical skills and a philosophy background.
I love to chat all things EA!
Thanks for the post! I agree that identifying those universal maxims or norms seems impossibly difficult given the breadth of humanity's views on morality. In fact, much of post-Kantian deontological thinking can be described as an attempt to answer the very question you ask in this post. I'm also not a trained philosopher (and I lean more towards consequentialism myself), but I'll share a few notes that might help:
TLDR: I agree that deontology has serious epistemic problems, and in practice, deontologists might be more prone to ignoring people unlike themselves (because they are far away or because they have different views). However, much work has been done to demystify non-consequentialist theories and make them actionable - it's just highly complex. In general, I tend to agree with Derek Parfit when he argues that all moral theorists are "climbing the same mountain on different sides" in their search for moral truth.
Great post - I think this is a really important meta-topic within EA that doesn't get enough airtime. It might also be worth considering the "hidden zero problem" coined by Mark Budolfson and Dean Spears here. The thrust of their argument is that if a charity is funded by the ultra-rich or their foundations, small donations may have measurably 0 impact.
As an example: suppose NGO X wants $10M in funding for 2022, and Foundation Y has been NGO X's largest donor for a few years running. If small donors give NGO X $8M in 2022, Foundation Y will give $2M and fully fund it to $10M; but if small donors give $9M, Foundation Y will give only $1M and still fully fund it to $10M. The extra $1M of small donations therefore had zero impact beyond saving Foundation Y some cash.
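The funging dynamic in this example can be sketched with a toy model (all numbers are from the hypothetical above; the function names are my own, not Budolfson and Spears'):

```python
def foundation_grant(budget_target, small_donations):
    """A top-up funder covers whatever gap remains after small donors give."""
    return max(0, budget_target - small_donations)

def marginal_impact(budget_target, small_donations, extra=1):
    """Extra funding the NGO actually receives from one more unit ($1M) of small donations."""
    before = small_donations + foundation_grant(budget_target, small_donations)
    after = (small_donations + extra) + foundation_grant(budget_target, small_donations + extra)
    return after - before

# Below the $10M target, an extra $1M from small donors is fully offset
# by a smaller top-up grant, so the NGO's total doesn't move:
print(marginal_impact(10, 8))   # 0 -- the donation just saves the foundation cash
# Once small donors cover the full target themselves, the foundation gives
# nothing and additional donations actually raise the total:
print(marginal_impact(10, 10))  # 1
```

This is the sense in which some small donations "do 0 impact": they change who pays, not how much the NGO receives.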
Off the top of my head, there are a few obvious problems with the hidden-zero problem:
Regardless, Budolfson and Spears did a lot of fancy math to show the hidden zero problem is worth taking seriously in many cases, especially within EA.
All that being said, it's not clear to me how the hidden zero problem affects your claim here. On one hand, if we intentionally diversify funding sources, charities might raise their budgets and demand the same amount from big foundations. However, if these foundations see that more money is coming in from more donors, they might decide the charity/cause is no longer "neglected" and choose to reduce the size of their grant.
Would love to hear thoughts on this from people more deeply entrenched in the grant-making world...
I love this piece - super well argued. Your argument applies to virtue ethics too if you replace “RIGHTS” with any virtue claimed to be intrinsically valuable by the virtue ethicist.
Hi David, thanks for the reply. I think I just totally disagree that humanity stopped pursuing ambitious goals. Just yesterday, we generated energy with nuclear fusion. We've reduced the price of solar cells by over 100x in a few decades. Hundreds of millions of people in China/India/Africa, etc. have been lifted out of extreme poverty. There are thousands of scientists pursuing cures for cancer and dementia. I could go on...
Humanity has trillions of dollars to spend, and it goes big on video games, consumer electronics, and fast food.
But our government doesn't have trillions of dollars, and we have a ton of really important stuff to spend it on. I just think that improving education, closing the racial wealth gap, offering food stamps - heck, even building infrastructure here on Earth - are far more important. We can do multiple things at once, but we can't do everything. Every additional spend means something else has to be cut. Space exploration is near the bottom of my list of things I think our govt should spend on.
Hey Brad! I love the idea. I'm late to this comment section and many of my initial reactions were already discussed at length. That being said, here are a few ideas/questions which haven't gotten much attention:
I just wanted to say it’s really heartening for me to know that there’s so much good work going into aligning intl aid with the priorities of its recipients. As many have noted on this forum in the past few months, the potential for impact here is massive. Thank you!
If innovation really has stalled (which I’m skeptical of in the first place) it’s not because the space race is (mostly) over. There are deeply important issues on Earth for us to solve, and millions of people are innovating towards solutions to them every day. Sure, designing a tele-health or mobile banking system for people living in extreme poverty isn’t as sexy as landing on the moon, but it’s surely innovation. These types of projects may not dominate the news cycle but they represent the beginning of an alignment of research and development with the flourishing of all humans (and animals). Space exploration does not.
You say that we should aim higher than our current massive endeavors (eliminating diseases, expanding clean energy, protecting animal rights and natural habitats). But decades of work have shown that these endeavors are extremely difficult. Every marginal dollar and hour spent on these projects counts, and space exploration distracts from the urgent need for innovation in these areas.
Thanks for fleshing this out - that all makes sense to me.
Thanks for the detailed reply to the trauma case. Your delineation between various definitions of personhood is helpful for interrogating my other questions as well.
If it is the case that a "new" welfare subject can be "created" by a traumatic brain injury, then it might well be the case that new welfare subjects are created as one's life progresses. This implies that, as we age, welfare subjects effectively die and new ones are reborn. However, we don't worry about this because 1. we can't prevent it and 2. it's not clear when this happens / if it ever happens fully (perhaps there is always a hint of one's old self in one's new self, so a "new" welfare subject is never truly created).
Given that the same argument applies to non-human animals, we could reasonably assume that we can't prevent this loss and recreation of welfare subjects. Moreover, we would probably come to the same conclusions about the badness of the deaths of these animals, even if throughout their lives they exist as multiple welfare subjects that we should care about. Where it becomes morally questionable is in considering non-human animals whose lives are worse than not worth living. Then, there should be increased moral concern for factory farmed animals if we accept that: 1. their lives are worse than not worth living; 2. they instantiate different welfare subjects throughout their lives; and 3. there is something worse about 2 different subjects each suffering for 1 year than 1 subject suffering for 2 years. (Again, I don't think I accept premise 2 or 3 of this argument; I just wanted to take the hypothetical to its fullest conclusion.)
Interesting article - thanks for sharing. My main problem with it has to do with the moral psychology piece. You write that:
It's "disgusting and counterintuitive" for most people to imagine offsetting murder.
"Most of us still live in extremely carnist cultures and are bombarded with burger ads and sights of people enjoying meat next to us all the time like it is perfectly harmless."
In my opinion, these two arguments together make meat offsets a bad idea. People are opposed to murder offsets (no matter how theoretically effective they may be) because murder feels like a deeply immoral thing to do. However, most people feel that eating meat is not deeply immoral - most people do it every day. I'd imagine folks react the same way to meat offsets as they do to carbon offsets. They think, "well, I know I probably shouldn't eat so much meat / consume so much carbon, but I'm not gonna stop, so this offset makes some sense". But this is the wrong way to think about eating meat (and perhaps consuming carbon, too, but that's beside the point). We want people to feel that eating meat is immoral; we want them to feel that it's a form of killing a sentient being. And the availability of an offset trivializes the consumption.
I'm on board with your consequentialist reasoning here, but I'm worried the availability of meat offsets may cause people's moral views on animal ethics to regress.