Jasper Meyer

Analyst @ Innosight
Working (0-5 years experience)
79 · Boston, MA, USA · Joined Nov 2022

Bio

Strategy analyst at Innosight in Boston.

I graduated from Dartmouth College last year with a B.A. in philosophy and have been interested in EA for about 4 years. I currently work for a long-term strategy consultancy. I have also been serving on the Board of Directors of Positive Tracks, a national social change nonprofit, for five years. Within EA, I'm particularly interested in ethical theory and animal welfare.

How others can help me

Reach out to me if your organization is looking for a hard-working and passionate young person with analytical skills and a philosophy background. 

How I can help others

I love to chat all things EA!

Comments
24

Interesting article - thanks for sharing. My main problem with it has to do with the moral psychology piece. You write that: 

It's "disgusting and counterintuitive" for most people to imagine offsetting murder.

and 

"Most of us still live in extremely carnist cultures and are bombarded with burger ads and sights of people enjoying meat next to us all the time like it is perfectly harmless."

In my opinion, these two arguments together make meat offsets a bad idea. People are opposed to murder offsets (no matter how theoretically effective they may be) because murder feels like a deeply immoral thing to do. However, most people feel that eating meat is not deeply immoral - most people do it every day. I'd imagine folks react the same way to meat offsets as they do to carbon offsets. They think, "well, I know I probably shouldn't eat so much meat / consume so much carbon, but I'm not gonna stop, so this offset makes some sense". But this is the wrong way to think about eating meat (and perhaps consuming carbon, too, but that's beside the point). We want people to feel that eating meat is immoral; we want them to feel that it's a form of killing a sentient being. And the availability of an offset trivializes the consumption.

I'm on board with your consequentialist reasoning here, but I'm worried the availability of meat offsets may cause people's moral views on animal ethics to regress.

Thanks for the post! I agree that identifying those universal maxims or norms seems impossibly difficult given the breadth of humanity's views on morality. In fact, much of post-Kantian deontological thinking can be described as an attempt to answer the very question you ask in this post. I'm also not a trained philosopher (and I lean more towards consequentialism myself), but I'll share a few notes that might help:

  1. Most modern non-consequentialists have much more abstract moral/ethical theories than "follow the categorical imperative". For example, contractualists believe that these norms are only hypothetical and could only be agreed upon universally in an imagined scenario - for Rawls this was the original position. More modern contractualists like Tim Scanlon push this further. Scanlon summarizes his moral theory as: "An act is wrong if its performance under the circumstances would be disallowed by any set of principles for the general regulation of behaviour that no one could reasonably reject as a basis for informed, unforced, general agreement". The key word here is "reasonably". Scanlon pretty much wrote an entire book about what it means to reasonably reject a set of principles. The point is that these more modern deontological theories abstract away from our earthly moral disagreements. Many of them rely on the existence of an objective moral truth that we can strive to emulate through rational discourse. In this way, it actually doesn't matter much what people or animals care about or what their current moral customs are.
  2. Other modern deontologists are more concerned with rights and obligations. In my opinion, these moral theories are more about what we are barred from doing (don't lie, don't steal, don't violate anyone's right to privacy, etc.) than about what we should do. Regardless, if you believe in the objective existence of these rights, employing them gives a more clear-cut guide to moral action than the categorical imperative alone. These philosophers would use this reasoning to dispute your claim that deontologists are "more likely to ignore what people unlike themselves care about".
  3. You write that a "deontologist might not draw any repugnant conclusions". All moral theories have shortcomings, and I think they all draw repugnant conclusions in certain hypothetical situations. In turn, philosophers who defend these theories either explain away the problem, "band-aid" their theory, or bite the bullet. There are a bunch of problems I could mention for deontology, but perhaps the most famous is the Inquiring Murderer problem.
  4. I think it's worth noting that consequentialism has its fair share of epistemic problems. Determining the well-being or preferences or desires of others, especially when they are halfway around the world, is no easy task. Plus, even if you know what would make others' lives go better, it's often difficult to know how to bring about these outcomes.

TLDR: I agree that deontology has serious epistemic problems, and in practice, deontologists might be more prone to ignoring people unlike themselves (because they are far away or because they have different views). However, much work has been done to demystify non-consequentialist theories and make them actionable - it's just highly complex. In general, I tend to agree with Derek Parfit when he argues that all moral theorists are "climbing the same mountain on different sides" in their search for moral truth.

Great post - I think this is a really important meta-topic within EA that doesn't get enough airtime. It might also be worth considering the "hidden zero problem" coined by Mark Budolfson and Dean Spears here. The thrust of their argument is that if a charity is funded by the ultra-rich or their foundations, small donations may have essentially zero impact.

As an example: suppose NGO X wants $10M in funding for 2022, and Foundation Y has been NGO X's largest donor for a few years running. If small donors give NGO X $9M in 2022, Foundation Y will give $1M to fully fund it to $10M; but if small donors give only $8M, Foundation Y will give $1M more and still fully fund it to $10M. Either way NGO X ends up with $10M, so the marginal $1M of small donations had no impact other than saving Foundation Y some cash.
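To make that arithmetic concrete, here's a minimal sketch of the top-up dynamic (in Python, with my own made-up numbers and function names - a toy model under the assumptions of my example above, not Budolfson and Spears' actual model):

```python
# Toy model of the "hidden zero" example above. All names and figures
# are illustrative assumptions, not Budolfson and Spears' formal model.

BUDGET_GOAL = 10_000_000  # NGO X's fundraising target for the year


def foundation_gift(small_donations: float) -> float:
    """Foundation Y tops NGO X up to its goal, whatever small donors give."""
    return max(BUDGET_GOAL - small_donations, 0.0)


def total_funding(small_donations: float) -> float:
    return small_donations + foundation_gift(small_donations)


# Compare two worlds: small donors give $8M vs. $9M.
low, high = 8_000_000.0, 9_000_000.0

print(total_funding(low))    # 10000000.0 (foundation gives $2M)
print(total_funding(high))   # 10000000.0 (foundation gives $1M)
print(total_funding(high) - total_funding(low))  # 0.0 -- the extra $1M changed nothing
```

On this model, the marginal small donation only changes NGO X's funding once small donations exceed the goal (or the foundation stops topping up) - which is exactly the crowding-out worry.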

Off the top of my head, there are a few obvious objections to the hidden-zero problem:

  1. Foundations having more money isn't necessarily a bad thing, especially if they give their assets away relatively quickly and effectively. 
  2. How are we supposed to know how much a certain real foundation like Open Philanthropy plans to give certain organizations?
  3. Many charities don't have such cut-and-dried budgets and fundraising goals. E.g., if GiveDirectly gets more money in 2022, it will simply give away more money by expanding the number of recipients and/or its geographical operations.

Regardless, Budolfson and Spears did a lot of fancy math to show the hidden zero problem is worth taking seriously in many cases, especially within EA.

All that being said, it's not clear to me how the hidden zero problem affects your claim here. On one hand, if we intentionally diversify funding sources, charities might raise their budgets and still demand the same amount from big foundations. On the other hand, if these foundations see that more money is coming in from more donors, they might decide the charity/cause is no longer "neglected" and choose to reduce the size of their grant.

Would love to hear thoughts on this from people more deeply entrenched in the grant-making world...

Okay, I see now. I read that as "one-tenth of the greatest frauds", not one of the ten greatest frauds.

I'm on board with your lack-of-guardrails argument against utilitarianism. I hope arguments like the one made in this post help to construct them so we don't end up with another catastrophe. 

I disagree that I argue against a strawman. The media's coverage of Bankman-Fried frequently implies that he used consequentialism to justify his actions. This, in turn, implies that consequentialism endorses fraud so long as you give away your money. Like I said, the arguments in the post are not revolutionary, but I do think they are important. 

You give no evidence for your claim that hardcore utilitarians commit 1/10 of the "greatest frauds". I struggle to even engage with this claim because it seems so speculative. But I will say that I agree that utilitarianism has been (incorrectly) used to justify harm. As I stated: 

"the EA movement is broadly consequentialist, so we should examine our own theoretical endorsements under a broadly consequentialist framework. If we determine that publicly advocating consequentialism directly causes many people to act immorally "in the name of consequentialism", we should either 1. change our messaging or 2. stop advocating consequentialism even if it's still what we truly believe"

Part of my motivation for making this post was helping consequentialists think about our actions - specifically those around the idea of earning to give. In other words, the post is intended to clarify some "ethical guardrails" within a consequentialist framework.

That's a very fair point - unimaginable is the wrong word. I guess I'll say I find it curious.

To use a stronger example, suppose a dictator spends all day violating the personal rights of her subjects and by doing so increases overall well-being. I find it curious to believe she's acting morally. You don't need to believe in the intrinsic badness of rights violations to hold this point of view. You just have to believe that objective moral truth cannot be fully captured using a single, tidy theory. Moral/ethical life is complex, and I think that even if you are committed to one paradigm, you still ought to occasionally draw from other theories/thinkers to inform your moral/ethical decision making.

This agrees with what you said in your first comment: "We need many other areas of study and theory to guide in specific areas." As long as this multifaceted approach is at least a small part of your overall theory, I can definitely imagine holding it, even if I don't agree.

Thanks for the insight. Fortunately, you don't have to agree with this disclaimer in my post for the rest of the argument to remain sound. 

That being said, I find it perfectly reasonable for one's actions to be primarily (or even almost entirely) guided by consequentialist reasoning. However, I cannot understand never considering reasons stemming from deontology or virtue ethics. For example, it's impossible for me to imagine condemning a gross rights violation purely based on its consequences without even considering that perhaps violating personal rights has some intrinsic disvalue.

Ecomodernists are more meliorist, while effective altruists are more longtermist.

Admittedly, "meliorism" is a new concept for me, but I'm confused how it is in conflict with longtermism. Aren't most EA-proposed solutions to longtermist problems based on human technological/social progress?

Yes, exactly. One might even wonder whether, because a GiveWell recommendation generates SO much funding, some on-the-ground charities have conformed to GiveWell's standards/eval criteria to the point that we might consider GiveWell an essential part of their operating model. (I don't have the info/experience to support this claim - just interesting food for thought...)
