I.

Philosopher Amanda Askell questions the practice of moral offsetting.

Offsetting is where you compensate for a bad thing by doing a good thing, then consider yourself even. For example, an environmentalist takes a carbon-belching plane flight, then pays to clean up the same amount of carbon she released.

This can be pretty attractive. If you're a committed environmentalist who also really wants to take a vacation to Europe, you could be pretty miserable not knowing whether your vacation is worth the cost to the planet. But if you can calculate that it would take about $70 to clean up more carbon than you release, that's such a small addition to the overall cost of the trip that you can sigh with relief and take the flight guilt-free.

Or use offsets instead of becoming a vegetarian. A typical person's meat consumption averages 0.3 cows and 40 chickens per year. Animal Charity Evaluators believes that donating to a top animal charity can spare this many animals' lives for less than $5; others note this number is totally wrong and made up. But it's hard to believe charities could be less cost-effective than just literally buying the animals, which would cap the price of offsetting a year's meat consumption at around $500. Would I pay between $5 and $500 a year not to have to be a vegetarian? You bet.
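For concreteness, here is the back-of-the-envelope arithmetic behind those bounds as a minimal sketch. The per-animal prices are illustrative assumptions of mine, chosen only to land in the ~$500 ballpark; they are not figures from Askell or Animal Charity Evaluators.

```python
# A minimal sketch of the offsetting arithmetic above. The per-animal
# prices are illustrative assumptions, not sourced figures.
COWS_PER_YEAR = 0.3
CHICKENS_PER_YEAR = 40

ASSUMED_PRICE_PER_COW = 1000     # hypothetical market price, USD
ASSUMED_PRICE_PER_CHICKEN = 5    # hypothetical market price, USD

# Lower bound: Animal Charity Evaluators' (disputed) estimate.
lower_bound = 5

# Upper bound: a charity should never be less cost-effective than
# literally buying the animals yourself.
upper_bound = (COWS_PER_YEAR * ASSUMED_PRICE_PER_COW
               + CHICKENS_PER_YEAR * ASSUMED_PRICE_PER_CHICKEN)

print(f"Offsetting a year of meat-eating: ${lower_bound} to ${upper_bound:.0f}")
```

Whatever the true numbers, the structure of the argument is just this pair of bounds: a contested charity estimate on the low end, and the literal purchase price of the animals on the high end.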

Askell is uncomfortable with this concept for the same reasons I was when I first heard about it. Can we kill an enemy, then offset it by donating enough money to save somebody else's life? Smash other people's property, then give someone else enough money to buy different property? Can Bill Gates nuke entire cities for fun, then build better cities somewhere else?

She concludes:

There are a few different things that the harm-based ethicist could say in response to this, however. First, they could point out that as the immorality of the action increases, it becomes far less likely that performing this action and morally offsetting is the best option available, even out of those options that actualists would deem morally relevant. Second, it is very harmful to undermine social norms where people don’t behave immorally and compensate for it (imagine how terrible it would be to live in a world where this was acceptable). Third, it is – in expectation – bad to become the kind of person who offsets their moral harms. Such a person will usually have a much worse expected impact on the world than someone who strives to be as moral as they can be.

I think that these are compelling reasons to think that, in the actual world, we are – at best – morally permitted to offset trivial immoral actions, but that more serious immoral actions are almost always not the sorts of things we can morally offset. But I also think that the fact that these arguments all depend on contingent features of the world should be concerning to those who defend harm-based views in ethics.

I think Askell gets the right answer here – you can offset carbon emissions but not city-nuking. And I think her reasoning sort of touches on some of the important considerations. But I also think there’s a much more elegant theory that gives clear answers to these kinds of questions, and which relieves some of my previous doubts about the offsetting idea.

II.

Everything below is taken from vague concepts philosophers talk about all the time, but which I can't find a single good online explanation of. I deserve no credit for anything good about these ideas, but I can't avoid blame for any mistakes or confusions in the phrasing. That having been said: consider the distinction between axiology, morality, and law.

Axiology is the study of what’s good. If you want to get all reductive, think of it as comparing the values of world-states. A world-state where everybody is happy seems better than a world-state where everybody is sad. A world-state with lots of beautiful art is better than a world-state containing only featureless concrete cubes. Maybe some people think a world-state full of people living in harmony with nature is better than a world-state full of gleaming domed cities, and other people believe the opposite; when they debate the point, they’re debating axiology.

Morality is the study of what the right thing to do is. If someone says “don’t murder”, they’re making a moral commandment. If someone says “Pirating music is wrong”, they’re making a moral claim. Maybe some people believe you should pull the lever on the trolley problem, and other people believe you shouldn’t; when they debate the point, they’re debating morality.

(this definition elides a complicated distinction between individual conscience and social pressure; fixing that would be really hard and I’m going to keep eliding it)

Law is – oh, come on, you know this one. If someone says “Don’t go above the speed limit, there’s a cop car behind that corner”, that’s law. If someone says “my state doesn’t allow recreational marijuana, but it will next year”, that’s law too. Maybe some people believe that zoning restrictions should ban skyscrapers in historic areas, and other people believe they shouldn’t; when they debate the point, they’re debating law.

These three concepts are pretty similar; they’re all about some vague sense of what is or isn’t desirable. But most societies stop short of making them exactly the same. Only the purest act-utilitarianesque consequentialists say that axiology exactly equals morality, and I’m not sure there is anybody quite that pure. And only the harshest of Puritans try to legislate the state law to be exactly identical to the moral one. To bridge the whole distance – to directly connect axiology to law and make it illegal to do anything other than the most utility-maximizing action at any given time – is such a mind-bogglingly bad idea that I don’t think anyone’s even considered it in all of human history.

These concepts stay separate because they each make different compromises between goodness, implementation, and coordination.

One example: axiology can't distinguish between murdering your annoying neighbor and failing to donate the money that would save a child dying of parasitic worms in Uganda. To axiology, they're both just one life snuffed out of the world before its time. If you forced it to draw some distinction, it would probably decide that saving the child dying of parasitic worms was more important, since they have a longer potential future lifespan.

But morality absolutely draws this distinction: it says not-murdering is obligatory, but donating money to Uganda is supererogatory. Even utilitarians who deny this distinction in principle will use it in everyday life: if their friend was considering not donating money, they would be a little upset; if their friend was considering murder, they would be horrified. If they themselves forgot to donate money, they’d feel a little bad; if they committed murder in the heat of passion, they’d feel awful.

Another example: Donating 10% of your income to charity is a moral rule. Axiology says “Why not donate all of it?”, Law says “You won’t get in trouble even if you don’t donate any of it”, but at the moral level we set a clear and practical rule that meshes with our motivational system and makes the donation happen.

Another example: “Don’t have sex with someone who isn’t mature enough to consent” is a good moral rule. But it doesn’t make a good legal rule; we don’t trust police officers and judges to fairly determine whether someone’s mature enough in each individual case. A society which enshrined this rule in law would be one where you were afraid to have sex with anyone at all – because no matter what your partner’s maturity level, some police officer might say your partner seemed immature to them and drag you away. On the other hand, elites could have sex with arbitrarily young people, expecting police and judges to take their side.

So the state replaces this moral rule with the legal rule "don't have sex with anyone below age 18". Everyone knows this rule doesn't perfectly capture reality – there's no significant difference between 17.99-year-olds and 18.01-year-olds. It's a useful hack that waters down the moral rule in order to make it more implementable. Realistically it gets things wrong in both directions: sometimes it will incorrectly tell people not to have sex with perfectly mature 17.99-year-olds, and other times it will incorrectly excuse sex with immature 18.01-year-olds. But this beats the alternative, where police have the power to break up any relationship they don't like, and where everyone has to argue with everybody else about whether their relationships are okay or not.

A final example: axiology tells us a world without alcohol would be better than our current world: ending alcoholism could avert millions of deaths, illnesses, crimes, and abusive relationships. Morality only tells us that we should be careful drinking and stop if we find ourselves becoming alcoholic or ruining our relationships. And the law protests that it tried banning alcohol once, but it turned out to be unenforceable and gave too many new opportunities to organized crime, so it’s going to stay out of this one except to say you shouldn’t drink and drive.

So fundamentally, what is the difference between axiology, morality, and law?

Axiology is just our beliefs about what is good. If you defy axiology, you make the world worse.

At least from a rule-utilitarianesque perspective, morality is an attempt to triage the infinite demands of axiology, in order to make them implementable by specific people living in specific communities. It makes assumptions like “people have limited ability to predict the outcome of their actions”, “people are only going to do a certain amount and then get tired”, and “people do better with bright-line rules than with vague gradients of goodness”. It also admits that it’s important that everyone living in a community is on at least kind of the same page morally, both in order to create social pressure to follow the rules, and in order to build the social trust that allows the community to keep functioning. If you defy morality, you still make the world worse. And you feel guilty. And you betray the social trust that lets your community function smoothly. And you get ostracized as a bad person.

Law is an attempt to formalize the complicated demands of morality, in order to make them implementable by a state with police officers and law courts. It makes assumptions like “people’s vague intuitive moral judgments can sometimes give different results on the same case”, “sometimes police officers and legislators are corrupt or wrong”, and “we need to balance the benefits of laws against the cost of enforcing them”. It also tries to avert civil disorder or civil war by assuring everybody that it’s in their best interests to appeal to a fair universal law code rather than try to solve their disagreements directly. If you defy law, you still get all the problems with defying axiology and morality. And you make your country less peaceful and stable. And you go to jail.

In a healthy situation, each of these systems reinforces and promotes the other two. Morality helps you implement axiology from your limited human perspective, but also helps prevent you from feeling guilty for not being God and not being able to save everybody. The law helps enforce the most important moral and axiological rules but also leaves people free enough to use their own best judgment on how to pursue the others. And axiology and morality help resolve disputes about what the law should be, and then lend the support of the community, the church, and the individual conscience in keeping people law-abiding.

In these healthy situations, the universally-agreed priority is that law trumps morality, and morality trumps axiology. First, because you can’t keep your obligations to your community from jail, and you can’t work to make the world a better place when you’re a universally-loathed social outcast. But also, because you can’t work to build strong communities and relationships in the middle of a civil war, and you can’t work to make the world a better place from within a low-trust defect-defect equilibrium. But also, because in a just society, axiology wants you to be moral (because morality is just a more-effective implementation of axiology), and morality wants you to be law-abiding (because law is just a more-effective way of coordinating morality). So first you do your legal duty, then your moral duty, and then if you have energy left over, you try to make the world a better place.

(Katja Grace has some really good writing on this kind of stuff)

In unhealthy situations, you can get all sorts of weird conflicts. Most “moral dilemmas” are philosophers trying to create perverse situations where axiology and morality give opposite answers. For example, the fat man version of the trolley problem sets axiology (“it’s obviously better to have a world where one person dies than a world where five people die”) against morality (“it’s a useful rule that people generally shouldn’t push other people to their deaths”). And when morality and state law disagree, you get various acts of civil disobedience, from people hiding Jews from the Nazis all the way down to Kentucky clerks refusing to perform gay marriages.

I don’t have any special insight into these. My intuition (the most authoritative of sources! never wrong!) says that we should be very careful reversing the usual law-trumps-morality-trumps-axiology order, since the whole point of having more than one system is that we expect the systems to disagree and we want to suppress those disagreements in order to solve important implementation and coordination problems. But I also can’t deny that for enough gain, I’d reverse the order in a heartbeat. If someone told me that by breaking a promise to my friend (morality) I could cure all cancer forever (axiology), then f@$k my friend, and f@$k whatever social trust or community cohesion would be lost by the transaction.

III.

With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality.

Emitting carbon doesn’t violate any moral law at all (in the stricter sense of morality used above). It does make the world a worse place. But there’s no unspoken social agreement not to do it, it doesn’t violate any codes, nobody’s going to lose trust in you because of it, you’re not making the community any less cohesive. If you make the world a worse place, it’s perfectly fine to compensate by making the world a better place. So pay to clean up some carbon, or donate to help children in Uganda with parasitic worms, or whatever.

Eating meat doesn’t violate any moral laws either. Again, it makes the world a worse place. But there aren’t any bonds of trust between humans and animals, nobody’s expecting you not to eat meat, there aren’t any written or unwritten codes saying you shouldn’t. So eat the meat and offset it by making the world better in some other way.

(the strongest counterargument I can think of here is that you’re not betraying animals, but you might be betraying your fellow animals-rights-activists! That is, if they’re working to establish a social norm against meat-eating, the sort of thing where being spotted with a cheeseburger on your plate produces the same level of horror as being spotted holding a bloody knife above a dead body, then your meat-eating is interfering with their ability to establish that norm, and this is a problem that requires more than just offsetting the cost of the meat involved)

Murdering someone does violate a moral law. The problem with murder isn’t just that it creates a world in which one extra person is dead. If that’s all we cared about, murdering would be no worse than failing to donate money to cure tropical diseases, which also kills people.

(and the problem isn’t just that it has some knock-on effects in terms of making people afraid of crime, or decreasing the level of social trust by 23.5 social-trustons, or whatever. If that were all, you could do what 90% of you are probably already thinking – “Just as we’re offsetting the murder by donating enough money to hospitals to save one extra life, couldn’t we offset the social costs by donating enough money to community centers to create 23.5 extra social-trustons?” There’s probably something like that which would work, but along with everything else we’re crossing a Schelling fence, breaking rules, and weakening the whole moral edifice. The cost isn’t infinite, but it’s pretty hard to calculate. If we’re positing some ridiculous offset that obviously outweighs any possible cost – maybe go back to the example of curing all cancer forever – then whatever, go ahead. If it’s anything less than that, be careful. I like the metaphor of these three systems being on three separate tiers – rather than two Morality Points being worth one Axiology Point, or whatever – exactly because we don’t really know how to interconvert them)

This is more precise than Askell’s claim that we can offset “trivial immoral actions” but not “more serious” ones. For example, suppose I built an entire power plant that emitted one million tons of carbon per year. Sounds pretty serious! But if I offset that with environmental donations or projects that prevented 1.1 million tons of carbon somewhere else, I can’t imagine anyone having a problem with it.

On the other hand, consider spitting in a stranger’s face. In the grand scheme of things, this isn’t so serious – certainly not as serious as emitting a million tons of carbon. But I would feel uncomfortable offsetting this with a donation to my local Prevent Others From Spitting In Strangers’ Faces fund, even if the fund worked.

Askell gave a talk where she used the example of giving your sister a paper cut, and then offsetting that by devoting your entire life to helping the world and working for justice and saving literally thousands of people. Pretty much everyone agrees that’s okay. I guess I agree it’s okay. Heck, I guess I would agree that murdering someone in order to cure cancer forever would be okay. But now we’re just getting into the thing where you bulldoze through moral uncertainty by making the numbers so big that it’s impossible to be uncertain about them. Sure. You can do that. I’d be less happy about giving my sister a paper cut, and then offsetting by preventing one paper cut somewhere else. But that seems to be the best analogy to the “emit one ton of carbon, prevent one ton of carbon” offsetting we’ve been talking about elsewhere.

I realize all this is sort of hand-wavy – more of a “here’s one possible way we could look at these things” rather than “here’s something I have a lot of evidence is true”. But everyone – you, me, Amanda Askell, society – seems to want a system that tells us to offset carbon but not murder, and when we find such a system I think it’s worth taking it seriously.
