One common objection to EA/utilitarianism is that it appears to demand a lot from us and can lead to seemingly paradoxical conclusions. For example, why should we prioritise our own children over kids in other parts of the world? And if our work allows us to do a lot of good, shouldn't we push ourselves really hard, to just below the point where we would burn out? One might reply that the total expected utility would be greater if everyone worked reasonable hours and looked after their own families, but this argument seems a bit strained, since it would require us to do an expected value calculation before deciding whether to take care of ourselves or our families instead of working on our cause area. In my own life, I try to think of it as using different moral theories depending on the type and scale of the problem I am considering, something that is analogous to what we do in physics.
In physics it is often useful to know the scale of a problem before deciding which theory to use. For example, say that we want to calculate the motion of a two-body interacting system. If the two bodies are fundamental particles, for example electrons, Quantum Electrodynamics (QED) would be best suited, since it describes the strongest interaction at that physical scale. If instead we knew that the bodies were planets, consisting of a practically infinite number of particles, the gravitational force would be strongest and we should instead use General Relativity (GR). GR and QED are both valid at both scales - there is nothing stopping you from calculating the gravitational force between two electrons, but the result won't be very useful, since the gravitational force is dwarfed by the electromagnetic one. In a simplified way, we could draw a diagram like the one below and use it to classify the different forces by the scale at which they are most useful.
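(As a quick numerical aside, to put a figure on "dwarfed": here is a minimal sketch in Python, using the standard physical constants bundled with scipy, comparing the two forces between a pair of electrons. Since both forces fall off as 1/r², their ratio is independent of separation.)

```python
# Back-of-the-envelope check: how badly is gravity "dwarfed" for two electrons?
# Both forces scale as 1/r^2, so the ratio does not depend on the separation r.
from math import pi
from scipy.constants import e, m_e, G, epsilon_0

coulomb_strength = e**2 / (4 * pi * epsilon_0)  # numerator of F_C = e^2 / (4*pi*eps0*r^2)
gravity_strength = G * m_e**2                   # numerator of F_G = G * m_e^2 / r^2

print(f"F_Coulomb / F_gravity = {coulomb_strength / gravity_strength:.2e}")
# ~4.2e42: the electromagnetic force wins by roughly 42 orders of magnitude.
```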
Ultimately, most physicists believe that all forces are just different manifestations of a single fundamental force that can be described by a "theory of everything". Until such a theory is found, however, we are happy to use different laws at different scales, and someone specialising in GR wouldn't claim to know more about small-scale interactions than someone specialising in QED.
Similarly, I think of the different moral theories as being applicable at different scales according to the following diagram:
Here are a few examples of where I think these moral theories are best applied:
a) Virtue ethics: Spending time with friends and family, self-care. Following the virtues of "being a good friend" or "being a good family member" seems more intuitive than doing an explicit calculation to show how this will lead to existential risk reduction.
b) Deontology/contractualism: Civil and human rights on a national/international level. It would be very strange to consider a society without the inalienable and individual rights to free expression, freedom of love, or fair trials, no matter the expected value in each individual case.
c) Utilitarianism: Future of humanity. On the global scale, aggregated over billions of lives, it makes sense to use utilitarianism to decide what to do based on a calculation of how many lives can be saved by different interventions, since any personal or national obligations can be averaged out on this scale.
The EA position is that we should focus more resources on trying to solve global-scale problems, which I think is correct. However, as humans we also live in a local and national world, and we need moral theories to guide our actions in our daily lives. Just because utilitarianism is a great tool at the global scale does not mean that it is the best theory at smaller scales as well. Ultimately, we will probably find a "moral theory of everything" which works at every level, but until then I think we should see the competing moral theories as each being useful over a certain range in space and time.
I like the general thrust of your argument and would like to point out that within moral philosophy there is already an (in my view) satisfactory way to incorporate judgements associated with deontology and virtue ethics within a utilitarian framework—by going from “single-level utilitarianism” to “multi-level utilitarianism”:
I'm currently writing a text on this topic and will copy an excerpt here:
"Utilitarians believe that their moral theory is the appropriate standard of moral rightness, in that it specifies what makes an act (or rule, policy, etc) right or wrong. However, as Henry Sidgwick noted, “it is not necessary that the end which gives the criterion of rightness should always be the end at which we consciously aim”.
Most, if not all, utilitarians discourage the use of utilitarianism as a decision procedure to guide all their everyday actions. Using utilitarianism as a decision procedure means always calculating the expected consequences of our day-to-day actions in an attempt to deliberately promote overall wellbeing. For example, we might pick what breakfast cereal to buy at the grocery store by trying to determine which one best contributes to overall wellbeing. To do so would be to follow single-level utilitarianism, which treats the utilitarian theory as both a standard of moral rightness and a decision procedure. But using such a decision procedure for all our decisions is a bad and fruitless idea, which explains why almost no one has ever defended it. Jeremy Bentham rejected it, writing that “it is not to be expected that this process [of calculating expected consequences] should be strictly pursued previously to every moral judgment.” Deliberately calculating the expected consequences of our actions is error-prone and takes a lot of time. Thus, we have reason to think that following single-level utilitarianism would itself not lead to the best consequences, which is why the theory is often criticized as “self-defeating”.
For these reasons, many advocates of utilitarianism have instead argued for multi-level utilitarianism, which is defined as follows:
Multi-level utilitarianism is the view that, in most situations, individuals should follow tried-and-tested heuristics rather than trying to calculate which action will produce the most wellbeing.
Multi-level utilitarianism implies that we should, under most circumstances, follow a set of simple moral heuristics—do not lie, steal, kill, etc.—knowing that this will lead to the best outcomes overall. To this end, we should use the commonsense moral norms and laws of our society as rules of thumb to guide our actions. Following these norms and laws will save time and usually lead to good outcomes, in part because they are based on society’s experience of what promotes individual wellbeing. The fact that honesty, integrity, keeping promises and sticking to the law have generally good consequences explains why in practice utilitarians value such things highly and use them to guide their everyday actions."
Thanks, Darius. I would advise the OP to read up on this literature; as stated above, it has been discussed extensively.
I like Aaron's recent reply here: https://forum.effectivealtruism.org/posts/mG6mckPHAisEbtKv5/should-you-familiarize-yourself-with-the-literature-before#gKYcFEXGtQZLmjzM7
I am glad this post was made, and glad for Darius's comment.
Thanks, Darius! I agree that this is probably one of the strongest arguments against my model; what I gather from your reply is that we don't need other moral theories, since everything can already be explained by utilitarianism.
I agree with you that some sort of consequentialist moral theory probably underpins the other moral theories (why should we be virtuous if it didn't have good consequences?). However, I think this is not giving enough credit to those theories, since if their moral prescriptions are correct according to utilitarianism, the theories themselves should be considered correct.
To take another example from physics: of course we know that quantum mechanics is more fundamental than classical mechanics (classical mechanics is the large-scale limit of quantum mechanics). This doesn't mean that people consider classical mechanics "just quantum mechanics with some heuristics" - it is considered a field in its own right. The reason is that at the physical scale at which classical mechanics becomes useful, quantum mechanics becomes too cumbersome to use. Students who are asked to calculate the motion of a ball down an inclined plane don't start with quantum mechanics; they go directly to classical mechanics, which is vastly more useful for solving problems at that scale.
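(For concreteness, here is the whole classical calculation, in the idealised case of a body sliding down a frictionless incline - an idealisation chosen purely for illustration; a ball rolling without slipping only adds a constant factor of 5/7:)

```latex
% Body of mass m on a frictionless incline at angle \theta:
% the normal force cancels mg\cos\theta; only mg\sin\theta acts along the plane.
ma = mg\sin\theta \quad \Longrightarrow \quad a = g\sin\theta
```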
My argument is that at certain scales, virtue ethics and deontology should be considered emergent moral theories, derived either from utilitarianism or from some other theory. But this doesn't mean that they are "just utilitarianism with some heuristics". They should be studied and practiced in their own right, since the insights they give are more useful for how to live our daily lives or how to structure a society. If utilitarianism + heuristics just is virtue ethics at some scale, why not call it virtue ethics and use utilitarianism to justify why it is correct at that scale?
While I see the intuitive appeal of this idea, it honestly seems a bit ad hoc. The physics analogy is interesting, yes, but we should be careful not to mistake the practical usefulness of local-level deontology or virtue ethics for an actual normative difference between levels. If we just accept the local heuristics as useful for social cohesion etc. without critically assessing whether we could do better, we run the risk of not actually improving sentient experience - just rationalizing standards that mainly exist because they were evolutionarily expedient or because they maintain some power structure.
To be more specific, it's very much an open question whether trying to be a "good" friend/family member, in ways that significantly privilege your friends/family over others, actually achieves more good in the long run. It seems very unlikely to me that, say, (A) buying or making a few hundred dollars' worth of presents for people during holidays (reciprocated with similar presents, many of which in my experience honestly haven't been worth the money, even though I appreciate the thought) makes the world a better place compared with (B) spending that money/time on the seemingly cold utilitarian choice.
The usual objection to this is that B weakens social bonds or makes people trust you less. But: (1) from the perspective of the people or animals you'd be helping by choosing B, those bonds and small degrees of weakened trust would probably seem paltry and frivolous by comparison to their suffering. There also doesn't seem to be much robust evidence supporting this claim anyway; it's just an intuition I've seen repeated without justification. (2) It's possible that this is one of several social norms that we can change over time by challenging the assumption that it's eternal: in the short run, perhaps people think of you as cold or weird, but if enough people follow suit, refusing to waste money on holiday trivialities could become normal. Omnivores have argued that veganism threatens social bonds and the (particularly American) culture of eating meat together; cf. this article. I think that argument is self-evidently weak in the face of great animal suffering, so analogously it isn't a stretch to suppose that deontological norms we currently consider necessary for social cohesion are disposable, if we challenge them.
I think this is a great and really sensible way to think about things. It's really natural, and the physics analogy provides some intuition for why that is. A question: have you thought about how this way of thinking is in some sense "baked into" certain moral frameworks? I'm thinking specifically of rule utilitarianism: rules can apply at different scales. It seems to me that at the personal level, rule utilitarianism is basically instantiated as virtue ethics.
This model makes a lot of sense, and feels helpful for improving how I prioritize (and justify how I prioritize) actions.
What you present is more a descriptive view of our moral intuitions than a set of prescriptive guidelines. But it does seem related to which heuristics we should act on, depending on the scale of the decision at hand.
Thanks! I have only recently started thinking about it in terms of scale, so I am mainly basing it on my own intuitions (I am also not a philosopher, so I'm not sure I would be able to formalize the arguments). However, if I were to try to make a prescriptive version, I would probably start by saying that we have obligations to each other (e.g. parents have obligations to their children), and that at each "scale" or population size, some of these obligations cancel out (a state doesn't have obligations to a particular child but to its children in general). At the largest scale, the only obligation left is to preserve life itself, which is why utilitarianism works so well there.
Ok, that's a good start. Let me challenge your view a bit. In this framework, how do you choose between investing $100 in your family and donating it to AMF?
I would say that depends on the circumstances. If, for example, you are a single parent and need the $100 for medicine for your children (even for a non-life-threatening condition), I would say that you need to fulfill that obligation before you consider donating to AMF.
I agree with your observation about scale. It's interesting to think about where the idea of parents having obligations to their children - or of individuals having a special obligation to their community members/fellow citizens - comes from. I think these might come partially from a notion of neglectedness. My child is not more important, morally, than any other, but I can assume most other children already have parents looking out for them, so my child is counterfactually the most neglected cause (and the most tractable cause among children I could care for).
(And sadly, it's not true that we can assume most other children already have parents looking out for them. Or at least, for your argument to work, you need to replace "most other children" with "all other children".)
Neglectedness is usually taken to be the amount of resources going into a problem. You can measure the resources by "parenting time" (what about orphans, by the way?) but in many cases it is not the most important resource.