
Cross-posted from my personal blog.

Here are two premises I firmly believe:

Premise 1: Normatively, psychological harms[1] matter.

Premise 2: Descriptively, an individual's ideology[2] will change her affective response to events.

From these, there arises a dilemma: who is to blame when psychological harm to an individual arises as a result of the victim's ideology? In this post, I explore this dilemma by:

First, sketching the relationship between emotion and ideology.

Second, proposing two extreme approaches to it and rejecting each.

Finally, proposing and exploring a middle-ground framework for addressing harms resulting from affective ideologies.

Affective Ideologies

The history of humanity is littered with conflicts driven by ideology, with emotion acting as a bridge from belief to action. The Catholics and Lutherans believed the Anabaptists to be a threat to their religious-political order and so tortured and executed them, doubtlessly experiencing a variety of emotions in the process. Bigotry-derived disgust drove homophobia and anti-miscegenation laws. Belief in a threat to the American way of life drives anti-immigrant hate-crimes.[3] And so on . . .

The path from belief to emotion to action is not necessarily a conscious or a unidirectional one. Nor are the above examples intended to diminish participants' moral responsibility or downplay the role that (flawed) reasoning played in them. The point is simply to note an obvious truth: for most of these atrocities, a certain set of underlying beliefs ("ideology") created an emotional response that (further?) motivated harmful action.

Of course, affective ideologies can be a positive force as well: ideologically derived hope, optimism, love, and so on can motivate the best human actions. And in a Humean sense, the path from ideology to affect appears to be more of a result of human psycho-biology rather than one of logical necessity. The psycho-biological nature of affective ideologies also means the mapping from belief to affect is messy and often inscrutable.

But the simple fact is this: adopting an ideology changes the way you react to events in the world, including emotional reactions. Observers with different ideologies will often have very different reactions to the same set of facts.

Furthermore, it is clear that many affective states are morally relevant. All else equal, most people disprefer states like sorrow, disgust, anger, annoyance, and so forth. These states are usually inherently unpleasant to experience. They might be instrumentally useful (e.g., in motivating good action), and some philosophers would argue that some minimal mix of them all is necessary for a fulfilled human life, but on the margin, and all else equal, reducing these unpleasant affective states is morally valuable.

Two Extreme Approaches

Extreme Approach One: Reject Premise 1

One seemingly easy way out of this dilemma is to simply reject the idea that psychological harms matter. However, this obviously will not do for a number of reasons.

The first is that it simply contradicts important first moral principles, like the inherent badness of pain. Anyone who has suffered from a serious mental illness can tell you that it is no less real or bad than physical pain, though it may be qualitatively different. Pain is bad, whether physical or mental.

Furthermore, the boundary between psychological and physical pain may be hard to define given that all pain signals ultimately manifest in the brain via physical interactions of nervous cells, neurotransmitters, and so on.

Finally, many moral judgments make sense only if we accept the relevance of psychological pain. In the abstract, most would condemn actions that caused no physical harm but needless mental harm. Moral judgments of things like bullying and verbal abuse also make sense only to the extent we allow psychological harms to "count."

It may be tempting to limit the set of morally relevant psychological harms to mental illnesses. But this is unworkable too. First, it gets the direction of causality wrong: surely what constitutes a mental illness is determined by normatively relevant mental harm, not the reverse. Second, categorizing such phenomena as "mental illness" has more to do with the correct approach to remedying a psychological harm than with whether that harm is real or relevant: a "mental illness" categorization means the tools of psychology and psychiatry might help remedy the harm. Third, we make no such distinction with physical pain: a punch is bad if it causes pain, not only if it rises to the level of a medical condition. I see no reason to draw such a distinction with mental pain.

So even though the more callous among us may wish to write off all affective harms, I do not see a way they can plausibly do so.

Extreme Approach Two: Naïve Accounting of Psychological Harms

The second extreme approach, and one that I see embraced a lot, is to naïvely count all affective outcomes of an action as the morally relevant consequences of that action. On this view, if Penny takes action A that causes an emotional harm H to Desmond, all of H is included in the set of morally relevant consequences of A, and Penny is therefore morally responsible for all of H.

I see several problems with this, but the biggest is this: while it's true that H is a consequence of A, it is also the result of Desmond's psychology. If H is the result of some aspect of Desmond's psychology over which he has control, then there is a prima facie case that Desmond is also (at least partially) responsible for H. Generally: if an affective harm H is the result of both non-psychological and psychological factors, it is not immediately obvious that only the non-psychological factors are morally relevant causes of H. This is especially true when the affected party has some control over their own psychology.

We apply a similar standard to physical harms. Suppose Jack cuts Kate. Suppose that Kate then neglects to bandage the wound, neglects to keep it clean, and develops an infection. Suppose Kate further refuses medical treatment and then dies. Jack clearly has some responsibility in this situation, but so does Kate. Kate was reckless in her refusal to respond appropriately to the situation, and the harm was worse as a result.[4]

So too with many psychological harms: external actions can cause such harms, but the underlying psychology of the victim also plays a role. Insofar as the victim can preventatively or remedially alter their own psychology such that the harm is reduced, they may be under an obligation to do so, and failure to do so may absolve the offender of some or all of their responsibility.[5]

As I will argue in the next section, the choice of our ideology may be one such controllable aspect of our psychology.

To be very clear, I think many people of all political orientations are guilty of this. Many on the right fear terrorism much more than the objective threat of physical harm suggests they should; under naïve accounting, many marginal anti-terrorism measures would therefore have to be evaluated partly by how much they ameliorate fears of terrorism, rational or not. On the left, the psychic harms claimed from many supposedly offensive actions are a good example: for some offended reactions, it is not clear to me that such reactions are in fact justified or reasonable and therefore ought to be counted in toto.

I think all of this has a more pernicious effect than just bad blame assignment in individual cases. Naïve accounting of psychological harms (including unwillingness to ask individuals to change their psychology) causes affectively expensive ideologies [6] to propagate, which leads to supraoptimal restrictions on physical actions. This works thus ("Naïve Assignment Framework"):

  1. Psychological harm H is the result of an external act A and the victim's psychology Ψ
  2. By supposition, Ψ cannot be a blameworthy cause of H
  3. Therefore, A is the sole morally blameworthy cause of H

As compared to a framework where we allowed Ψ to be a morally blameworthy cause of harms in at least some cases (as I argued above must be allowed), this framework for assigning moral blame will shift more blame onto people taking external actions. It is therefore supraoptimally restrictive of external actions, and suboptimally restrictive of affectively expensive ideologies. This causes those expensive ideologies to propagate more than they would under a better blame-assignment framework.[7]

More perversely, it incentivizes moral actors to adopt affectively expensive ideologies, since those ideologies are in effect subsidized by the Naïve Assignment Framework.

I think that most people would agree that the Naïve Assignment Framework makes little sense in apolitical contexts. For example, if I have an intense irrational negative reaction to some arbitrary, normally benign word, I can hardly blame strangers, whom I have not forewarned, for harming me by using it in my presence. In more intimate settings, and depending on the word, it may be reasonable for me to expect others to avoid using that word, especially if my reaction is the result of a mental illness. Even so, they may also be justified in using it if, for example, after several years I have made no attempt to cure my phobia despite being able to do so cheaply.

A Better Framework

If naïve accounting of psychological harms as described above is inappropriate, what's a better approach? The unglamorous answer is to carefully analyze the costs to changing both A and Ψ to minimize harm.

We can borrow some lessons from the economics of tort law here. Tort plaintiffs have a duty to make reasonable efforts to mitigate harms that befall them. Similarly, individuals who find themselves psychologically harmed may reasonably be expected to make reasonable efforts to mitigate their psychic harms. In tort, a plaintiff may also be comparatively negligent in causing her own injury, thus reducing the defendant's liability. So too for psychic harms: the victim may be comparatively responsible by, say, knowingly holding an unreasonable and fixable ideology that makes her excessively susceptible to psychic harms.

None of this deductively proves that in most—or even any—cases the victim of a psychic harm should bear most of the blame. Nor does it imply that the current assignment of blame in popular discourse is too biased in any particular direction. Instead, in any particular case of psychic harm, the proper assignment of blame will depend on the particular nature of the harm and the act that caused it.

Here, another tort concept is useful: the least-cost avoider. Developed by judge and legal scholar Guido Calabresi, the idea is that, under certain assumptions, the person who should bear liability for an injury in tort is the person who could have avoided the injury at the least cost. Applying the same principle to the psychic harms dilemma, we would say that the person who ought to bear the moral "liability" for a psychic harm (and thus be blamed for it) is the person who could have avoided the harm at the least cost. In some cases, this will be the external actor, but in some cases it will be the victim, who may have been able to avoid the harm by adjusting her psychology.
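To make the allocation rule concrete, here is a minimal sketch in Python. The names and cost figures are invented for illustration, and the point is only to dramatize the principle; in real cases the avoidance costs are contested moral estimates, not numbers anyone can actually compute.

```python
from dataclasses import dataclass

@dataclass
class Party:
    name: str
    avoidance_cost: float  # estimated total cost for this party to have avoided the harm
                           # (for the victim, this includes the net cost of adjusting her psychology)

def least_cost_avoider(parties: list[Party]) -> Party:
    """Assign moral 'liability' to whoever could have avoided the harm most cheaply."""
    return min(parties, key=lambda p: p.avoidance_cost)

# Hypothetical numbers only: Penny's external act vs. Desmond's adjustable psychology.
penny = Party("Penny (external actor)", avoidance_cost=5.0)
desmond = Party("Desmond (victim)", avoidance_cost=2.0)

print(least_cost_avoider([penny, desmond]).name)  # "Desmond (victim)" on these made-up numbers
```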

In estimating the costliness of psychological changes, one should account for the instrumental value of emotions. Negative emotions can warn us about possible unavoidable dangers and motivate us to combat injustices. Reshaping one's psychology in ways that diminish these instrumental uses could be very costly indeed, to both the individual and society. Yet we must also be careful to avoid circularity here: if a psychic reaction at t1 is justified by its instrumental relationship to a further psychic state at t2, the cost of avoiding the psychic state at t2 must also be accounted for.

Furthermore, it is also quite possible that many psychic harms are practically unavoidable from the victim's perspective. It is difficult to imagine a well-functioning human who was indifferent to intense verbal degradation, for example. We should assume that the cost to the victim of avoiding nearly universal traits of human psychology is quite high, if it's even possible.

At this point, one might object that reshaping one's psychology is impossible. However, I see no reason to think that. We know that this sort of reshaping—learning to have appropriate emotional reactions and so on—is a normal part of growing up. And we know that psychotherapeutic techniques like cognitive-behavioral therapy can be quite effective at helping patients develop healthier emotional responses.

Better Affective Ideologies

By now I hope I have convinced you that:

  1. Psychic harms matter, and
  2. The victim of a psychic harm is sometimes (at least in theory) morally responsible for mitigating, or at least bearing the cost of, at least some of that harm.

So how does this cash out in terms of constructing an ideology?

Ideally, one's ideology (and broader psychology), including its affective components, should allocate the burden of avoiding psychic harms to the person who can avoid them at the least cost.

So, I encourage readers to try to identify psychic harms that they can avoid at low total cost, both in themselves and in others. A trivial example might be negative reactions to a neighbor winning the lottery and becoming rich. Being a bit more polite and kind to others might be another. Even better would be increasing positive reactions to others' happiness, even when doing so is not intuitive.

In general, a zen-like reaction may be perfectly achievable and instrumentally acceptable in many circumstances.

To the extent your ideology demands instrumental affective reactions, try to do the following:

  1. Maintain proportion between the intensity of the negative affective reaction and the harm it is supposed to protect against.
  2. Consider the costs—to others and to society as a whole—of avoiding triggering those reactions.

If you are accused of causing psychological harm, you should take such an accusation seriously (as you would with physical harms). You should acknowledge that there are costs to improving one's own psychology, including real monetary costs for things like therapy and the psychological cost of doing the difficult work of changing one's emotional instincts. But you should also feel morally licensed to ask whether the victim can bear the moral burden of dealing with those costs better than you can.


  1. In this post I’ll talk mostly about psychological harms, rather than psychological pleasures, even though I think both are morally relevant. I do this because I see psychological harms brought up in public discourse more often than psychological pleasures. However, the framework here is relevant to both. ↩︎

  2. This may generalize to those aspects of an individual’s psychology over which the individual has some control. I sometimes use the concepts interchangeably, but focus on ideology because I think that it’s most relevant as a matter of public reason. ↩︎

  3. To be clear, none of this is to deny that it is theoretically possible (and may in fact have been the case) that, for some, these actions were motivated purely by argument and not emotion. However, intuitively that was probably only a tiny minority of cases, if any. For most people, emotion mediates many actions. ↩︎

  4. Some people will be worried I am “blaming the victim” here. A couple responses: First, some victims are in fact blameworthy. When I fell off my electric scooter because I was not being careful, I became a blameworthy victim. Second, blame is properly assigned when such assignment incentivizes harm-minimizing behavior. In this example, we ought to blame Jack some (and maybe even most) because unprovokedly cutting people is generally not justified, and blaming him discourages that bad behavior. But we also ought to incentivize people to remedy and minimize harms that befall them, and so after the harm has befallen Kate, we can blame her for unreasonably failing to do so. Third, blaming a victim for failing to take reasonable remedial steps does not necessarily absolve the original perpetrator of their portion of blame, as in the Jack and Kate example. ↩︎

  5. Similarly, victims are not responsible for inevitable physical harms. ↩︎

  6. By which I mean, ideologies which cause their adherents to suffer more affective harms than they might otherwise. ↩︎

  7. Memetically, those ideologies also often benefit from the fact that they are both affectively expensive and tend to endorse and propagate the Naïve Assignment Framework. They thereby shape the memetic ecosystem to be more accommodating to them, the way a human erects walls and roofs to make her ecosystem more accommodating. ↩︎

Comments

I agree that ignoring psychological harms completely is arbitrary. Many people would prefer moderate physical pain to public humiliation, and this seems pretty hard-wired in our psychology.

At the same time, in the current climate, claims of psychological harm are clearly used strategically. People supposedly feel unsafe if a colleague has political views that they disagree with, for example, which clearly is not some sort of universal fact of human psychology. Certain claims of emotional harm should be discounted not because they are necessarily false, but because indulging them leads to a bad equilibrium.

I think this is exactly the point I was trying to make here:

Naïve accounting of psychological harms (including unwillingness to ask individuals to change their psychology) causes affectively expensive ideologies [6] to propagate, which leads to supraoptimal restrictions on physical actions. This works thus ("Naïve Assignment Framework"):

  1. Psychological harm H is the result of an external act A and the victim's psychology Ψ
  2. By supposition, Ψ cannot be a blameworthy cause of H
  3. Therefore, A is the sole morally blameworthy cause of H

As compared to a framework where we allowed Ψ to be a morally blameworthy cause of harms in at least some cases (as I argued above must be allowed), this framework for assigning moral blame will shift more blame onto people taking external actions. It is therefore supraoptimally restrictive of external actions, and suboptimally restrictive of affectively expensive ideologies. This causes those expensive ideologies to propagate more than they would under a better blame-assignment framework.[7]

More perversely, it incentivizes moral actors to adopt affectively expensive ideologies, since those ideologies are in effect subsidized by the Naïve Assignment Framework.

Great article, very logical approach.

We might want to consider the idea that the 'victims' of ideology-driven psychological harms might be blameworthy, even if they are not the least-cost avoider any more. It might be the case that the cheapest way to avoid the harm is to not adopt the ideology in the first place but, having adopted it, it is very hard to avoid subsequent harm, and it cannot easily be un-adopted. In this case I think we would not want to encourage people to adopt such an ideology, so we might want to hold them responsible after the fact. (This is implicitly covered in your piece but I thought I'd make it explicit).

Yes, 100%. Worth noting that, in law, "cost-avoider" assessments include the cost of avoiding the harm ex ante, not just the cost of remedying it ex post! After all, we care about incentive-setting.

I agree that psychological harms (intrinsically) matter and that the fact that some such harms are contingent on the harmed persons having certain beliefs, attitudes or dispositions (i.e. their psychology) raises complicated questions.

That said, I don't think that a simple framework based around whether it is easier to minimise harm by changing the offending 'actions' (fwiw, it seems like this could include broader states of affairs) or the harmed person's psychology will suffice.

We probably also need to be concerned with whether the harmed person's beliefs are true or false and whether their attitudes are fitting (not merely whether they are fortunate) (see Chappell, 2009).

For example, if Sam comments on Alex's post on the Forum and Alex experiences harm due to taking this in a certain way, it's probably important to know whether Alex's response is itself appropriate. (Obviously there are various complexities about how this might go: Alex might reasonably/unreasonably have true/false beliefs and have fitting/unfitting attitudes which result in appropriate/inappropriate responses, in any number of different combinations).

We might have non-consequentialist reasons to care about each of these things (i.e. not wanting people to have to form false beliefs or inappropriate attitudes, even if it would lead to fortunate outcomes if they did). A famous example of this concerns the possibility of adaptive preferences, i.e. it seems intuitively troubling if someone or some group who face poor prospects, form low expectations in light of this fact and are thereby satisfied receiving little (and less than they could in better circumstances).

But we might also have consequentialist grounds for not taking a naive approach based on asking whether it would be easier for Alex or Sam to change to reduce the harm caused to Alex. Whichever might seem easier in a particular case or set of cases, it seems reasonable to think there might be significant downstream costs to people having false beliefs or unreasonable responses. This is especially so given that, as you note, what incentives we establish here can encourage different 'affective ideologies' or different individual psychologies to propagate (especially since people have some capacity to 'tie themselves to the mast' and make it such that they could not cheaply change their attitudes (even if they otherwise would have been able to)).

Agree that this is an important consideration! See my response above for a reply to a similar comment :-)

Suppose there is some kind of new moral truth, but only one person knows it.  (Arguably, there will always be a first person.  New moral truth might be the adoption of a moral realism, the more rigorous application of reason in moral affairs, an expansion of the moral circle, an intensification of what we owe the beings in the moral circle, or a redefinition of what "harm" means. ) 

This person may well adopt an affectively expensive point of view, which won't make any sense to their peers (or may make all too much sense).  Their peers may have their feelings hurt by this new moral truth, and retaliate against them.  The person with the new moral truth may endure an almost self-destructive life pattern due to the moral truth's dissonance with the status quo, which will be objected to by other peers, who will pressure that person to give up their moral truth and wear away at them to try to "save" them.  In the process of resisting the "caring peer", the new-moral-truth person does things that hurt the "caring peer"'s feelings.

There are at least two ideologies at play here.  (The new one and the old one, or the old ones if there are more than one.)  So we're looking at a battle between ideologies, played out on the field of accounting for personal harm.  Which ideology does a norm of honoring the least-cost principle favor?  Wouldn't all the harm that gets traded back and forth simply not happen if the new-moral-truth person just hadn't adopted their new ideology in the first place?  So the "court" (popular opinion? an actual court?) that enforces the least-cost principle would probably interpret things according to the status quo's point of view and enforce adherence to the status quo.  But if there is such a thing as moral truth, then we are better off hearing it, even if it's unpopular.

Perhaps the least-cost principle is good, but there should be some provision in a "court" for considering whether ideologies are true and thus inherently require a certain set of emotional reactions.

These are all great considerations! However, I think that it's perfectly consistent with my framework to analyze the total costs to avoiding a harm, including harms to society from discouraging true beliefs or chilling the reasoned exchange of ideas. So in the case you imagine, there's a big societal moral cost from the peers' reactions, which they therefore have good reason to try to minimize.

This generalizes to the case where we don't know whose moral ideas are true by "penalizing" (or at least failing to indulge) psychological frameworks that impede moral discourse and reasoning (perhaps this is one way of understanding the First Amendment).

Reminded me of this paper, on a somewhat related topic:

We investigate the consequences and predictors of emitting signals of victimhood and virtue. In our first three studies, we show that the virtuous victim signal can facilitate nonreciprocal resource transfer from others to the signaler. Next, we develop and validate a victim signaling scale that we combine with an established measure of virtue signaling to operationalize the virtuous victim construct. We show that individuals with Dark Triad traits—Machiavellianism, Narcissism, Psychopathy—more frequently signal virtuous victimhood, controlling for demographic and socioeconomic variables that are commonly associated with victimization in Western societies. In Study 5, we show that a specific dimension of Machiavellianism—amoral manipulation—and a form of narcissism that reflects a person’s belief in their superior prosociality predict more frequent virtuous victim signaling. Studies 3, 4, and 6 test our hypothesis that the frequency of emitting virtuous victim signal predicts a person’s willingness to engage in and endorse ethically questionable behaviors, such as lying to earn a bonus, intention to purchase counterfeit products and moral judgments of counterfeiters, and making exaggerated claims about being harmed in an organizational context.

This is interesting! I've been thinking about emotional harms caused by social systems recently.

Robinhood is being sued for allegedly causing the suicide of Alex Kearns through negligence. How do courts address psychological harms like this?

In tort law (the relevant domain), there are two possible causes of action: intentional infliction of emotional distress and negligent infliction of emotional distress.

These are fairly rarely successful, which is one way tort law may diverge from moral analysis of emotional harms: blameworthy infliction of emotional distress is probably much more common than tortious infliction of emotional distress.

Thanks a lot for this; it feels like an important puzzle piece in a discussion I recently had, and part of an intuition that is now more understandable to me.

Glad to hear :-)
