AllAmericanBreakfast

2644 · Joined May 2019

Comments (255)

My friend is not part of EA; she was just at an EA-adjacent organization, where the community health team does not have reach, AFAIK.

It would be nice to imagine that aspiring to be a rational, moral community makes us one, but it’s just not so. All the problems in the culture at large will be manifest in EA, with our own virtues and our own flaws relative to baseline.

And that’s not to minimize the problem: a friend of mine was raped by a member of the Bay Area AI safety community. Predators can gain a lot of money and social clout and use it to survive even after their misbehavior comes to light.

I don’t know how to deal with it except to address specific issues as they come to light. I guess I would just say that you are not alone in your concern for these issues, and that others do take significant action to address them. I support what I think of as a sort of “safety culture” for relationships, sexuality, race, and culture in the EA movement, which to me means promoting an openness to the issues, a culture of taking them seriously, and taking real steps to address them when they come up. So I see your post as beneficial in promoting that safety culture.

What you have is a hypothesis. You could gather data to test it. But we should not take any significant action on the basis of your hypothesis.

I am specifically interested in the claim you promote that moral calculation interferes with empathic development, rather than contributing to it or being neutral, on net. I don’t expect there’s much literature studying that, but that’s kind of my point. Why would we feel so confident that this or that morality has this or that psychological effect? I have a sense of how my morality has affected me, and we can speculate, but can we really claim to be going beyond that?

No worries!

I understand your concern. It seems like your model assumes that most people start with a sort of organic, healthy, gut-level caring and sense of fellow-feeling, which moral calculation tends to distort.

My model is the reverse. Most people start out somewhere between cold and unfeeling on one end and aggressively egocentric on the other. Moral reflection builds into them some capacity for paying attention to others and cultivating empathy, which starts as an intellectual exercise and eventually becomes a deeply ingrained habit that feels natural.

By analogy, you seem to see moral reflection as turning humans into robots. By contrast, I see it as turning animals into humans. Or think of it like acting. If you've ever acted, or read lines for a play in school, you might have found that at first it's hard even to understand what your character is saying or to identify their objectives. After time with the script, actors develop an intellectual understanding of their character's goals and of the actions they use to convey emotion. The greatest actors are perhaps method actors, who spend so much time with their character that they come to feel and think naturally like that character. But this takes a lot of time and effort, and it seems to require starting with a more intellectualized relationship with the character.

As I see it, this is pretty much how we develop our adult personalities and figure out how to fit into the social world. Maybe I'm wrong - maybe most people have a nice well-adjusted sense of fellow feeling and empathy from the jump, and I'm the weird one who's had to work on it. If so, I think that my approach has been successful, because I think most people I know see me as an unusually empathic and emotionally aware person.

I can think of examples of people with all four combinations of moral systematization and empathy: high/high, high/low, low/high, and low/low. I'm really not sure how the correlations run.

Overall, this seems like a question for psychology rather than a question for philosophy, and if you're really concerned that consequentialism will turn us into calculators, I'd be most interested to see that argument referring to the psych literature rather than the philosophy literature.

Based on this comment, I think I understand your original point better. In most situations, a conscious chain of ethical reasoning held in the mind is not what should be motivating our actions from moment to moment. That would be crazy. I don’t need to consider the ethics of whether to take one more sip of my cup of tea.

But I think the way we resolve this is a common sense and practical form of consequentialism: a directive to apply moral thought in a manner that will have the most good consequences.

One way that might look is outsourcing our charity evaluations to specialists. I don’t have to decide whether bednets or direct donations are better: GiveWell does it for me with their wonderful spreadsheets.

And I don’t have to consider every moment whether deontology or consequentialism is better: the EA movement and my identity as an EA does a lot of that work for me. It also licenses me to defer to habit almost 100% of the time, and invites applying modest limits to my obligation to give of my resources - time, money, and by extension thought.

So I think EA is already doing a pretty darn good job of limiting our need to think about ethics all the time. It’s just that when people do EA stuff, that’s what they think about. My personal EA involvement is only a tiny fraction of my waking hours, but if you thought of my EA posting as 100% of who I am, it would certainly look like I’m obsessed.

The term I'd probably use is hypocrisy. Usually, we say that hypocrisy is when one's behavior doesn't match one's moral standards. But it can also take on other meanings. The film The Big Short has a great scene in which one hypocrite, whose behavior doesn't match her stated moral standards, accuses FrontPoint Partners of being hypocrites, because their true motivation (making money by convincing her to rate the mortgage bonds they are shorting appropriately) doesn't match their stated ethical rationale (combating fraud).

On Wikipedia, I also found definitions from David Runciman and Michael Gerson showing that hypocrisy can go beyond a behavior/ethical standards mismatch:

According to British political philosopher David Runciman, "Other kinds of hypocritical deception include claims to knowledge that one lacks, claims to a consistency that one cannot sustain, claims to a loyalty that one does not possess, claims to an identity that one does not hold".[2] American political journalist Michael Gerson says that political hypocrisy is "the conscious use of a mask to fool the public and gain political benefit".[3]

I think "motivational hypocrisy" might be a clearer term than "moral schizophrenia" for indicating a mismatch between motives and ethical rationale.

The Mayo Clinic says of schizophrenia:

“Schizophrenia is characterized by thoughts or experiences that seem out of touch with reality, disorganized speech or behavior, and decreased participation in daily activities. Difficulty with concentration and memory may also be present.”

I don’t see the analogy between schizophrenia and “a certain coldness toward ethical choices,” and if it were me, I’d avoid using mental health problems as analogies, unless the analogy is exact.

Thanks for clarifying!

The big distinction I think needs to be made is between offering a guide to the existing consensus on moral paradigms and proposing your own view on how moral paradigms ought to be divided up. It might not really be possible to give an appropriate summary of moral paradigms in the space you’ve allotted yourself, just as I wouldn’t want to try to sum up, say, “indigenous vs. Western environmentalist paradigms” in the space of a couple of paragraphs.
