Very short post. This is not my area of expertise at all. But it seems like an opportunity.

The Olympics start this week. In the UK, the biggest Olympic story is not about any runner or swimmer or gymnast. It is about animal rights. But, as with most animal-rights stories that make the front pages (bull-fighting, hunting), it misses the real problem: factory farming.

The story: apparently a famous Olympic equestrian has been forced to withdraw from the Olympics after footage emerged of her whipping a horse during training four years ago. Cue the standard apologies, the "error of judgment" comment, the universal condemnation - and of course the video is shared with a warning that people might find it shocking.

I think it would be wonderful if someone with the right moral stature (which is not me, I'm not even a vegan ...) were to highlight the absurdity of so much moral outrage over an otherwise well-treated, well-fed horse being whipped on the leg once, yet no reaction to the billions of factory-farmed animals who suffer in cages for their entire lives before we kill them and eat them. Maybe it would make people think again about factory farming, or at least ask themselves whether their views on animals are consistent.

I was reminded of Tolstoy's description of a lady who "faints when she sees a calf being killed, she is so kind-hearted that she can't look at the blood, but enjoys serving the calf up with sauce."

My point with this post is just that if someone is in a position to express a public opinion on this, or write a letter to the editor, it might be an opportune moment given the size of the story right now. 



Charlotte Dujardin out of Olympics: The video, the reaction and what happens now explained | Olympics News | Sky Sports

 

Comments

I can't recall which paper it was, but I remember reading a paper in moral psychology arguing that, on a psychological level, we think of morality in terms of 'is this person moral?', not 'is this act moral?'. We are trying to figure out whether the person in front of us is trustworthy, loyal, kind, etc.

In the study, participants do say that a human experiencing harm is worse than an animal experiencing harm, but view a person who hits a cat as more immoral than a person who hits their spouse. I think what people are implicitly recoiling at is that the person who hits a cat is more likely to be a psychopath. 

I think this maps pretty well onto the example here and onto the outrage in people's reactions. And to clarify, I think this explanation captures WHY people react the way they do, in the descriptive sense. I don't think that's how people ought to react.

Perhaps Uhlmann et al. (2015) or Landy & Uhlmann (2018)?

From the latter:

Evidence for this assertion comes from studies involving two jilted lovers (Tannenbaum et al., 2011, Studies 1a and 1b). Participants were presented with information about two men who had learned that their girlfriends were cheating on them. Both men flew into a rage, and one beat up his unfaithful girlfriend, while the other beat up her cat. Participants judged the former action as more immoral, but judged the cat-beater as having worse character (specifically, as being more lacking in empathy) than the girlfriend-beater. This is an example of an act-person dissociation.

I think it was the first one. Well done for finding it!

That's really interesting, and makes a lot of sense. Thanks for sharing! 


on a psychological level, we think of morality in terms of 'is this person moral', not 'is this act moral'. We are trying to figure out if the person in front of us is trustworthy, loyal, kind, etc.

I think this, as written, is not explanatory, because one could regard another as having immoral character on the basis that they perform immoral acts. I'm not sure what else 'moral character' could mean, other than "their inner character would endorse acting in {moral or immoral way}".

I think it would be correct to say that average humans act on various non-moral judgements in ways we think should be reserved for moral judgements.

In the study, participants [...] view a person who hits a cat as more immoral than a person who hits their spouse

Hmm, I might share this view (I'm unsure which is evidence of worse character), but I don't think it comes from something irrational. It's more like inferring underlying principles they might have at some deep, unconscious level. E.g., someone who hits a cat might have a deep attitude of finding it okay to hurt the weak. But someone hitting a spouse is also evidence of different bad 'deep attitudes'. This way of thinking about the question is compatible with my consequentialism, because how those individuals act is a result of these 'deep attitudes'.

Hi Quila,

If I understand you correctly, I think we broadly agree that people tend to use how someone acts to judge moral character. I think, though, that this point is underappreciated in EA, as evidenced by the existence of this forum post. The question is 'why do people get so much more upset about hitting one horse than about the horrors of factory farming?', when clearly, in terms of the badness of an act, factory farming is much worse. The point is that when people view a moral/immoral act, psychologically they are evaluating the moral character of the person, not the act in and of itself.


My point was that purchasing animal products usually suggests a bad 'moral character' trait: the willingness to cause immense individual harm when this is normative/convenient.

I'm saying that average people's judgements of others' characters are not best described as 'moral' per se, because if they were, they would judge each other harshly for consuming animals.

So this involves a bit of potentially tenuous evolutionary psychology, but I think part of what is going on here is that people are judging moral character based on what it would have made sense to judge people on 10,000 years ago: is this person loyal to their friends (i.e. me), empathetic, willing to help the person in front of them without question, etc.

I think it's important to distinguish morality (what is right and wrong) from moral psychology (how people think about what is right and wrong). On this account, buying animal products tells you that a person is a normal member of society, and hitting an animal tells you someone is cruel, not to be trusted, potentially psychopathic, etc.


Okay, sounds like we indeed agree on the object level. I guess it's just not intuitive to me to refer to things like 'will this person be loyal to me?' as 'moral character'.

Francione did this in his 2007 article "We're all Michael Vick". He calls it moral schizophrenia. Singer calls it secondary speciesism: prioritising some non-human animals over others. I don't know if anyone has made a habit of it, I think it's a good idea. I'd be interested to see someone try to measure the effects this kind of argument has on the audience.

Wow, great example. Thanks for sharing this. Every time I see this happening, it frustrates me, but I don't actually have a clear idea of how to talk about it.

I agree that this is kind of absurd but I expect that public concern for small-scale animal suffering weakly increases potential future concern for large-scale animal suffering, rather than funging against it. I think it weakly helps by propagating the meme of "animal suffering is a problem worth taking seriously".

I wouldn't promote concern for Olympic horses as an effective cause area, but I wouldn't fight against it, either.

Absolutely. Definitely this is still better than a world where people say "it's OK to whip a horse!"

Most people view farm animals as serving a purpose, whereas animal cruelty is criticized more when it is seen as unnecessary. That's why moral progress is made in fashion, poaching, and animal-fighting sports, and why veganism should focus more on food waste and on traditions like egg tosses and egg decorating: https://www.vox.com/future-perfect/22890292/food-waste-meat-dairy-eggs-milk-animal-welfare Omnivores can come to respect farm animals' sacrifices more, which is a useful mindset shift.

I think it would be wonderful if someone with the right moral stature (which is not me, I'm not even a vegan ...)

I'm not sure what (else) you mean by having the right moral stature, but I think in general people shouldn't need to meet some moral bar to talk about doing the right thing - one need not be vegan to promote eating less meat, nor be giving X% to advocate donating, etc.

That's a very accurate observation. I recently saw a clip, maybe from some movie or show. A guy brought a pig to a party, saying "Meet Jack (or some such name, I don't remember), we're going to eat him, I'm going to slaughter him now". What outraged the guests was that there would be children watching, so his behavior was deemed inappropriate. But overall, no one would have minded eating the pig if it had been done discreetly. I'm not a vegan either, but it's kind of hypocritical.

Reminds me of the part in Douglas Adams' "The Restaurant at the End of the Universe" where a cow-like being is eager to be eaten, describes how it has been overfeeding to fatten itself, and suggests to the Earthlings dishes made from parts of its body. They end up horrified and ordering a salad instead.

I don't expect that Adams wrote it to defend veganism, but he was good at laughing at this kind of absurdity / hypocrisy.

On this subject, it was nice to see Nick Kristof in the New York Times write on a related theme, comparing how we treat and respect dogs and pigs.

Opinion | Dogs Are the Best! But They Highlight Our Hypocrisy. - The New York Times (nytimes.com)
