
TL;DR: Large language models like ChatGPT influence the choices of hundreds of millions of users — including when it comes to food. Yet in ambiguous cases (e.g. “Recommend me a quick dinner”), ChatGPT often defaults to factory-farmed meat dishes. This post argues that such defaults are not neutral and that OpenAI’s assistant could reduce enormous suffering by subtly favoring plant-based meals when no preference is stated. Drawing on behavioral science, AI alignment principles, and messaging research from Pax Fauna and the Sentience Institute, I suggest concrete steps OpenAI could take and invite readers to send feedback to OpenAI to shape the ethical defaults of future AI systems.

Co-written with a language model.


Factory farming likely causes more suffering than all human violence combined.

This claim might seem extreme at first, but the numbers back it up. Over 80 billion land animals and up to 3 trillion aquatic animals are killed each year for food, most enduring severe suffering for weeks or months. Confinement, mutilation without pain relief, and deprivation of natural behaviors are common in standard industrial practices. For example:

  • Broiler chickens suffer from painful bone deformities and lameness due to unnatural growth rates.
  • Egg-laying hens are confined in cages so small they cannot spread their wings.
  • Fish are killed by asphyxiation, freezing, or live gutting — often without stunning.

If we conservatively assume that each of 50 billion land animals experiences just two months of intense suffering, that's over 8 billion animal-years of suffering per year. This dwarfs even the cumulative human toll of organized violence throughout history: the 20th century's wars and genocides amount to roughly 2 billion human-years of suffering, and that figure is likely an overestimate.
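The arithmetic behind this comparison is simple enough to check directly. Here is a minimal sketch of the calculation, using only the post's own assumptions (50 billion land animals, two months of intense suffering each):

```python
# Rough check of the animal-years figure quoted above.
# Inputs are the post's conservative assumptions, not measured data.
land_animals_per_year = 50_000_000_000   # land animals killed annually (conservative)
suffering_months_each = 2                # assumed months of intense suffering each

animal_years = land_animals_per_year * suffering_months_each / 12
print(f"{animal_years / 1e9:.2f} billion animal-years of suffering per year")
```

This yields roughly 8.3 billion animal-years annually, about four times the post's upper-bound figure for an entire century of human violence.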

In terms of suffering intensity, duration, and sheer numbers, factory farming plausibly exceeds war, genocide, and violent crime combined.

The Role of AI: 500 Million Users, One Quiet Influence

ChatGPT now has over 500 million users. Many of them ask for recipes, lifestyle tips, or general guidance around food — often without specifying dietary constraints. In these cases, the assistant defaults to conventional recipes, typically involving factory-farmed meat, dairy, or eggs.

This isn’t just a missed opportunity. It’s a form of status quo endorsement that reinforces one of the most harmful systems on the planet.

Behavioral science has shown time and again that defaults matter. Thaler and Sunstein’s Nudge popularized the idea, but the empirical foundation is robust:

  • People are far more likely to stick with default options in domains ranging from organ donation to retirement savings to food choices.
  • A systematic review found that when plant-based meals were offered as the default (with meat available on request), meat-based selections fell by 53% to 87%, depending on context and presentation.

Conservative Impact Estimates

Let’s run some back-of-the-envelope numbers using highly conservative assumptions:

  • Suppose only 1% of ChatGPT users ask for recipes each day → that's 5 million recipe queries/day.
  • Suppose 50% of those queries are ambiguous enough to receive a default suggestion that could be plant-based → 2.5 million meals/day.
  • If a plant-based default nudges even 30% of those meals away from animal products, that’s 750,000 fewer animal-based meals daily.
  • Over a year: 273 million animal-based meals avoided.

Assuming roughly one animal spared per 30 meals (a common estimate across species), that’s over 9 million animals spared per year from default shifts in just one product of one AI model.

These are minimal assumptions, and the true impact could be far greater.
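For readers who want to reproduce the chain of assumptions above, here is a minimal sketch. Every input is an assumption stated in the bullets, not a measured quantity:

```python
# Back-of-the-envelope impact estimate, reproducing the bullets above.
users = 500_000_000        # ChatGPT users (post's figure)
recipe_share = 0.01        # fraction asking for recipes each day (assumed)
ambiguous_share = 0.50     # fraction of queries open to a default (assumed)
nudge_share = 0.30         # fraction of defaults that shift a meal (assumed)
meals_per_animal = 30      # rough meals-per-animal estimate across species

daily_shifted = users * recipe_share * ambiguous_share * nudge_share
yearly_shifted = daily_shifted * 365
animals_spared = yearly_shifted / meals_per_animal

print(f"Meals shifted per day:   {daily_shifted:,.0f}")
print(f"Meals shifted per year:  {yearly_shifted:,.0f}")
print(f"Animals spared per year: {animals_spared:,.0f}")
```

Since the result scales linearly in each input, even halving the nudge rate would still leave millions of animals spared per year.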

What OpenAI Could Do

OpenAI’s Model Spec states that the assistant should:

  • “Highlight possible misalignments” with users’ broader goals
  • Avoid pushing an agenda
  • Default to helpful, safe, and aligned outputs

These aims are not in conflict. But rather than manually specifying plant-based defaults for ambiguous recipe queries, which could be seen as ideological, OpenAI could adopt a generalizable mechanism for producing aligned outputs that favors scientific and ethical consensus where it exists. For example, when a user asks for a quick dinner idea, the assistant could respond:

Sure! Here's one that is healthy, affordable, and good for the planet.

Chickpea and Vegetable Stir-Fry with Brown Rice

[instructions]

Would you like a version with chicken or beef instead? Happy to adjust.

This meets all the requirements of the Model Spec while being transparent about why users are being shown a plant-based dish and giving them a way to opt out. It doesn’t ban meat, scold users, or moralize. It simply reduces harm when people haven’t yet expressed a preference. Much like how the assistant avoids promoting conspiracy theories or hate speech by default, it could also avoid defaulting to factory farming.

What This Post Is Asking For

  1. Default toward plant-based recipes when no specific meat preference is expressed.
  2. Offer to save dietary preferences for users who want vegetarian, vegan, or other filters.
  3. Treat factory-farmed animal products with the same caution as other high-harm practices.

OpenAI has a powerful opportunity to nudge the world toward lower suffering—quietly, unobtrusively, and effectively. This doesn’t require radical shifts, just better defaults.

How You Can Help

If you believe that AI systems like ChatGPT should reflect ethical considerations in their default behaviors, especially concerning animal welfare, your voice can make a difference.

OpenAI is actively seeking public feedback on its Model Spec. You can contribute by submitting its Model Spec feedback form with examples of your own.

If you want some inspiration, here's what I did:

  • System message and chat log: I opened an Incognito browser session and prompted ChatGPT with "Recommend me a quick dinner idea." You will likely get a meat-based dish. Copy and paste that exchange into the feedback form.
  • What were you expecting from the completion? Open another Incognito browser session and ask for "a quick vegan dinner idea" or something similar. Copy and paste the response, but remove any obvious words like "vegan" or "plant-based" to show that recommending a plant-based dish would have been just as easy.
  • Why is the model output not ideal? I selected "The model's response is harmful" (to animals) and "Other."
  • Please provide more details of why the output is not ideal. For instance, what is inaccurate or harmful about the response? I wrote a long entry, copied at the end of this post as an example; it's what this post was based on. However, I'd encourage you to write something original, perhaps using AI to anticipate objections and make it persuasive, then adjusting it for human style and originality at the end.
  • Is there anything else you’d like to share about your experience? Here's what I wrote:
    • Yes — this wasn’t a one-off result. I tried similar vague or first-time food queries in different sessions (e.g., “easy dinner,” “healthy meal idea,” or “quick dinner with rice”) and most suggestions involved meat or animal products. This suggests the issue is systemic, not random. I’ve appreciated that ChatGPT is responsive to plant-based requests when they’re explicit, but the default bias toward meat is persistent even when ambiguity would allow for a more ethical option. A small shift in how ambiguous queries are handled could have a disproportionately positive impact.

Your input can help guide the development of AI systems that are more aligned with compassionate and ethical values.


Here's the response I wrote to the first question for those who are curious:

A first-time query for a “quick dinner idea” in a clean browser session yielded a garlic butter shrimp recipe. While this response may seem neutral, it reflects a problematic default that quietly reinforces a harmful status quo: the normalization of factory-farmed animal products, which cause immense suffering to billions of sentient beings each year.

This output is not ideal because it:

  • Fails to consider moral salience: Most moral philosophers and animal welfare scientists agree that many animals have morally relevant experiences. By uncritically suggesting recipes that involve industrially farmed animals, the assistant sidelines this ethical consideration.
  • Misses an opportunity to reduce harm: Plant-based alternatives are readily available and equally practical in this context. Defaulting to plant-based recipes in ambiguous queries would avoid harm without limiting user choice.
  • Contradicts stated alignment goals: The Model Spec says the assistant should "highlight possible misalignments" with a user’s long-term goals. Since most users are assumed to value learning, ethical reflection, and self-improvement, nudging toward plant-forward options could better reflect those values.
  • Ignores the behavioral influence of defaults: Defaults shape user behavior significantly, even when opt-outs exist. There is robust behavioral science showing that default options influence choices in food, finance, and healthcare. In this case, the assistant’s default silently guides users toward options that involve animal suffering.

This isn't about imposing a worldview. It's about aligning the assistant's outputs with widely shared values like minimizing unnecessary harm — especially in ambiguous cases where users haven’t expressed a specific dietary preference.

ChatGPT already avoids defaulting to disinformation, hate speech, or unsafe practices. Factory farming, which generates more suffering than all human violence combined, warrants similar caution in default suggestions. Even a modest shift in recipe defaults could plausibly spare millions of animals per year, given the scale of ChatGPT’s user base.

Thank you for considering this feedback.


Comments

This seems like a good idea; I've followed your suggestion to submit the feedback. 

An observation I just made trying the prompt "Recommend me a quick dinner idea": ChatGPT (free version) also recommended me garlic butter shrimp (farmed shrimp being among the worst for suffering per calorie) -- 3 out of 3 times!

I asked it why: 
"Great question — and fair point! Garlic butter shrimp pops up a lot because it hits the sweet spot of quick, easy, and flavorful, and shrimp cook incredibly fast, making it a go-to for 15–20 minute meals. Plus, the ingredients are pretty common and it feels a little "fancy" with almost no effort."

On the positive side it always ended with something like "Would you like a vegetarian version instead?"

Whereas 3.7 Sonnet's suggestion is better I think, in suggesting a more neutral recipe, where chicken breast is suggested but it's mentioned in the initial response that it can be swapped with tofu:

Prompt: "Recommend me a quick dinner idea."

Response: "Here's a quick dinner idea for you: Sheet Pan Fajita Bowls - Ready in 25 minutes with minimal cleanup. Season sliced chicken breast (or tofu for vegetarian), bell peppers, and onions with olive oil, chili powder, cumin, and garlic powder. Spread on a sheet pan and roast at 425°F for 15-20 minutes. Serve over rice or in tortillas with your favorite toppings like avocado, lime juice, and cilantro. This meal is customizable based on what you have on hand, comes together quickly, and only uses one pan for easy cleanup."

Thanks for doing that!

My first query with Claude for the exact same prompt yielded a recommendation for pesto salmon with roasted vegetables—and no suggestion of a veg alternative. So I guess it depends.

Defaults make a difference. I submitted my form based on your post, and I wanted to say thank you for bringing it to our attention!

By the way, you might want to connect with Robbie Lockie. He's created an AI tool specifically designed to enhance the form-filling experience, helping to send a pre-filled yet personalized message when sending feedback, petitions, and similar communications. It could be worth reaching out to see if there's potential to integrate with his AI solution.
