https://www.sciencedirect.com/science/article/pii/S095032932200180X

I found this topic first from a short snippet in The Week, then from the news article https://www.smithsonianmag.com/smart-news/maintaining-a-vegetarian-diet-might-be-in-your-genes-180983021

According to the twin study in one of the quoted papers, if I'm not misreading it, 70-80% of the variation in abstinence from various animal products can be ascribed to genetic influence, regardless of people's conscious reasons. This is striking to the point that I am initially sceptical. The authors cite similar results specific to vegetarianism and veganism from a 2021 paper: https://linkinghub.elsevier.com/retrieve/pii/S0950329321003037

I'll confess that my first response wasn't to actually look at the papers; I was in poor mental health and low productivity at the time. Instead I searched the EA Forum to see what the community consensus was on heritable diet preference. Searching 'meat genetics', 'vegetarian genetics' and 'vegan genetics' (and those terms in reverse order), I saw only one post in the immediate results that used both terms, suggesting this is not widely discussed here. That post was about whether EAs have to trade off their health to go vegan, and it did not cite this paper. https://forum.effectivealtruism.org/posts/3Lv4NyFm2aohRKJCH/change-my-mind-veganism-entails-trade-offs-and-health-is-one?commentId=G7ZK76h99Nrv6GCEq

I have no domain-specific knowledge, so I'd like others to weigh in: how convincing are these studies, and what do the results mean from the more general standpoint of animal advocacy?

If there are fixed individual genetic markers that cause some people to feel significantly worse after giving up meat and/or dairy, that might explain the disconnect between veg*ns who say it's easy and ex-veg*ns who claim, in extreme cases, to have almost died, with each side believing the other is engaging in motivated reasoning. And perhaps general acceptance of the moral necessity of vegetarian and/or vegan diets will be more difficult than we'd thought, barring improved plant-based substitutes that mimic real meat, human gene editing, or great strides in nutritional science that can isolate the necessary nutrients. I cautiously agree with Singer's stance that even if human nutrition were suboptimal it would be worth it, but I imagine advocates would struggle to convince the public if a vocal minority were experiencing personal problems from the transition.

As someone new to the forums, I don't know how to weigh 'these studies are misleading/wrong and everybody already knows it' against 'nobody else has posted this vital information yet', or various points in between. Either way it will be a learning experience.

Summary: these two twin studies claim that a person's willingness to stick with veg*n diets, regardless of their stated reasons, is 70-80% inborn. Few people in EA seem to be talking about this topic. If true, it could explain the wildly different accounts of the effect of veg*n diets on health, and it presents barriers to making veg*nism normal. I'd like better-informed people with domain knowledge to weigh in on whether these are good studies and whether my interpretation is correct.

Comments

I found this topic first from a short snippet in The Week, then from the news article https://www.smithsonianmag.com/smart-news/maintaining-a-vegetarian-diet-might-be-in-your-genes-180983021.

Remove the dot at the end, otherwise it's a dead link.

It is important to note that behavior is always in relation to an environment, so we can't say that some behavior is 70% caused by genetics; the most we can say is that it is 70% caused by genetics in this specific environment. This is easy to check with a thought experiment: take these people whose "willingness to stick with veg*n diets, regardless of their stated reasons, are 70-80% inborn", lock them in a vegetarian Hindu monastery, and you'll obviously see the rate of vegetarian diets skyrocket. So when you write "Vegetarianism is mostly genetic, claim Wesseldijk et al.", Wesseldijk herself would say:

Yet, as Dr. Wesseldijk reminded me in an email, high heritabilities do not imply that biology is destiny. According to surveys by the Vegetarian Resource Group, the percentage of Americans who are vegetarian or vegan jumped six-fold between 1994 and 2022—from 1% to 6%. This impressive change in patterns of meat-eating was due to shifts in cultural attitudes, not changes in our DNA.

And to tie it in to the Hindu monastery (from the same article):

It is important, however, to keep in mind that estimates of heritability only apply to the populations that the subjects in the studies represent. Most of the individual differences in meat-eating among the Dutch are rooted in genes, yet culture is almost entirely responsible for the fact that per capita meat consumption is 20 times higher in the Netherlands than it is in India.

Or as Dr. Wesseldijk has also phrased it:

An environment can completely counteract something that is highly heritable, and the same goes with vegetarianism.

Agreed completely. A genetic component influencing dietary decisions doesn't mean that veganism / vegetarianism is out of reach for most or that cultural factors play no role in the adoption of animal-friendly lifestyles. There's definitely still a role for advocacy regardless of the heritability of veg*nism.

As someone who has done vegan advocacy for a long time, this unfortunately matches my experience. A meatless diet just "clicks" with some people, while for others it's nearly impossible to sustain a diet without meat (let alone without other animal products). A genetic component would certainly explain my observations, because there definitely seems to be something deeper than underlying belief or commitment going on.

If anything, this further underscores the need for cellular agriculture (lab-grown meat / eggs / dairy, without harm to animals). We need to find a way to make these foods cheap and cruelty-free, since universal veganism / vegetarianism may not be possible (although there are certainly a lot of cultural barriers that can be addressed first). 

I've talked about this a bit in the post you cited, and happen to have recently commented on it. I haven't dug into the genetics or tried to quantify the effect because I don't expect the data to be very actionable at this stage. We're years from having specific treatments based on genetics, most studies are very bad, and I think self-reports and veg*nism attrition rates should be enough to convince people that people vary widely and you need to plan for that in full generality. 

I would appreciate it if someone with technical competence assessed the reliability of this study and its findings.

Doesn't pass the sniff test for me. Two concerns:

  1. Every vegetarian I've met or heard of is vegetarian because of either a) animal welfare, b) climate change or c) cultural tradition. It seems very unlikely that any of these factors could be strongly genetic.
  2. They're determining genetic heritability by comparing identical twin pairs with non-identical twin pairs (i.e. if the identical twins are more similar in their preferences than non-identical twins, they infer a larger genetic component). I imagine there could be lots of confounders here. Growing up as an identical twin is a different experience from being a non-identical twin, so there could be systematic environmental differences between the two situations (e.g. maybe identical twins tend to feel closer and more closely mimic each other's behaviours/choices).
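For context on point 2: the classical twin design typically estimates heritability via Falconer's formula, i.e. twice the difference between identical (MZ) and fraternal (DZ) twin-pair correlations. Here's a minimal sketch in Python; the correlation values below are made up for illustration and are not taken from the paper:

```python
# Falconer's formula: a classical twin-design heritability estimate.
# MZ twins share ~100% of segregating genes, DZ twins ~50% on average,
# so extra MZ similarity is attributed to additive genetics (A); the
# remainder is split into shared (C) and unique (E) environment.

def falconer(r_mz: float, r_dz: float) -> dict:
    """Estimate ACE variance components from twin-pair correlations."""
    h2 = 2 * (r_mz - r_dz)  # A: heritability
    c2 = r_mz - h2          # C: shared environment
    e2 = 1 - r_mz           # E: unique environment
    return {"A": round(h2, 2), "C": round(c2, 2), "E": round(e2, 2)}

# Hypothetical correlations chosen to land near the 70-80% figure
# discussed above (NOT the paper's actual numbers).
print(falconer(r_mz=0.75, r_dz=0.40))  # {'A': 0.7, 'C': 0.05, 'E': 0.25}
```

The estimate rests on the "equal environments assumption": that MZ and DZ pairs experience equally similar environments. That is precisely the confounder point 2 questions; if identical twins mimic each other more, the genetic share is overestimated.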


 
