
I am the significant other of someone who is heavily invested in EA values and working full time at an EA-aligned organisation. I am personally interested in the work the EA community is pursuing. And although many of the values, considerations and motivations underpinning EA overlap with my personal values, as well as with the motivations and questions fueling my professional field (which is not in EA), there are a few crucial EA values that I personally don't subscribe to.
 

Increasingly I find myself in a position where I feel I have to defend myself to my significant other over actions (from professional pursuits to small family decisions) that are not necessarily the most tractable or quantifiable, or that don't have the greatest possible impact according to EA motivators.
 

These discussions and family research moments (ranging from 'what charity shall we donate to?' to 'what are your reasons for thinking that taking on a particular project - I am self-employed - will be the best use of your time?') are increasingly eating into my self-worth and feeding insecurities. Learning more about the EA movement via the forum, the 80k website and podcasts sometimes even causes substantial anxiety about my professional work as well as my personal motivations and values.
 

This made me think about the people near and dear to EA people. How are they doing? How are they affected by it, if at all? And is there mental support for significant others who are supportive of, but not completely aligned with, EA values?
 

The only post in this context I could find on the EA Forum is 'It is OK to leave EA'. But I can't leave, as I am not in it personally. Any thoughts are appreciated!



5 Answers

I shared this post with my non-EA husband and asked for his views. He said he thinks all couples have some values that are different from each other, and that he feels like he supports me being involved in EA and I support him in his goals (I thought that was very sweet).

Sorry I can't be of more help on relevant articles or support, but thought you might be interested in another answer on the first two questions!

Oh, another thought actually - it sounds like your partner is really structuring their life around their EA beliefs, and you have somewhat different beliefs. You might find some articles about interfaith relationships interesting, even though EA isn't a "faith", because some of the points about working through different values and beliefs might be useful.

[anonymous]
I agree with this, and just wanted to add a resource. My mom told me about this concept in couples therapy, Differentiation, which is basically what Khorton suggested. Here's how my mom put it in a wedding toast:

"[The husband] recognized that his needs and those of his beloved diverged profoundly. He was able to feel his own feelings and hers — to love and honor her even in difference. No matter how well aligned we are with our partners, there will be profound differences. We love not in spite of the differences but also because of them. [The wife] wo... (read more)

ChanaMessinger: I love the concept of differentiation, and know it through the book The Passionate Marriage.

A couple of thoughts, and I am happy to expand further.

First, I like 80k's framing of EA as a career. This means that you as a family probably do not want EA values to affect family decision-making too heavily, much as a banker would not charge her spouse high consumer-loan interest rates for covering their coffee. I think my partner feels similarly to you, and I frame my EA engagements as work. This means that when it becomes time to help around the house, I leave my work after a reasonable number of hours spent and become a "non-EA family member" instead.

Secondly, and as I think others who commented here allude to, if your partner is "deep EA", he would actually really invest in your well-being and be very cautious not to jeopardize the relationship by bringing "work" into it. This is because in EA we are playing the long game: we have an ambition to spend all 80,000 hours of our careers doing effective work, year after year. And this requires a foundation that lets us do this consistently and at high quality, and for me this foundation consists of:

- Family and friends
- Mental health
- Sleep
- Physical health
- Sound personal finances

And I listed family and friends at the top because having their support, especially during hard times, is for many of us a great source of strength.

My partner and I also go to what we call "preventative family counselling" - it is not a bad idea to just go and see someone quite early on, before there is perhaps "something" to talk about.

I should perhaps have included this earlier, but the link below is a good resource I have used, where you can search for "family counselling" or whichever term they use: https://www.eamentalhealthnavigator.com/recommended-providers

My spouse and I are both heavily involved with EA, but we nevertheless have significant differences in our philosophies. My spouse's world view is pretty much a central example of EA: impartiality, utilitarianism et cetera. On the other hand, I assign far greater weight to helping people who are close to me compared to helping random strangers[1]. Importantly, we know that we have value differences, we accept it, and we are consciously working towards solutions that are aimed to benefit both of our value systems, with some fair balance between the two. This is also reflected in our marriage vows.

I think that the critical thing is that your SO accepts that:

  • It is fine to have value differences.
  • They should be considerate of your values (and you should be considerate of their values, ofc). Both systems have to be taken into account when making decisions.
  • There is no "objective" standard such that they can "prove" their own values to be "better" according to that standard, and you would have to accept it.
  • You don't need to justify your values. They are valid as is, without any justification.

If your SO cannot concede that much, it's a problem IMO. A healthy relationship is built on a commitment to each other, not on a commitment to some abstract philosophy. Philosophies enter into it only inasmuch as they are important to each of you.

 

  1. ^

    That said, I also accept considerations of the form "help X (at considerable cost) if they would have used similar reasoning to decide whether to help you if the roles were reversed".

Perhaps helpful: a few years ago Hidden Brain did an episode on my marriage and how my wife (who is a lovely, ethical person, but doesn't identify as EA and has some significant disagreements with some EA ideas) and I (an EA trying his best who's also wrong sometimes) get along. Obviously we're just one couple so our discussions/tensions may not be representative, but I thought Shankar Vedantam and the producer, Rhaina Cohen, did a fantastic job. 

Thank you! I will be listening with great interest!

In my experience, EA is a somewhat dangerous philosophy because it's emotionally hard to keep one's eyes open to the problems of the world, while understanding what's possible to do about it, while also trying to understand one's own limitations. So mental health is something EAs struggle with a lot, but I think there are some misunderstandings that make it worse.

  1. Understand that, yes, indeed, we live in triage every second of every day. That's just unfortunately the world we live in.
  2. But being good does not mean you have to try to suffer in accordance with how much suffering there is in the world just to be "fair". If you want to do something about the world, it seems good to try to cultivate a sense of compassion that feels serene, positive, and beautiful. Not because suffering is good, but because a willingness to try to help is.
  3. And while I do advocate being ambitious as a way to do more good, it's possible to be ambitious while emotionally accepting that we can't rescue everyone. Something like first choosing to be ok, and then incrementing our happiness with every additional being we help, rather than choosing to be not ok until we've helped everyone.

But my keys won't fit into other people's locks, so I hope you find whatever works for you. : )

Thank you -

"it's emotionally hard to keep one's eyes open to the problems of the world, while understanding what's possible to do about it, while also trying to understand one's own limitations."

This might be exactly what is underlying the problem: it is hard enough as an individual to find the balance between ambitions and attention to pressing issues, knowing how hard it is to make some difference or change, no matter how hard you try. I love my spouse for exactly that (among many other things!) - which makes it even more difficult for me to weigh in with other perspectives, or to even suggest that we leave the EA professional field outside our front door.

Comments

I think there are a lot of partners out there (including where both are EA, but one is maximizing harder than the other) who feel similarly! 
Another relevant post: You have more than one goal, and that's fine

Thank you - a consolation that I am not the only one!

You seem really thoughtful and considerate.

This isn’t really that deep, but it seems like EAs should accommodate the needs of their partners, with good communication and investment appropriate to the relationship they want with each other.

I don’t think this is news to anyone. I think I’m trying to say your feelings and views are valid.

Thank you for your kind comment.

"Increasingly I find myself in a position where I feel I have to defend myself to my significant other over actions"

"These discussions and family research moments ... are increasingly eating into my self-worth and feeding insecurities."

This may be wildly off, but: have you talked to your partner about this? Do they know that when they talk to you like this, you feel attacked? And that it affects your self-worth? What would they say?

I ask because I think this is in many ways a relationship problem. They are doing something that makes you feel hurt. That's a problem that you could resolve in a number of ways - they could change, you could change, but you need to talk about it.

It might be that the outcome of that conversation is that you decide that you want to be more okay with thinking in this way, or shifting closer to what they believe. And maybe that's already where you are (in which case my apologies). But I don't think you should start out assuming that it's you who needs to change here.

Ozy wrote a great post about being a more or less dedicated EA.

Thank you! Great post with a healthy approach - aimed at EAs. I am not EA, just married into it.
