When we as EAs examine our moral beliefs, there is always some pressure behind each consideration. It is impossible to keep fully out of mind, for instance, that if I choose to believe the long-term future is the most important, most of my career capital goes out the window. In an ideal world we would be able to separate this completely from the ethical reasoning process: first we'd settle our ethical stances, and only then factor in these real-world concerns when thinking about which actions actually seem feasible.

This is a bit scary though. Who wants to work in a cause area they believe to be second best? With a tweak to a key belief here and an extra serving of doubt about a hypothesis there, surely I can dethrone technical AI safety and AI policy and clear the path for research in development economics as the top cause area! This is called motivated reasoning.

This was the driving idea behind a small workshop I created for an EA Zurich team retreat in August called 'Rationality vs. Rationalization' (find more on how to run this event here). The idea sprang from a 'chapter' in Eliezer Yudkowsky's Sequences about rationalization. It draws the distinction that rationality means starting with the evidence and then coming to a conclusion, while rationalization means starting with a conclusion and working backwards to find supporting evidence. Over a few weeks this congealed into a useful concept for me, and I began to realize how many beliefs I held simply because I wanted to, because it was convenient to hold them, exceedingly convenient in some cases.

This seems to be something that isn't talked about enough in the EA community: the sword of Damocles hanging over our heads when we reason about abstract topics like population ethics. In fact, I constructed this workshop in part as a way to push myself to finally make a decision. Given how easy it is to avoid taking any definite stance on population ethics, it is hard not to just skate by when it comes to big decisions like switching career paths, thereby avoiding significant plan changes.

It's an incredible thing to ask of someone: to act on a moral intuition that is inherently uncertain. To say, 'yeah, this long-termism thing seems like it's probably the real deal,' and then toss career capital in the trash and move from an already high-impact career path to what most EAs, at this moment in time, consider one of the highest-impact careers.

Just to name a few things that I’ve lost by moving from development economics to AI policy:

  • The ability to virtue signal to non-EAs. While talking about how you run Randomized Controlled Trials that help the poorest people in the world using cool behavioral insights is pretty virtuous and attractive to most people, AI policy gets more of an 'oh, that's interesting'. Well, that's the reaction if you explain it in the right way. It could go downhill fast if you do it wrong... The key point is that people don't generally perceive this as a virtuous career path, so you don't get points for being socially aware and responsible.
  • The experience I'd built outside of my general economics background. I was lucky to retain even that general background! (For those of you coming from med school, wow!) For me this was mostly my years spent living in the developing world, a few internships, a modest network in the field, etc.
  • The part of my identity based around caring about and living in the developing world.

The worst part is that moral philosophy is just one of many options for rationalizing away the need for a career change. You can always argue that personal fit will save you, that you can avoid a transition because you would be happier and more productive on your current trajectory. But then you realize you're 23 and have a pretty strong ability to adapt to things. Then you wonder if you're overconfident about your ability to adapt. Then you keep going in circles.

There are a million pitfalls along this road of changing beliefs, and I observed myself carefully exploring each one, trying to find whether there might be a tunnel that would lead me back to believing in developing-world health and poverty as priority number one. I harnessed this energy to explore each and every argument, because I couldn't bear the thought of looking back in 3 years and finding out that it really had been the best fit for me. I wanted to be utterly confident, even though that is of course impossible...

So I guess the key idea here, and the key idea of the workshop I organized, is to make clear the motivations that drive us one way or another, not just away from EA ideas but also toward them. On the one hand, AI policy is high-reward within the EA community; it's a respected field to go into, so I have to be aware of the pressure on that side. On the other hand, I know that if I met someone outside of EA who had come up with the idea that long-term-focused AI governance is of primary importance and chosen to dedicate their career to it, it would seem very odd to me. As a not terribly iconoclastic personality, I can't imagine being that person, and being part of the EA community allows me to feel ok about holding these beliefs. Once these more emotional motivations are revealed, we can better observe their influence and, if necessary, directly factor them into our decision-making process.

It's by no means easy to get people to open up about beliefs that may be grounded in less-than-optimal reasoning, and I'm still working out how best to draw this out. I find that a comfortable and familiar atmosphere helps significantly. It also helps to start with my own flawed beliefs and how 'embarrassing' factors like virtue-signaling potential have influenced them, whether I like it or not.

I think it would be helpful for more EAs to reveal the obstacles they had to overcome to get where they are now, and how changing beliefs has shifted their life paths. We should also take a closer look at how we've arrived at our current set of beliefs, always keeping in mind that strong motivations in favor of a belief don't make it false, though they should make us more suspicious. We should think about ways to facilitate these leaps of faith, about how to make the consequences of taking that big step a little less scary. This is where I think supportive communities come in, but I think there's much more to be done (I'm just not sure what it is).

It's a big, scary, and kind of heroic step to change one's mind, and we should reward one another for taking it!

It's also very important to mention that this effect can go the other way around: there are motivations that drive people toward EA beliefs even when those beliefs don't have proper foundations, or when they perhaps shouldn't update. For instance, I would guess that a lot of people coming from an AI background start off with no grounding for believing that AI safety should be number one, find confirmation in the community, and so stick with the belief that it's most important without any grounding in empirical fact. While the resulting career choice may be the same, people without proper foundations will tend not to update on new information about what might be highest impact, creating a less flexible EA community.

Comments

Hello Alex. Thanks for writing this up. I agree we should try, hard as that might be, to be honest with ourselves about our underlying motivations (which are often non-obvious anyway). I often worry about this in my own case.

That being said, I want to push back slightly on the case you've picked. To paraphrase, your example was "I think long-termism is actually true, but I'm going to have to sacrifice a lot to move from development economics to AI policy". Yet, if you hang around in the EA world long enough, the social pressure and incentives to conform to long-termism seem extremely strong: the EA leadership endorse it, there seem to be much greater status and job prospects for long-termists, and if you work on near-term causes people keep challenging you for having "weird beliefs" and treating you as an oddity (sadly, I speak from experience). As such, it's not at all obvious to me that your rationalisation example works here: there is a short-term cost to switching your career path but, over the longer term, switching to long-termism plausibly benefits one's own welfare (assuming one hangs around with EAs a lot). Hence, this isn't a clear case of "I think X is true but it's really going to cost me, personally, to believe X".

Yes! Totally agree. I think I mentioned very briefly that one should also be wary of social dynamics pushing toward EA beliefs, but I definitely didn't address it enough. Although I think the end result was positive and that my beliefs are true (with some uncertainty of course), I would guess that my update toward long-termism was due in large part to lots of exposure to the EA community and the social pressure that brings.

I basically bought some virtue signaling in the EA domain at the cost of signaling in broader society. Given that I hang out with a lot of EAs and plan to do so more in the future, I'd guess that if I were to rationally evaluate this decision it would look net positive in favor of changing toward long-termism (as you would also gain within the EA community by making a similar switch, though with some short-term 'I told you so' negative effects).

So yes, I think it was largely due to closer social ties to the EA community that this switch finally became worthwhile, and perhaps this was a calculation going on at the subconscious level. It's probably no coincidence that I finally made a full switch-over during an EA retreat, where the broad-society costs of switching beliefs were less salient and the EA benefits much more salient. To have the perfect decision-making situation, I guess it would be nice to have equally good opportunities in communities representing every philosophical belief, but for now that seems a bit unlikely. I suppose it's another argument for cultivating diversity within EA.

This opens up a whole other rabbit hole in terms of thinking about how we want to appeal to people who have some interest in EA but aren't yet committed to the ideas. I think the social aspect is probably larger than many might think. Of course, if we emphasized this we'd be limiting people's ability to choose EA in a rational way. But then, what is 'choice' really, given the social construction of our personalities and desires...
