Rationality vs. Rationalization: Reflecting on motivated beliefs

When we as EAs examine our moral beliefs, there is always some pressure behind each consideration. It is impossible to keep fully out of mind, for instance, the fact that if I choose to believe the long-term future is the most important cause, most of my career capital goes out the window. In an ideal world we would be able to separate this pressure from the ethical reasoning process entirely: first we would settle our ethical stances, and only then factor in these real-world concerns when thinking about which actions are actually feasible.

This is a bit scary, though. Who wants to work in a cause area they believe to be second best? With a tweak to a key belief here and an extra serving of doubt about a hypothesis there, surely I can dethrone technical AI safety and AI policy and clear the path for research in development economics as the top cause area! This is called motivated reasoning.

This was the driving idea behind a small workshop I created for an EA Zurich team retreat in August, called 'Rationality vs. Rationalization' (find more on how to run this event here). The idea sprang from a 'chapter' in Eliezer Yudkowsky's Sequences about rationalization. It draws the distinction that rationality means starting from the evidence and then coming to a conclusion, while rationalization means starting with a conclusion and working backwards to find supporting evidence. Over a few weeks this crystallized into a useful concept for me, and I began to realize how many beliefs I held simply because I wanted to, because it was convenient to hold them, exceedingly convenient in some cases.

This seems to be something that isn't talked about enough in the EA community: the sword of Damocles hanging over our heads whenever we reason about abstract topics like population ethics. In fact, I constructed this workshop partly as a way to push myself to finally make a decision. It is so easy to avoid taking any definite stance on population ethics, and to simply skate by when big decisions like switching career paths come up, thereby avoiding significant plan changes.

It's an incredible thing to ask of someone: to act on a moral intuition that is inherently uncertain. To say, 'yeah, this long-termism thing seems like it's probably the real deal,' and then toss career capital in the trash and move from an already high-impact career path to what most EAs, at this moment in time, consider one of the highest-impact careers.

Just to name a few things that I’ve lost by moving from development economics to AI policy:

  • The ability to virtue signal to non-EAs.
    Talking about how you run randomized controlled trials that help the poorest people in the world using cool behavioral insights strikes most people as virtuous and attractive; AI policy gets more of an 'oh, that's interesting'. That's the reaction if you explain it the right way, at least. It can go downhill fast if you don't... The key point is that people don't generally perceive this as a virtuous career path, so you don't get points for being socially aware and responsible.
  • The experience I'd built outside of my general economics background.
    I was lucky to retain even that general background! For those of you coming from med school, wow! For me this experience was mostly my years spent living in the developing world, a few internships, a modest network in the field, etc.
  • The part of my identity based around caring about and living in the developing world.

The worst part is that moral philosophy is just one of many options for rationalizing away the need for a career change. You can always argue that personal fit will save you: that you can avoid a transition because you would be happier and more productive on your current trajectory. But then you realize you're 23 and have a pretty strong ability to adapt to things. Then you wonder if you're overconfident about your ability to adapt. Then you keep going in circles.

There are a million pitfalls along this road of changing beliefs, and I observed myself carefully exploring each one, trying to find whether there might be a tunnel leading me back to believing in developing-world health and poverty as priority number one. I harnessed this energy to explore each and every argument, since I couldn't bear the thought of looking back in three years and discovering that it really had been the best fit for me after all. I wanted to be utterly confident, even though that is of course impossible...

So I guess the key idea here, and the key idea of the workshop I organized, is to make explicit the motivations that drive us one way or another: not just away from EA ideas, but also the pressures driving us towards them. On the one hand, AI policy is high reward within the EA community, a respected field to go into, so I have to be aware of the pressure on that side. On the other hand, I know that if I met someone outside of EA who had come up with the idea that long-term-focused AI governance is of primary importance and chosen to dedicate their career to it, it would seem very odd to me. Not being a terribly iconoclastic personality, I can't imagine being that person, and being part of the EA community allows me to feel okay about holding these beliefs. Once these more emotional motivations are revealed, we can better observe their influence and, if necessary, directly factor them into our decision-making process.

It's by no means easy to get people to open up about beliefs that may be grounded in less-than-optimal reasoning, and I'm still working out how best to draw this out. I find that a comfortable and familiar atmosphere helps significantly, as does starting out with my own flawed beliefs and the ways 'embarrassing' factors like virtue-signaling potential have influenced them, whether I like it or not.

I think it would be helpful for more EAs to reveal the obstacles they had to overcome to get where they are now, and how changing beliefs has shifted their life paths. We should also take a closer look at how we arrived at our current set of beliefs, always keeping in mind that strong motivations in favor of a belief do not make it false, though they should make us more suspicious. We should think about ways to facilitate these leaps of faith, and about how to make the consequences of taking that big step a little less scary. This is where I think supportive communities come in, but there's much more to be done (I'm just not sure what it is).

It's a big, scary, and kind of heroic step to change one's mind, and we should reward one another for taking it!

One more important point: this effect can also run the other way, driving people toward EA beliefs when they perhaps shouldn't update, or when the motivations pushing them toward those beliefs lack proper foundations. For instance, I would guess that many people coming from an AI background start off with no real grounding for believing that AI safety should be priority number one, find confirmation in the community, and so stick with the belief that it's most important without any grounding in empirical fact. While the resulting career choice may be the same, people without proper foundations will tend not to update on new information about what might be highest impact, creating a less flexible EA community.