In her appearance on the 80,000 Hours podcast, Ajeya Cotra introduces the concept of the train to crazy town.
Ajeya Cotra: And so when the philosopher takes you to a very weird unintuitive place — and, furthermore, wants you to give up all of the other goals that on other ways of thinking about the world that aren’t philosophical seem like they’re worth pursuing — they’re just like, stop… I sometimes think of it as a train going to crazy town, and the near-termist side is like, I’m going to get off the train before we get to the point where all we’re focusing on is existential risk because of the astronomical waste argument. And then the longtermist side stays on the train, and there may be further stops.
This analogy immediately resonated with me and I still like it a lot. In this post, I want to use this metaphor to clarify a couple of problems that I often encounter regarding career choice and other moral considerations.
Feel free to add metaphors in the comments.
The train to crazy town goes every five minutes:
Getting off the train is not a final decision. You can always hop back on. Clearly, there is some opportunity cost and a bit of lost time, but in the grand scheme of things, there is still lots of room for change in your moral views. Especially when it comes to career choice, some people I talked to definitely take path dependencies too seriously. I had conversations along the lines of “I already did a Bachelor’s in Biology and just started a Master’s in Nanotech, surely it’s too late for me to pivot to AI safety”. To which my response is “You’re 22; if you really want to go into AI safety, you can easily switch”.
The return train goes every five minutes too:
If your current stop feels a bit too close to crazy town, you can always go back. The example above also applies in the other direction. In general, I feel like people overestimate the importance of their path dependencies rather than updating their plans when their moral views change. Often, some years down the line, they can’t live with the discrepancy between their moral beliefs and their actions anymore, and end up changing careers even later rather than cutting their losses early.
You can ride the train (nearly) for free:
There are a ton of ways to take round trips to the previous or next town without having to get off the train. Obviously, there is a ton of material online, such as 80K, Holden Karnofsky’s Aptitudes post, or probablygood, and many people use these resources already. However, I think people still don’t talk enough to other EAs about these questions. The knowledge and experience you gain from one weekend of 1-on-1s at EAG or EAGx is insanely cheap and valuable compared to most reading or thinking you can do on your own.
You are the conductor:
You control the speed at which your train travels. There is no need to ride at an uncomfortably slow or fast pace.
More experienced people tend to ride longer:
What people intuitively think of as a “crazy view” depends, in my experience, on how long they have considered themselves an EA. I guess some of my current beliefs would have seemed crazy to my past self from six years ago.
More people are moving to further towns:
While the movement is growing a lot and all towns are therefore becoming more populous, it seems like some of the further towns are gaining disproportionately many citizens. Longtermism, AI safety, biosecurity, and so on were certainly less mainstream five years ago than they are now.
There is no crazy town:
Nobody has found crazy town yet. Some people got on the first train, went full gas, no brakes, and still haven’t reached it. I talked to someone with this mentality at EAG 2021 and absolutely loved it. They just never get off and see where the train takes them. This might not be for most people, but the movement definitely needs this mentality to explore ideas that seem crazy today but might be more mainstream a couple of years down the line. And even if those ideas never reach the mainstream, it is still important that somebody explores them.
The analogy resonated with me too. It reminded me of a part of my journey where I went to what was, for me, crazy town and came back. I’d like to share my story, partly to illustrate the concept. And if others shared their stories too, I think that could be valuable, or at least interesting.
At one point I decided that by far the best possible outcome for the future would be the so-called hedonium shockwave. The way I imagined it at the time, an AI would fill the universe as fast as possible with a homogeneous substance that experiences extreme bliss, e.g., nanochips that simulate sentient minds in a constant state of extreme bliss. And those minds might be just barely sentient enough to experience bliss, to save computation power for more minds. Since this seemed so overwhelmingly important, I thought that the goal of my life should be to increase the probability of a hedonium shockwave.
But then I procrastinated on doing anything about it. When I thought about why, I realized that the prospect of a hedonium shockwave didn’t excite me. In fact, the scenario seemed sad and worrying. After more contemplation, I think I figured out why. I viewed myself as an almost pure utilitarian (except for some selfishness). This seemed like the correct conclusion from the utilitarian point of view, hence I concluded that this was what I wanted. But while utilitarianism might do a fine job of approximating my values in most situations, it did a bad job in this edge case. Utilitarianism was a map, not the territory. So nowadays I still try to figure out what utilitarianism would suggest doing, but then I try to remember to ask myself: is this what I really want (or really think)? My model of myself might differ from the real me. In my diary at the time, I made this drawing to illustrate it. It’s superfluous to the text, but drawings help me remember things.
Thank you so much, Saulius! I had never heard of prioritarianism. That is amazing! Thanks for telling me!!
I’m not the best person to speak for the pure utilitarians in my life, but yes, I think it was what you said: starting with one set of emotions (the utilitarian’s personal experience of preferring the feeling of pleasure over the feeling of suffering in his own life), and extrapolating based on logic to assume that pleasure is good no matter who feels it and that suffering is bad no matter who feels it.