Epistemic Status
Written in a hurry while frustrated. I wanted to capture my feelings in the moment rather than sanitise them later when I'm of clearer mind.
Context
This is mostly a reply to these comments:
1) One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.
Agree.
Fully agree we need new hard-to-fake signals. Ben's list of suggested signals is good. Other things I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more as well as increase the signals. Other suggestions of things to do are:
- Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should seek feedback from peers and direct reports.
- Zero tolerance for funding bad people. Sometimes an org might be tempted to fund or hire someone they know / have reason to expect is a bad person, or is primarily seeking power or prestige rather than impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them, as we can pay them for impact. I think there is a case to be heavily risk averse here and avoid hiring or funding such people.
A Little Personal Background
I've been involved in the rationalist community since 2017 and joined EA via social osmosis (I rarely post on the forum and am mostly active on social media [currently Twitter]). I was especially interested in AI risk and x-risk mitigation more generally, and still engage mostly with the existential security parts of EA.
Currently, my main objective in life is to help create a much brighter future for humanity (that is, I am most motivated by the prospect of creating a radically better world as opposed to securing our current one from catastrophe). I believe strongly that such a world is possible (nothing in the fundamental laws prohibits it), and effective altruism seems like the right movement through which to realise this goal.
I am currently training (learning maths, will start a CS Masters this autumn and hopefully a PhD afterwards) to pursue a career as an alignment researcher.
I'm a bit worried that people like me are not welcome in EA.
Motivations
Since my early to mid teens, I've always wanted to have a profound impact on the world. It was how I came to terms with mortality. I felt like people like Newton, Einstein, etc. were immortalised by their contributions to humanity. Generations after their deaths, young children learn about their contributions in science class.
I wanted that. To make a difference. To leave a legacy behind that would immortalise me. I had plans for the world (these changed as I grew up, but I never permanently let go of my desire to have an impact).
Nowadays, it's mostly not a mortality thing (I aspire to [greatly] extended life), but the core idea of "having an impact" persists. Even if we cure aging, I wouldn't be satisfied with my life if it were insignificant — if I weren't even a footnote in the story of human civilisation — I want to be the kind of person who moves the world.
Argument
Purity Tests Aren't Effective
I want honour and glory, status, and prestige. I am not a particularly kind, generous, selfless, or altruistic person. I'm not vegan, and I'd only stop eating meat if it became convenient to do so. I want to be affluent and would enjoy (significant) material comfort. Nonetheless, I feel that I am very deeply committed to making the world a much better place; altruism just isn't a salient factor driving me.
Reading @weeatquince's comment, I realised I basically match their description of "bad people". That was both surprising and frustrating?
It feels like a purity test that is not that useful/helpful/valuable? I don't think I'm any less committed to improving the world just because my motives are primarily selfish? And I'm not sure what the extra requirement for altruism adds? If what you care about is deep ideological commitment to improving the world, then things like veganism, frugality, etc. aren't primarily selecting for what you ostensibly care about, but instead for people who buy into a particular moral framework.
I don't think these purity tests are actually a strong signal of "wants to improve the world". Many people who want to improve the world aren't vegan or frugal. If EA has an idiosyncratic version of what improving the world means, such that enjoying material comfort is incompatible with improving the world, then that should be made (much) clearer? My idea of a brighter world involves much greater human flourishing (and thus much greater material comfort).
Status Seeking Isn't Immoral
Desiring status is a completely normal human motivation; status seeking is ordinary human psychology (higher-status partners are better able to take care of their progeny, and thus make better mates). Excluding people who want more status excludes a lot of ambitious/determined people; are the potential benefits worth it? Ambitious/determined people seem like valuable people to have if you want to improve the world?
Separately from the matter of how well this serves the movement's ostensible goals, I find the framing of "bad people" problematic. Painting completely normal human behaviour as "immoral" seems unwise. I would expect directing such normal psychology toward productive ends to be encouraged, not condemned.
I guess it would be a problem if I tried to get involved in animal welfare but was a profligate meat eater, but that isn't the case (I want to work on AI safety [and if that goes well, on digital minds]). I don't think my meat eating makes me any less suited to those tasks.
Conclusions
I guess this is an attempt to express my frustration with what I consider to be counterproductive purity tests and inquire if the EA community is interested in people like me.
- Are people selfishly motivated to improve the world (or otherwise not "pure" [meat eaters, lavish spenders, etc.]) not welcome in EA?
- Should such people not be funded?
I really enjoyed your frankness.
From reading what you wrote, I have a suspicion that you may not be a bad person. I don't want to impose anything on you and I don't know you, but from the post you seem mainly to be ambitious and to have a high level of metacognition. Although it's possible that you are narcissistic and I'm just being swayed by your honesty.
When it comes to being "bad" - have you read Reducing long-term risks from malevolent actors? It discusses at length what it means to be a bad actor. You may want to see how many of those traits apply to you. Note that these traits lie on a continuum and have to be somewhat prevalent in the population because they increase genetic fitness in certain contexts, so it's a matter of degree.
Regarding status: I would be surprised if a significant portion of EAs, or even the majority, were not status-driven. My understanding is that status is a fundamental human motive. This is not a claim about whether that's good or bad, but rather a point that there may be a lot of selfish motivation here. In fact, I think what effective altruism has nailed is hacking status in a way that is optimal for the world - you gain status the more intellectually honest and the more altruistic you are, which seems like a self-correcting system to me.
Personally, I have seen a lot of examples of people who were highly altruistic / altruistic at first glance / passing a lot of purity tests, but who optimized for self-serving outcomes when given a choice, sometimes leading to catastrophic outcomes for their groups in the long term. I have also seen at least a dozen examples of people who broadcast strong signals of their character only to be exposed as heavily immoral. This is also in accordance with what the post about malevolent actors points out:
So, it seems to me that the real question is whether:
So, I second what NunoSempere mentioned: what you [are able to] optimize for is an important question.
Personally, when hiring, one of the things that scares me the most is people of low integrity who can sacrifice organizational values and norms for personal gain (e.g. sabotaging psychological safety to be liked, sabotaging others to gain power, avoiding truth-seeking because of personal preferences, etc.). So basically people who do not live up to their ideals (or reported ideals) - again with the caveat that it's about some balance and not 100% purity - we all have our shortcomings.
In my view, a good question to ask yourself (if you are able to admit it to yourself) is whether you have a track record of integrity - respecting certain norms even when they do not serve you. For example, I think this is easy to observe nowadays by watching yourself play games - do you respect fair play, do you respect the rules, do you cheat or feel a desire to cheat, do you celebrate the wins of others (especially competitors), etc.? I think it can be a good proxy for real-world games. Or recall how you behaved towards others and your ideals when you were in a position of power. I think this can give you an idea of what you are optimizing for.
I also strongly recommend exploring the topic of virtues / values for utilitarians to see whether some of the proposals resonate with you, especially Virtues for Real-World Utilitarians by Stefan Schubert and Lucius Caviola.