Epistemic Status
Written in a hurry while frustrated. I kind of wanted to capture my feelings in the moment and not sanitise them once I'm of clearer mind.
Context
This is mostly a reply to these two comments:
1) One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.
2) Agree. Fully agree we need new hard-to-fake signals. Ben's list of suggested signals is good. Other things I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more as well as increase the signals. Other suggestions of things to do are:
- Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should seek feedback from peers and direct reports.
- Zero tolerance for funding bad people. Sometimes an org might be tempted to fund or hire someone they know, or have reason to suspect, is a bad person: someone primarily seeking power or prestige rather than impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them, as we can pay them for impact. I think there is a case for being heavily risk averse here and avoiding hiring or funding such people.
A Little Personal Background
I've been involved in the rationalist community since 2017 and joined EA via social osmosis (I rarely post on the forum and am mostly active on social media [currently Twitter]). I was especially interested in AI risk and x-risk mitigation more generally, and still engage mostly with the existential security parts of EA.
Currently, my main objective in life is to help create a much brighter future for humanity (that is, I am most motivated by the prospect of creating a radically better world, as opposed to securing our current one from catastrophe). I believe strongly that such a world is possible (nothing in the fundamental laws prohibits it), and effective altruism seems like the movement through which to realise this goal.
I am currently training to pursue a career as an alignment researcher (learning maths now; I'll start a CS Master's this autumn and hopefully a PhD afterwards).
I'm a bit worried that people like me are not welcome in EA.
Motivations
Since my early to mid teens, I've wanted to have a profound impact on the world. It was how I came to grips with mortality: I felt that people like Newton, Einstein, etc. were immortalised by their contributions to humanity. Generations after their deaths, young children learn about their contributions in science class.
I wanted that. To make a difference. To leave a legacy behind that would immortalise me. I had plans for the world (these changed as I grew up, but I never permanently let go of my desire to have an impact).
Nowadays, it's mostly not a mortality thing (I aspire to [greatly] extended life), but the core idea of "having an impact" persists. Even if we cure aging, I wouldn't be satisfied with my life if it were insignificant, if I weren't even a footnote in the story of human civilisation. I want to be the kind of person who moves the world.
Argument
Purity Tests Aren't Effective
I want honour and glory, status, and prestige. I am not a particularly kind, generous, selfless, or altruistic person. I'm not vegan, and I'd only stop eating meat when it becomes convenient to do so. I want to be affluent and would enjoy (significant) material comfort. Nonetheless, I feel that I am very deeply committed to making the world a much better place; altruism just isn't a salient factor driving me.
Reading @weeatquince's comment, I realised that I basically match their description of "bad people". I found this both surprising and frustrating.
It feels like a purity test that isn't actually useful. I don't think I'm any less committed to improving the world just because my motives are primarily selfish, and I'm not sure what benefit the extra requirement for altruism adds. If what you care about is deep ideological commitment to improving the world, then things like veganism, frugality, etc. aren't primarily selecting for what you ostensibly care about, but for people who buy into a particular moral framework.
I don't think these purity tests are actually a strong signal of "wants to improve the world"; many people who want to improve the world aren't vegan or frugal. If EA has an idiosyncratic conception of what improving the world means, such that enjoying material comfort is incompatible with it, then that should be made (much) clearer. My idea of a brighter world involves much greater human flourishing (and thus much greater material comfort).
Status Seeking Isn't Immoral
Desiring status is a completely normal human motivation. Status seeking is ordinary human psychology (higher-status partners are better able to provide for their progeny, and thus make better mates). Excluding people who want more status excludes a lot of ambitious/determined people; are the potential benefits worth that cost? Ambitious, determined people seem especially valuable to have around if you want to improve the world.
Separately from the question of how well such exclusion serves the movement's ostensible goals, I find the framing of "bad people" problematic. Painting completely normal human behaviour as "immoral" seems unwise. I would expect directing such normal psychology towards productive ends to be encouraged, not condemned.
I guess it would be a problem if I tried to get involved in animal welfare but was a profligate meat eater, but that isn't the case (I want to work on AI safety [and if that goes well, on digital minds]). I don't think my meat eating makes me any less suited to those tasks.
Conclusions
I guess this is an attempt to express my frustration with what I consider counterproductive purity tests, and to ask whether the EA community is interested in people like me.
- Are people who are selfishly motivated to improve the world (or otherwise not "pure" [meat eaters, lavish spenders, etc.]) unwelcome in EA?
- Should such people not be funded?
I think there are many EAs with "pure" motivations. I don't know what the distribution of motivational purity looks like, but I don't expect to be the modal EA.
I came via osmosis from the rat community (partly due to EA caring about AI safety and x-risk). I was never an altruistic person (I'm still not).
I wouldn't have joined a movement focusing on improving lives for the global poor (I have donated to GiveWell's Maximum Impact Fund, but that's due to value drift after joining EA).
All this is to say: I think pure EAs exist, that's fine, and they should be encouraged.
Being vegan, frugal living, etc. are all fine IMO. I'm just against using them as purity tests. If the kind of people we want to recruit are people strongly committed to improving the world, then I don't think those are strong (or even useful) signals of that commitment.
I think ambition is a much stronger signal of someone who actually wants to make an impact than veganism/frugality/other moral fashions.
As long as we broadly agree on what a better world looks like (more flourishing, less suffering), then ambitious people seem valuable.
Even without strict moral alignment, we can pursue Pareto improvements: changes that count as progress under both our conceptions of a brighter world.
Most humans probably agree far more about what is moral than they disagree, and we can make the world better in ways that all parties endorse.
I don't think that (e.g.) not prioritising animal welfare is that big an obstacle to cooperating with other EAs. I don't want animals to suffer, and I wouldn't hinder efforts to improve animal welfare; I'd just work on the issues that matter more to me.
Very compatible with "big tent" EA IMO.