
Epistemic Status

Written in a hurry while frustrated. I kind of wanted to capture my feelings in the moment and not sanitise them when I'm of clearer mind.

 

Context

This is mostly a reply to these comments:

Exhibit A

1)  One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.

 

Exhibit B

Agree.

Fully agree we need new hard-to-fake signals. Ben's list of suggested signals is good. Other things I would add are vegan and cooperates with other orgs / other worldviews. But I think we can do more as well as increase the signals. Other suggestions of things to do are:

  • Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should seek feedback from peers and direct reports.
  • Zero tolerance to funding bad people. Sometimes an org might be tempted to fund or hire someone they know / have reason to expect it is a bad person or primarily seeking power or prestige not impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them as we can pay them for impact. I think there is a case to be heavily risk adverse here and avoid hiring or funding such people.

A Little Personal Background

I've been involved in the rationalist community since 2017 and joined EA via social osmosis (I rarely post on the forum and am mostly active on social media [currently Twitter]). I was especially interested in AI risk and x-risk mitigation more generally, and still engage mostly with the existential security parts of EA.

Currently, my main objective in life is to help create a much brighter future for humanity (that is, I am most motivated by the prospect of creating a radically better world as opposed to securing our current one from catastrophe). I believe strongly that one is possible (nothing in the fundamental laws prohibits it), and effective altruism seems like the movement for me to realise this goal.

I am currently training (learning maths, will start a CS Masters this autumn and hopefully a PhD afterwards) to pursue a career as an alignment researcher.

I'm a bit worried that people like me are not welcome in EA.

 

Motivations

Since my early to mid teens, I've always wanted to have a profound impact on the world. It was how I came to grips with mortality. I felt like people like Newton, Einstein, etc. were immortalised by their contributions to humanity. Generations after their deaths, young children learn about their contributions in science class.

I wanted that. To make a difference. To leave a legacy behind that would immortalise me. I had plans for the world (these changed as I grew up, but I never permanently let go of my desire to have an impact).

Nowadays, it's mostly not a mortality thing (I aspire to [greatly] extended life), but the core idea of "having an impact" persists. Even if we cure aging, I wouldn't be satisfied with my life if it were insignificant, if I weren't even a footnote in the story of human civilisation. I want to be the kind of person who moves the world.

 

Argument

Purity Tests Aren't Effective

I want honour and glory, status, and prestige. I am not a particularly kind, generous, selfless, or altruistic person. I'm not vegan, and I'd only stop eating meat when it becomes convenient to do so. I want to be affluent and would enjoy (significant) material comfort. Nonetheless, I feel that I am very deeply committed to making the world a much better place; altruism just isn't a salient factor driving me.

Reading @weeatquince's comment, I basically match their description for "bad people". It was both surprising and frustrating?

It feels like a purity test that is not that useful/helpful/valuable? I don't think I'm any less committed to improving the world just because my motives are primarily selfish? And I'm not sure what added benefit the extra requirement for altruism adds? If what you care about is deep ideological commitment to improving the world, then things like veganism, frugality, etc. aren't primarily selecting for what you ostensibly care about, but instead people who buy into a particular moral framework.

I don't think these purity tests are actually a strong signal of "wants to improve the world". Many people who want to improve the world aren't vegan or frugal. If EA has an idiosyncratic version of what improving the world means, such that enjoying material comfort is incompatible with improving the world, then that should be made (much) clearer? My idea of a brighter world involves much greater human flourishing (and thus much greater material comfort).


Status Seeking Isn't Immoral

Desiring status is a completely normal human motivation. Status seeking is ordinary human psychology (higher status partners are better able to take care of their progeny, and thus make better mates). Excluding people who want more status excludes a lot of ambitious/determined people; are the potential benefits worth it? Ambitious/determined people seem like valuable people to have if you want to improve the world?

Separate from the question of how well it serves the movement's ostensible goals, I find the framing of "bad people" problematic. Painting completely normal human behaviour as "immoral" seems unwise. I would expect such normal psychology, directed to productive purposes, to be encouraged rather than condemned.

I guess it would be a problem if I tried to get involved in animal welfare but was a profligate meat eater, but that isn't the case (I want to work on AI safety [and if that goes well, on digital minds]). I don't think my meat eating makes me any less suited to those tasks.

 

Conclusions

I guess this is an attempt to express my frustration with what I consider to be counterproductive purity tests, and to ask whether the EA community is interested in people like me.

  1. Are people who are selfishly motivated to improve the world (or otherwise not "pure" [meat eaters, lavish spenders, etc.]) not welcome in EA?
  2. Should such people not be funded?

Comments

Currently, my main objective in life is to help create a much brighter future for humanity (that is, I am most motivated by the prospect of creating a radically better world as opposed to securing our current one from catastrophe).

 

I want honour and glory, status, and prestige

I think that there is a question here of "yeah, but what are you optimizing for, in practice?". Are you optimizing for the honor, for the brighter future, or for a mix of both? If you were, for instance, a grantmaker, these might look very different.  

If you were, for instance, a grantmaker, these might look very different.


Strongly upvoted, I would say that for most roles these do look very different.
The "altruism" part of "effective altruism" is something I really value.
I would much rather collaborate with someone that wants to do the most good, than with someone that wants to get the most personal glory or status.
For example, someone that cares mostly about personal status will spend much less time helping others, especially in non-legible ways.

But that's mostly relevant in small scale altruism? Like I wouldn't give to beggars on the street. And I wouldn't make great personal sacrifice (e.g. frugal living, donating the majority of my income to charity [I was donating 10% to GiveWell's Maximum Impact Fund until a few months ago (Forex issues [I'm in Nigeria], now I'm unemployed)]) to improve the lives of others.

But I would (and did!) reorient my career to work on the most pressing challenges confronting humanity given my current/accessible skill set. I quit my job as a web developer, I'm going back to university for graduate study and plan to work on AI safety and digital minds.

My lack of altruism simply is not that relevant for trying to improve the condition of humanity.

What you're missing is that I want to attain status/prestige/glory by having a positive impact, not through some other means.

It feels like you're failing to grasp what that actually means?

I pursue status/prestige/glory by trying to have the largest impact on a brighter future, conditional on being the person I am (with the skills and personality I have).

But I would reorient my career to work on the most pressing challenges confronting humanity given my current/accessible skill set. I quit my job as a web developer, I'm going back to university for graduate study and plan to work on AI safety and digital minds.

 

I think this is very admirable and wish you success!
If indeed you're acting exactly like someone who straightforwardly wanted to improve the world altruistically, that's what matters :)

Edit: oh I see you were also donating 10%, that's also very altruistic! (At least from an outside view, I trust you on your motivations)

I think I've been defining "altruism" in an overly strict sense.

Rather than say I'm not altruistic, I mostly mean that:

  • I'm not impartial to my own welfare/wellbeing/flourishing
  • I'm much less willing to undertake personal hardship (frugality, donating the majority of my income, etc.) and I think this is fine

10% is not that big an ask (I can sacrifice that much personal comfort), but donating 50% or forgoing significant material comfort would be steps I would be unwilling to take.

(Reorienting my career doesn't feel like a sacrifice because I'll be able to have a larger positive impact through the career switch.)

Rather than say I'm not altruistic, I mostly mean that: • I'm not impartial to my own welfare/wellbeing/flourishing


To me, those are very different claims!

10% is not that big an ask (I can sacrifice that much personal comfort)

That's very relative! It's more than what the median EA gives, it's way more than what the median non-EA gives. When I talk to non-EA friends/relatives about giving, the thought of giving any% is seen as unimaginably altruistic.

Even people donating 50% are not donating 80%, and some would say it's not that big of an ask.
IMHO, claiming that only people making huge sacrifices and valuing their own wellbeing at 0 can be considered "altruists" is a very strong claim that doesn't match how the word is used in practice.

As Wikipedia says:

Altruism is the principle and moral practice of concern for happiness of other human beings or other animals ...

I now think it was a mistake/misunderstanding to describe myself as non-altruistic, and believe that I was using an unusually high standard.

(That said, when I started the 10% thing, I did so under the impression that it was the sacrifice I needed to make to gain acceptance in EA. Churches advocate a 10% tithe as well [which I didn't pay because I wasn't actually a Christian (I deconverted at 17 and open atheism is not safe, so I've hidden [and still hide] it)], but it did make me predisposed to putting up with that level of sacrifice [I'd faced a lot of social pressure to pay tithes at home, and I think I gave in once].

The 10% felt painful at first, but I eventually got used to it, and it became a source of pride. I could brag about how I was making the world a better place even with my meagre income.)

"That said, when I started the 10% thing, I did so under the impression that it was what the sacrifice I needed to make to gain acceptance in EA"

If this sentiment is at all widespread among people on the periphery of EA or who might become EA at some point, then I find that VERY concerning. We'd lose a lot of great people if everyone assumed they couldn't join without making that kind of sacrifice.

Helping others in non-legible ways is often one of the best ways to build personal status. Scope sensitivity and impartiality seem like bigger issues, if I'm trying to accurately picture differences between status-seeking motivations and impartially altruistic motivations.

I'm a rationalist.

I take scope sensitivity very seriously.

Impartiality. Maybe I'm more biased towards rats/EAs, but not in ways that seem likely to be decision relevant?

You could construct thought experiments in which I wouldn't behave in an ideal utilitarian way, but for scenarios that actually manifest in the real world, I think I can be approximated as following some strain of preference utilitarianism?

I'm trying to question

For example, someone that cares mostly about personal status will spend much less time helping others, especially in non-legible ways.

In the abstract, rather than talking about you specifically. 

Some quotes about helping other altruists:

by helping other people as much as possible, without any expectation of your favours being returned in the near future — you end up being much more successful, in a wide variety of settings, in the long run.

This is what you mention, and I agree with it.
But

if you and I share the same values, the social situation is very different: if I help you achieve your aims, then that’s a success, in terms of achieving my aims too. Titting constitutes winning in and of itself — there’s no need for a tat in reward. For this reason, we should expect very different norms than we are used to be optimal: giving and helping others will be a good thing to do much more often than it would be if we were all self-interested.

One of the incredible strengths of the EA community is that we all share values and share the same end-goals. This gives us a remarkable potential for much more in-depth cooperation than is normal in businesses or other settings where people are out for themselves. So next time you talk to another effective altruist, ask them how you can help them achieve their aims. It can be a great way of achieving what you value.

I really think altruism/value-alignment is a strength, and a group would lose a lot of efficiency by not valuing it.

(Of course, it's not the only thing that matters)

Empirically it feels hard to get much credit/egoist-value from helping people? Maybe your experience has just been different. But I don't find helping people very helpful for improving my status. 

Have you read How to Win Friends and Influence People? Iirc more than half the book is about taking an interest in other people, helping them, etc. 

Personal impact on a brighter world.

I'm not a grant maker and don't want to be.

I am not aware of any realistic scenario where I would act differently from someone who straightforwardly wanted to improve the world altruistically.

(The scenarios in which I would seem very contrived and unlikely to manifest in the real world.)

Could you describe a realistic scenario in which you think I'd act meaningfully different from an altruistic person in a way that would make me a worse employee/coworker?

Could you describe a realistic scenario in which you think I'd act meaningfully different from an altruistic person in a way that would make me a worse employee/coworker?

So the problem with this is that I don't know you. That said, here is my best shot:

You work hard, you attain a position of power and influence. Eventually, you realize that you have sort of been promoted to incompetence, or perhaps merely that you are probably no longer the best person for your particular position. But it's a tricky question, and nobody is in a good position to realize that this is the case, or to call you out on it. You decide to do nothing.

In this example, as perhaps in others, capabilities really matter. For example, people have previously mentioned offering Terence Tao a few million to work on AI alignment, and his motivations there presumably wouldn't matter, just the results. 

That sounds fair.

"You shouldn't fund/patronise me or support my research" is probably a recommendation I'd be loathe to make. (Excluding cases where I'm already funded well enough that marginal funding is not that helpful.)

Selflessly rejecting all funding because I'm not the best bet for this particular project is probably something that I'd be unwilling to do.

(But in practice, I expect that probabilistic reasoning would recommend funding me anyway. I think it's unlikely anyone would have enough confidence to justify not funding a plausible pathway before it's too late.)

But yeah, I think this is an example of where selfishness would be an issue.

Thanks for the reply!

In all fairness, I expect most people would be very reluctant to recommend that resources be directed away from the causes or organisations that give them status.

Having an aversion to "selfishness" might overcome this, but more likely it would just make them invent reasons why their organisation/area really is very important.

Reading @weeatquince's comment, I basically match their description for "bad people". It was both surprising and frustrating?

I don't think you do! It seems like you have a desire to be transparent about your motivations and are concerned about what others expect of you. These are the sorts of things that couldn't be further from the picture I get when I read weeatquince's comment. The place in the comment where they mention "bad person/people" is here:

Zero tolerance to funding bad people. Sometimes an org might be tempted to fund or hire someone they know / have reason to expect it is a bad person or primarily seeking power or prestige not impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them as we can pay them for impact. I think there is a case to be heavily risk adverse here and avoid hiring or funding such people.

When I read this and upvoted the comment, I didn't mean to discourage (with my upvote) all the people who don't have effective altruism as their primary life goal. Instead, I interpreted weeatquince as follows:

Bad people := "people whose cognition is constantly optimizing* for 'how do I come across?' and 'how can I advance?,' so much so that their reasoning about how to have impact is careless, inauthentic, or even deceptive."

(I'm talking about levels of carelessness, inauthenticity, or deception that are way beyond anything that's in the middle of the range for people in EA or rationality, given that our brains generally weren't selected for self-understanding and honesty very much.)

(I'd add that it doesn't matter much whether cognition is conscious, semi-conscious or unconscious. It matters more if there's potential for it to change/improve, but I'd say that if this potential is there, I wouldn't consider the label "bad person" appropriate in the first place!)

The point being: You don't sound like that!

(I also think it's common for "bad people" to claim they're highly altruistic. (Though sometimes you get "bad people" who give you subtle warnings – e.g., the scene in Game of Thrones where Littlefinger tells Ned Stark "Distrusting me was the wisest thing you've done since you climbed off your horse."))

I think it's good that you wrote this post. I would strongly disagree if weeatquince meant it the other way – for the reasons you mention! Even if not, it would suck if more people feel less welcome based on a misunderstanding.

*I don't think it's necessarily bad if someone is highly self-conscious (in a socially strategic way) in their interactions with others. Instead, what matters is whether they have an equally serious commitment to truth and integrity so that when they reason about how to have impact (around people whose entire life is built around having the most impact!), they feel responsible to not want to distort others' epistemics or mess up their efforts. (This includes being careful to double-check their reasoning and maybe even flag conflicts of interest whenever they find themselves advocating for self-serving conclusions.)

I think the community should welcome you. I share many of your motivations. You seem “altruistic” in the way that “counts” for most people’s purposes.

I really like this question because it raises the uncomfortable topic of people's actual motivations being different from what we imagine they are. It seems to me that the community should not lie to itself about… well, anything, but least of all this; and I do suspect there's a lot of self-deception going on.

I think there are many EAs with "pure" motivations. I don't know what the distribution of motivational purity is, but I don't expect to be a modal EA.

I came via osmosis from the rat community (partly due to EA caring about AI safety and x-risk). I was never an altruistic person (I'm still not).

I wouldn't have joined a movement focusing on improving lives for the global poor (I have donated to GiveWell's Maximum Impact Fund, but that's due to value drift after joining EA).

This is to say that I think that pure EAs exist, and I think that's fine, and I think they should be encouraged.

Being vegan, frugal living, etc. are all fine IMO.

 

I'm just against using them as purity tests. If the kind of people we want to recruit are people strongly committed to improving the world, then I don't think those are strong (or even useful) signals.

I think ambition is a much stronger signal of someone who actually wants to make an impact than veganism/frugality/other moral fashions.

As long as we broadly agree on what a better world looks like (more flourishing, less suffering), then ambitious people seem valuable.

 

Even without strict moral alignment, we can pursue Pareto-optimal improvements on what we consider a brighter world?

 

Like most humans probably agree a lot more on what is moral than they disagree, and we can make the world better in ways that we both agree on?

I don't think that e.g. not caring about animal welfare is that big an obstacle to cooperating with other EAs? I don't want animals to suffer, and I wouldn't hinder efforts to improve animal welfare. I'd just work on issues that are more important to me.

 

Very compatible with "big tent" EA IMO.

I think a lot of people miss the idea that "being an EA" is a different thing from being "EA adjacent"/"in the EA community"/ "working for an EA organization" etc. I am saying this as someone who is close to the EA community, who has an enormous amount of intellectual affinity, but does not identify as an EA. If the difference between the EA label and the EA community is already clear to you, then I apologize for beating a dead horse.

It seems from your description of yourself like you're actually not an Effective Altruist in the sense of holding a significantly consequentialist worldview that one tries to square with one's choices (once again, neither am I). From your post, the main way that I see in which your worldview deviates from EA is that, while lots of EAs are status-motivated, your worldview seems to include the idea that typical levels of status-based and selfish motivations aren't a cognitive error that should be pushed against.

I think that's great! You have a different philosophical outlook (from the very little I can see in this post, perhaps it's a little close to the more pro-market and pro-self-interest view of people like Zvi, who everyone I know in the community respects immensely). I think that if people call this "evil" or "being a bad person", they are being narrow-minded and harmful to the EA cause. But I also don't think that people like you (and me) who love the EA community and goals but have a personal philosophy that deviates significantly from the EA core should call ourselves EAs, any more than a heterosexual person who has lots of gay friends and works for a gay rights organization should call themselves LGBT. There is a core meaning to being an "effective altruist", and you and I don't meet it.

No two people's philosophies are fully aligned, and even the most modal EA working in the most canonical EA organization will end up doing some things that feel "corporate" or suboptimal, or that matter to other people but not to them. If you work for an EA org, you might experience some of that because of your philosophical differences, but as long as you're intellectually honest with yourself and others, and able to still do the best you can (and not try to secretly take your project in a mission-unaligned direction) then I am sure everyone would have a great experience. 

My guess is that most EA organizations would love to hire/fund someone with your outlook (and what some of the posts you got upset with are worried about are people who are genuinely unaligned/deceptive and want to abuse the funding and status of the organization for personal gain). However if you do come in to an EA org and do your best, but people decline to work with you because of your choices or beliefs, I think that would be a serious organizational problem and evidence of harmful cultishness/"evaporative cooling of group beliefs".

I have a significantly consequentialist world view.

I am motivated by the vision of a much better world.

I am trying to create such a better world. I want to devote my career to that project.

I'm trying to optimise something like "expected positive impact on a brighter future conditional on being the person that I am with the skills available to/accessible for me".

The ways I perceive that I differ from EAs are:

  • Embracing my desire for status/prestige/glory/honour
  • I'm not impartial to my own welfare/wellbeing/flourishing
  • I'm much less willing to undertake personal hardship (frugality, donating the majority of my income, etc.) and I think this is fine
  • I'm not (currently) vegan

I want to say that I'm not motivated by altruism. But people seem to be imagining behaviour/actions that I oppose/would not take, and I do want to create a brighter future.

And I'm not sure how to explain why I want a much brighter future in a way that isn't altruistic.

  • A much (immensely) better world is possible
  • We can make that happen

The "we should make that happen" feels like an obvious conclusion. Explaining the why draws blanks.

Rather than saying I'm not altruistic, I think it's more accurate to say that I'm less willing to undertake significant personal hardship and I'm more partial to my own welfare/flourishing/wellbeing.

Maybe that makes me not EA, but I was under the impression that I was simply a non-standard EA.

I'm trying to optimise something like "expected positive impact on a brighter future conditional on being the person that I am with the skills available to/accessible for me".

If this is true, then I think you would be an EA. But from what you wrote it seems that you have a relatively large term in your philosophical objective function (as opposed to your revealed objective function, which for most people gets corrupted by personal stuff) on status/glory. I think the question determining your core philosophy would be which term you consider primary. For example if you view them as a means to an end of helping people and are willing to reject seeking them if someone convinces you they are significantly reducing your EV then that would reconcile the "A" part of EA.

A piece of advice I think younger people tend to need to hear is that you should be more willing to accept that "X is something I like and admire, and I am also not X" without having to then worry about your exact relationship to X or redefining X to include yourself (or looking for a different label Y). You are allowed to be aligned with EA but not be an EA and you might find this idea freeing (or I might be fighting the wrong fight here).

I plan to seek status/glory through making the world a better place.

That is, my desire for status/prestige/impact/glory is interpreted through an effective-altruism-like framework.

"I want to move the world" transformed into "I want to make the world much better".

"I want to have a large impact" became "I want to have a large impact on creating a brighter future".

I joined the rationalist community at a really impressionable stage. My desire for impact/prestige/status, etc. persisted, but it was directed at making the world better.

I think the question determining your core philosophy would be which term you consider primary.

If this is not answered by the earlier statements, then it's incoherent/inapplicable. I don't want to have a large negative impact, and my desire for impact/prestige cannot be divorced from the context of "a much brighter world".

For example if you view them as a means to an end of helping people and are willing to reject seeking them if someone convinces you they are significantly reducing your EV then that would reconcile the "A" part of EA.

My EV is personally making the world a brighter place.

I don't think this is coherent either. I don't view them as a means to an end of helping people.

But I don't know how seeking status/glory by making the world a brighter place could possibly be reducing my expected value?

It feels incoherent/inapplicable.

A piece of advice I think younger people tend to need to hear is that you should be more willing to accept that "X is something I like and admire, and I am also not X" without having to then worry about your exact relationship to X or redefining X to include yourself (or looking for a different label Y). You are allowed to be aligned with EA but not be an EA and you might find this idea freeing (or I might be fighting the wrong fight here).

This is true, and if I'm not an EA, I'll have to accept it. But it's not yet clear to me that I'm just "very EA adjacent" as opposed to "fully EA". And I do want to be an EA, I think.

I might modify my values in that direction (which is why I said I'm not "yet" vegan, as opposed to not vegan).

I really enjoyed your frankness.

From reading what you wrote I have a suspicion that you may not be a bad person. I don’t want to impose anything on you and I don’t know you, but from the post you seem mainly to be ambitious and have a high level of metacognition. Although it’s possible that you are narcissistic and I’m being swayed by your honesty.

When it comes to being "bad" - have you read Reducing long-term risks from malevolent actors? It discusses at length what it means to be a bad actor. You may want to see how much of this applies to you. Note that these traits lie on a dimension and have to be somewhat prevalent in the population because they increased genetic fitness in certain contexts, so it's about quantity.

Regarding status: I would be surprised if a significant portion of EAs, or even the majority, were not status-driven. My understanding is that status is a fundamental human motive. This is not a claim about whether that's good or bad, but rather pointing out that there may be a lot of selfish motivation here. In fact, I think what effective altruism nailed is hacking status in a way that is optimal for the world - you gain status the more intellectually honest you are and the more altruistic you are, which seems like a self-correcting system to me.

Personally, I have seen a lot of examples of people who are highly altruistic / altruistic at first glance / passing a lot of purity tests who were optimizing for self-serving outcomes when given a choice, sometimes leading to catastrophic outcomes for their groups in the long term. I have also seen at least a dozen examples of people who broadcast strong signals of their character and were later exposed as heavily immoral. This is also in accordance with what the post about malevolent actors points out:

Such individuals might even deliberately display personality characteristics entirely at odds with their actual personality. In fact, many dictators did precisely that and portrayed themselves—often successfully—as selfless visionaries, tirelessly working for the greater good (e.g., Dikötter, 2019).

So, it seems to me that the real question is whether:

  • your output is negative (including n-order effects),
  • you are not able to override your self-serving incentives when there is a misalignment with the community.

So, I second what was mentioned by NunoSempere that what you [are able to] optimize for is an important question.

Personally, when hiring, one of the things that scares me the most is people of low integrity who can sacrifice organizational values and norms for personal gain (e.g. sabotaging psychological safety to be liked, sabotaging others to have more power, avoiding truth-seeking because of personal preferences, etc.). So basically people who do not live up to their ideals (or reported ideals) - again with the caveat that it's about some balance and not 100% purity - we all have our shortcomings.

In my view, a good question to ask yourself (if you are able to admit it to yourself) is whether you have a track record of integrity - respecting certain norms even if they do not serve you. For example, I think it's easy to observe these days by watching yourself play games - do you respect fair play, do you have a respect for rules, do you cheat or have a desire to cheat, do you celebrate the wins of others (especially competitors), etc. I think it can be a good proxy for real-world games. Or recall how you behaved toward others and toward your ideals when you were in a position of power. I think this can give you an idea of what you are optimizing for.

I also heavily recommend reading about virtues/values for utilitarians, to see if some of the proposals resonate with you - especially Virtues for Real-World Utilitarians by Stefan Schubert and Lucius Caviola.

Epistemic status: 2am ramble.

It's about trust, although it definitely varies in importance from situation to situation. There's a very strong trust between people who have strong shared knowledge that they are all utilitarian. Establishing that is where the "purity tests" get value.

Here's a little example.

Let's say you had some private information about a problem/solution that the EA community hadn't yet worked on, and the following choice: A) reveal it to the community, with near certainty that the problem will be solved at least as well as if you yourself solved it (because you still might be the person to solve it), and get only a little recognition for being the person to start the thread of investigation. B) work on/think about the solution yourself for some time first, which gives you a significantly higher likelihood of getting credit for the solution, with few/no personal repercussions.

(B) is strictly worse from a utilitarian perspective than (A)

Which would you do? In almost all industries/communities/whatever, people do B. Many EAs (me included) like to imagine we can be a community where people do A, even though it is personally bad to do A. There's a lot of kinda-decision-theory-y stuff that becomes possible between people who know each other will take (A)-like options.

For X-risk reduction (well, direct work at least), it's much less important than in other EA stuff, because there aren't as many ways for these situations to come up: anyone who groks (near-term) X-risk knows it's in their own greater interest to increase progress on the solution rather than receive the recognition.

For other areas though, personal and altruistic interests aren't aligned and so these situations are gonna come up.

I personally wouldn't call anyone "bad", it's an unhealthy way to think. I prefer people be honest about their motivations, and big respect to you for doing so.

I agree that high-trust networks are valuable (and therefore important to build or preserve). However, I think that trustworthiness is quite disconnected from how people think of their life goals (whether they're utilitarian/altruistic or self-oriented). Instead, I think the way to build high-trust networks is by getting to know people well and paying attention to the specifics.

For instance, we can envision "selfish" people who are nice to others, but also utilitarians who want to sabotage others over TAI timeline disagreements or disagreements about population ethics. Similarly, we can envision "selfish" people who are transparent about their motivations, aware of their weaknesses, etc., but utilitarians who are deluded. (E.g., a utilitarian may keep a project idea secret because it doesn't even occur to them that others might be a better fit – they may think they excel at everything and lack trust in others / not want them to have influence.)

I think it's bad to have social norms that punish people who admit they have self-oriented goals. I think this implicitly reinforces a culture where claiming to be fully utilitarian gives you a trustworthiness benefit – but that's the type of thing that "bad actors" would exploit.

Huh. If I had a bright idea for AI Safety, I'd share it and expect to get status/credit for doing so.

The idea of hiding any bright alignment research ideas I came up with didn't occur to me.

I'm under the impression that because of common-sense morals (i.e. I wouldn't deliberately sabotage to get the chance to play hero), selfishly motivated EAs like me don't behave particularly differently in common scenarios.

There are scenarios where my selfishness will be highlighted, but they're very, very narrow states and unlikely to materialise in the real world (highly contrived and only in thought experiment land). In the real world, I don't expect it to be relevant. Ditto for concerns about superrational behaviour. The kind of superrational coordination that's possible for purely motivated EAs but isn't possible with me is behaviour I don't expect to actually manifest in the real world.

Yeah, the example above with choosing to not get promoted or not receive funding is a more realistic scenario.

I agree these situations are somewhat rare in practice.

Re. AI Safety, my point was that these situations are especially rare there (among people who agree it's a problem, which is about states of knowledge anyway, not about goals)

Thanks for this post, I think it's a good discussion.

I respect you immensely for writing this, but some degree of altruism is required for being an effective altruist - not an infinite duty to self-sacrifice, but the understanding that you can be trusted to make sacrifices on big things, and costly signals that you will do so are helpful. 10% giving is one such costly signal, and it's not required that you do all of them (I also think you overestimate the fraction of EAs who are vegan). However, I think the disjunction between wanting the best for the world and wanting to have a high profile by improving the world occurs everywhere; in the fairly plausible world where AI alignment is impossible, your most effective action is probably either not working on AI or being subtly so bad at it that the field suffers, neither of which will win you much status (assuming you can't prove that alignment is impossible). This is a general instance of the problem outlined here: "I am most motivated by the prospect of creating a radically better world as opposed to securing our current one from catastrophe". A biased motivation combined with the unilateralist's curse can easily give your actions negative expected utility but positive expected status-payoff: you don't lose face if everyone goes extinct. There are lots of plausible real examples of this, like geo-engineering or gain-of-function research. Which way you'd fall on these questions in practice is a much better test of whether you're "actually EA" than whether you buy cheap things.

On a more institutional level, it is unhelpful for EA to become associated with narcissism (which in some circles it already is). Since the cost is borne by the movement, not the individual, we expect misalignment until being EA is harmful to your reputation, so some degree of excluding narcissists with marginally positive expected personal impact is warranted.

It seems possible to me that you have a concept-shaped hole for the concept "bad people".

[comment deleted]