
Effective altruism is a complicated idea. When an idea is complicated, people often don't understand the full idea but instead a low-resolution version of it, which they piece together from snippets of conversation, impressions they get from other people, and vague recollections of media articles.
 

What's the current low-resolution version of effective altruism? Is it positive? What would a better low-resolution image be?

Here's what I wish the low-resolution version was:

"Effective altruists believe that if you actually try to do as much good as you can with your money or time, you'll do thousands of times more good than if you donate in the usual ways. They also think that you should do this."

I'm sympathetic to the idea of trying to make the spread of impact the key idea. I think the problem in practice is that "do thousands of times more good" is too abstract to be sticky and easily understood, so it gets simplified to something more concrete.

Nice! Could you do a version which is 70% lower resolution? 😁

kokotajlod, 3y:
Thanks! How about these:

"Effective altruists believe you'll do 1000x more good if you prioritize impact"
"Effective altruists believe you'll do 1000x more good if you actually try to do the most good you can."
"Effective altruists believe you'll do 1000x more good if you shut up and calculate"
"Effective altruists believe you'll do 1000x more good if you take cost-effectiveness calculations seriously"

I think the third one is my favorite, haha, but the second one is what I think would actually be best.
Vaidehi Agarwalla, 3y:
I'd add "Most" to the beginning of all of those and then I think it's more accurate but still low-res :P
kokotajlod, 3y:
Sounds good.

I find the other answers about the actual low-resolution versions of EA people see in the wild fascinating.

I go with the classic. If people ask, I give them a three-word answer: "doing good better".

If they ask for more, it's something like: "People want to do good in the world, and some efforts to do good produce better outcomes than others. EA is about figuring out how to get the best outcomes (or the largest positive impact) for time/money/effort relative to what a person thinks is important."

Tyler Cowen's low-resolution version (from https://conversationswithtyler.com/episodes/patrick-collison/):

"COWEN: A lot of giving is not very rational. Whether that’s good or bad, it’s a fact. And if you try to make it too rational in a particular way, a very culturally specific way, you’ll simply end up with less giving. And then also, a lot of the particular targets of effective altruism, I’m not sure, are bad ideas. So somewhere like Harvard, it has a huge endowment, it’s super non- or even anti-egalitarian. But it’s nonetheless a self-replicating cluster of creativity. And if you’re a rich person, Harvard was your alma mater, and you give them a million dollars, is that a bad idea? I don’t know, but effective altruists tend to be quite sure it’s a bad idea."
 

This seems mostly focused on the idea that EA tries to shift existing philanthropy to be given using more rational decision-making procedures.

"The effective altruism movement emerged around the start of this decade in Oxford. The big idea is to encourage a broadly utilitarian/rationalist approach to doing good, and it is particularly aimed at graduate higher earners who have more money to give and who thus, on a utilitarian calculus, can achieve more. This approach has proved particularly attractive to those with backgrounds in maths and computer science, and chapters of effective altruists have sprung up in Silicon Valley, New York and London, with many pledging at least 10% of their income to charity."

https://www.theguardian.com/money/belief/2017/nov/23/its-called-effective-altruism-but-is-it-really-the-best-way-to-do-good

(Rough note) This seems to have strands of: 'rich people focused', 'rich people are more moral', and 'E2G focus'.

Pessimistically, my guess is that the current low-res impression of EA is something like: charity for nerds. 'Charity' still gets taken to mean 'global health charities'. Earning to give too often gets taken to be the main goal, and maybe there's also an overemphasis on EA's confidence in what can be measured / compared / predicted (a kind of naïve utilitarianism).

(Incidentally, I'm not sure effective altruism is an idea — maybe it's more like (i) a bunch of motivating arguments and concepts; (ii) the intellectual project of building on them; (iii) the practical project of 'following through' on those ideas; and (iv) the community of people engaged in those projects. Will MacAskill's 'The Definition of Effective Altruism' is really good.)

'Charity for nerds' doesn't sound like an awful low-res version compared to others suggested, like 'moral hand-washing for rich people'.

'Charity for nerds' has nice properties like:

  • it's okay if you're not into EA (maybe you're just not nerdy enough), compared to 'EA thinks you're evil if you don't agree with EA'
  • selects for nerdy people, who are willing to think hard about their work
Owen Cotton-Barratt, 3y:
I agree with this. I think "do-gooding for nerds" might be preferable to "charity for nerds", but "charity for nerds" is probably closer to current perceptions.

Here's the case that the low-fidelity version is actually better. Not saying I believe it, but trying to outline what the argument would be...

Say the low-fidelity version is something like: "Think a bit about how you can do the most good with your money and time, and do some research." 

Could this be preferable to the real thing?

It depends on how sharply diminishing the returns are to the practice of thinking about all of this stuff. Sometimes it seems like effective altruists see no diminishing returns at all. But it's plausible that they are steeply diminishing, and that effectively the value of EA is avoiding really obviously bad uses of time and money, rather than successfully parsing whether AI safety is better or worse than institutional decision-making as an area of focus. 

If you can get most of the benefits of EA with people just thinking a little about whether they're doing as much good as they could be, perhaps the low-fidelity EA is the best EA: does a lot of good, saves a lot of time for other things. And that's before you add in the potential of the low-fidelity version to spread more quickly and put off fewer people, thereby also potentially doing much more good.

Unfortunately, I think the importance of EA actually goes up as you focus on better and better things. My best guess is that the distribution of impact is lognormal, which means that going from, say, the 90th-percentile best thing to the 99th could easily be a bigger jump than going from, say, the 50th percentile to the 80th.
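To make that concrete, here's a minimal sketch of the claim (an illustration only, with an arbitrarily chosen log-scale standard deviation of 2, not a figure from any EA analysis). Under a lognormal distribution, quantile(p) = exp(mu + sigma * z_p), so the multiplicative jump between two percentiles depends only on sigma and the gap between their z-scores, and the 90th-to-99th jump is bigger than the 50th-to-80th jump for any sigma > 0:

```python
# Minimal sketch: assumes impact is lognormally distributed, with an
# arbitrarily chosen log-scale standard deviation (sigma = 2).
import math
from statistics import NormalDist

def percentile_ratio(p_lo: float, p_hi: float, sigma: float = 2.0) -> float:
    """Ratio of the p_hi-th to the p_lo-th percentile of a lognormal
    distribution with log-scale standard deviation `sigma`."""
    z = NormalDist()  # standard normal, used to get z-scores
    # quantile(p) = exp(mu + sigma * inv_cdf(p)); mu cancels in the ratio.
    return math.exp(sigma * (z.inv_cdf(p_hi) - z.inv_cdf(p_lo)))

print(percentile_ratio(0.50, 0.80))  # ~5.4x: 50th -> 80th percentile
print(percentile_ratio(0.90, 0.99))  # ~8.1x: 90th -> 99th percentile
```

Since mu cancels out, the comparison doesn't depend on the overall scale of impact, only on how heavy-tailed the distribution is.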

You're right that at some point diminishing returns to more research must kick in and you should take action rather than do more research, but I think that point is well beyond "don't do something obviously bad", and more like "after you've thought really carefully about what the very top priority might be, including potentially unconventional and weird-seeming issues".

I wonder if a better low-resolution version of EA would highlight how some people in the EA community are willing to make large changes to their careers or career plans to solve big but neglected problems.

An article on this could cite examples like Elie Hassenfeld and Holden Karnofsky switching from hedge fund trading to founding GiveWell, Ben Todd switching from wanting to work on climate change to founding 80,000 Hours, or Marie Gibbons switching from veterinary work to working on clean meat.

I feel like mainstream media coverage of EA focuses a bit too much on donations. I'd rather have more people optimizing their career plans to do direct work on pressing problems than have more people earning to give or just optimizing their donations.

My favourite version is that effective altruism is a question we ask ourselves: how can we use our resources to do more good (than we otherwise would if we didn't ask the question)? Or, as Helen Toner put it:

“How can I do the most good, with the resources available to me?”


 

You've heard of "doing good", but have you heard of "doing good better"? Really makes you think.

From another tweet on the same thread with @nonmayorpete:

"Some people get a bad "i can be more virtuous by being smarter haha" impression. it also has a rep for being very utilitarian and putting a too much weight into world-ending AI risk.

this is the first time i've seen negative vibes about it being money-only though"

I'm just showing this as an example, not because I've heard these criticisms before. This is my first time hearing the criticism about "being virtuous by being smarter" and about "putting too much weight on world-ending AI risk". I've heard the latter from people within the community, but not from someone outside of it.

But I'm not from SF or the US, so I'm not really exposed to people who hold these low-resolution definitions of effective altruism. Here in the Philippines, thankfully, I don't think we have any negative, low-resolution versions of EA circulating yet.

I think SF, in general, is not representative because a lot more people there (non-EAs) are aware of EA, AI risk, etc.

Also, the EA community in SF is different from most other EA communities, including other US EA communities, because of its overlap with the SF tech scene and the rationality community.

So although I haven't seen that criticism from non-EAs as much, I think it's a reasonable low-res version that someone in SF might get if they just hear about EA. 

Here's one in a thread I saw on Twitter from @nonmayorpete. This tweet got 1,600 likes:

"Hey SF-based techies I wrote your resolutions for you:
- Delete ride-hailing and food-delivery apps
- Learn 3 bus lines
- Walk the Crosstown Trail
- Google who your Supervisor is
- Volunteer with your time, not your skills
- Pick a cause that is not effective altruism"

Another user replied: "What's effective altruism, and what's wrong with it?"

From @nonmayorpete: "I’ll let you look it up. It’s a completely fair topic to be interested in but it conveniently lets high-income people justify not getting their hands dirty in literally anything"

Pete's other tweets aren't as negative about EA as that one, and Luke Freeman from GWWC and Aaron Gertler from CEA have both responded to the thread to try to correct his view. But it still shows how low-resolution and negative people's perceptions of EA can be.

Interesting point about the idea that EA lets people off the moral hook easily: "I'm rich, so I just donate, and I've done my moral duty and get to virtue signal."

It's interesting how that applies to people who are wealthy, work a conventional job, and donate 10% to charity, but doesn't seem like a valid criticism of those who donate far more, like 50%+. That normally seems to be met with the response "wow, that's impressive self-sacrifice!" The same goes for those who drastically shift their careers.

There's a lot to unpack in that tweet. I think something is going on like:

  • fighting about who is really the most virtuous
  • being upset people aren't more focused on the things you think are important
  • being upset that people claim status by doing things you can't or won't do
  • being jealous that people are doing good by doing things you aren't/can't/won't do
  • virtue signaling
  • righteous indignation
  • spillover of culture war stuff going on in SF

None of it looks like a real criticism of EA, but rather of lots of other things EA just happens to be adjacent to.

That doesn't mean it doesn't have to be addressed or isn't an issue, but I think it's also worth keeping these kinds of criticisms in context.

james, 3y:
It might be that SF has more people who are kinda into EA, such that they donate 10% to GiveWell, diluting out the people who represent more extreme self-sacrifice.