All of ParthThaya's Comments + Replies

I'm the author of a (reasonably highly upvoted) post that called out some problems I see with all of EA's different cause areas being under the single umbrella of effective altruism. I'm guessing this is one of the schism posts being referred to here, so I'd be interested in reading more fleshed-out rebuttals.

The comments section contained some good discussion with a variety of perspectives - some supporting my arguments, some opposing, some mixed - so it seems to have struck a chord with some at least. I do plan to continue making my case for why I ... (read more)

5
Gavin
2y
Good post! I doubt I have anything original to say.

- There is already cause-specific non-EA outreach. (Not least a little thing called Lesswrong!) It's great, and there should be more.
- Xrisk work is at least half altruistic for a lot of people, at least on the conscious level.
- We have managed the high-pay tension alright so far (not without cost).
- I don't see an issue with some EA work happening sans the EA name; there are plenty of high-impact roles where it'd be unwise to broadcast any such social movement allegiance.
- The name is indeed not ideal, but I've never seen a less bad one and the switching costs seem way higher than the mild arrogance and very mild philosophical misconnotations of the current one.
- Overall I see schism as solving (at really high expected cost) some social problems we can solve with talking and trade.

I agree it'd be good to do rigorous analyses/estimates of the costs versus benefits to global poverty and animal welfare causes of being under the same movement as longtermism. If anyone wants to do this, I'd be happy to help brainstorm ideas on how it can be done.

I responded to the point about longtermism benefiting from its association with effective giving in another comment.

I don't believe the EA -> existential risk pipeline is the best pipeline for bringing in people to work on existential risks. I actually think it's a very suboptimal one, and that absent how EA history played out, no one would ever have answered the question "What's the best way to get people to work on existential risks?" with anything resembling "Let's start them with the ideas of Peter Singer and then convince them that they should include future people in their circle of concern and do the math." Obviously this argument has worked well for longter... (read more)

9
Chris Leong
2y
I don't know if it's the best pipeline, but a lot of people who were initially skeptical of existential risks have come through it. So empirically, it seems to be a more effective pipeline than people might think. I guess one of the advantages is that people only need to resonate with one of the main cause areas to get involved initially, and they can shift cause areas over time. I think it's really important to have a pipeline like this.

Thanks for the kind words! Your observation that "people who are emphatically in one camp but not the other are very different people" matches my beliefs here as well. It seems intuitively evident to me that most people who want to help the less fortunate aren't going to be attracted to, and often will be repelled by, a movement that focuses heavily on longtermism. And that most people who want to solve big existential problems aren't going to be interested in EA ideas or concepts (I'll use Elon Musk and Dominic Cummings as my examples here again).

There's a sampling bias problem here. The EAs who are in the movement, and the people EAs are likely to encounter, are the people who weren't filtered out of the movement. One could sample EAs, find a whole bunch of people who aren't into longtermism but weren't filtered out, and declare that the filter effect isn't a problem. But that wouldn't take into account all the people who were filtered out, because counting them is much harder. 

Since we can't count them, this is how I explained my reasoning about this:

I have met various ef

... (read more)
1
Jeremy
2y
Sorry for the delay. Yes, this seems like the crux. As you pointed out, there's not much evidence either way. Your intuitions tell you that there must be a lot of these people, but mine say the opposite. If someone likes the GiveWell recommendations, for example, but is averse to longtermism and less appreciative of the other aspects of EA, I don't see why they wouldn't just use GiveWell for their charity recommendations and ignore the rest, rather than avoiding the movement altogether. And if these people are indeed "less appreciative of the rest of EA", they don't seem likely to contribute much to a hypothetical EA sans longtermism either.

Further, renaming or dividing up the community is a huge endeavor, with lots of costs. It's not the kind of thing one should undertake without pretty good evidence that it's going to be worth it.

One last point: for those of us who have bought into the longtermist/x-risk stuff, there is the added benefit that many people who come to EA for effective giving, etc. (including many of the movement's founders) eventually do come around on those ideas. If you aren't convinced, you probably see that as somewhere on the scale of negative to neutral.

All that said, I don't see why your chapter at Microsoft has to have Effective Altruism in the name. It could just as easily be called Effective Giving, if that's what you'd like it to focus on. It could emphasize that many of the arguments/evidence for it come from EA, but that EA is something broader.

You make a strong case that trying to convince people to work on existential risks just for their own sakes doesn't make much sense. But promoting a cause area isn't just about getting people to work on it; it's also about getting the public, governments, and institutions to take it seriously.

For instance, Will MacAskill talks about ideas like scanning the wastewater for new pathogens and using UVC to sterilize airborne pathogens. But he does this only after trying to sell the reader/listener on caring about the potential trillions of future people. I... (read more)

Do you have any evidence that this is happening?

Anecdotally, yes. My partner, who proofread my piece, left this comment next to what I wrote here: "Hit the nail on the head. This is literally how I experienced coming in via effective giving/global poverty calls to action. It wasn't long till I got bait-and-switched and told that this improvement I just made is actually pointless in the grand scheme of things. You might not get me on board with extinction prevention initiatives, but I'm happy about my charity contributions."

The comment I linked to explains wel... (read more)

1
Jeremy
2y
I guess we can swap anecdotes. I came to EA for the GiveWell top charities, a bit after that Vox article was written. It took me several years to come around on the longtermism/x-risk stuff, but I never felt duped or bait-and-switched. Cause neutrality is a super important part of EA to me, and I think that naturally leads to exploring the weirder/more unconventional ideas.

Using terms like "dupe" and "bait and switch" also implies that something has been taken away, which is clearly not the case. There is a lot of longtermist/x-risk content these days, but there is still plenty going on with donations and global poverty. More money than ever is being moved to GiveWell top charities (I don't have the time to look it up, but I would be surprised if the same weren't also true of EA animal welfare), and (from memory) the last EA survey showed a majority of EAs consider global health and wellbeing their top cause area.

I hadn't heard the "rounding error" comment before (and don't agree with it), but before I read the article, I was expecting that the author would have made that claim, and was a bit surprised he was just reporting having heard it from "multiple attendees" at EAG, with no more context than that. The article gets more mileage out of that anonymous quote than really seems warranted; the whole thing left me with a bit of a clickbait-y/icky feeling. FWIW, the author also now says about it, "I was wrong, and I was wrong for a silly reason..."

In any case, I am glad your partner is happy with their charity contributions. If that's what they get out of EA, I wouldn't at all consider that being filtered out. Their donations are doing a lot of good! I think many come to EA and stop with that, and that's fine. Some, like me, may eventually come around on ideas they didn't initially find convincing. To me that seems like exactly how it should work.
5
david_reinstein
2y
My impression is that about half of EA is people complaining that 'EA is just longtermism these days' :)

The claim isn't that the current framing of all these cause areas as effective altruism doesn't make any sense, but that it's confusing and suboptimal. According to Matt Yglesias, there are already "relevant people" who agree with this strongly enough that they're trying to drop the full name in favor of just the acronym EA - but I think that's a poor solution, and I hadn't seen those concerns explained in full anywhere.

As multiple recent posts have said, EAs today try to sell the obvious and important idea of preventing existential risk using counterintuit... (read more)

Hi Sindy, thanks for the kind words! Really cool to hear you’ve been looking into doing that, and I’d be interested in hearing more. And of course you’re more than welcome to reach out if you have any questions.

I can’t speak for everyone involved, but off the top of my head, my rough strategy is something like:

  1. Get more people to hear about EA. Last year, we only managed to get invites out to ~10% of the company, so there’s lots more to do here;
  2. As there is more interest and awareness among employees, work with the company to incorporate EA principles/cha
... (read more)