Quadratic Reciprocity

369 karma · Joined Jul 2022 · 50 comments

I like EA ideas; I think sanely trying to solve the biggest problems is a good thing. I am less sure about the current EA movement, partly because of its track record so far and partly because of an intuition that movements this focused on gaining influence and recruiting more people tend to go off track. It doesn't look to me like enough is being done to preserve people's sanity and help them think clearly in the face of the mind-warping effects of the movement. 

I think it could both be true that we need a healthy EA (or longtermist) movement to make it through this century and that the current EA movement ends up causing more harm than good. To be clear, I currently think that on its current trajectory the EA movement will end up being net good, but I am not super confident in this. 

Also, sorry, my answer mostly comes from thinking about AI x-risk rather than EA as a whole. 

Fair. I think in FTX worlds it should in fact be harder to get people who strongly dislike fraud on board with EA, and in Bostrom-email worlds it should in fact be harder to get people who strongly dislike the apology on board with EA. And to the extent we care about people turned off by either event having a favourable opinion of EA, this difficulty is right and just. 

I guess I make comments like the one above because I think few people doing EA community building are seriously considering that the actual impact (and expected impact) of the EA movement could be net negative. It might not be, and I lean towards it being positive, but I think it is a serious possibility that the EA movement causes more harm than good overall, for example by having sped up AI timelines via DeepMind/OpenAI/Anthropic, or via a few community members committing one of the biggest frauds ever. Or vaguer failures: EAs fuck up cause prioritisation, maximise really hard, and can't course-correct later. 

The way the EA movement ends up not being net harmful is if we are ambitious while prioritising being correct and having good epistemics really hard. This is not the vibe I get when I talk to many community builders. A lot of them seem happy with "making more EAs is good" and forget that the mechanism by which EA is positively impactful relies pretty heavily on our ability to steer correctly. I think they've decided too quickly that "the EA movement is good, therefore I must protect and grow it". I think EA ideas are really good; I am less sure about the movement. 
 

In the past, comments like Tegan's post have been useful for getting me to apply to things. 

Another thing I liked was when a job post had a note at the bottom saying that women and minorities are less likely to feel qualified to apply, citing something along the lines of https://hbr.org/2014/08/why-women-dont-apply-for-jobs-unless-theyre-100-qualified and encouraging them to apply anyway. 

Just conjecturing here, but I think one reason the ratio is worse than it has to be is that EA jobs are usually somewhat atypical, so it is difficult to figure out whether you're actually qualified (compared to more "normal" jobs), which makes people who already tend to feel underqualified even less likely to apply. Also, because the community is small, a lot of people hear about opportunities and get encouraged to apply via their social networks, and people from currently underrepresented groups are less likely to have those connections. 

I also think mentorship programs are helpful. A one-off call or a series of calls with a more experienced person helps promising people (regardless of gender or other demographics) who don't already have friends or social connections in a field figure out how to enter it. 

I had a negative reaction to the post but felt hesitant to reply because of the emotional content. What the OP is experiencing does suck. I think they (and others) would benefit from making less of their identity about the EA movement. I don't like that 'small-scale EA community builders' are having to apologise for things others in EA have done, or having to spend time figuring out how to react to EA drama. That seems like a waste of time and emotional energy, and also unnecessary. 

I would appreciate something like a (pinned?) megathread for the topic, with discussion of the drama restricted to that post. 

Edit: I think the current approach of basically doing that and downgrading everything else on the topic to personal blog makes sense. 

I hope more people, especially EA community builders, take some time to reevaluate the value of growing the EA movement and of EA community building. A lot of community builders seem to be acting as if "making more EAs" is good for its own sake. I'm much less sure about the value of growing the EA community, and more uncertain about whether it is positive at all. A lot of people are having to spend energy doing PR, making EA look good, and fighting fires in the community when their time could be better spent directly working on how to solve the big problems.

But I also think directly focusing on how to solve the big problems is difficult and "get more people into EA and maybe some of them will know how to make progress" feels like an easy way out. 

I haven't thought about this much. I am just reporting that some people I briefly talked to thought EA was mainly that and had a negative opinion of it. 

I'll list some criticisms of EA that I heard, prior to FTX, from friends and acquaintances I respect (which doesn't mean I think all of these critiques are good). I am paraphrasing a lot, so I might be misrepresenting some of them.

Some folks in EA are a bit too pushy about getting new people to engage more. This was from a person who thought of doing good primarily in terms of their contracts with other people, supporting people in their local community, and increasing cooperation and coordination in their social groups. They also cared about helping people globally (they donated some of their income to global health charities and were vegetarian) but felt it wasn't the only thing they cared about. They felt that in their interactions with EAs, the other person would often bring up the same thought experiments they had already heard in order to get rid of their "bias towards helping people close to them in space-time". This was annoying for them. They also came from a background in law and found the emphasis on AI safety off-putting: they didn't have the technical knowledge to form an opinion on it, and the arguments were often presented by EA students who failed to convince them, and who they thought didn't have good reason to believe the arguments themselves. 

Another person mentioned that it looked weird to them that EA spent a lot of resources on helping itself. Without looking too closely, the ratio of resources spent on meta EA stuff to directly impactful stuff seemed suspiciously high to them. Their general priors about communities with access to billionaire money, influence, and young people looking for a purpose led them to assume negative things about the EA community as well, which made it harder for them to take some EA ideas seriously. I feel sympathetic to this: if I weren't already part of the effective altruism community and didn't understand the value of a lot of the EA meta stuff, I would perhaps feel similarly suspicious.

Someone else mentioned that many EA people they met came across as young, not very wise, and quite arrogant for their level of experience and knowledge. This put them off. As one example, they had negative experiences with EAs who had no ML experience trying to persuade others that AI x-risk was the biggest problem.  

Then there was the suspicion that EAs, because of their emphasis on utilitarianism, might be willing to lie, break rules, push the big guy in front of the trolley, etc. if it were for the "greater good". This made EAs hard to trust. 

Some people I have briefly talked to mainly thought EA was about earning to give by working for Wall Street, and they thought it was harmful because of that.

I didn't hear the "EA is too elitist" or "EA isn't diverse enough" criticisms much (I can't think of a specific time someone brought either up as a reason they chose not to engage more with EA). 

I have talked to some non-EA friends about EA stuff since the FTX crisis (including one who himself lost a lot of money that was on the platform), mostly because they sent me memes about SBF's effective altruism. My impression is that their opinion of EA (generally mildly positive, though not personally enthusiastic) did not change much as a result of FTX. This is unfortunately probably not the case for people who heard about EA for the first time because of FTX: they are more likely to assume bad things about EAs if they don't know any in real life (and I think this is, to some extent, a justified response).

My quick alternative hypotheses: they could also be using the disagree vote to mean "I don't work this many hours / this isn't normal for me" or "I don't believe you get that many hours of actual work done". 

Besides that, I also think there's a tendency for people to feel more comfortable reading answers to this question that are on the lower side.   
