I'm a community builder and I've worked at/with CEA, the SERI ML Alignment Theory Scholars program, and EA Cambridge. My degree is in Computer Science.
Currently, I think a lot about the epistemics of EA: Is EA just like any other ideology (epistemologically)? Does moral uncertainty pose a problem for buying into "EA moral philosophy"? And I'm trying to find people who ask these questions in a similar way to me!
Help me find a role in EA operations/communications! The main constraints are that it has to be part-time and based in Oxford, London, the Bay Area, or remote.
Thanks Max!
It sounds like a plausible theory that you lost motivation because you pushed yourself too hard. I'd also pay attention to "dumber" reasons, like having had more motivation in the past from supervisors, your social environment, or more achievable goals.
Similar to my suggestion to take a vacation, it might be worth doing only motivating work (like a side project) for 1.5 weeks and seeing whether the tiredness disappears.
All of this with the caveat that you understand your situation a lot better than I do ofc!
Optimistic note with low confidence:
My impression is that SBF thought he was doing an 'unpalatable' but right thing given his calculations (and his epistemic immodesty). Promoting a central meme in EA like "naïve calculations like this are too dangerous and too fallible" might solve a lot of the issue. I think dangerously-optimize-y people in EA are already updating in this direction as a result of FTX. Before FTX, being "hardcore" and acting on naïve calculations was sometimes seen as cool. If we correct hard for this right now, it may be less of an issue in the future.
Two main caveats:
Ah, the point about fragile cooperative equilibria makes sense to me.
I'm not as sure as you that this shift would happen to core EA, though. I could also imagine current EAs having a very allergic reaction to new, unaligned people coming in and trying to take advantage of EA resources. I imagine something like a counterculture forming, where aligned EAs start purposefully setting themselves apart from people who are only in it for a piece of the pie by putting even more emphasis on high EA alignment. I believe I've already seen small versions of this happening in response to non-altruistic incentives appearing in EA.
The faster the flood of new people and the change in incentives happen, the more confident I am in this view. Overall, though, I'm not very confident either way.
On your last point: if I understand you right, this isn't the thing you're most worried about, though? Like, in your view, these people hijacking EA are not the mechanism by which EA may collapse?
It's unclear to me whether you are saying that the potentially huge number of new people in EA will try to take advantage of EA resources for personal gain, or that WE, who are currently in EA for altruistic reasons, will do so. The former sounds likely to me; the latter doesn't.
I might be missing crucial context here since I'm not familiar with the Thielosphere and all that, but overall I also don't think a huge number of new, unaligned people will be the downfall of EA. As long as leadership, thought leaders, and grantmakers in EA stay aligned, it may become harder for them to determine whom to give that grant (or that stamp of approval), but wouldn't that simply lead to fewer grants? Which seems bad, but not like the end?
Or are you imagining highly intelligent people with impressive resumes who strategically work their way into important positions in EA to hijack its resources for their own aims?
Thanks a lot! I think it's really valuable to have your experience written up.