Off-and-on projects in epistemic public goods, AI alignment (mostly interested in multipolar scenarios, cooperative AI, ARCHES, etc.), and community building. I've probably done my best work as a research engineer on a cryptography team; I'm pretty bad at product engineering.
meta note: this seems like a good aigov distillation that would help people all over the field, and it shouldn't be given the community tag just because it takes the form of EAG anecdotes. It's a (very valuable, IMO) object-level post!
epistemic status: started reading the sequences while delivering food to people on a bike.
I never really felt excluded by problem profile pages. I felt like I could access the tone they take of "here's some broken stuff, why don't you go fix it"; I just had to discount or skip any sections that talked about status, top universities, etc., and kind of assumed I'd have to write my own theory of change and have a thick skin about not always being taken seriously. (And then what happened was that the actually-existing EA movement contained tons of more senior people doing actual work saying "yeah, here, take a crack at it," encouraging me to accept mentorship or jobs from them. If anyone's ever assumed I'm low potential, it's been the more community-builder types lol)
Age is a more likely vector of neglectedness for 80k than class. I'd be more excited about 80k targeting the frustration that people older than 25 or 35 have with 80k; that seems more useful than thinking deeply about class. Related to both of these, we also have to talk about the risk-intolerant: are people with obligations and responsibilities (family members with expensive diseases, childrearing projects) low EV? This again seems like a more potent surface for 80k criticism and improvement than class stuff. (But of course, both age and obligation load correlate with class to some degree!)
short answer: yes I do worry.
longer answer: Baum 2020 is my favorite reading recommendation for kicking off a framing of the problem, though it doesn't address totalitarianism in particular, nor threats from specific players on the game board.
Related topic: see Bostrom's vulnerable world hypothesis (VWH), which frames a debate about xrisk as a possible moral case for restricting freedoms. A criticism that floats around is that VWH was irresponsible to publish, because a nasty government could point to it as a rationalization for actions it wanted to take anyway.
I propose "positive and negative longtermism": anything to do with reaching our full potential would be positive longtermism, while mere extinction prevention would be negative longtermism.
I think lesswrong and EA are gluttonous and appropriative and good at looting the useful stuff from a breadth of academic fields, but excluding continental philosophy is a deeply correct move that we have made and will continue to make.
I think the point is that our subculture's "rationalism" and a historian of philosophy's "rationalism" are homonyms.
Also I think "reason as the chief source of knowledge" is not quite it, right? I think "observation is the chief source of knowledge" would pass an ideological turing test a bit better.
Deciding on the Future on Our Own
Off the top of my head, not trying very hard, here are some in-movement confrontations of that problem:
It's not her fault that she has finite time and wanted to keep the running length of the video down, maybe this stuff is more in the weeds and not accessible to a surface glance. Maybe it is Ord's and MacAskill's fault for not emphasizing that we've been struggling with this debate a lot (I accuse Ord of being too bullish on positive longtermism and not bullish enough on negative longtermism in my above-linked shortform).
Prioritarianism is a flavor of utilitarianism that gives extra moral weight to benefits accruing to the worse off, the oppressed, or the unprivileged.
Standpoint theory, or standpoint epistemology, holds that one's social position and demographic membership confer advantages and disadvantages in gaining knowledge.
Leftist culture is deeply exposed to both of these views, occasionally to the point of them being invisible/commonsensical assumptions.
My internal GPT-completion-style simulation of someone like Thorn assumed that her rhetorical question was gesturing toward "these EA folks seem to be underrating at least one of prioritarianism or standpoint epistemology."