The recent pivot by 80,000 Hours to focus on AI seems (potentially) justified, but the lack of transparency and input makes me wary.
PS: I am young and new, please be kind and constructive with your feedback.
TL;DR:
80,000 Hours, once a cause-agnostic, broad-scope introductory resource (with career guides, career coaching, blogs, and podcasts), has decided to focus on upskilling and producing content centred on AGI risk, AI alignment, and an AI-transformed world.
According to their post, they will still host the backlog of content on non-AGI causes, but may not promote or feature it. They also say roughly 80% of new podcasts and content will be AGI-focused, and that other cause areas, such as nuclear risk and biosecurity, may have to be covered by other organisations.
Whilst I cannot claim in-depth knowledge of the norms around such shifts, or of AI specifically, I want to set aside the actual arguments for the shift and instead focus on the potential friction in how the change was communicated.
To my knowledge (please correct me), there was no public information or consultation beforehand, and I had no forewarning of this change. Organisations such as 80,000 Hours may not owe us this amount of openness, but since openness is a value heavily emphasised in EA, the lack of it feels slightly alienating.
Furthermore, the actual change may not be so dramatic, but it has left me grappling with the thought that other large organisations could pivot just as quickly. This isn't necessarily bad in itself, and it has the advantage of signalling that they are 'with the times' and 'putting our money where our mouth is' on cause-area risks. However, within an evidence-based framework, surely at least some heads-up would go a long way in reducing short-term confusion or gaps.
Many introductory programs and fellowships use 80k resources, sometimes as embeds rather than as standalone copies. Although 80k says the backlog will remain, I see a potential disincentive for them to take on the resource-intensive work of fighting linkrot and maintaining those resources, which could lead to loss of information and patchy dissemination.
I also use 80k as a cause-agnostic introduction for many new EAs, especially at my university group. Pointing impressionable young people towards career advising and resources focused solely on AI can lead to a perceived lack of choice and autonomy, feelings of pressure, alienation, and more. I can also see the very claim 'work on AI because it's a big risk!' being widely contested in and of itself, let alone the idea that a key stakeholder in the EA community should endorse a lone cause so strongly.
The change may be positive, and it has been met with some support, but the infrastructure in place to ensure the community is involved seems lacking. We may not have a duty to involve everyone, nor should we aim for total unanimity, but surely the values of EA would support at least some more shared decision-making?
Some concrete questions that I would appreciate discussion on:
- Is a short (mainly unseen) post enough to justify/communicate the change?
- Do they even owe justification?
- Should this have been discussed before the change?
- How can content on non-AGI causes be safely preserved for resources that link to it?
- What resources (e.g. Probably Good) can take its place?
- Do those alternative resources have enough runway to patch the gap?
- With sudden shifts at BlueDot (less biosecurity, more AI focus), Rethink Priorities, Atlas and other high-school outreach, CEA and OP ceasing U18 and uni organiser funding, EA Funds and EV changing their main scopes, etc., what safeguards are in place in the community to prevent sudden upheaval (and should there even be safeguards)?
- Does this signal a further divide between longtermist/x-risk rationalists and the more concrete-level health/biosecurity/nuclear/welfare EA community?
Finally, the aspect I'm least knowledgeable about, and would most appreciate clarification on, is:
Should we actually all shift to considering direct AI alignment work, rather than just reassessing how risks change in an AGI-impacted future?
I know 80k aren't claiming that direct AI work is the 'correct' choice for everyone, but it surely incentivises a norm of working on AI cause areas.
I see some issues here, based on admittedly anecdotal reasoning. I appreciate that AI may be a higher x-risk and may be increasing in scale and impact, but with s-risks from pandemics or health/biology impacts, I see more tangible routes for doing good (maybe because I'm more risk-averse), plus a higher chance of suffering that, even if smaller in scale, leads to nearly as great an overall expected risk.
E.g. multiple deaths from preventable diseases compounding to be nearly as bad as society-wide value lock-in by misaligned AI... (in my naive view).
Plus, an AI focus doesn't account for interest or personal experience; there are diminishing returns to mass career shifts, dilution of roles, and zero-sum grant funding going to less well-suited candidates who feel forced into it... It's an extreme scenario, but 80k holds a big generalist pull and a lot of norm-setting power, so it's worth bearing in mind how this may affect smaller entities and orgs.
It's not as much a pivot as a codification of what has been long true.
"EA is (experientially) about AI" has been sorta true for a long time. Money and resources do go to other causes. But the most influential and engaged people have always been focused on AI. EA institutions have long systematically emphasized AI. For example many editions of the EA handbook spend a huge fraction of their introductions to other cause areas effectively arguing why you should work on AI instead. CEA staffers very heavily favor AI. This all pushes things very hard in one direction.
I strongly prefer the blatant honesty of the 80k announcement. Much easier to think about, and much easier for young people to form informed opinions.
I think you actually shifted me slightly towards the 'announcement was handled well' side (even if not fully), with the point about blatant honesty (since their work was mainly AI anyway for the last year or so), plus the very clear descriptions of the change.
I am a bit wary of such a prominent resource as 80k endorsing a sudden cause shift without first addressing the gap it leaves. I know they don't owe that to anyone, especially during such a tumultuous time for AI risk, and there are other orgs (Probably Good, etc.), but to me 80k seemed like a very good intro to 'EA cause areas' for which I can't think of a current substitute. The problem profiles, for example, not being featured/promoted is fine for individuals already aware of their existence, but when I first navigated to 80k, I saw the big list of problem profiles, and that's how I actually started getting into them, and what led to my shift from clinical medicine to a career in biosec/pandemics.
Hi BiologyTranslated, thanks for sharing your thoughts, and welcome to the forum! A minor format note: your TL;DR could perhaps be more focused on the contents of your post, and less on the 80k post.
It sounds like you disagree with their change. I do as well, so I won't focus on that.
It sounds like you have other concerns or critiques too, and I'm not sure I share those, so in the interest of hashing out my thoughts, I'll respond to some of what you've written.
I understand it came as a shock, but I'm not sure what 80k giving advance notice of the change would have actually accomplished here. Hypothetically, suppose they had said in December that they would pivot in March. How would that benefit others? I think the biggest impact would be less "shock", but I'm not sure, and we would still need to grapple with the new reality. Perhaps some kind of extra consultation with the community would be useful, but that does seem quite resource-intensive and haphazard, especially if they think this is an especially critical time. I presume they had many discussions with experts and other community members before making such a change anyway. This is a guess and I may be wrong; others in the comments seem to feel it was a rushed decision.
You've written about two gaps/confusions I can identify:
1) Introductory programs may decline in quality, due to things like out-of-date info/linkrot
2) Because people don't feel like they have autonomy in career choice/cause area, new people may bounce off the movement.
On the first, I don't know much about linkrot, but I expect it won't be a major issue for a few years at least, though it depends on the cause area. My model is that most things don't move that quickly, and things like "number of animals eaten per year", "best guess of the risk of imminent nuclear war", and "novel biohazards" are probably roughly static, or at least static enough that the intro material found in those sections is fine. 80k's existing introductory resources will probably be fine here for a while. If there are serious updates to things like "number of countries with nuclear weapons", I do hope that they reconsider and update things there.
On the second, they have said that they are OK with shrinking the funnel of new people coming into 80k/the movement more generally to some degree, and that it is still their best bet. I agree it's disappointing though.
I don't think there are any such safeguards, and I think this is largely a strength of the movement, so I don't think it should change. They're an independent entity, and I think they should do what they think is best. It's not a democratic movement, and while a more cause-neutral org will be missed in future years, I do hope Probably Good or another competitor fills that gap. My guess is that 80k expected people focused on other cause areas to disagree with this change anyway.
I guess I'm not very interested in what it signals, but rather in what it actually does. I don't think it divides people further. People in EA already disagree on various matters of ethics/fact, and I don't think an org saying "we've considered the arguments and believe one side is correct" is a significant issue. On an interpersonal level, I'm friends with people working in different cause areas despite my disagreements, and on an organisational level I think it's good that we try to decouple impact from other things where possible.
I might be off here, but I think an unwritten concern of yours is that there was a tonal issue with the communication. I didn't have an issue with how it was communicated, especially considering the org members chatting in the comments, but others did seem to feel put off by it and considered it almost callous. I can understand where they are coming from.
This is something that we should all think about, but I don't think so. I would be curious to hear 80k talk more about it though.
That's a long response, but you wrote about some interesting ideas and I liked thinking about them too. If you have time/interest, I'd be particularly interested in hearing what you think they could have done differently on a more specific level (presuming they were going to make the change), and what counterfactual impact you think it would have had.
Firstly, thanks for the heads-up about the TL;DR! I suppose mine should have been:
I'm unsure how I feel about the recent 80k pivot, but I think there are potential negatives from the way the change was communicated that may cause wariness or alienation.
I'm actually unsure whether I disagree with their change, honestly. I personally consider AI to be a huge potential x-risk; even non-AGI systems could potentially cause mass media distrust, surveillance states, value lock-in, disinformation factories, etc.
I also really appreciate the long response!
On the first point, about the value of prior warning:
The gaps are less about 'resources going out of date', as I think the metrics work well symbolically rather than numerically (e.g. 'look at this intervention! Orders of magnitude more!'). I'm more worried that if the pages are moved, then instead of telling a newish EA to 'go to 80k and click on the problem profiles', I'd have to say 'go to this link and scroll down to find the old problem profiles', which is harder.
Plus, the risk of linkrot is high if intro fellowships all use embeds rather than downloading each resource. Changes to URLs when a website's formatting is reworked can break embeds, and many intro fellowships would then have to scramble to check or replace their links.
I know this is unlikely to be a problem for a while, but without details, e.g. 'we will host all of these pages at the exact same address', there is no way to know.
On your second point: I never knew that! I'd love to see any mention of this.
I disagree a bit about the narrowing of EA newcomers, because I think 80k was never seen as a 'specialist' funnel, but rather as the very first 'are you at all into these ideas or not?' check. 80k was more of a binary screen to see if someone held an interest, and then other orgs and resources were useful for funnelling people into causes and picking out 'highly engaged' EAs. I hope their handbook remains unchanged at least, as that was the most useful resource across all of my events and discussion groups to just hand to potential new EAs and see what they thought.
All of this is a generalisation, and I'm using 'specialist' and 'HEA' as more symbolic examples, but honestly 80k didn't act as a 'funnel' so much as an intro, imo.
However, now I'm unsure what the new 'intro' is. There are many disparate resources that are great, but I loved sending each new person who attended one or two EA events at the uni group, or who asked me 'what is EA?', to one site which I knew (despite being AI-heavy) was still a good intro to EA as a whole.
On the safeguards and independence: I completely agree that EA orgs aren't a democracy and owe people nothing, and that 80k's actual change is controversial anyway, but I meant it more in the sense of: should we have an implicit norm of communicating a change in advance?
I know consulting with the community may be resource-intensive and not useful, but I personally feel less comfortable with orgs that suddenly shift with no warning.
I have no inherent objection to a 'sudden shift', or even a shift to AI; it's more that I don't like the gut feeling I get when a shift occurs without forewarning, even if all that's given is a unilateral 'heads-up'.
I agree on signalling vs. what it actually does. Now, this next part is not about the shift as a whole, but anecdotal:
Finally, I reiterate that I am young, new, naive, and haven't even started uni yet, but here goes with some suggestions and counterfactuals: