That said, I basically agree we could make these views more obvious! E.g. we don't say much about them on the front page of the site, in our 'start here' essay, or at the beginning of the career guide. I'm open to thinking we should.
Update: we added some copy on this to our 'about us' page, the front page (where we talk about our 'list of the world's most pressing problems'), our 'start here' page, and the introduction to our career guide.
Love this, thanks Catherine! Great way of structuring a career story so it's useful to the audience btw, might copy it at some point.
Arden here - I lead on the 80k website and am not on the one-on-one team, but thought I could field this one. This is a big question!
We have several different programmes, which face different bottlenecks. I'll just list a few here, but it might be helpful to check out our most recent two-year review for more thoughts – especially the "current challenges" sections for each programme (though that's from some months ago).
Some current bottlenecks:
I think there are themes around time/capacity, feedback loops, and empirical uncertainties, some of which are a matter of spending more research time, and some of which are harder to make progress on.
Thanks : ) we might workshop a few ways of getting something about this earlier in the user experience.
Hey, I wasn’t a part of these discussions, but from my perspective (web director at 80k), I think we are transparent about the fact that our work comes from a longtermist perspective that suggests that existential risks are the most pressing issues. The reason we give, which is also the true reason, is that we think these are the areas where many of our readers, and therefore we, can make the biggest positive impact.
Here are some of the places we talk about this:
1. Our problem profiles page (one of our most popular pages) explicitly says we rank existential risks as most pressing (with AI first) and explains why, both at the very top of the page ("We aim to list issues where each additional person can have the most positive impact. So we focus on problems that others neglect, which are solvable, and which are unusually big in scale, often because they could affect many future generations — such as existential risks. This makes our list different from those you might find elsewhere.") and in more depth in the FAQ, as well as in the problem profiles themselves.
2. We say at the top of our "priority paths" list that these are aimed at people who "want to help tackle the global problems we think are most pressing", linking back to the problems ranking.
3. We also have in-depth discussions of our views on longtermism and the importance of existential risk in our advanced series.
So we are aiming to be honest about our motivations and problem prioritization, and I think we succeed. For what it's worth I don't often come across cases of people who have misconceptions about what issues we think are most pressing (though if you know of any such people please let me know!).
That said, I basically agree we could make these views more obvious! E.g. we don't say much about them on the front page of the site, in our 'start here' essay, or at the beginning of the career guide. I'm open to thinking we should.
One way of interpreting the call to make our longtermist perspective more "explicit": I think some people think we should pitch our career advice exclusively at longtermists, or people who already want to work on x-risk. We could definitely move further in this direction, but I think we have some good reasons not to, including:
[1] Contrast with being unopinionated about causes. Cause neutrality in this usage means being open to prioritising whatever causes you think will allow you to help others the most, which is something you might have an opinion on.
Love this post -- thanks Rocky! I feel like 5-7 are especially well explained // I haven't seen them explained that way before.
However, many elements of the philosophical case for longtermism are independent of contingent facts about what is going to happen with AI in the coming decades.
Agree, though there are arguments from one to the other! In particular:
[1] basically because you can't have reasons to do things that are impossible.
[1] since "existential risk" on the toby ord definition by definition is anything that reduces humanity's potential (&therefore affects the ltf in expectation) I think it'd be confusing to use that term in this context so I'm going to talk about extinction even though people think there are non-extinction existential catastrophe scenarios from AI as well.
Thanks for this post! One thought on what you wrote here:
"My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have."
I feel unsure about this. Or like, I think it's true we have those downsides, but we also probably get upsides from being in the middle here, so I'm unsure we're in the worst of both worlds rather than e.g. the best (or probably just somewhere in the middle of both worlds).
E.g. we have the upsides of fairly tightly knit information/feedback/etc. networks between people/entities, but also the upsides of there being no red tape on people starting new projects, and the dynamism that creates.
Or, as another example, entities can compete for hires, which incentivises excellence and encourages people to do roles where they have the best fit, but they can also freely help one another become more excellent by e.g. sharing research and practices (as if they are part of one thing).
Maybe it just feels like we're in the worst of both worlds because we focus on the negatives.
I don't know the answer to this, because I've only been working at 80k since 2019 - but my impression is this isn't radically different from what might have been written in those years.
Copying from my comment above:
Update: we've now added some copy on this to our 'about us' page, the front page (where we talk about our 'list of the world's most pressing problems'), our 'start here' page, and the introduction to our career guide.