Ardenlk

2207 karma · Joined Aug 2017

Comments (131)

Copying from my comment above:

Update: we've now added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.

That said, I basically agree we could make these views more obvious! E.g. we don't talk about them much on the front page of the site, in our 'start here' essay, or at the beginning of the career guide. I'm open to thinking we should.

Update: we added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.

Love this, thanks Catherine! Great way of structuring a career story so it's useful to the audience, btw; I might copy it at some point.

Arden here - I lead on the 80k website and am not on the one-on-one team, but thought I could field this one. This is a big question!

We have several different programmes, which face different bottlenecks. I'll just list a few here, but it might be helpful to check out our most recent two-year review for more thoughts – especially the "current challenges" sections for each programme (though that's from some months ago). 

Some current bottlenecks:

  • More writing and research capacity to further improve our online career advice and keep it up to date.
  • Better web analytics – we have trouble getting good data on what different groups of users like most and what works best in marketing, so aren't able to iterate and scale as decisively as we'd like.
  • More great advisors to add to our one-on-one team, so we can do more calls – in fact, we're hiring for this right now!
  • There are uncertainties about the world that create strategic uncertainties for the organisation as a whole - e.g. what we should expect to happen with TAI and when. These affect the content of our careers advice as well as overall things like 'which audiences should the different programmes focus on?' (For example, in the AI timelines case, if we were confident in very short timelines it'd suggest focusing on older audiences, all else equal).
  • We're also a growing, mid-sized org, so we have to spend more time on processes and coordination than we used to. Though we're making good progress here (e.g. we're training up a new set of "middle managers" to scale our programmes).
  • Tracking and evaluating our impact – to know what's working well and where to invest less – is always challenging, as impacts on people's careers are hard to find out about, often take years to materialise, and are sometimes difficult to evaluate. This means our feedback loops aren't as strong as would be ideal for making plans and evolving our strategy.

I think there are themes around time/capacity, feedback loops, and empirical uncertainties, some of which are a matter of spending more research time, some of which are harder to make progress on.

Thanks : ) we might workshop a few ways of getting something about this earlier in the user experience.

Hey, I wasn’t a part of these discussions, but from my perspective (web director at 80k), I think we are transparent about the fact that our work comes from a longtermist perspective that suggests that existential risks are the most pressing issues. The reason we present, which is also the true reason, is that we think these are the areas where many of our readers, and therefore we, can make the biggest positive impact.

Here are some of the places we talk about this:

1. Our problem profiles page (one of our most popular pages) explicitly says we rank existential risks as most pressing (ranking AI first) and explains why - both at the very top of the page "We aim to list issues where each additional person can have the most positive impact. So we focus on problems that others neglect, which are solvable, and which are unusually big in scale, often because they could affect many future generations — such as existential risks. This makes our list different from those you might find elsewhere." and more in the FAQ, as well as in the problem profiles themselves.

2. We say at the top of our "priority paths" list that these are aimed at people who "want to help tackle the global problems we think are most pressing", linking back to the problems ranking.

3. We also have in-depth discussions of our views on longtermism and the importance of existential risk in our advanced series. 

So we are aiming to be honest about our motivations and problem prioritization, and I think we succeed. For what it's worth I don't often come across cases of people who have misconceptions about what issues we think are most pressing (though if you know of any such people please let me know!). 

That said, I basically agree we could make these views more obvious! E.g. we don't talk about them much on the front page of the site, in our 'start here' essay, or at the beginning of the career guide. I'm open to thinking we should.

One way of interpreting the call to make our longtermist perspective more “explicit”: I think some people think we should pitch our career advice exclusively at longtermists, or at people who already want to work on x-risk. We could definitely move further in this direction, but I think we have some good reasons not to, including:

  1. We think we offer a lot of value by introducing the ideas of longtermism and x-risk mitigation to people who aren’t familiar with these ideas already, and making the case that they are important – so narrowly targeting an audience that already shares these priorities (a very small number of people!) would mean leaving this source of impact on the table.
  2. We have a lot of materials that can be useful to people who want to do good in their careers but won't necessarily adopt a longtermist perspective. And insofar as having EA be a “big tent” is a good thing (which I tend to think it is, though I'm not that confident), I'm happy 80k introduces a lot of people who will take different perspectives to EA.
  3. We are cause neutral[1] – we prioritise x-risk reduction because we think it's most pressing, but it’s possible we could learn more that would make us change our priorities. Since we’re open to that, it seems reasonable not to fully tie our brand to longtermism or existential risk. It might even be misleading to open with x-risk, since that would fail to communicate that we prioritise it because of our views about the pressingness of existential risk reduction. And since the value proposition of our site for readers is in part to help them have more impact, I think they want to know which issues we think are most pressing.

[1] Contrast with being unopinionated about causes. Cause neutrality in this usage means being open to prioritising whichever causes you think will allow you to help others the most – which you might well have an opinion on.

Love this post -- thanks Rocky! I feel like 5-7 are especially well explained // I haven't seen them explained that way before.

Answer by Ardenlk · Aug 02, 2023

However, many elements of the philosophical case for longtermism are independent of contingent facts about what is going to happen with AI in the coming decades.

Agree, though there are arguments from one to the other! In particular:

  1. As I understand it, longtermism requires it to be tractable to, in expectation, affect the long-term future ("ltf").[1]
  2. Some people might think that the only or most tractable way of affecting the ltf is to reduce extinction[2] risk in the coming decades or century (as you might think we can have no idea about the expected effects of basically anything else on the ltf because effects other than "causes ltf to exist or not" are too complicated to predict).
  3. If extinction risk is high, especially from a single source in the near future, it's plausibly easier to reduce. (This seems questionable but far from crazy.)
  4. So thinking extinction risk is high especially from a single source in the near future might reasonably increase someone's belief in longtermism.
  5. Thinking AI risk is high in the near future is a way of thinking extinction risk is high from a ~single source in the near future.
  6. So thinking AI risk is high in the near future is a reason to believe longtermism.

[1] Basically because you can't have reasons to do things that are impossible.

[2] Since "existential risk" on the Toby Ord definition is, by definition, anything that reduces humanity's potential (and therefore affects the ltf in expectation), I think it'd be confusing to use that term in this context, so I'm going to talk about extinction, even though people think there are non-extinction existential catastrophe scenarios from AI as well.

Thanks for this post! One thought on what you wrote here:

"My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have."

I feel unsure about this. Or like, I think it's true we have those downsides, but we also probably get upsides from being in the middle here, so I'm unsure we're in the worst of both worlds rather than e.g. the best (or, probably, just in the middle of both worlds).

E.g. we have the upsides of fairly tightly knit information/feedback/etc. networks between people/entities, but also the upsides of there being no red tape on people starting new projects and the dynamism that creates.

Or as another example, entities can compete for hires, which incentivises excellence and people doing roles where they have the best fit, but they also freely help one another become more excellent by e.g. sharing research and practices (as if they are part of one thing).

Maybe it just feels like we're in the worst of both worlds because we focus on the negatives.

I don't know the answer to this, because I've only been working at 80k since 2019 – but my impression is that this isn't radically different from what might have been written in those years.
