
Arden Koehler


Comments

Anecdote: I'm one of those people -- I'd say I'd barely heard of EA / basically didn't know what it was before a friend who already knew of it suggested I come to an EA Global (I think at the time you got a free t-shirt for referring friends). We were both philosophy students & I studied ethics, so I think he thought I might be interested even though we'd never talked about EA.

Thanks as always for this valuable data! 

Since 80k is a large and growing source of people hearing about and getting involved in EA, some people reading this might be worried that 80k will stop contributing to EA's growth, given our new strategic focus on helping people work on safely navigating the transition to a world with AGI.

tl;dr I don't think it will stop, and it might continue as before, though it's possible it will be reduced somewhat.

More:

I'm not sure whether 80k's contribution to building EA, in terms of the sheer number of people getting involved, is likely to go down due to this focus, compared with what it would be if we simply continued to scale our programmes as they currently are without this change in direction.

My personal guess at this time is that it will reduce at least slightly.

Why would it? 

  • We will be more focused on helping people work on making AGI go well - that means that e.g. university groups might be hesitant to recommend us to members who aren't interested in AI safety as a cause area
  • At a prosaic level, some projects that would have been particularly useful for building EA vs. helping with AGI in a more targeted way are going to be de-prioritised - e.g. I personally dropped a project I'd begun to update our "building EA" problem profile in order to focus more on AGI-targeted things
  • Our framings will probably change. It's possible that the framings we use more going forward will emphasise EA style thinking a little less than our current ones, though this is something we're actively unsure of.
  • We might sometimes link off to the AI safety community in places where we might have linked off to EA before (though it is much less developed, so we're not sure).

However, I do expect us to continue to significantly contribute to building EA – and we might even continue to do so at a similar level vs. before. This is for a few reasons: 

  1. We still think EA values are important, so still plan to talk about them a lot. E.g. we will talk about *why* we're especially concerned about AGI using EA-style reasoning, emphasise the importance of impartiality and scope sensitivity, etc.
  2. We don't currently have any plans for reducing our links to the EA community – e.g. we don't plan to stop linking to the EA Forum, or stop using our newsletter to notify people about EAGs.
  3. We still plan to list meta-EA jobs on our job board, put advisees in touch with people from the EA community when it makes sense, and by default keep our library of content online.
  4. We're not sure whether, in terms of numbers, the changes we're making will cause our audience to grow or shrink. On the one hand, it's a more narrow focus, so will appeal less to people who aren't interested in AI. On the other, we are hoping to appeal more to AI-interested people, as well as older people, who might not have been as interested in our previous framings.

This will probably lead directly and indirectly to a big chunk of our audience continuing to get involved in EA due to engaging with us. This is valuable according to our new focus, because we think that getting involved in EA is often useful for being able to contribute positively to things going well with AGI. 

To be clear, we also think EA growing is valuable for other reasons (we still think other cause areas matter, of course!). But it's actually never been an organisational target[1] of ours to build EA (or at least it hasn't been since I joined the org 5 years ago); growing EA has always been something we cause as a side effect of helping people pursue high-impact careers (because, as above, we've long thought that getting involved in EA is one useful step for pursuing a high-impact career!).

Note on all the above: the implications of our new strategic focus for our programmes are still being worked out, so it's possible that some of this will change.

Also relevant: FAQ on the relationship between 80k & EA (from 2023 but I still agree with it)

[1] Except to the extent that helping people into careers building EA constitutes helping them pursue a high-impact career - & it is one of many ways of doing that (along with all the other careers we recommend on the site, plus others). We do also sometimes use our impact on the growth of EA as one proxy for our total impact, because the data is available, we think it's often a useful step to having an impactful career, & it's quite hard to gather data more directly on people we've helped pursue high-impact careers.

Hey Geoffrey,

Niel gave a response to a similar comment below -- I'll just add a few things from my POV:

  • I'd guess that pausing (incl. for a long time) or slowing down AGI development would be good for helping AGI go well if it could be done by everyone / enforced / etc. - so figuring out how to do that would be in scope for this narrower focus. So e.g. figuring out how an indefinite pause could work (maybe in a COVID-crisis-like world where the Overton window shifts?) seems helpful
  • I (& others at 80k) am just a lot less pessimistic about the prospects for AGI going well / not causing an existential catastrophe. So we just disagree with the premise that "there is actually NO credible path for 'helping the transition to AGI go well'". In my case maybe because I don't believe your (2) is necessary (though various other governance things probably are) & I think your (1) isn't that unlikely to happen (though very far from guaranteed!)
  • I'm at the same time more pessimistic about everyone in the world stopping development toward this hugely commercially exciting technology, so I feel like trying for that would be a bad strategy.

I don't think we have anything written/official on this particular issue (though we have covered other mental health topics here). But this is one reason why we don't think everyone should work on AI safety / trying to help things go well with AGI: even though we want to encourage more people to consider it, we don't blanket-recommend it to everyone. We wrote a little bit here about an issue that seems related - what to do if you find the case for an issue intellectually compelling but don't feel motivated by it.

Hi Romain,

Thanks for raising these points (and also for your translation!)

We are currently planning to retain our cause-neutral (& cause-opinionated), impactful careers branding, though we do want to update the site to communicate much more clearly and urgently our new focus on helping things go well with AGI, which will affect our brand.

How to navigate the kinds of tradeoffs you're pointing to is something we'll be thinking about more as we propagate this shift in focus through to our most public-facing programmes. We don't have answers just yet on what that will look like, but we do plan to take into account feedback from users on different framings to try to help things resonate as well as we can, e.g. via A/B tests and user interviews.

Thanks for the feedback here. I mostly want to echo Niel's reply, which basically says what I would have wanted to say. But I also want to add, for transparency/accountability's sake, that I reviewed this post before we published it with the aim of helping it communicate the shift well. I focused mostly on helping it communicate clearly and succinctly, which I do think is really important, but I think your feedback makes sense, and I wish I'd also done more to help it demonstrate the thought we've put into the tradeoffs involved and awareness of the costs. For what it's worth, we don't have dedicated comms staff at 80k - helping with comms is currently part of my role, which is to lead our web programme.

Adding a bit more to my other comment:

For what it’s worth, I think it makes sense to see this as something of a continuation of a previous trend – 80k has for a long time prioritised existential risks more than the EA community as a whole. This has influenced EA (in my view, in a good way), and at the same time EA as a whole has continued to support work on other issues. My best guess is that that is good (though I'm not totally sure - EA as a whole mobilising to help things go better with AI also sounds like it could be really positively impactful).

From an altruistic cause prioritization perspective, existential risk seems to require longtermism, including potentially fanatical views (see Christian Tarsney, Rethink Priorities). It seems like we should give some weight to causes that are non-fanatical.

I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn't accept fanatical views to prioritise them (though it may require caring some about potential future beings). (We have a bit on this here)

Existential risk is not most self-identified EAs' top cause, and about 30% of self-identified EAs say they would not have gotten involved if it did not focus on their top cause (EA survey). So it does seem like you miss an audience here.

I agree this means we'll miss out on an audience we could have had if we fronted content on more causes. We hope to also appeal to new audiences with this shift, such as older people who are less naturally drawn to our previous messaging, e.g. people who are more motivated by urgency. However, it seems plausible this shrinks our audience. That seems worth it, because in doing so we'll be telling people how urgent and pressing AI risks seem to us, and because it could still lead to us having more impact overall, since impact varies so much between careers, in part based on which causes people focus on.

Hi Håkon, Arden from 80k here.

Great questions.

On org structure:

One question for us is whether we want to create a separate website ("10,000 Hours?") that we cross-promote from the 80k website, or to change the 80k website a bunch to front the new AI content. That's something we're still thinking about, though I am currently weakly leaning toward the latter (more on why below). But we're not currently thinking about making an entire new organisation.

Why not?

For one thing, it'd be a lot of work and time, and we feel this shift is urgent.

Primarily, though, 80,000 Hours is a cause-impartial organisation, and we think that means prioritising the issues we think are most pressing (& telling our audience about why we think that.)

What would be the reason for keeping one 80k site instead of making a 2nd separate one?

  1. As I wrote to Zach above, I think the site currently doesn't do a good job of representing the possibility of short timelines or the variety of risks AI poses, even though it claims to be telling people key information they need to know to have a high impact career. I think that's key information, so want it to be included very prominently.
  2. As a commenter noted below, it'd take time and work to build up an audience for the new site.

But I'm not sure! As you say, there are reasons to make a separate site as well.

On EA pathways: I think Chana covered this well – it's possible this will shrink the number of people getting into EA ways of thinking, but it's not obvious. AI risk doesn't feel so abstract anymore.

On reputation: this is a worry. We do plan to express uncertainty about whether AGI will indeed progress as quickly as we worry it will, and to be clear that if people pursue a route to impact that depends on fast AI timelines, they're making a bet that might not pay off. However, we think it's important both for us & for our audience to act under uncertainty, using rules of thumb but also thinking about expected impact.

In other words – yes, our reputation might suffer from this if AI progresses slowly. If that happens, it will probably be worse for our impact, but better for the world, and I think I'll still feel good about expressing our (uncertain) views on this matter when we had them.

Hey Zach. I'm about to get on a plane so won't have time to write a full response, sorry! But wanted to say a few quick things before I do.

Agree that it's not certain or obvious that AI risk is the most pressing issue (though it is 80k's best guess & my personal best guess, and I don't personally have the view that it requires fanaticism.) And I also hope the EA community continues to be a place where people work on a variety of issues -- wherever they think they can have the biggest positive impact.

However, our top commitment at 80k is to do our best to help people find careers that will allow them to have as much positive impact as they can. And we think that to do that, more people should strongly consider and/or try out working on reducing the variety of risks that we think transformative AI poses. So we want to do much more to tell them that!

In particular, from a web-specific perspective, I feel that the website isn't currently consistent with the possibility of short AI timelines, or with the possibility that AI poses not only risks from catastrophic misalignment but also other risks, and that it will probably affect many other cause areas. Given the size of our team, I think we need to focus our new content capacity on changing that.

I think this post I wrote a while ago might also be relevant here!

https://forum.effectivealtruism.org/posts/iCDcJdqqmBa9QrEHv/faq-on-the-relationship-between-80-000-hours-and-the

Will circle back more tomorrow / when I'm off the flight!

Arden from 80k here -- just flagging that most of 80k is currently asleep (it's midnight in the UK), so we'll be coming back to respond to comments tomorrow! I might start a few replies, but will be getting on a plane soon so will also be circling back.
