
One reason we use phrases like "making AGI go well," rather than some alternatives, is that 80k is concerned about risks like lock-in of really harmful values, in addition to human disempowerment and extinction risk, so I sympathise with your worries here.

Figuring out how to avoid these kinds of risks is really important, and recognising that they might arise soon is definitely within the scope of our new strategy. We have written about ways the future can look very bad even if humans have control of AI, for example here, here, and here.

I think it’s plausible to worry that not enough is being done about these kinds of concerns — that depends a lot on how plausible they are and how tractable the solutions are, which I don’t have very settled views on.

You might also think that there's nothing tractable to do about these risks, so it's better to focus on interventions that pay off in the short term. But my view at least is that it is worth putting more effort into figuring out what the solutions here might be.

Hey Rocky —

Thanks for sharing these concerns. These are really hard decisions we face, and I think you’re pointing to some really tricky trade-offs.

We’ve definitely grappled with the question of whether it would make sense to spin up a separate website that focused more on AI. It’s possible that could still be a direction we take at some point. 

But the key decision we’re facing is what to do with our existing resources — our staff time, the website we’ve built up, our other programmes and connections. And we’ve been struggling with the fact that the website doesn’t really fully reflect the urgency we believe is warranted around rapidly advancing AI. Whether we launch another site or not, we want to honestly communicate about how we’re thinking about the top problem in the world and how it will affect people’s careers. To do that, we need to make a lot of updates in the direction this post is discussing.

That said, I’ve always really valued the fact that 80k can be useful to people who don’t agree with all our views. If you’re sceptical about AI having a big impact in the next few decades, our content on pandemics, nuclear weapons, factory farming — or our general career advice — can still be really useful. I think that will remain true even with our strategy shift.

I also think this is a really important point:

>If transformative AI is just five years away, then we need people who have spent their careers reducing nuclear risks to be doing their most effective work right now—even if they're not fully bought into AGI timelines. We need biosecurity experts building robust systems to mitigate accidental or deliberate pandemics—whether or not they view that work as directly linked to AI.

I think we're mostly in agreement here: work on nuclear risks and biorisks remains really important, and last year we made efforts to ensure our bio and nuclear content was more up to date. We recently made an update about mirror bio risks, because they seem especially pressing.

As the post above says: “When deciding what to work on, we’re asking ourselves ‘How much does this work help make AI go better?’, rather than ‘How AI-related is it?’” So to the extent that other work has a key role to play in the risks that surround a world with rapidly advancing AI, it’s clearly in scope of the new strategy.

But I think it probably is helpful for people doing work in areas like nuclear safety and bio to recognise the ways short AI timelines could affect their work. So if 80k can communicate that to our audience more clearly, and help people figure out what it means for their careers, it could be really valuable.

>And if we are truly on the brink of catastrophe, we still need people focused on minimizing human and nonhuman suffering in the time we have left.

I do think we should be absolutely clear that we agree with this — it’s incredibly valuable that work to minimise existing suffering continues. I support that happening and am incredibly thankful to those who do it. This strategy doesn’t change that a bit. It just means 80k thinks our next marginal efforts are best focused on the risks arising from AI.

On the broader issue of what this means for the rest of the EA ecosystem, I think the risks you describe are real and are important to weigh. One reason we wanted to communicate this strategy publicly is so others could assess it for themselves and better coordinate on their paths forward. And as Conor said, we really wish we didn’t have to live in a world where these issues seem as urgent as they do.

But I think I see the costs of the shift as less stark. We still plan to have our career guide up as a central piece of content, which has been a valuable resource to many people; it explains our views on AI, but also guides people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, for many years 80k has also made it clear that we prioritise risks from AI as the world's most pressing problem. So I don't see this as being as clear a break from the past as you might.

At the highest level, though, we do face a decision about whether to focus more on AI and the plausibly short timelines to AGI, or to spend time on a wider range of problem areas and take less of a stance on timelines. Focusing more does have the risk that we won’t reach our traditional audience as well, which might even reduce our impact on AI; but declining to focus more has the risk of missing out on other audiences we previously haven’t reached, failing to faithfully communicate our views about the world, and missing out on big opportunities to positively work on what we think is the most pressing problem we face.

As the post notes, while we are committed to making the strategic shift, we’re open to changing our minds if we get important updates about our work. We’ll assess how we’re performing on the new strategy, whether there are any unexpected downsides, and whether developments in the world are matching our expectations. And we definitely continue to be open to feedback from you and others who have a different perspective on the effects 80k is having in the world, and we welcome input about what we can do better.

Hi — thanks for raising this issue.

As has been pointed out, the page where we (80k) detail the definition of “social impact” in depth is explicit that we do consider animals to be a part of impartial social impact. It’s not just in a footnote. The body of the article mentions animals and non-human sentient beings several times, including in this paragraph:

>We mean that we strive to treat equal effects on different beings’ welfare as equally morally important, no matter who they are — including people who live far away or in the future. In addition, we think that the interests of many nonhuman animals, and even potentially sentient future digital beings, should be given significant weight, although we’re unsure of the exact amount. Thus, we don’t think social impact is limited to promoting the welfare of any particular group we happen to be partial to (such as people who are alive today, or human beings as a species).

Also note that in the core argument of our article on longtermism, we strove to make clear that we’re not just concerned with future humans, but all morally relevant beings:

  1. We should care about how the lives of future individuals go.
  2. The number of future individuals whose lives matter could be vast.
  3. We have an opportunity to affect how the long-run future goes — whether there may be many flourishing individuals in the future, many suffering individuals in the future, or perhaps no one at all.

But there can be a trade-off between succinctness and complete precision. Being succinct isn't trivial: writing that is accessible and engaging can be much more effective than verbose academic prose. The page you linked to is a summary of our career planning course, so it's necessarily even more succinct than usual and doesn't delve into the details of each claim. Of course, we don't want to mislead people about what we believe, so these kinds of decisions are always a balancing act, and we won't always get it right.

Your post is a good reminder of how some ways of communicating these ideas can give the wrong impression, so we’re going to review whether and to what extent we should make changes to be clearer about this issue. The feedback is much appreciated!

— Cody from 80k

I think the most basic answer is that Scanlon's philosophy doesn't really address the questions the EA community is most interested in, i.e., what are the best opportunities to have a positive impact on the world? What We Owe to Each Other offers a theory of wrongness, which is a very different framing. 

I'm a fan of Scanlon's work, but it has some pretty significant gaps, in my opinion. For example, it doesn't give great guidance on how to think of moral obligations to non-human animals or future generations.

I think you can make a pretty persuasive Scanlonian-style argument for some of the GWWC-style work, global health interventions, etc. But I'm not sure the Scanlonian argument adds all that much to these topics.

I think people could probably get a lot out of reading Scanlon, especially those who want to better understand non-consequentialist approaches to morality. But there are a lot of good and important books to read, and I'm not sure I'd prioritise recommending Scanlon out of all the many possibilities.

Hi — thanks for the question! That's definitely what we care about most, but it's also unsurprisingly very hard to track, as you say. We have different ways we try to assess our impact along these lines, but the best metrics we can share publicly are in an appendix to our two-year review that summarises the results of our user survey. You can also see Brenton's answer in a separate comment for much more detail about our efforts to track these metrics.

Thanks Yonatan! I was the editor of this review.

The "How to enter infosecurity" section includes a subsection on entering the field with a university degree. But it also notes: "However, you shouldn't think of this as a prerequisite — there are many successful security practitioners without a formal degree." The following subsection discusses how to enter the field without formal training.

Whether any given individual should pursue a degree depends on a bunch of individual factors.

Your suggestion that EA orgs should have a "head of security" of some sort sounds plausible in many cases. But a lot will depend on the size of the organisation, its specific security needs, what other duties this person would be responsible for, etc., so it's hard to be generally prescriptive. As the review lays out, there are likely to be ongoing security needs at many impactful orgs for the foreseeable future, and expertise in this domain will be needed at a variety of levels.

Thanks for sharing! This is really interesting — we’ve read it and will think about it.

Your updated estimate accords with what we wrote in our career review on founding a tech start-up ("people who have received venture capital funding or entered Y Combinator have on average earned millions of dollars per year"). It's not as up to date as we would ideally like, but updating it isn't among our top priorities right now.

- Cody from 80k