Thank you for doing this analysis!
Would you say this analysis is limited to safety from misalignment-related risks, or to any (potentially catastrophic) risks from AI, including misuse, gradual disempowerment, etc.?
Far-future effects are the most important determinant of what we ought to do
I agree it's insanely hard to know what will affect the far future, and how. But I think we should still try, often by using heuristics (one I'm currently fond of is "what kinds of actions seem to put us on a good trajectory, e.g. to be doing well in 100 years?")
I think that in cases where we do have reason to think an action will affect the long run future broadly and positively in expectation (i.e. even if we're uncertain) that's an extremely strong reason -- and usually an overr...
I feel unsure I'd be trying hard to do good at all, let alone actually doing things I think have a lot of ex ante value. I wasn't on track when I heard of EA to dedicate much of my resources to positive impact. But hard to be certain ofc! + not sure I'm doing good now, since what I work on has a lot of uncertainty about the impacts (& even their sign).
(trying to hit like 80% agree but seem to be missing it)
Hey Matt,
(Context: I run the 80k web programme.)
...if you glorify some relatively-value-neutral conception of AI safety as the summum bonum of what is or used to be EA, there is just a good chance that you will lose the pl
My view is that it's worth it, because there is a danger of people just jumping into jobs that have "AI" or even "AI security/safety" in the name, without grappling with tough questions around what it actually means to help AGI go well or prioritising between options based on expected impact.
I appreciate the dilemma and don't want to imply this is an easy call.
For me the central question in all of this is whether you foreground process (EA) or conclusion (AGI go well). It seems like the whole space is uniformly rushing to foreground the conclus...
Anecdote: I'm one of those people -- I'd say I'd barely heard of EA / basically didn't know what it was, before a friend who already knew of it suggested I come to an EA Global (I think at the time one got a free t-shirt for referring friends). We were both philosophy students & I studied ethics, so I think he thought I might be interested even though we'd never talked about EA.
Thanks as always for this valuable data!
Since 80k is a large and growing source of people hearing about and getting involved in EA, some people reading this might be worried that 80k will stop contributing to EA's growth, given our new strategic focus on helping people work on safely navigating the transition to a world with AGI.
tl;dr I don't think it will stop, and it might continue as before, though it's possible it will be reduced somewhat.
More:
I am not sure whether 80k's contribution to building ea in terms of sheer numbers of people get...
Hey Geoffrey,
Niel gave a response to a similar comment below -- I'll just add a few things from my POV:
I don't think we have anything written/official on this particular issue (though we have covered other mental health topics here). It is, though, one reason why we don't think everyone should work on AIS/trying to help things go well with AGI: even though we want to encourage more people to consider it, we don't blanket recommend it to everyone. We wrote a little bit here about an issue that seems related - what to do if you find the case for an issue intellectually compelling but don't feel motivated by it.
Hi Romain,
Thanks for raising these points (and also for your translation!)
We are currently planning to retain our cause-neutral (& cause-opinionated), impactful careers branding, though we do want to update the site to communicate much more clearly and urgently our new focus on helping things go well with AGI, which will affect our brand.
How to navigate the kinds of tradeoffs you are pointing to is something we will be thinking about more as we propagate this shift in focus through to our most public-facing programmes. We don't have answers jus...
Thanks for the feedback here. I mostly want to just echo Niel's reply, which basically says what I would have wanted to say. But I also want to add for transparency/accountability's sake that I reviewed this post before we published it with the aim of helping it communicate the shift well – I focused mostly on helping it communicate clearly and succinctly, which I do think is really important, but I think your feedback makes sense, and I wish that I'd also done more to help it demonstrate the thought we've put into the tradeoffs involved and awareness of the c...
Adding a bit more to my other comment:
For what it’s worth, I think it makes sense to see this as something of a continuation of a previous trend – 80k has for a long time prioritised existential risks more than the EA community as a whole. This has influenced EA (in my view, in a good way), and at the same time EA as a whole has continued to support work on other issues. My best guess is that that is good (though I'm not totally sure - EA as a whole mobilising to help things go better with AI also sounds like it could be really positively impactful).
...From a
Hi Håkon, Arden from 80k here.
Great questions.
On org structure:
One question for us is whether we want to create a separate website ("10,000 Hours?"), that we cross-promote from the 80k website, or to change the 80k website a bunch to front the new AI content. That's something we're still thinking about, though I am currently weakly leaning toward the latter (more on why below). But we're not currently thinking about making an entire new organisation.
Why not?
For one thing, it'd be a lot of work and time, and we feel this shift is urgent.
Primarily, though, 8...
Hey Zach. I'm about to get on a plane so won't have time to write a full response, sorry! But wanted to say a few quick things before I do.
Agree that it's not certain or obvious that AI risk is the most pressing issue (though it is 80k's best guess & my personal best guess, and I don't personally have the view that it requires fanaticism). And I also hope the EA community continues to be a place where people work on a variety of issues -- wherever they think they can have the biggest positive impact.
However, our top commitment at 80k is to do our best ...
Carl Shulman questioned the tension between AI welfare & AI safety on the 80k podcast recently -- I thought this was interesting! He basically argues that AI takeover could be even worse for AI welfare. From the end of that section:
...Rob Wiblin: Maybe a final question is it feels like we have to thread a needle between, on the one hand, AI takeover and domination of our trajectory against our consent — or indeed potentially against our existence — and this other reverse failure mode, where humans have all of the power and AI interests are simply ignored. Is there
Cool project - I tried to subscribe to the podcast, to check it out. But I couldn't find it on pocketcasts, so I didn't (didn't seem worth me using a 2nd platform).
I wanted to subscribe because I've wanted an audio feed to listen to while I commute that keeps me in touch with events outside the more specific areas of interest I hear about through niche channels, while not going quite as broad / un-curated as the BBC news (which I currently use for this) -- and this seemed like potentially a good middle ground.
tiny other feedback: the title feels aggressive ...
The project aligns closely with the fund's vision of a "principles-first EA" community, we’d be excited for the EA community’s outputs to look more like Richard’s.
Is this saying that the move to principles-first EA as a strategic perspective for EAF goes with a belief that more EA work should be "principles first" & not cause specific (so that more of the community's outputs look like Richard's)? I wouldn't have necessarily inferred that just from the fact that you're making this strategic shift (it could be more of a comparative advantage / focus thing), so wanted to clarify.
Speaking in a personal capacity here --
We do try to be open to changing our minds so that we can be cause neutral in the relevant sense, and we do change our cause rankings periodically and spend time and resources thinking about them (in fact we’re in the middle of thinking through some changes now). But how well set up are we, institutionally, to be able to in practice make changes as big as deprioritising risks from AI if we get good reasons to? I think this is a good question, and want to think about it more. So thanks!
Just want to say here (since I work at 80k & commented about our impact metrics & other concerns below) that I think it's totally reasonable to:
Thanks Arden. I suspect you don't disagree with the people interviewed for this report all that much then, though ultimately I can only speak for myself.
One possible disagreement that you and other commenters brought up, which I meant to respond to in my first comment but forgot: I would not describe 80,000 Hours as cause-neutral, as you try to do here and here. This seems to be an empirical disagreement; quoting from the second link:
...We are cause neutral[1] – we prioritise x-risk reduction because we think it's most pressing, but it’s possible we co
Hey, Arden from 80,000 Hours here –
I haven't read the full report, but given the time sensitivity with commenting on forum posts, I wanted to quickly provide some information relevant to some of the 80k mentions in the qualitative comments, which were flagged to me.
Regarding whether we have public measures of our impact & what they show
It is indeed hard to measure how much our programmes counterfactually help move talent to high impact causes in a way that increases global welfare, but we do try to do this.
From the 2022 report the relevant sectio...
Hi Arden,
Thanks for engaging.
(1) Impact measures: I'm very appreciative of the amount of thought that went into developing the DIPY measure. The main concern (from the outside) with respect to DIPY is that it is critically dependent on the impact-adjustment variable - it's probably the single biggest driver of uncertainty (since causes can vary by many magnitudes). Depending on whether you think the work is impactful (or if you're sceptical, e.g. because you're an AGI sceptic or because you're convinced of the importance of preventing AGI risk but wo...
The 2020 EA survey link says "More than half (50.7%) of respondents cited 80,000 Hours as important for them getting involved in EA". (2022 says something similar)
I would also add these results, which I think are, if anything, even more relevant to assessing impact:
I like this post and also worry about this phenomenon.
When I talk about personal fit (and when we do so at 80k) it's basically about how good you are at a thing/the chance that you can excel.
It does increase your personal fit for something to be intuitively motivated by the issue it focuses on, but I agree that it seems way too quick to conclude that your personal fit with it is therefore higher than with other things (since there are tons of factors, and there are also lots of different jobs for each problem area), let alone that you should therefore work on that issue all things considered (since personal fit is not the only factor).
I think it would be especially valuable to see to which degree they reflect the individual judgment of decision-makers.
The comment above hopefully helps address this.
...I would also be interested in whether they take into account recent discussions/criticisms of model choices in longtermist math that strike me as especially important for the kind of advising 80.000 hours does (tldr: I take one crux of that article to be that longtermist benefits by individual action are often overstated, because the great benefits longtermism advertises require both redu
I think it would be valuable to include all the additional notes which are not on your website. As a minimum viable product, you may want to link to your comment.
Thanks for your feedback here!
Your previous quantitative framework was equivalent to a weighted-factor model (WFM) with the logarithms of importance, tractability and neglectedness as factors with the same weight, such that the sum respects the logarithm of the cost-effectiveness. Have you considered trying a WFM with the factors that actually drive your views?
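(For context, a minimal sketch of the equivalence described in the quote, assuming the standard product form of the ITN framework, in which cost-effectiveness is importance × tractability × neglectedness:)

$$\log(\text{cost-effectiveness}) = \log(I \cdot T \cdot N) = \log I + \log T + \log N$$

i.e. a weighted-factor sum of the three log factors with equal weights.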
I feel unsure about whether we sho...
I agree that it might be worthwhile to try to become the president of the US - but that wouldn't mean it's best for us to have an article on it, especially a highly ranked one. That takes real estate on our site, attention from readers, and time. This specific path is a sub-category of political careers, which we have several articles on. In the end, it is not possible for us to have profiles on every path that is potentially worthwhile for someone. My take is that it's better for us to prioritise options where the described endpoint is achievable for at least a healthy handful of readers.
No, we have lots of external advisors that aren't listed on our site. There are a few reasons we might not list people, including:
We might not want to be committed to asking for someone's advice for a long time or need to remove them at some point.
The person might be happy to help us and give input but not want to be featured on our site.
It's work to add people: we often reach out to someone in our network fairly quickly and informally, and it would feel like overkill / too much friction to get a bio and get permission from them for it.
This is a good question -- we don't have a formal approach here, and I personally think that in general, it's quite a hard problem who to ask for advice.
A few things to say:
The ideal is often to have both.
The bottleneck on getting more people with domain expertise is more often that we don't have people in our network with sufficient expertise -- whom we know about, believe are highly credible, and who are willing to give us their time -- rather than their values. People who share our values tend to be more excited to work with us.
It depends a lot on th
Hey Vasco —
Thanks for your interest and also for raising this with us before you posted so I could post this response quickly!
I think you are asking about the first of these, but I'm going to include a few notes on the 2nd and 3rd as well, just in case, since there's a way of hearing your question as being about them.
Hi Nick —
Thanks for the thoughtful post! As you said, we’ve thought about these kinds of questions a lot at 80k. Striking the right balance of content on our site, and prioritising what kinds of content we should work on next, are really tricky tasks, and there’s certainly reasonable disagreement to be had about the trade-offs.
We’re not currently planning to focus on neartermist content for the website, but:
I think I've become substantially more hardworking!
I think I started from a middle-to-high baseline but I think I am now "pretty hard working" at least (I say as I write this at 8 am on a Tuesday, demonstrating viscerally my not-perfect work ethic).
The big thing for me was going from academic philosophy to working at 80k. Active ingredients in order of importance:
Copying from my comment above:
Update: we've now added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.
That said, I basically agree we could make these views more obvious! E.g. we don't talk about them much on the front page of the site, in our 'start here' essay, or at the beginning of the career guide. I'm open to thinking we should.
Update: we added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.
Arden here - I lead on the 80k website and am not on the one-on-one team, but thought I could field this one. This is a big question!
We have several different programmes, which face different bottlenecks. I'll just list a few here, but it might be helpful to check out our most recent two-year review for more thoughts – especially the "current challenges" sections for each programme (though that's from some months ago).
Some current bottlenecks:
Hey, I wasn’t a part of these discussions, but from my perspective (web director at 80k), I think we are transparent about the fact that our work comes from a longtermist perspective that suggests that existential risks are the most pressing issues. The reason we try to present, which is also the true reason, is that we think these are the areas where many of our readers, and therefore we, can make the biggest positive impact.
Here are some of the places we talk about this:
1. Our problem profiles page (one of our most popular pages) explicitly say...
However many elements of the philosophical case for longtermism are independent of contingent facts about what is going to happen with AI in the coming decades.
Agree, though there are arguments from one to the other! In particular:
Thanks for this post! One thought on what you wrote here:
"My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have."
I feel unsure about this. Or like, I think it's true we have those downsides, but we also probably get upsides from being in the middle here, so I'm unsure we're in the worst of both worlds rather than e.g. the bes...
This seems true to me, although I don't have great confidence here.
For some years I had at times thought to myself: "Damn, EA is pulling off something interesting - not being an organization, but at the same time being way more harmonious and organized than a movement. Maybe this is why it's so effective and at the same time feels so inclusive." Not much has changed recently that would make me update in a different direction. This always stood out to me in EA, so maybe this is one of its core competencies[1] that made it so successful in comparison to so m...
Hey Joey, Arden from 80k here. I just wanted to say that I don't think 80k has "the answers" to how to do the most good.
But we do try to form views on the relative impact of different things, so we do try to reach working answers, and then act on our views (e.g. by communicating them and investing more where we think we can have more impact).
So e.g. we prioritise the cause areas we work most on by our take on their relative pressingness, i.e. how much expected good we think people can do by trying to solve them, and we also communicate these views to our reade...
This feels fairly tricky to me actually -- I think between the two options presented I'd go with (1) (except I'm not sure what you mean by "If we'd focus specifically on EAs it would be even better" -- I do overall endorse our current choice of not focusing specifically on EAs).
However, some aspects of (2) seem right too. For example, I do think that we talk about a lot of things EAs already know about in much of our content (though not all of it). And I think some of the "here's why it makes sense to focus on impact"-type content does fall into that cat...
My interpretation would be that they both tend to buy into the same premises that AGI will occur soon and that it will be godlike in power. Depending on how hard you believe alignment is, this would lead you to believe that we should build AGI as fast as possible (so that someone else doesn't build it first), or that we should shut it all down entirely.
By spreading and arguing for their shared premises, both the doomers and the AGI racers get boosted by the publicity given to the other, leading to growth for them both.
As someone who does not accept these premises, this is somewhat frustrating to watch.
I'm trying out iteratively updating some of 80,000 Hours' pages that we don't have time to do big research projects on right now. To this end, I've just released an update to https://80000hours.org/problem-profiles/improving-institutional-decision-making/ — our problem profile on improving epistemics and institutional decision making.
This is sort of a tricky page because there is a lot of reasonable-seeming disagreement about what the most important interventions are to highlight in this area.
I think the previous version had some issues: It was confusing, a...
Hey Holden,
Thanks for these reflections!
Could you maybe elaborate on what you mean by a 'bad actor'? There's some part of me that feels nervous about this as a framing, at least without further specification -- like maybe the concept could be either applied too widely (e.g. to anyone who expresses sympathy with "hard-core utilitarianism", which I'd think wouldn't be right), or have a really strict definition (like only people with dark tetrad traits) in a way that leaves out people who might be likely to (or: have the capacity to?) take really harmful actions.
I think this is part of why EA doesn't invest much here, along with what Ollie said.
I'm pretty excited about EAs doing good work in politics, but (1) it's a hard sell from a tractability / neglectedness perspective, & (2) it's easy to do bad work, so it's kind of hard to boot up much effort.