Cool project - I tried to subscribe to the podcast to check it out, but I couldn't find it on Pocket Casts, so I didn't (it didn't seem worth using a second platform).
I wanted to subscribe because I've been wanting an audio feed to listen to while I commute that keeps me in touch with events outside the more specific areas of interest I hear about through niche channels, without going quite as broad / un-curated as the BBC news (which I currently use for this) -- this seemed like potentially a good middle ground.
One other tiny piece of feedback: the title feels aggressive to me vs. some nearby alternatives (e.g. just "relevance news" or something), since it nearly states that anything not included isn't actually relevant at all -- a fairly strong claim I could see people getting unhappy about.
The project aligns closely with the fund's vision of a "principles-first EA" community; we’d be excited for the EA community’s outputs to look more like Richard’s.
Is this saying that the move to principles-first EA as a strategic perspective for EAF goes along with a belief that more EA work should be "principles first" & not cause-specific (so that more of the community's outputs look like Richard's)? I wouldn't have necessarily inferred that just from the fact that you're making this strategic shift (it could be more of a comparative advantage / focus thing), so I wanted to clarify.
Speaking in a personal capacity here --
We do try to be open to changing our minds so that we can be cause neutral in the relevant sense, and we do change our cause rankings periodically and spend time and resources thinking about them (in fact, we’re in the middle of thinking through some changes now). But how well set up are we, institutionally, to make changes as big as deprioritising risks from AI in practice if we get good reasons to? I think this is a good question and want to think about it more. So thanks!
Just want to say here (since I work at 80k & commented about our impact metrics & other concerns below) that I think it's totally reasonable to:
Hey, Arden from 80,000 Hours here –
I haven't read the full report, but given the time sensitivity of commenting on forum posts, I wanted to quickly provide some information relevant to some of the 80k mentions in the qualitative comments, which were flagged to me.
Regarding whether we have public measures of our impact & what they show
It is indeed hard to measure how much our programmes counterfactually help move talent to high impact causes in a way that increases global welfare, but we do try to do this.
The relevant section from the 2022 report is here; I'm copying it in below, as it contains a bunch of links.
We primarily use six sources of data to assess our impact:
- Open Philanthropy EA/LT survey.
- EA Survey responses.
- The 80,000 Hours user survey. A summary of the 2022 user survey is linked in the appendix.
- Our in-depth case study analyses, which produce our top plan changes and DIPY estimates (last analysed in 2020).
- Our own data about how users interact with our services (e.g. our historical metrics linked in the appendix).
- Our and others' impressions of the quality of our visible output.
Overall, we’d guess that 80,000 Hours continued to see diminishing returns to its impact per staff member per year. [But we continue to think it's still cost-effective, even as it grows.]
Some elaboration:
Regarding the extent to which we are cause neutral & whether we've been misleading about this
We do strive to be cause neutral, in the sense that we try to prioritize working on the issues where we think we can have the highest marginal impact (rather than committing to a particular cause for other reasons).
For the past several years we've thought that the most pressing problem is AI safety, so we have put much of our effort there. (Some 80k programmes focus on it more than others – I reckon for some it's a majority of their effort – but it hasn't been true that as an org we “almost exclusively focus on AI risk”; a bit more on that here.)
In other words, we're cause neutral, but not cause *agnostic* - we have a view about what's most pressing. (Of course we could be wrong or thinking about this badly, but I take that to be a different concern.)
The most prominent place we describe our problem prioritization is our problem profiles page – which is one of our most popular pages. We describe our list of issues this way: "These areas are ranked roughly by our guess at the expected impact of an additional person working on them, assuming your ability to contribute to solving each is similar (though there’s a lot of variation in the impact of work within each issue as well)." (Here's also a past comment from me on a related issue.)
Regarding the concern about us harming talented EAs by causing them to choose bad early career jobs
To the extent that this has happened, it's quite serious – helping talented people have higher-impact careers is our entire point! I think we will always sometimes fail to give good advice (given the diversity & complexity of people's situations & the world), but we do try to aggressively minimise negative impacts, and if people think any particular part of our advice is unhelpful, we'd like them to contact us about it! (I'm arden@80000hours.org & can pass them on to the relevant people.)
We do also try to find evidence of negative impact, e.g. using our user survey, and it seems dramatically less common than the positive impact (see the stats above) – though there are of course selection effects with that kind of method, so one can't take that at face value!
Regarding our advice on working at AI companies and whether this increases AI risk
This is a good worry, and we talk about it a lot internally! We wrote about it here.
I like this post and also worry about this phenomenon.
When I talk about personal fit (and when we do so at 80k), it's basically about how good you are at a thing / the chance that you can excel at it.
Being intuitively motivated by the issue something focuses on does increase your personal fit for it, but I agree it seems way too quick to conclude from that that your fit for it is higher than for other things (since there are tons of factors, and lots of different jobs within each problem area), let alone that you should therefore work on that issue all things considered (since personal fit is not the only factor).
I think it would be especially valuable to see to what degree they reflect the individual judgment of decision-makers.
The comment above hopefully helps address this.
I would also be interested in whether they take into account recent discussions/criticisms of model choices in longtermist math that strike me as especially important for the kind of advising 80,000 Hours does (tl;dr: I take one crux of that article to be that the longtermist benefits of individual action are often overstated, because the great benefits longtermism advertises require both reducing risk now and keeping overall risk down long-term, which plausibly exceeds the scope of a career/life).
We did discuss this internally in Slack (prompted by David's podcast https://critiquesofea.podbean.com/e/astronomical-value-existential-risk-and-billionaires-with-david-thorstad/). My take was that the arguments don't mean that reducing existential risk isn't very valuable, even though they do imply it's likely not of 'astronomical' value. So, e.g., it's not as if you can ignore all other considerations and treat "whether this will reduce existential risk" as a full substitute for whether something is a top priority. I agree with that.
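For readers who haven't followed that discussion, here's a minimal toy model of the crux, in the spirit of the arguments linked above (the setup and notation are my own illustrative simplification, not an 80k analysis): assume a constant extinction risk $r$ per century and a roughly constant value $v$ for each century humanity survives.

```latex
% Toy model (illustrative only): constant per-century extinction risk r,
% constant value v per century survived.
\[
  \mathbb{E}[V] \;=\; v \sum_{n \ge 1} (1-r)^{n} \;=\; v\,\frac{1-r}{r}.
\]
% A one-off intervention that eliminates risk in the first century only
% (with risk returning to r afterwards) gives
\[
  \mathbb{E}[V'] \;=\; v\!\left(1 + \frac{1-r}{r}\right),
  \qquad
  \mathbb{E}[V'] - \mathbb{E}[V] \;=\; v.
\]
% So the gain from a one-off risk reduction is roughly one century's worth of
% value -- substantial, but only "astronomical" if the background risk r is
% itself kept low over the long run.
```

That's the sense in which the big payoffs seem to require both reducing risk now and keeping overall risk down long-term.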
We do generally agree that many questions in global priorities research remain open — that’s why we recommend some of our readers pursue careers in this area. We’re open to the possibility that new developments in this field could substantially change our views.
I think there would be considerable value in having the biggest career-advising organization (80k) be a non-partisan EA advising organization, whereas I currently take them to be strongly favoring longtermism in their advice. While I feel this explicit stance is a mistake, I feel like getting a better grasp on its motivation would help me understand why it was taken.
We're not trying to be 'partisan', for what it's worth. There might be a temptation to sometimes see longtermism and neartermism as different camps, but what we're trying to do is just figure out all things considered what we think is most pressing / promising and communicate that to readers. We tend to think that propensity to affect the long-run future is a key way in which an issue can be extremely pressing (which we explain in our longtermism article.)
I think it would be valuable to include all the additional notes which are not on your website. As a minimum viable product, you may want to link to your comment.
Thanks for your feedback here!
Your previous quantitative framework was equivalent to a weighted-factor model (WFM) with the logarithms of importance, tractability and neglectedness as factors with the same weight, such that the sum equals the logarithm of the cost-effectiveness. Have you considered trying a WFM with the factors that actually drive your views?
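To spell out the equivalence being pointed to here (the notation and weights below are just illustrative):

```latex
% Under the ITN framework, cost-effectiveness is the product of the three factors,
% so taking logarithms gives an equally weighted sum:
\[
  \mathit{CE} \;=\; I \times T \times N
  \quad\Longrightarrow\quad
  \log \mathit{CE} \;=\; \log I + \log T + \log N,
\]
% i.e. a weighted-factor model over (log I, log T, log N) with all weights equal to 1.
% A more general WFM would allow unequal (hypothetical) weights, and possibly other factors:
\[
  \text{score} \;=\; w_I \log I + w_T \log T + w_N \log N .
\]
```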
I feel unsure about whether we should be trying to do another WFM at some point. There are a lot of ways we can improve our advice, and I’m not sure this should be at the top of our list – but perhaps if/when we have more research capacity. I'd also guess it would still have the problem of giving a misleading sense of precision, so it’s not clear how much of an improvement it would be. But it is certainly true that the ITN framework substantially drives our views.
Carl Shulman questioned the tension between AI welfare & AI safety on the 80k podcast recently -- I thought this was interesting! He basically argues that AI takeover could be even worse for AI welfare. (From the end of the section.)