I think it would be especially valuable to see to what degree they reflect the individual judgment of decision-makers.
The comment above hopefully helps address this.
...I would also be interested in whether they take into account recent discussions/criticisms of model choices in longtermist math that strike me as especially important for the kind of advising 80,000 Hours does (tldr: I take one crux of that article to be that longtermist benefits from individual action are often overstated, because the great benefits longtermism advertises require both redu
I think it would be valuable to include all the additional notes which are not on your website. As a minimum viable product, you may want to link to your comment.
Thanks for your feedback here!
Your previous quantitative framework was equivalent to a weighted-factor model (WFM) with the logarithms of importance, tractability and neglectedness as factors with equal weights, such that the sum equals the logarithm of the cost-effectiveness. Have you considered trying a WFM with the factors that actually drive your views?
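To make the equivalence concrete, here's a minimal sketch (the function and the numbers are purely illustrative, not 80k's actual model or real estimates):

```python
import math

def log_wfm(factors, weights):
    """Weighted-factor model score: weighted sum of log-scaled factors."""
    return sum(w * math.log10(f) for f, w in zip(factors, weights))

# Hypothetical importance, tractability, neglectedness values.
i, t, n = 1000.0, 0.1, 10.0

# Equal weights on the logs...
score = log_wfm([i, t, n], [1, 1, 1])

# ...recover the log of the cost-effectiveness (the product I * T * N),
# so ranking by this WFM score is the same as ranking by cost-effectiveness.
assert math.isclose(score, math.log10(i * t * n))
```

The point of the sketch is just that summing equally-weighted logs and ranking by the product are the same ordering; a WFM with different factors or weights would break that equivalence.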
I feel unsure about whether we sho...
I agree that it might be worthwhile to try to become the president of the US - but that wouldn't mean it's best for us to have an article on it, especially a highly ranked one. That takes real estate on our site, attention from readers, and time. This specific path is a sub-category of political careers, which we have several articles on. In the end, it is not possible for us to have profiles on every path that is potentially worthwhile for someone. My take is that it's better for us to prioritise options where the described endpoint is achievable for at least a healthy handful of readers.
No, we have lots of external advisors that aren't listed on our site. There are a few reasons we might not list people, including:
We might not want to be committed to asking for someone's advice for a long time or need to remove them at some point.
The person might be happy to help us and give input but not want to be featured on our site.
It's work to add people, and we often reach out to someone in our network fairly quickly and informally, so it would feel like overkill / too much friction to get a bio and get permission from them for it.
This is a good question -- we don't have a formal approach here, and I personally think that, in general, deciding who to ask for advice is quite a hard problem.
A few things to say:
the ideal is often to have both.
the bottleneck on getting more people with domain expertise is more often that we don't have people in our network with sufficient expertise, whom we know about and believe are highly credible, and who are willing to give us their time, rather than people's values. People who share our values tend to be more excited to work with us.
it depends a lot on th
Hey Vasco —
Thanks for your interest and also for raising this with us before you posted so I could post this response quickly!
I think you are asking about the first of these, but I'm going to include a few notes on the 2nd and 3rd just in case, as there's a way of hearing your question as being about them.
Hi Nick —
Thanks for the thoughtful post! As you said, we’ve thought about these kinds of questions a lot at 80k. Striking the right balance of content on our site, and prioritising what kinds of content we should work on next, are really tricky tasks, and there’s certainly reasonable disagreement to be had about the trade-offs.
We’re not currently planning to focus on neartermist content for the website, but:
I think I've become substantially more hardworking!
I think I started from a middle-to-high baseline but I think I am now "pretty hard working" at least (I say as I write this at 8 am on a Tuesday, demonstrating viscerally my not-perfect work ethic).
The big thing for me was going from academic philosophy to working at 80k. Active ingredients, in order of importance:
Copying from my comment above:
Update: we've now added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.
That said, I basically agree we could make these views more obvious! E.g. we don't talk about them much on the front page of the site, in our 'start here' essay, or at the beginning of the career guide. I'm open to thinking we should.
Update: we added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.
Love this, thanks Catherine! Great way of structuring a career story so it's useful to the audience, btw; might copy it at some point.
Arden here - I lead on the 80k website and am not on the one-on-one team, but thought I could field this one. This is a big question!
We have several different programmes, which face different bottlenecks. I'll just list a few here, but it might be helpful to check out our most recent two-year review for more thoughts – especially the "current challenges" sections for each programme (though that's from some months ago).
Some current bottlenecks:
Thanks : ) we might workshop a few ways of getting something about this earlier in the user experience.
Hey, I wasn’t a part of these discussions, but from my perspective (web director at 80k), I think we are transparent about the fact that our work comes from a longtermist perspective that suggests that existential risks are the most pressing issues. The reason we try to present, which is also the true reason, is that we think these are the areas where many of our readers, and therefore we, can make the biggest positive impact.
Here are some of the places we talk about this:
1. Our problem profiles page (one of our most popular pages) explicitly say...
Love this post -- thanks Rocky! I feel like 5-7 are especially well explained // I haven't seen them explained that way before.
However, many elements of the philosophical case for longtermism are independent of contingent facts about what is going to happen with AI in the coming decades.
Agree, though there are arguments from one to the other! In particular:
Thanks for this post! One thought on what you wrote here:
"My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have."
I feel unsure about this. Or like, I think it's true we have those downsides, but we also probably get upsides from being in the middle here, so I'm unsure we're in the worst of both worlds rather than e.g. the bes...
This seems true to me, although I don't have great confidence here.
For some years at times I had thought to myself "Damn, EA is pulling off something interesting - not being an organization, but at the same time being way more harmonious and organized than a movement. Maybe this is why it's so effective and at the same time feels so inclusive." Not much changed recently that would make me update in a different direction. This always stood out to me in EA, so maybe this is one of its core competencies[1] that made it so successful in comparison to so m...
I don't know the answer to this, because I've only been working at 80k since 2019 - but my impression is this isn't radically different from what might have been written in those years.
Hey Joey, Arden from 80k here. I just wanted to say that I don't think 80k has "the answers" to how to do the most good.
But we do try to form views on the relative impact of different things, so we do try to reach working answers, and then act on our views (e.g. by communicating them and investing more where we think we can have more impact).
So e.g. we prioritise the cause areas we work most on by our take on their relative pressingness, i.e. how much expected good we think people can do by trying to solve them, and we also communicate these views to our reade...
This feels fairly tricky to me actually -- I think between the two options presented I'd go with (1) (except I'm not sure what you mean by "If we'd focus specifically on EAs it would be even better" -- I do overall endorse our current choice of not focusing specifically on EAs).
However, some aspects of (2) seem right too. For example, I do think that we talk about a lot of things EAs already know about in much of our content (though not all of it). And I think some of the "here's why it makes sense to focus on impact" - type content does fall into that cat...
I'm grateful to the people who start new orgs to fill the gaps they see, knowing that's a path with a high chance of not working. I like how dynamic EA is (and think we could stand to be even more dynamic!) and this is largely because new projects keep coming on the scene.
thanks for this post! I'm curious - can you explain this more?
the AGI doom memeplex has, to some extent, a symbiotic relationship with the race toward AGI memeplex
My interpretation would be that they both tend to buy into the same premises that AGI will occur soon and that it will be godlike in power. Depending on how hard you believe alignment is, this would lead you to believe that we should build AGI as fast as possible (so that someone else doesn't build it first), or that we should shut it all down entirely.
By spreading and arguing for their shared premises, both the doomers and the AGI racers get boosted by the publicity given to the other, leading to growth for them both.
As someone who does not accept these premises, this is somewhat frustrating to watch.
I'm trying out iteratively updating some 80,000 Hours pages that we don't have time to do big research projects on right now. To this end, I've just released an update to https://80000hours.org/problem-profiles/improving-institutional-decision-making/ — our problem profile on improving epistemics and institutional decision making.
This is sort of a tricky page because there is a lot of reasonable-seeming disagreement about what the most important interventions are to highlight in this area.
I think the previous version had some issues: It was confusing, a...
Hey Holden,
Thanks for these reflections!
Could you maybe elaborate on what you mean by a 'bad actor'? There's some part of me that feels nervous about this as a framing, at least without further specification -- like maybe the concept could be either applied too widely (e.g. to anyone who expresses sympathy with "hard-core utilitarianism", which I'd think wouldn't be right), or have a really strict definition (like only people with dark tetrad traits) in a way that leaves out people who might be likely to (or: have the capacity to?) take really harmful actions.
Thanks Vaidehi -- agree! I think another key part of why it's been useful is that it's just really readable/interesting -- even for people who aren't already invested in the ideas.
Hey! Arden here, also from 80,000 Hours. I think I can add a few things here on top of what Bella said, speaking more to the web content side of the question:
(These are additional to the 'there's a headwind on engagement time' part of Bella's answer above – though I think they're less important compared to the points Bella already mentioned about a 'covid spike' in engagement time in 2020 and marketing not getting going strongly until the latter half of 2022.)
The career guide (https://80000hours.org/career-guide/) was very popular. In 2019 we deprioritised
Thank you Max for all your hard work and all the good you've done in your role. Your colleagues' testimonials here are lovely to see. I think it's really cool you're taking care of yourself and thinking ahead in this way about handing off responsibility - even though I'm sure it's hard.
Good luck with the transition <3
Nice post. One thought on this - you wrote:
"I’d be especially excited for people to spread messages that help others understand - at a mechanistic level - how and why AI systems could end up with dangerous goals of their own, deceptive behavior, etc. I worry that by default, the concern sounds like lazy anthropomorphism (thinking of AIs just like humans)."
I agree that this seems good for avoiding the anthropomorphism (in perception and in one's own thought!) but I think it'll be important to emphasise when doing this that these are conceivable ways and...
[writing in my personal capacity, but asked an 80k colleague if it seemed fine for me to post this]
Thanks a lot for writing this - I agree with a lot of (most of?) of what's here.
One thing I'm a bit unsure of is the extent to which these worries have implications for the beliefs of those of us who are hovering more around 5% x-risk this century from AI, and who are one step removed from the bay area epistemic and social environment you write about. My guess is that they don't have much implication for most of us, because (though what you say is way bette...
The 5% figure seems pretty common, and I think this might also be a symptom of risk inflation.
There is a huge degree of uncertainty around this topic. The factors involved in any prediction vary by many orders of magnitude, so it seems like we should expect the estimates to vary by orders of magnitude as well. So you might get some people saying the odds are 1 in 20, or 1 in 1000, or 1 in a million, and I don't see how any of those estimates can be ruled out as unreasonable. Yet I hardly see anyone giving estimates of 0.1% or 0.001%.
I think people are using 5% as a stand-in for "can't rule it out". Like, why did you settle at 1 in 20 instead of 1 in a thousand?
Hi vmasarik,
Arden from 80k here. Yes https://80000hours.org/career-reviews/founder-impactful-organisations/ is the general and up to date write-up covering founding a charity. I'm not sure exactly what previous page you are referring to, but it sounds to me like it will be represented by https://80000hours.org/career-reviews/founder-impactful-organisations/ (Though https://80000hours.org/career-reviews/founding-effective-global-poverty-non-profits/ might be helpful as well if you are thinking specifically of global health and development charities -- thoug...
I fulfil my gwwc pledge by donating each month to the EA funds animal welfare fund for the fund managers to distribute as they see fit. I trust them to make a better decision than I will on the individual charities' effectiveness since I don't have that much time/expertise to look into it.
I think the long-run future is incredibly important, and I spend my labour mostly on that. But my guess (though I'm pretty unsure) is that my donations do more good in animal welfare than in longtermism-focused things. Perhaps the new landscape should change that but I ha...
Thank you for writing this - strong +1. At 80k we are going to be thinking carefully about what this means for our career advice and our ways of communicating - how this should change things and what we should do going forward. But there’s a decent amount we still don’t know and it will also just take time to figure that all out.
It feels like we've just gotten a load of new information, and there’s probably more coming, and I am in favour of updating on things carefully.
Hey, Arden from 80k here -
It'd take more looking into stuff/thinking to talk about the other points, but I wanted to comment on something quickly: thank you for pointing out that the philosophy PhD career profile and the competitiveness of the field weren't sufficiently highlighted on the GPR problem profile. We’ve now added a note about it in the "How to enter" section.
I wrote the career review when I'd first started at 80k, and for me it was just an oversight not to link to it and its points more prominently on the GPR problem profile.
Nice! I should have mentioned somewhere: the 80K website is huge and has tons of articles on partly-overlapping topics, written over many years by a bunch of different people. If there's an inconsistency, my first guess would have been that one of the articles is out-of-date or they're just different perspectives at 80K that no one noticed need to be brought into contact to hash out who's right.
One reason might be that this framework seems to bake totalist utilitarianism into longtermism (by considering expansion/contraction and average wellbeing increase/decrease as the two types of longtermist progress/regress), whereas longtermism is compatible with many ethical theories?
Again there doesn’t seem to be a strong reason to think there’s an upper bound to the amount of people that could be killed in a war featuring widespread deployment of AI commanders or lethal autonomous weapons systems.[17]
So on technological grounds, at least, there seem to be no strong reasons to think that the distribution of war outcomes continues all the way to the level of human extinction.
Sounds right!
This made me realise that my post is confusing/misleading in a particular way -- because of the context of the 80,000 Hours problem profiles pag...
Thanks for this post!
I strongly agree with this:
...This seems odd to consider an ‘existential’ risk - there are many ways in which we can imagine positive or negative changes to expected future quality of life (see for example Beckstead’s idea of trajectory change). Classing low-value-but-interstellar outcomes as existential catastrophes seems unhelpful both since it introduces definitional ambiguity over how much net welfare must be lost for them to qualify, and since questions of expected future quality of life are very distinct from questions of future q
Yay! Glad you're doing this.
Whenish might the results be available? (e.g. by the new year, or considerably after?)
Thanks!
I'd say it's pretty uncertain when we'll start publishing the main series of posts. For example, we might work on completing a large part of the series, before we start releasing individual posts, and we may use a format this year where we put more results on a general dashboard, and then include a smaller set of analyses in the main series of posts. But best guess is early in the new year.
That said, we'll be able to provide results/analyses for specific questions you might want to ask about essentially immediately after the survey closes.
Great post.
One disagreement:
Principle 3: Our explicit, subjective credences are approximately accurate enough, most of the time, even in crazy domains, for it to be worth treating those credences as a salient input into action.
I think for me at least, and I'd guess for other people, the thing that makes the explicit subjective credences worth using is that, since we have to make prioritisation decisions//decisions about how to act anyway, and we're going to make them using some kind of fuzzy approximated expected value reasoning, making our probabiliti...
Thanks Rebecca, I see how that's a confusing way to organise things -- will pass on this feedback.
Thanks David - it seems like an important harm to consider if we've caused people who'd otherwise be doing valuable work in global health / animal welfare / other issues to leave the EA community // not do as valuable work.
Thanks! Agree about there being tradeoffs here. Curious if you have more to say on this:
Mainly, I worry that (mainly through social dynamics) some people are pushed out of careers where they would actually have more impact, by moving into careers where they can't thrive as well
Am I right in thinking that the worry is that, by raising the status of some careers, 80k creates social pressure to pursue those rather than the one you have greater personal fit for?
(Do you think there’s a (reasonable) amount of emphasis on personal fit we could present which would mostly ameliorate your worries on this?)
Arden here from 80k -- just wanted to note the figures you cite are from a survey and were not 80k's overall views.
Our articles put AI risk closer to 10% (https://80000hours.org/problem-profiles/artificial-intelligence/) and nano much lower though we don't try to estimate it numerically (we have a mini writeup here https://80000hours.org/problem-profiles/atomically-precise-manufacturing/)
Seems like we should update that article anyway though. Thanks for drawing my attention to it.
https://www.xriskology.com/books
I guess if that were the reason, it'd probably be because people worry that it implies Rees might agree with a bunch of Torres' views they think are very bad. Though I think that forwarding someone's book or blurbing someone's book is pretty consistent with disagreeing strongly with a bunch of their stuff (if you even know about it).
Unsure why this was downvoted. Upvoted for being a possible reason he might not be mentioned more (not saying it's a good reason).
Sidenote: if we're so parochial that Cambridge is too far for Oxford-dominated EA to take notice of what goes on there... that seems like pretty bad news.
Oxford vs Cambridge seems more likely to me than the blurb explanation because Torres' book was published in 2017 and would only explain changes after that time, but I don't have any particular reason to think anything changed at that time. Happy to be corrected though.
I like this post and also worry about this phenomenon.
When I talk about personal fit (and when we do so at 80k) it's basically about how good you are at a thing/the chance that you can excel.
It does increase your personal fit for something to be intuitively motivated by the issue it focuses on, but I agree that it seems way too quick to conclude then that your personal fit with that is higher than other things (since there are tons of factors and there are also lots of different jobs for each problem area), let alone that that means you should work on that issue all things considered (since personal fit is not the only factor).