I work on the 1-on-1 team at 80,000 Hours talking to people about their careers; the opinions I've shared here (and will share in the future) are my own.
Epistemic status: I've thought about both how people should think about PhDs and how people should think about timelines a fair bit, both in my own time and in my role as an advisor at 80k, but I wrote this fairly quickly. I'm sharing my take on this rather than intending to speak on behalf of the whole organisation, though my guess is that the typical view is pretty similar.
Going into a bit more detail - I think there are a couple of aspects to this question, which I’m going to try to (imperfectly) split up:
In terms of how to think about timelines in general, the main advice I'd give is to try to avoid the mistake of interpreting median estimates as single points. Take this Metaculus question as an example: it has a median of July 2027, but that doesn't mean the community predicts that AGI will arrive then! The median just indicates the date by which the community thinks there's a 50% chance the question will have resolved. To get more precise about this, we can tell from the graph that the community estimates:
Taking these estimates literally, and additionally assuming that any work that happens after this question resolves is totally useless, you might then conclude that delaying your career by 6 years would cause it to have 41/91 ≈ 45% of the value. In that case, the delay would be worth it if it increased the impact you could have by a bit more than a factor of 2.
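(For anyone who wants to play with the numbers, here's a rough sketch of that calculation. I'm assuming the 91% and 41% are the chances the question is still unresolved at the point you'd start working now versus after a 6-year delay; plug in your own numbers.)

```python
# Rough sketch of the 'delay by 6 years' calculation above.
# Assumption (mine, read off the graph rather than quoted from Metaculus):
# the question is still unresolved with probability 0.91 when you'd start
# working now, 0.41 if you start 6 years later, and any work done after
# resolution is worth nothing.

p_unresolved_now = 0.91    # chance your work happens 'in time' if you start now
p_unresolved_later = 0.41  # chance it happens 'in time' if you start 6 years later

relative_value = p_unresolved_later / p_unresolved_now
breakeven_multiplier = 1 / relative_value

print(f"Delayed career is worth {relative_value:.0%} as much, all else equal")
print(f"So the delay pays off if it multiplies your impact by more than {breakeven_multiplier:.1f}x")
```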
Having done all of that work (and glossed over a bunch of subtlety in the last comment for brevity), I now want to say that you still shouldn't take the Metaculus estimates at face value. The reason is that (as I'm sure you've noticed, and as you've seen in the comments) they just aren't going to be that reliable for this kind of question. Nothing is - this kind of prediction is really hard.
The net effect of this increased uncertainty should be (I claim) to flatten the probability distribution you are working with. This basically means that planning for AGI as if timelines were a point estimate makes even less sense than looking at the distribution would already suggest.
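(A toy sketch of what I mean by 'flatten', with entirely made-up numbers: if you only partly trust a forecast, one simple move is to mix it with a much more spread-out distribution, which lowers the peaks and fattens the tails.)

```python
# Toy illustration of 'flattening' a forecast you only partly trust:
# mix it with a much more spread-out distribution. All numbers made up.

forecast = {2027: 0.30, 2030: 0.25, 2035: 0.20, 2045: 0.15, 2060: 0.10}  # probability mass per rough date bucket
broad_prior = {year: 1 / len(forecast) for year in forecast}             # same mass on every bucket

trust = 0.6  # how much weight you put on the forecast itself
flattened = {y: trust * forecast[y] + (1 - trust) * broad_prior[y] for y in forecast}

for year in forecast:
    print(f"{year}: forecast {forecast[year]:.0%} -> flattened {flattened[year]:.0%}")
```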
Before I say anything about how a PhD decision interacts with timelines, it seems worth mentioning that the decision whether to do a PhD is complicated and highly dependent on the individual who’s considering it and the specific situation they are in. Not only that, I think it depends on a lot of specifics about the PhD. A quick babble of things that can vary a lot (and that not everyone will have the same preferences about):
When you then start thinking about how timelines affect things, it’s worth noting that a model which says ‘PhD students are students, so they are learning and not doing any work, doing a PhD is therefore an n-year delay in your career where n is the number of years it takes’ is badly wrong. I think it usually makes sense to think of a PhD as more like an entry-level graduate researcher job than ‘n more years of school’, though often the first year or two of a US-style PhD will involve taking classes, and look quite a lot like ‘more school’, so “it’s just a job” is also an imperfect model. As one example of research output during a PhD, Alex Turner’s thesis seems like it should count for more than nothing!
The second thing to note is that some career paths require a PhD, and other paths come very close to requiring it. For these paths, choosing to go into work sooner isn't giving you a 6 year speedup on the same career track - you're just taking a different path. Often, the first couple of years on that path will involve a lot of learning the basics and getting up to speed, certainly compared to 6 years in, which again pushes in the direction of timelines making a smaller difference than it first seems. Importantly though, the difference between the paths might be quite big, and point in either direction. Choosing a different path to the PhD will often be the correct decision for reasons that have nothing to do with timelines.
Having said that, there are some things that are worth bearing in mind:
I'm very happy to see this! Thank you for organising it.
I read this comment as implying that HLI's reasoning transparency is currently better than GiveWell's, and think that this is both:
False.
Not the sort of thing it is reasonable to bring up before immediately hiding behind "that's just my opinion and I don't want to get into a debate about it here".
I therefore downvoted, as well as disagree voting. I don't think downvotes always need comments, but this one seemed worth explaining as the comment contains several statements people might reasonably disagree with.
I'm keen to listen to this, thanks for recording it! Are you planning to make the podcast available on other platforms (Stitcher, Google Podcasts, etc.)? I haven't found it on those.
whether you have a 5-10 year timeline or a 15-20 year timeline
Something I'd like this post to address, but which it doesn't, is that having "a timeline" rather than a distribution seems ~indefensible given the amount of uncertainty involved. People quote medians (or modes, and it's not clear to me that they reliably differentiate between these) ostensibly as a shorthand for their entire distribution, but then discussion proceeds based only on the point estimates.
I think a shift of 2 years in the median of your distribution looks like a shift of only a few percentage points in your P(AGI by 20XX) numbers for all 20XX, and that means discussion of what people who "have different timelines" should do is usually better framed as "what strategies will turn out to have been helpful if AGI arrives in 2030".
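(To illustrate with made-up numbers rather than anyone's actual distribution: here's a quick sketch where years-until-AGI is modelled as lognormal with a fairly wide spread, and shifting the median by 2 years moves P(AGI within a given horizon) by roughly 3-7 percentage points. The distribution and parameters are illustrative assumptions, not something I'd defend.)

```python
from math import erf, log, sqrt

# Illustrative only: model years-until-AGI as lognormal with a fairly wide
# spread, and compare two views whose medians differ by 2 years.
# The distribution and parameters are made up for the sake of the example.

def p_agi_within(years: float, median: float, sigma: float = 0.8) -> float:
    """P(AGI within `years`) if time-to-AGI ~ Lognormal(ln(median), sigma)."""
    z = (log(years) - log(median)) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))

for horizon in (5, 7, 10, 15, 20):
    p_short = p_agi_within(horizon, median=13)  # 'shorter timelines' view
    p_long = p_agi_within(horizon, median=15)   # median shifted 2 years later
    print(f"P(AGI within {horizon:>2} yrs): {p_short:.0%} vs {p_long:.0%} "
          f"(difference {p_short - p_long:.1%})")
```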
While this doesn't make discussion like this post useless, I don't think this is a minor nitpick. I'm extremely worried by "plays for variance", some of which are briefly mentioned above (though far from the worst I've heard). I think these tend to look good only on worldviews which are extremely overconfident and treat timelines as point estimates (or extremely sharp peaks). More balanced views, even those with a median much sooner than mine, should typically realise that the EV gained in the worlds where things move quickly is not worth the expected cost in worlds where they don't. This is in addition to the usual points about co-operative behaviour when uncertain about the state of the world, adverse selection, the unilateralist's curse etc.
Huh, I took 'confidently' to mean you'd be willing to offer much better odds than 1:1.
I'm going to try to stop paying so much attention to the story while it unfolds, which means I'm retracting my interest in betting. Feel free to call this a win (as with Joel).
This seems close to what you're looking for.
If there's any money left over after you've agreed a line with Joel and Nuno, I've got next.
No worries on the acknowledgement front (though I'm glad you found chatting helpful)!
One failure mode of the filtering idea is that the AGI corporation does not use it because of the alignment tax, or because they don't want to admit that they are creating something that is potentially dangerous
I think it's several orders of magnitude easier to get AGI corporations to use filtered safe data than to agree to stop using any electronic communication for safety research. Why is it appropriate to consider the alignment tax of "train on data that someone has nicely collected and filtered for you so you don't die", which is plausibly negative, but not the alignment tax of "never use Google Docs or Gmail again"?
I think preserving the secrecy-based value of AI safety plans will realistically be a Swiss cheese approach that combines many helpful but incomplete solutions (hopefully without correlated failure modes)
Several others have made this point, but you can't just say "well, anything we can do to make the model safer must be worth trying because it's another layer of protection" if adding that layer massively hurts all of your other safety efforts. Safety is not merely a function of the number of layers, but also of how good they are, and the proposal would force every other alignment research effort to use completely different systems. That the Manhattan Project happened at all does not constitute evidence that the cost of this huge shift would be trivial.
[context: I'm one of the advisors, and manage some of the others, but am describing my individual attitude below]
FWIW I don't think the balance you indicated is that tricky, and think that conceiving of what I'm doing when I speak to people as 'charismatic persuasion' would be a big mistake for me to make. I try to:
in a work context, that is. I'm unfortunately usually pretty anxious about, and therefore paying a bunch of attention to, whether people are angry/upset with me, though this is getting better, and it's easy to mostly 'switch off' on calls because the person in front of me takes my full attention.