Yup, those conditions seem roughly right. I'd guess the cost to train will be somewhere between $30B and $3T. I'd also guess the government will be very willing to get involved once AI becomes a major consideration for national security (and there exist convincing demonstrations or common knowledge that this is true).
I'm guessing that open weight models won't matter that much in the grand scheme of things - largely because once models start having capabilities which the government doesn't want bad actors to have, companies will be required to make sure bad actors don't get access to models (which includes not making the weights available to download). Also, the compute needed to train frontier models and the associated costs are increasing exponentially, meaning there will be fewer and fewer actors willing to spend money to make models they don't profit from.
I get that it can be tricky to think about these things.
I don't think the outcomes are overdetermined - there are many research areas that can benefit a lot from additional effort, policy is high leverage and can absorb a lot more people, and advocacy is only starting and will grow enormously.
AGI being close possibly decreases tractability, but on the other hand increases neglectedness, as every additional person makes a larger relative increase in the total effort spent on AI safety.
The fact that it's about extinction increases, not decreases, the value of marginally shifting the needle. Working on AI safety saves thousands of present human lives in expectation.
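To make that arithmetic concrete, here's a toy back-of-the-envelope sketch with numbers I'm assuming for illustration (a 10^-6 shift in extinction probability and ~8 billion people alive), not figures from the comment itself:

```python
# Toy back-of-the-envelope (assumed numbers, purely illustrative):
# if one person's work shifts extinction probability down by one in a million,
# the expected number of *present* lives saved is already in the thousands.
present_population = 8e9   # roughly 8 billion people alive today (assumption)
prob_shift = 1e-6          # assumed per-person reduction in extinction probability
expected_present_lives = present_population * prob_shift
print(expected_present_lives)  # 8000.0
```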
I think grant evaluators should take into account their intuitions on what kinds of research are most valuable rather than relying on expected value calculations.
For EV calculations where the future is part of the equation, I think using microdooms as a measure of impact is pretty practical and can resolve some of the problems inherent in dealing with enormous numbers, because many people have cruxes which are downstream of microdooms. Some think there'll be 10^40 people; some think there'll be 10^20. Usually, if two people disagree on how valuable the long-term future is, they don't have a common unit of measurement for what to do today. But if they both use microdooms, they can compare things 1:1 in terms of their effect on the future, without having to flesh out all of the post-AGI cruxes.
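As a minimal sketch of why the post-AGI cruxes cancel out (the intervention numbers here are hypothetical; the two population estimates are the ones mentioned above): the relative value of two interventions measured in microdooms doesn't depend on how large you think the future is.

```python
# Toy sketch (hypothetical numbers): the *relative* value of two interventions
# depends only on their microdoom estimates, not on the future-population estimate.
def expected_future_value(microdooms_averted, future_population):
    # 1 microdoom averted = a 1e-6 reduction in the probability of losing the future.
    return microdooms_averted * 1e-6 * future_population

intervention_a = 5.0   # hypothetical: averts 5 microdooms
intervention_b = 1.0   # hypothetical: averts 1 microdoom

for future_population in (1e20, 1e40):  # the two cruxy estimates from above
    ratio = (expected_future_value(intervention_a, future_population)
             / expected_future_value(intervention_b, future_population))
    print(f"future population {future_population:.0e}: A is {ratio:.0f}x B")
# Either way, A comes out 5x B, so the comparison goes through without
# settling how valuable the long-term future is.
```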
Yup, I'd say that from the perspective of someone who wants a good AI safety (/EA/X-risk) student community, Harvard is the best place to be right now (I say this as an organizer, so grain of salt). Not many professional researchers in the area though, which is sad :(
As for the actual college side of Harvard, here's my experience (as a sophomore planning to do alignment):
If community building potential is part of your decision process, then I would consider not going to Harvard, as there are already a bunch of people there doing great things. MIT/Stanford/other top unis in general seem much more neglected in that regard, so if you could see yourself doing community building I'd keep that in mind.
Check out this post. My views from then have slightly shifted (the numbers stay roughly the same), towards:
Building on the space theme, I like Earthrise, as it has very hopeful vibes, but also points to the famous picture that highlights the fragility and preciousness of earth-based life.
I think I'll pass for now but I might change my mind later. As you said, I'm not sure betting on ASI makes sense given all the uncertainty about whether we're even alive post-ASI, about the value of money and property rights, and about whether agreements would be upheld. But thanks for offering, I think it's epistemically virtuous.
Also I think people working on AI safety should likely not go into debt for security clearance reasons.