Research Engineering Intern at the Center for AI Safety. Helping to write the AI Safety Newsletter. Studying CS and Economics at the University of Southern California, and running an AI safety club there.
Hey, tough choice! Personally I’d lean towards PPE. Primarily that’s driven by the high opportunity cost of another year in school. Which major you choose seems less important than finding something you love and doing good work in it a year sooner.
Two other factors: First, you can learn AI outside of the classroom fairly well, especially since you can already program. I’m an economics major who’s taken a few classes in CS and done a lot of self-study, and that’s been enough to work on some AI research projects. Second, policy is plausibly more important for AI safety than technical research. There’s been a lot of government focus on slowing down AI progress lately, while technical safety research seems like it will need more time to prepare for advanced AI. The fact that you won’t graduate for a few years mitigates this a bit — maybe priorities will have changed by the time you graduate.
What would you do during a year off? Is it studying PPE for one year? I think a lot of the value of education comes from signaling, so without a diploma to show for it this year of PPE might not be worth much. If there’s a job or scholarship or something, that might be more compelling. Some people would suggest self-study, but I’ve spent time working on my own projects at home, and personally I found it much less motivating and educational than being in school or working.
Those are just my quick impressions, so don't lean too much on anyone (including 80K!). You have to understand the motivations behind a plan for yourself in order to execute it well. Good luck, and I'm always happy to chat about this stuff.
Very interesting article. Some forecasts of AI timelines (like BioAnchors) are premised on compute efficiency continuing to progress as it has for the last several decades. Perhaps these arguments are less forceful against 5-10 year timelines to AGI, but they're still worth exploring.
I'm skeptical of some of the headwinds you've identified. Let me go through my understanding of the various drivers of performance, and I'd be curious to hear how you think of each of these.
Parallelization has driven much of the recent progress in effective compute budgets. Three factors enable parallelization:
Overall, I don't see strong evidence that any of these factors are hitting hard barriers. Instead, the most relevant trend I see in the next 5 years is the rise of AI programming assistants, which could significantly accelerate progress in kernel optimization and algorithms.
I'd highlight two other factors affecting effective compute budgets:
Overall, I used to argue that AI progress would soon slow, but I've lost a lot of Bayes points to folks like Elon, Sam Altman, and Daniel Kokotajlo. A slowdown is entirely possible, perhaps even likely. But it's a live possibility that the world could be transformed by human-level AI within a span of only a few years. Safety efforts should address the full range of possible outcomes, but short-timelines scenarios are the most dangerous and most neglected, so that's where I'm focusing most of my attention right now.
I built a preliminary model here: https://colab.research.google.com/drive/108YuOmrf18nQTOQksV30vch6HNPivvX3?authuser=2
It’s definitely too simple to treat as strong evidence, but it shows some interesting dynamics. For example, the level of alignment rises at first, then falls rapidly once AI deception skills exceed human oversight capacity. I sent it to Tyler and he agreed: cool, but not actual evidence.
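To give a rough flavor of that dynamic, here's a minimal sketch in Python. This is not the Colab model itself; the exponential capability curve, the linear oversight curve, the scaling constants, and the 0.02 update step are all illustrative assumptions.

```python
import numpy as np

# Toy dynamic (illustrative only): alignment improves while human oversight
# outpaces AI deception, then degrades once deception overtakes oversight.
steps = 100
capability = 1.05 ** np.arange(steps)      # AI capability grows exponentially (assumption)
deception = 0.2 * capability               # deception scales with capability (assumption)
oversight = 1.0 + 0.05 * np.arange(steps)  # human oversight improves roughly linearly (assumption)

alignment = np.empty(steps)
alignment[0] = 0.5
for t in range(1, steps):
    # Alignment rises while oversight exceeds deception, falls once it doesn't.
    gap = oversight[t] - deception[t]
    alignment[t] = np.clip(alignment[t - 1] + 0.02 * np.sign(gap), 0.0, 1.0)

peak = int(alignment.argmax())
print(f"Alignment peaks around step {peak}, then declines to {alignment[-1]:.2f}.")
```

With these made-up parameters, alignment climbs early on and then drops steadily once deception crosses the oversight curve, which is the qualitative shape the model produces.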
If anyone wants to work on improving this, feel free to reach out!
Some argue that the computational demands of deep learning, coupled with the end of Moore's Law, will limit AI progress. The most convincing counterargument, in my opinion, is that algorithms could become much more efficient in their use of compute. Historically, algorithmic improvements have halved the amount of compute needed to reach a given level of image-classification performance every 9 months. AI is already being used to improve the rate of AI progress (including to improve hardware), so full automation could speed up AI progress even further.
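As a back-of-the-envelope illustration of how quickly a 9-month halving time compounds (the 36-month horizon is just an example I picked):

```python
# A 9-month algorithmic halving time compounds quickly.
months = 36
halving_time = 9                          # months per halving (historical estimate cited above)
reduction = 2 ** (months / halving_time)  # 36 / 9 = 4 halvings
print(f"~{reduction:.0f}x less compute needed after {months} months")  # -> ~16x
```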
Yep, this is a totally reasonable question. People have worked on it before: https://www.brookings.edu/research/aligned-with-whom-direct-and-social-goals-for-ai-systems/
Many people concerned with existential threats from AI believe that the hardest technical challenge is aligning an AI to do any specific thing at all. They argue that we will have little control over the goals and behavior of superhuman systems, and that solving the problem of aligning AI with any one human would eliminate much of the existential risk associated with AI. See here and here for explanations.
That’s a good point! Joe Carlsmith makes a similar step-by-step argument, but includes a specific step about whether the existence of rogue AI would lead to catastrophic harm. It would have been nice to see that step in Bengio’s argument too.
Carlsmith: https://arxiv.org/abs/2206.13353