A

aogara

2803 karma · Joined Jan 2019

Bio

Research Engineering Intern at the Center for AI Safety. Helping to write the AI Safety Newsletter. Studying CS and Economics at the University of Southern California, and running an AI safety club there. 

Posts
15

Sorted by New

aogara's Shortform · 2y ago · 1m read

Comments
360

aogara · 6d

That’s a good point! Joe Carlsmith makes a similar step-by-step argument, but he includes a specific step about whether the existence of rogue AI would lead to catastrophic harm. It would have been nice to see that step in Bengio's argument too.

Carlsmith: https://arxiv.org/abs/2206.13353

aogara · 6d

Very interesting. There's another discussion of the performance distribution here.

aogara · 7d

Ah okay, if it doesn't delay your graduation then I'd probably lean more towards CS. Self-study can be great, but I've found classes really valuable too for getting more rigorous. Of course, there are a million factors I'm not aware of -- best of luck with whichever you choose!

aogara · 8d

Hey, tough choice! Personally I’d lean towards PPE. Primarily that’s driven by the high opportunity cost of another year in school. Which major you choose seems less important than finding something you love and doing good work in it a year sooner.

Two other factors: First, you can learn AI outside of the classroom fairly well, especially since you can already program. I’m an economics major who’s taken a few classes in CS and done a lot of self-study, and that’s been enough to work on some AI research projects. Second, policy is plausibly more important for AI safety than technical research. There’s been a lot of government focus on slowing down AI progress lately, while technical safety research seems like it will need more time to prepare for advanced AI. The fact that you won’t graduate for a few years mitigates this a bit — maybe priorities will have changed by the time you graduate.

What would you do during a year off? Is it studying PPE for one year? I think a lot of the value of education comes from signaling, so without a diploma to show for it this year of PPE might not be worth much. If there’s a job or scholarship or something, that might be more compelling. Some people would suggest self-study, but I’ve spent time working on my own projects at home, and personally I found it much less motivating and educational than being in school or working.

Those are just my quick impressions; don't lean too heavily on anyone's advice (including 80K's!). You have to understand the motivations behind a plan for yourself in order to execute it well. Good luck, and I'm always happy to chat about any of this.

aogara · 10d

Very interesting article. Some forecasts of AI timelines (like BioAnchors) are premised on compute efficiency continuing to progress as it has for the last several decades. Perhaps these arguments are less forceful against 5-10 year timelines to AGI, but they're still worth exploring. 

I'm skeptical of some of the headwinds you've identified. Let me go through my understanding of the various drivers of performance, and I'd be curious to hear how you think of each of these. 

Parallelization has driven much of the recent progress in effective compute budgets. Three factors enable parallelization:

  • Hardware
    • GPUs are more easily parallelized than CPUs, as they have more cores and higher memory bandwidth. Will hardware continue its current pace of improvement?  
    • You cite an interesting paper on Nvidia GPU progress over time; it seems that the greatest speedups in consumer hardware came in the most recent generation, but improvements in industry-grade hardware peaked earlier, with the P100 in 2016. 
    • This doesn't strike me as strong evidence in any direction. Industrial progress has slowed, consumer progress has accelerated, and there are wide error bars on both of those statements because they're drawn from only four data points. 
    • Stronger evidence seems to come from Epoch's Trends in GPU Price Performance, which shows that FLOP/s per dollar has doubled every two or three years for nearly two decades (see the extrapolation sketch after this list). Do you expect this trend to continue, and if not, why? 
  • Kernels
    • Software like CUDA allows developers to specify the ordering of computations and memory transfers, which reduces idling time and improves performance. You say that "CUDA optimization...generated significant improvements but has exhausted its low-hanging fruit," but I'm not sure what the argument is for that. 
    • You do argue that the importance of kernel optimization reduces experimentation with new algorithms. I agree, but I see a different upshot. One of the biggest reasons to be bullish on ML performance is the rise of AI programming assistants. If AI programming assistants learn kernel optimization, they'll reduce the cost and runtime of experiments. New algorithms will be on a level playing field with incumbents, and we'll be more likely to see algorithmic progress that was previously bottlenecked by writing CUDA kernels. 
  • Algorithms
    • Some algorithms are easy to parallelize; others, not so much. For example, a key benefit of transformers is that they're more easily parallelized during training than RNNs, allowing them to scale. 
    • Neil Thompson has some interesting work on algorithmic progress, showing that many fundamental algorithms are provably optimal or close to it. I'm not sure if this is a relevant reference class for ML algorithms though, as runtime guarantees are far less important than measured performance. 
    • Overall, will future algorithms be easier to parallelize? It seems likely. We've done it before, and I don't have any particular reason to expect that it won't happen again.
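
To make the Epoch number concrete, here's a minimal back-of-the-envelope sketch of what that doubling time implies (the baseline FLOP/s-per-dollar value and the 2.5-year midpoint are placeholder assumptions of mine, not Epoch's figures):

```python
# Illustrative extrapolation of GPU price-performance, assuming the roughly
# 2-3 year doubling time in FLOP/s per dollar reported by Epoch.
# The baseline value is a placeholder, not a measured figure.

baseline_flops_per_dollar = 1e9   # hypothetical starting point
doubling_time_years = 2.5         # midpoint of the 2-3 year range

def projected_flops_per_dollar(years_ahead: float) -> float:
    """Project FLOP/s per dollar forward under a constant doubling time."""
    return baseline_flops_per_dollar * 2 ** (years_ahead / doubling_time_years)

for years in (5, 10, 20):
    growth = projected_flops_per_dollar(years) / baseline_flops_per_dollar
    print(f"After {years:2d} years: {growth:6.1f}x more FLOP/s per dollar")
```

If the trend held, hardware alone would buy roughly 16x more compute per dollar over a decade; the question is whether it will.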

Overall, I don't see strong evidence that any of these factors are hitting strong barriers. Instead, the most relevant trend I see in the next 5 years is the rise of AI programming assistants, which could significantly accelerate progress in kernel optimization and algorithms. 

I'd highlight two other factors affecting effective compute budgets:

  • Spending. Maybe nobody will spend more than $10B on a training run, and the current trend will slow. But if we're in a very short timelines world, then AI could be massively profitable in the next few years, and OpenAI might get the $100B investment they've been talking about. 
  • Better ML models. Some models learn more efficiently than others. Right now, algorithmic progress halves the compute necessary to reach a fixed level of performance every 16 months or every 9 months, depending on how you look at it. (This research focuses on efficiently reaching an existing level of performance -- I'm not sure how we should expect it to generalize to improvements in SOTA performance.) Again, AI coders could accelerate this. A rough sketch of how these drivers might compound is below. 
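
As an illustration of how these drivers might compound into effective compute, here's a toy sketch. Only the 16-month algorithmic halving time comes from the figure above; the spending and hardware growth rates are assumptions picked purely for illustration, not forecasts:

```python
# Toy model of effective training compute growth, combining three drivers.
# All rates are illustrative assumptions, not forecasts.

spending_doubling_years = 1.0    # assumed growth in dollars spent per training run
hardware_doubling_years = 2.5    # FLOP/s per dollar (Epoch-style trend)
algo_halving_years = 16 / 12     # compute needed for fixed performance (16-month figure)

def effective_compute_multiplier(years: float) -> float:
    """Multiplier on 'effective compute' after `years`, relative to today."""
    spending = 2 ** (years / spending_doubling_years)
    hardware = 2 ** (years / hardware_doubling_years)
    algorithms = 2 ** (years / algo_halving_years)   # cheaper to hit a given level
    return spending * hardware * algorithms

print(f"5-year multiplier: {effective_compute_multiplier(5):,.0f}x")
```

The point is just that modest exponential growth on each axis multiplies into a very large change in effective compute.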

Overall, I used to argue that AI progress would soon slow. But I've lost a lot of Bayes points to folks like Elon, Sam Altman, and Daniel Kokotajlo. A slowdown is entirely possible, perhaps even likely. But it's a live possibility that human-level AI could transform the world within a span of only a few years. Safety efforts should address the full range of possible outcomes, but short-timelines scenarios are the most dangerous and most neglected, so that's where I'm focusing most of my attention right now. 

aogara · 21d

I built a preliminary model here: https://colab.research.google.com/drive/108YuOmrf18nQTOQksV30vch6HNPivvX3?authuser=2

It’s definitely too simple to treat as strong evidence, but it shows some interesting dynamics. For example, levels of alignment rise at first, then rapidly fall once AI deception skills exceed human oversight capacity. I sent it to Tyler and he agreed — cool, but not actual evidence.

If anyone wants to work on improving this, feel free to reach out!
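
In case it helps convey the dynamic, here's an illustrative toy sketch of that pattern. To be clear, this is not the Colab model itself; the functional forms and constants are made up:

```python
# Toy dynamical sketch: measured "alignment" rises while human oversight can
# detect deception, then collapses once AI deception outpaces oversight.
# All functional forms and constants are invented for illustration.

import math

def oversight(t):       # human oversight capacity grows slowly
    return 1.0 + 0.1 * t

def deception(t):       # AI deception skill grows exponentially
    return 0.2 * math.exp(0.15 * t)

alignment = 0.5
for t in range(40):
    gap = oversight(t) - deception(t)
    # Alignment improves while oversight exceeds deception, degrades after.
    alignment = max(0.0, min(1.0, alignment + 0.05 * math.tanh(gap)))
    if t % 10 == 0:
        print(f"t={t:2d}  oversight={oversight(t):5.2f}  "
              f"deception={deception(t):5.2f}  alignment={alignment:4.2f}")
```

With these made-up numbers, alignment climbs for a while and then drops steeply once deception overtakes oversight, which is the qualitative pattern described above.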

aogara · 21d

Very cool. You may have seen this but Robin Hanson makes a similar argument in this paper. 

aogara · 23d

Some argue that the computational demands of deep learning, coupled with the end of Moore's Law, will limit AI progress. The most convincing counterargument, in my opinion, is that algorithms could become much more efficient in their use of compute. Historically, algorithmic improvements have halved the compute needed to reach a given level of performance in image classification roughly every 9 months. AI is already being used to speed up AI progress (including hardware design), and fuller automation could accelerate it further. A rough sketch of what that halving rate implies is below. 
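
As a quick illustration of why the halving rate matters for the Moore's-Law worry, here's a toy calculation. The 9-month figure comes from the claim above; the 2-year hardware doubling time is just an assumed benchmark for comparison:

```python
# How much does a 9-month algorithmic halving time offset stagnant hardware?
# Toy arithmetic; the 9-month figure is from the claim above, the rest is assumed.

halving_months = 9
moores_law_doubling_months = 24   # classic ~2-year hardware doubling, assumed

for years in (2, 5, 10):
    months = 12 * years
    compute_reduction = 2 ** (months / halving_months)
    # Years of 2x-per-2-years hardware progress that would give the same gain:
    equivalent_hw_years = (months / halving_months) * moores_law_doubling_months / 12
    print(f"{years:2d} years of algorithmic progress ~ {compute_reduction:8.1f}x "
          f"less compute needed ~ {equivalent_hw_years:4.1f} years of Moore's Law")
```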

aogara · 23d

Yep, this is a totally reasonable question. People have worked on it before: https://www.brookings.edu/research/aligned-with-whom-direct-and-social-goals-for-ai-systems/

Many people concerned with existential threats from AI believe that the hardest technical challenge is aligning an AI to reliably do any specific thing at all. They argue that we will have little control over the goals and behavior of superhuman systems, and that solving the problem of aligning an AI with any one human would eliminate much of the existential risk associated with AI. See here and here for explanations. 
