(Uncertain) My guess would be that a global conflict would increase AI investment considerably, since (I think) R&D spending typically rises in wartime. And AI may turn out to be particularly strategically relevant.
Though you need to consider the counterfactual where the talent currently at OAI, DM, and Anthropic all works at Google or Meta instead, with far less of a safety culture.
I think a central idea here is that a superintelligence could innovate and thus find more energy-efficient means of running itself. We already see a trend of language models with the same capabilities becoming more energy-efficient over time through algorithmic improvements and better parameter/data ratios. So even if the first superintelligence requires a lot of energy, the systems developed in the period after it will probably need much less.
Weakly held opinion that you could be investing too much into this process. I'd expect to hit diminishing returns after ~50-100 hours (though I have no expertise whatsoever).
Sounds good! Thanks for the reply.
I would additionally find it helpful to get some insight into what you are prioritizing, so I can give feedback on project plans (or maybe your future post on impact metrics will include that?). But I know communication takes a lot of effort, so it may be easier to just receive that feedback as you're rolling out the projects. Looking forward to your next post!
Thanks!
I think even community building is a rather broad category, though most of the projects I know of are online, so they probably don't apply here. It would be helpful to have some examples.
Thanks for the collection! (Note there is a typo in the title: "Should you should focus on the EU if you're interested in AI governance for longtermist/x-risk reasons?")