|  | If AGI is developed before 2030 | If AGI is developed after 2030 |
| --- | --- | --- |
| Expectations | AGI will be built by an organization that’s already trying to build it (85%) | Some governments will be in the race (80%) |
|  | Compute will still be centralized at the time AGI is developed (60%) | More companies will be in the race (90%) |
|  | National government policy won’t have strong positive effects (70%) | China is more likely to lead than pre-2030 (85%) |
|  | The best strategies will have more variance (75%) | There will be more compute suppliers (90%) |
| Comparatively More Promising Strategies (under timelines X) | Aim to promote a security mindset in the companies currently developing AI (85%) | Focus on general community building (90%) |
|  | Focus on corporate governance (75%) | Build the AI safety community in China (80%) |
|  | Target outreach to highly motivated young people and senior researchers (80%) | Coordinate with national governments (65%) |
|  | Avoid publicizing AGI risk (60%) |  |
|  | Beware of large-scale coordination efforts (80%) |  |
Probability estimates in the "Promising Strategies" category have to be interpreted as the likelihood that this strategy/consideration is more promising/important under timelines X than timelines Y.
Miles Brundage recently argued that AGI timeline discourse might be overrated. He makes a lot of good points, but I disagree with one thing. Miles says: “I think the correct actions are mostly insensitive to timeline variations.”
Unlike Miles, I think that if timelines differ by more than a couple of years, the choice of actions does depend on them. In particular, our approach to governance should be very different depending on whether we think that AGI will be developed in ~5-10 years or after that. In this post, I list some of the likely differences between a world in which AGI is developed before ~2030 and one in which it is developed after, and discuss how those differences should affect how we approach AGI governance. I discuss most of the strategies and considerations in relative terms, i.e. I argue why they’re likely to be significantly more crucial under certain timelines than others. I am discussing these specific strategies and considerations because I believe they are important for AI governance, or at least likely to be effective under one of the two timelines I am considering.
I chose 2030 as a cut-off point because it is easy to remember and it seems to make sense to differentiate between the actions that should be prioritized in the time leading up to 2030 (~5-10 years timelines) and those that should be prioritized after 2030 (15-20 years and beyond timelines). But perhaps a better way to read this post is ‘the sooner you think AGI will be developed, the more likely my points about the pre-2030 AGI world are to be true’, and vice versa.
Epistemic status and reasoning behind publishing this:
- It seems to me that not many people have given detailed thought to this issue, which is why I wrote this post; to me, it is a major consideration. I included probability estimates to help you assess my level of certainty for each claim, even though not all claims may be easy to verify.
- If there were no trade-offs to consider, we could implement almost all interventions at the same time. However, in practice, talent is limited and I expect that there will be trade-offs in terms of how resources are allocated among different timelines. I expect to update a significant portion of the claims I have made based on feedback and new information that becomes available in the coming years. This is because there may be considerations that I have missed and AI governance is complex and uncertain. Therefore, my probability estimates have low resilience.
Thanks to Nicole Nohemi, Felicity Reddel, Andrea Miotti, Fabien Roger and Gerard van Smeden for the feedback on this post.
If AGI is developed before 2030, the following is more likely to be true:
AGI will be built by an organization that’s already trying to build it (85%)
Building huge AI models takes a lot of accumulated expertise. The organizations currently working on AI rely on huge internal libraries and repositories of tricks that they’ve built up over time. It’s unlikely that a new organization or actor, starting from scratch, could achieve this within a couple of years. This means that if AGI is developed before 2030, it’s likely to be first developed by one of the (<15) companies that are currently working on it.
In decreasing order of likelihood:
- Most likely: OpenAI or DeepMind, since their experience will be hard for others to beat within 7 years.
- Possibly: one of the FAANG companies
- Least likely: a recently created lab (Adept, Inflection, Keen Technologies, Cohere, Character, etc.)
This is relevant because DeepMind and OpenAI are more concerned about safety than others. They both have alignment teams and their leaders have expressed commitments to safety, whereas (for example) Meta and Amazon seem less interested in safety.
Compute will still be centralized at the time AGI is developed (60%)
The compute supply chain is currently highly centralized at several points. This is partly because, even though selling compute is lucrative, the machines and fabs needed to make chips are extremely expensive, so companies need to make a massive initial investment just to get started. On top of that, the required initial R&D investments are huge.
This is relevant because we can leverage the compute supply chain for AI governance. For example, we could encourage suppliers to put on-chip safety mechanisms in place. However, this is more likely to work if there are fewer companies in the supply chain.
National government policy won’t have strong positive effects (70%)
Governments are slow, and the policy cycle is long. Advocacy efforts usually take years to bear fruit: first, advocates have to raise awareness and shift public opinion in the right direction, and politicians will only take note if their constituents care. If you think that AGI will be developed by 2030, there is likely not enough time to influence national governments in a way that leads them to take strong measures, so governance interventions that rely on national policy or law are less likely to be useful.
I think this matters particularly for the US because the contribution of the US government seems indispensable for most policies that can significantly impact AGI timelines or AGI governance. However, the US government will likely only get involved in X-risk related topics if there is strong support from the public. Achieving the necessary levels of support would require a significant shift in public opinion. Unfortunately, such major shifts probably take more than 7 years and are highly uncertain processes.
The best strategies will have more variance (75%)
If timelines are short, we should be more willing to tolerate variance, since we have much less time to explore the possible strategies and can’t wait for slower, less risky strategies to pan out. Timelines seem crucial for calibrating our risk aversion, especially for funders. I think this consideration is one of the most important effects of timelines on macro-strategy.
Here are some decisions on which this should have a significant effect:
- We absolutely want org X to exist. Founders Y and Z seem good but not ideal. Should we fund them?
- In a world with post-2030 timelines, delaying the creation of a crucial organization for a couple of years to get better founders is most likely the best thing to do. It’s probably not the case with pre-2030 timelines.
- As an organization, should we wait a few more years before entering the AI governance space, in order to have a better and clearer understanding of what we ought to do?
- Shorter timelines should lower the threshold an idea must clear to be worth implementing.
If you think that AGI will be developed before 2030, it would make sense to:
Aim to promote a security mindset in the companies currently developing AI (85%)
Some governance strategies involve pushing for a security mindset among AI developers (using outreach) so that they voluntarily decide to do things that make AGI less dangerous. Any researcher at DeepMind, Google Brain, or OpenAI who starts taking AI risks seriously is a huge win because it:
- Increases the expected amount of work in alignment
- Decreases the expected amount of capabilities work
- Increases the chances that, if their lab develops the first AGI, it will be an aligned one.
If you think AGI will be developed very soon, these strategies are more likely to be promising since there are still relatively few companies aiming to develop AGI. Later, there will be more such companies, increasing coordination difficulty and decreasing the cost-effectiveness of efforts that will target companies individually. For example, you might encourage companies to create an alignment team if they don’t have one or, if they do have one, increase their funding.
Prioritize targeted outreach to highly motivated young people and senior researchers (80%)
If timelines are short, focus outreach efforts on senior researchers and on people who’ll be able to make contributions within the next few years, i.e. highly motivated young people. Community building aimed at undergrads who want to complete a standard curriculum before working on the problem is likely to have a much lower EV under short timelines. General EA community building would also have much less time to pay off than more targeted AI safety outreach.
Avoid publicizing AGI risk among the general public (60%)
It’s difficult to explain why AI is dangerous without also explaining why it’s powerful. This means that trying to mitigate risk by raising awareness might backfire: you might inadvertently persuade governments to enter the race sooner than they otherwise would. Governments and national defense have a worrying track record of not caring whether a powerful technology is dangerous to develop. If your timelines are short, it therefore might make sense not to publicize AGI risk to the general public, lest governments take note and enter the race. On the other hand, if your timelines are longer, governments will likely become aware of AGI’s power anyway, so it might make more sense to publicize AGI, putting the emphasis on its risks.
Note that this advice holds only while governments don’t know much about AGI. If AGI is already being discussed or is already an important consideration, then talking about accident risks is likely a good strategy.
Beware of large-scale coordination efforts (80%)
Large-scale coordination efforts involving many actors usually take a lot of time to have effects. Therefore, if your governance plan for pre-2030 timelines relies on such a mechanism, you should probably begin implementing it in the next few years, and thus start building now the coalitions you would need to succeed. Favoring faster-moving actors might also be wise.
Focus on corporate governance (75%)
There will be more AI companies in the future, and governments will also be in the race. This means that achieving cultural change and coordination among AI companies by leveraging corporate governance is a much less promising strategy under longer timelines than under shorter ones, compared to governance that involves governments. On the other hand, some of the top labs’ governance teams are genuinely concerned about AGI risks and seem to be acting to make AGI development as safe as possible. Engaging with these actors and ensuring that they have all the tools and ideas they need to actually cut the right risks therefore seems promising to me.
If AGI is developed after 2030, the following is more likely to be true:
Some governments will be in the race (80%)
National governments are likely to eventually realize that AGI is incredibly powerful and will try to build it. In particular, national defense organizations may try to develop it. If you believe that AGI will be developed after 2030, it is possible that it will be developed by a government, since governments may have had time to catch up with the organizations currently working on it by then.
More companies will be in the race (90%)
If AGI is developed later than 2030, it may be developed by a new company that has not yet started building it. Given the number of companies that started racing in 2022, it seems plausible that in 2030 there will be more than 50 companies in the race.
China is more likely to lead (85%)
Chinese companies and the government are currently lagging in AI development, but they’re making progress quite quickly. I think they’re decently likely to catch up to Western companies eventually (I’d put 35% by 2035). The recent export controls on semiconductors may have made that a lot more difficult, but they’ll probably try harder than ever to develop their own chip supply chain. This seems to be a crucial consideration because Chinese AI developers currently don’t care much about safety, and the safety community doesn’t have much influence in China.
There will be more compute suppliers (90%)
Because it has become clear in recent years that compute will be hugely important, there are likely to be more companies at all stages of the compute supply chain in the future, despite the high barriers to entry. For example, the Chinese government is currently trying to build its own compute supply chain, and startups, such as Cerebras, are trying to enter the market. This means that compute governance strategies that rely on compute companies will probably be less promising.
If you think that AGI will be developed after 2030, it would make sense to:
Focus on general community building (90%)
The later AI is developed, the more useful it is to do community building now, because many of the results of community building take a while to bear fruit. If a community builder gets an undergraduate computer scientist interested in AI safety, it may be many years before they make their greatest contributions. Great community builders also recruit and/or empower new community builders, who go on to form their own cohorts, which means that a community builder today might be counterfactually responsible for many new AI researchers in 20 years. If you think that AGI won’t be developed for 10 years, building the AGI safety community (or the EA community in general) is probably one of the most effective things for you to do.
Note that community building is promising even on shorter time scales but is particularly exciting under post-2030 timelines (potentially more than anything else).
Build the AI safety community in China (80%)
If your timelines are longer, AGI is more likely to be developed by the Chinese government or a Chinese company. There is currently not a large EA or AI safety community in China. So if you think AGI will be developed after 2030, you should try to build bridges with Chinese ML researchers and AI developers. It’s especially important not to frame AI governance questions adversarially, as ‘US vs China’, as this could make it harder for the US and European safety communities to build alliances with Chinese developers. AI safety may become politicized as ‘an annoying thing that domineering Americans are trying to impose on us’ rather than common sense.
Coordinate with national governments (65%)
This is a more promising strategy if your timelines are longer, because national governments are more likely to be both developing AGI themselves and generally interested in AGI policy. One way to influence AGI governance in national governments is to become a civil servant or politician; another is to become a recognized expert in AGI governance in the relevant country.
I am unsure whether theories of change that rely on compliance mechanisms will be more or less effective after 2030. The lengthy process of policy development, including setting standards and establishing an audit system that prevents loopholes, suggests that compliance mechanisms may be more effective after 2030. However, if China is in a leadership position, compliance mechanisms may have to rely heavily on the Brussels effect, which is not very reliable.
I would say that post-2030 timelines probably favor these theories of change, but not very confidently.
To summarize, whether you have a 5-10 year timeline or a 15-20 year timeline changes the strategic landscape in which we operate and thus changes some of the strategies we should pursue.
Under pre-2030 timelines:
- National policy matters less (governments are not involved in the race)
- Corporate governance matters more
- There are fewer than 15 key labs that are most likely to develop AGI, and they are located in the US and the UK
- AI safety field building should be very focused on people who can contribute in the next few years (i.e. senior researchers & highly motivated people)
- China is much less likely to lead the race at any point
- Compute is centralized, which leaves room for compute governance
I'm looking forward to reading your comments and disagreements about this important topic. I'm also happy to make a call if you want to talk more in-depth about this topic (https://calendly.com/simeon-campos/).
This post was written collaboratively by Siméon Campos and Amber Dawn Ace. The ideas are Siméon’s; Siméon explained them to Amber, and Amber wrote them up. Then Siméon partly rewrote the post on that basis. We would like to offer this service to other EAs who want to share their as-yet unwritten ideas or expertise.
If you would be interested in working with Amber to write up your ideas, fill out this form.
This is a prediction about the number of suppliers that represent more than 1% of the market they operate in, not the size of the market or the total production. Some events could lead to some supply chain disruptions that could overall decrease the total production of chips.
Probability estimates in this category have to be interpreted as the likelihood that this strategy/consideration is more promising/important under timelines X than timelines Y.
Naturally, if timelines turn out to be longer, the same "couple of years" estimation differences make a smaller difference in what actions would be best.
Main caveat: recent startups such as Adept.ai and Cohere.ai were built by team leads or major researchers from leading labs. Thanks to their expertise, they’re fairly likely to reach the state of the art in at least one subfield of deep learning. That said, most of these organizations are quite likely to lack the compute and money that OpenAI and DeepMind have.
By strong, I mean measures in the reference class of “Constrain labs to airgap and box their SOTA models while they train them”.
In the exploration vs. exploitation dilemma, you should start exploiting earlier, and thus tolerate a) more downside risk and b) a higher chance of not having found the maximum.
And wants to contribute to solving alignment.
The senior researchers that are the most relevant are probably those working in top labs and those who are highly regarded in the ML community. Reaching them is much less tractable than reaching young people, but having a senior researcher start caring about AI safety is probably at least 10 times more valuable over the next 5 years than having a junior one do so. Thus, I’d expect this intervention to be highly valuable under short timelines.
Obviously, how talented the people are matters a lot. I mostly want to underline the fact that for someone to start contributing in the next couple of years, the most important factor is probably motivation.
Note that under post-2030 timelines, the effect of having a lot more PhD students in AI safety in the next few years is probably quite high, mostly due to cultural effects of “AI safety is legible and is a big thing in academia”.
One key consideration here is the medium you’re using to do that publicization. AI alignment is a very complex problem and thus you need to find the media that maximize the complexity you can successfully transmit. Movies seem to be a promising avenue in that respect.
Note that it’s recommended to talk to people with experience on the topic if you want to do that.
Something that I'd like this post to address that it doesn't is that to have "a timeline" rather than a distribution seems ~indefensible given the amount of uncertainty involved. People quote medians (or modes, and it's not clear to me that they reliably differentiate between these) ostensibly as a shorthand for their entire distribution, but then discussion proceeds based only on the point estimates.
I think a shift of 2 years in the median of your distribution looks like a shift of only a few % in your P(AGI by 20XX) numbers for all 20XX, and that means discussion of what people who "have different timelines" should do is usually better framed as "what strategies will turn out to have been helpful if AGI arrives in 2030".
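To make this concrete, here is a minimal sketch of that claim, assuming purely-illustrative lognormal "years until AGI" distributions (the 15- and 17-year medians and the sigma of 0.8 are hypothetical choices for illustration, not anyone's actual forecast):

```python
# Minimal sketch: how much does shifting the median of a (hypothetical)
# lognormal "years until AGI" distribution by 2 years move P(AGI by ~2030)?
from math import erf, log, sqrt

def p_agi_within(years: float, median: float, sigma: float = 0.8) -> float:
    """P(AGI within `years`) for a lognormal distribution with the given median."""
    z = (log(years) - log(median)) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF evaluated at z

# A 2-year shift in the median moves the 7-year (~2030) probability by only a few points:
print(round(p_agi_within(7, median=15), 2))  # ~0.17
print(round(p_agi_within(7, median=17), 2))  # ~0.13
```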
While this doesn't make discussion like this post useless, I don't think this is a minor nitpick. I'm extremely worried by "plays for variance", some of which are briefly mentioned above (though far from the worst I've heard). I think these tend to look good only on worldviews which are extremely overconfident and treat timelines as point estimates/extremely sharp peaks. More balanced views, even those with a median much sooner than mine, should typically realise that the EV gained in the worlds where things move quickly is not worth the expected cost in worlds where they don't. This is in addition to the usual points about co-operative behaviour when uncertain about the state of the world, adverse selection, the unilateralist's curse etc.
There's a balance here for communication purposes - concrete potential timelines are easier for some of us to understand than distributions. Perhaps both could be used?
Thanks for your comment!
That's an important point that you're bringing up.
My sense is that at the movement level, the consideration you bring up is super important. Indeed, even though I have fairly short timelines, I would like funders to hedge for long timelines (e.g. fund stuff for China AI Safety). Thus I think that big actors should have in mind their full distribution to optimize their resource allocation.
That said, I have two disagreements:
Finally, I agree that "the best strategies will have more variance" is not good advice for everyone. The reason I decided to write it rather than not is that I think the AI governance community tends to have too high a degree of risk aversion (which is a good feature in their daily jobs), which mechanically penalizes a decent number of actions that are far more useful under shorter timelines.
This is a really useful and interesting post that I'm glad you've written! I agree with a lot of it, but I'll mention one bit I'm less sure about.
I think we can have more nuance about governments "being in the race" or their "policy having strong effects". I agree that pre-2030, a large, centralised, government-run development programme like the Apollo Project is less likely (I assume this is the central thing you have in mind). However, there are other ways governments could be involved, including funding, regulating and 'directing' development and deployment.
I think cyber weapons and cyber defence is a useful comparison. Much of the development - and even deployment - is led by the private sector: defence contractors in the US, criminals in some other states. Nevertheless, much of it is funded, regulated and directed by states. People didn't think this would happen in the late 1990s and 2000s - they thought it would be private sector led. But nevertheless with cyber, we're now in a situation where the major states (e.g. those in the P5, with big economies, militaries and nuclear weapons) have the preponderance of cyber power - they have directed and are responsible for all the largest cyber attacks (Stuxnet, 2016 espionage, NotPetya, WannaCry etc). It's a public-private partnership, but states are in the driving seat.
Something similar might happen with AI this side of 2030, without the situation resembling the Apollo Project.
For much more on this, Jade Leung's thesis is great: Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies
Thanks for your comment!
A couple of remarks:
And finally, I like the example you gave on cyber. The point I was making was something like "Your theories of change for pre-2030 timelines shouldn't rely too much on national government policy" and my understanding of what you're saying is something like "that may be right, but national governments are still likely to have a lot of (bad by default) influence, so we should care about them".
I basically had in mind this kind of scenario where states don't do the research themselves but are backing some private labs to accelerate their own capabilities, and it makes me more worried about encouraging states to think about AGI. But I don't put that much weight on these scenarios yet.
How confident are you that governments will get involved in meaningful private-public collaboration around AGI by 2030? A way of operationalizing that could be "A national government spends more than a billion $ in a single year on a collaboration with a lab with the goal to accelerate research on AGI".
If you believe that it's >50%, that would definitely update me towards "we should still invest a significant share of our resources in national policy, at least in the UK and the US, so that they don't make really bad moves".
I think my point is more like "if anyone gets anywhere near advanced AI, governments will have something to say about it - they will be a central player in shaping its development and deployment." It seems very unlikely to me that governments would not notice or do anything about such a potentially transformative technology. It seems very unlikely to me that a company could train and deploy an advanced AI system of the kind you're thinking about without governments regulating and directing it. On funding specifically, I would probably be >50% on governments getting involved in meaningful private-public collaboration if we get closer to substantial leaps in capabilities (though it seems unlikely to me that AI progress will get to that point by 2030).
On your regulation question, I'd note that the EU AI Act, likely to pass next year, already proposes the following requirements applying to companies providing (eg selling, licensing or selling access to) 'general purpose AI systems' (eg large foundation models):
So they'll already have to do (post-training) safety testing before deployment. Regulating the training of these models is different and harder, but even that seems plausible to me at some point, if the training runs become ever huger and potentially more consequential. Consider the analogy that we regulate biological experiments.
I think that our disagreement comes from what we mean by "regulating and directing it."
My rough model of what usually happens in national governments (and not the EU, which is a lot more independent from its citizens than the typical national government) is that there are two scenarios:
I feel like we're extremely likely to be in scenario 2 regarding AI, and thus that no significant measure will be taken, which is why I put the emphasis on "no strong [positive] effect" on AI safety. So basically, I feel like the best you can probably do in national policy is something like "prevent them from doing bad things" (which is really good if it's a big risk) or "do mildly good things". But to me, it's quite unlikely that we go from a world where we die to a world where we don't die thanks to a theory of change focused on national policy.
The EU AI Act is a bit different: as I said above, the EU is much less tied to the daily worries of citizens and thus operates under fewer constraints. I therefore think it's plausible that the EU does something ambitious on GPAIS, but unfortunately it's unlikely that the US will replicate something similar locally, and the EU's compliance mechanisms are not very likely to cut the worst risks for UK and US companies.
I think that it's plausible but not likely, and given that it would be the intervention that would cut the most risks, I tend to prefer corporate governance which seems significantly more tractable and neglected to me.
Out of curiosity, could you refer to a specific event you'd expect to see "if we get closer to substantial leaps in capabilities"? I think that it's a useful exercise to disagree fruitfully on timelines and I'd be happy to bet on some events if we disagree on one.
It could be that I love this because it's what I'm working on (raising safety awareness in corporate governance) but what a great post. Well structured, great summary at the end.
I am quite confused about what probabilities here mean, especially with prescriptive sentences like "Build the AI safety community in China" and "Beware of large-scale coordination efforts."
I also disagree with the "vibes" of probability assignment to a bunch of these, and the lack of clarity on what these probabilities entail makes it hard to verbalize these.
Hey Misha! Thanks for the comment!
As I wrote in note 2, I'm claiming here that this claim is more likely to be true under these timelines than under the other timelines. But how could I make that clearer without cluttering things too much? Maybe by putting note 2 under the table in italics?
I see. I hesitated over the trade-off between (1) "put no probabilities" and (2) "put vague probabilities", because I feel that the second gives a lot more signal on how confident I am in what I say and allows people to disagree more fruitfully, but at the same time it gives a "seriousness" signal, which is not good when the predictions are not actual predictions.
Do you think that putting no probabilities would have been better?
By "I also disagree with the vibes of probability assignment to a bunch of these", do you mean that it seems over/underconfident in a bunch of ways when you try to do a similar exercise?
Well, yeah, I struggle with interpreting that:
Sorry for the lack of clarity: I meant that despite my inability to interpret probabilities, I could sense their vibes, and I hold different vibes. And disagreeing with vibes is kinda difficult because you are unsure if you are interpreting them correctly. Typical forecasting questions aim to specify the question and produce probabilities to make underlying vibes more tangible and concrete — maybe allowing to have a more productive discussion. I am generally very sympathetic to the use of these as appropriate.
Thanks so much for this! I'm a short-term global development kind of guy, but this post was so well written and made so much sense it got me interested in this AGI stuff. Mad respect for your ability to clearly communicate quite complicated topics - you have a future in writing.
Haha, you probably don't realize it, but "you" is actually four people: Amber Dawn for the first draft of the post; me (Siméon) for the ideas, the table and the structure of the post; and me, Nicole Nohemi & Felicity Reddel for the partial rewriting of the draft to make it clearer.
So the credits are highly distributed! And thanks a lot, it's great to hear that!
I strongly disagree with "Avoid publicizing AGI risk among the general public" (disclaimer: I'm a science fiction novelist about to publish a novel about AGI risk, so I may be heavily biased). Putin said in 2017 that "the nation that leads in AI will be the ruler of the world". If anyone who could play any role at all in developing AGI (or uncontrollable AI as I prefer to call it) isn't trying to develop it by now, I doubt very much that any amount of public communication will change that.
On the other hand, I believe our best chance of preventing or at least slowing down the development of uncontrollable AI is a common, clear understanding of the dangers, especially among those who are at the forefront of development. To achieve that, a large amount of communication will be necessary, both within development and scientific communities and in the public.
I see various reasons for that. One is the availability heuristic: people don't believe there is an AI x-risk because they've never seen it happen outside of science fiction movies, and nobody but a few weird people in the AI safety community is talking seriously about it (very similar to climate change a few decades ago). Another reason is social acceptance: as long as everyone thinks AI is great and the nation with the most AI capabilities wins, if you're working on AI capabilities, you're a hero. On the other hand, if most people think that strong AI poses a significant risk to their future and that of their kids, this might change how AI capabilities researchers are seen, and how they see themselves. I'm not suggesting disparaging people working at AI labs, but I think working in AI safety should be seen as "cool", while blindly throwing more and more data and compute at a problem and seeing what happens should be regarded as "uncool".
Thanks for your comment!
First, bear in mind that when people in industry and policymaking talk about "AI", they usually mean non-deep-learning techniques or vision deep learning, simply because they mostly don't know the academic ML field but have heard that "AI" is becoming important in industry. So this sentence is little evidence that Russia (or any other country) is trying to build AGI, and I'm at ~60% that Putin wasn't thinking about AGI when he said that.
I think that you're deeply wrong about this. Policymakers and people in industry, at least until ChatGPT, had no idea what was going on (e.g. at the AI World Summit two months ago, very few people even knew about GPT-3). SOTA large language models are not really properly deployed, so nobody cared about or even knew about them (until ChatGPT, at least). The level of investment right now in top training runs probably doesn't go beyond $200M. The GDP of the US is $20 trillion; likewise for China. Even a country like France could unilaterally put $50 billion into AGI development and accelerate timelines quite a lot within a couple of years.
Even post-ChatGPT, people are very bad at projecting what it means for the coming years, and still hold a prior that human intelligence is very specific and can't be beaten, which prevents them from realizing the full power of this technology.
I really strongly encourage you to go talk to actual people from industry and policy to get a sense of their knowledge of the topic. And I would strongly recommend not publishing your book until you have done that. I also hope that a lot of people who have thought about these issues have proofread your book, because it's the kind of thing that could really increase P(doom) substantially.
I think that to make your point, it would be easier to defend the line that "even if more governments got involved, that wouldn't change much". I don't think that's right because if you gave $10B more to some labs, it's likely they'd move way faster. But I think that it's less clear.
I agree that it would be something good to have. But the question is: is it even possible to have such a thing?
I think that within the scientific community, it's roughly possible (but then your book/outreach medium must be highly targeted towards that community). Within the general public, I think it's ~impossible. Climate change, a problem that is much easier to understand and explain, is already way too complex for the general public to have a good idea of what the risks are and which solutions to those risks are promising (e.g. many people's top priorities are to eat organic food, recycle and reduce plastic consumption).
I agree that communicating with the scientific community is good, which is why I said that you should avoid publicizing only among "the general public". If you really want to publish a book, I'd recommend targeting the scientific community, which is not at all the same public as the general public.
"On the other hand, if most people think that strong AI poses a significant risk to their future and that of their kids, this might change how AI capabilities researchers are seen, and how they see themselves"
I agree with this theory of change, and I think it points a lot more towards "communicate in the ML community" than "communicate towards the general public". Publishing great AI capabilities work is mostly cool to other AI researchers, not so much to the general public. People in San Francisco (where most of the AGI labs are) also don't care much about the general public and whatever it thinks; the subculture there, and what is considered "cool", is really different from what the general public thinks is cool. As a consequence, I think they mostly care about what their peers think of them. So if you want to change the incentives, I'd recommend focusing your efforts on the scientific & tech community.
Strongly agree, upvoted.
Just a minor point on the Putin quote, as it comes up so often, he was talking to a bunch of schoolkids, encouraging them to do science and technology. He said similarly supportive things about a bunch of other technologies. I'm at >90% he wasn't referring to AGI. He's not even that committed to AI leadership: he's taken few actions indicating serious interest in 'leading in AI'. Indeed, his Ukraine invasion has cut off most of his chip supplies and led to a huge exodus of AI/CS talent. It was just an off-the-cuff rhetorical remark.
Oh that's fun, thanks for the context!
As you point out yourself, what makes people interested in developing AGI is progress in AI, not the public discussion of potential dangers. That "nobody cared about" LLMs is certainly not true - I'm pretty sure the relevant people watched them closely. That many people aren't concerned about AGI or doubt its feasibility by now only means that THOSE people will not pursue it, and any public discussion will probably not change their minds. There are others who think very differently, like the people at OpenAI, Deepmind, Google, and (I suspect) a lot of others who communicate less openly about what they do.
I don't think you can easily separate the scientific community from the general public. Even scientific papers are read by journalists, who often publish about them in a simplified or distorted way. Already there are many alarming posts and articles out there, as well as books like Stuart Russell's "Human Compatible" (which I think is very good and helpful), so keeping the lid on the possibility of AGI and its profound impacts is way too late (it was probably too late already when Arthur C. Clarke wrote "2001 - A Space Odyssey"). Not talking about the dangers of uncontrollable AI for fear that this may lead to certain actors investing even more heavily in the field is both naive and counterproductive in my view.
I will definitely publish it, but I doubt very much that it will have a large impact. There are many other writers out there with a much larger audience who write similar books.
I'm currently in the process of translating it to English so I can do just that. I'll send you a link as soon as I'm finished. I'll also invite everyone else in the AI safety community (I'm probably going to post an invite on LessWrong).
Concerning the Putin quote, I don't think that Russia is at the forefront of development, but China certainly is. Xi has said similar things in public, and I doubt very much that we know how much they currently spend on training their AIs. The quotes are not relevant, though, I just mentioned them to make the point that there is already a lot of discussion about the enormous impact AI will have on our future. I really can't see how discussing the risks should be damaging, while discussing the great potential of AGI for humanity should not.
What do you mean by "the relevant people"? I would love for us to talk about specifics here and operationalize what we mean. I'm pretty sure E. Macron hasn't thought deeply about AGI (i.e. has never thought for more than an hour about timelines), and I'm at 50% that if he had any deep understanding of the changes it will bring, he would already be racing. Likewise for Israel, a country with a strong track record of becoming a leader in technologies that are crucial for defense.
I think you wrongly assume here that people have understood the implications of AGI and that they can't update at all once the first systems start being deployed. What you say could be true if you think that most of your arguments hold because of ChatGPT. I think it's quite plausible that since ChatGPT, and probably even more in 2023, there will be deployments that make mostly everyone who matters aware of AGI. I don't have a good sense of how policymakers have updated yet.
Yeah, I realize thanks to this part that a lot of the debate should happen on specifics rather than at a high level, as we're doing here. Thus, chatting about your book in particular will be helpful for that.
Great! Thanks for doing that!
FYI I don't think that it's true.
Regarding all our discussion, I realized I didn't mention a fairly important argument: a major failure mode specifically regarding risks is the following reaction from ~any country: "Omg, China is developing bad AGIs, so let's develop safe AGIs first!".
This can happen in two ways:
Thanks a lot for engaging with my arguments. I still think you're substantially overconfident about the positive aspects of communicating AGI X-risks to the general public, but I appreciate that you took the time to consider and respond to my arguments.