Thanks for the comment. I think the ways an aligned AGI could make the world safer against unaligned AGIs can be divided into two categories: preventing unaligned AGIs from coming into existence, or stopping already existing unaligned AGIs from causing extinction. The second is the offense/defense balance. The first is what you are pointing at.
If an AGI were to prevent people from creating AI, this would likely be against their will. A state would be the only actor that could do so legally (assuming there is regulation in place), and also the most practical one. Therefore, I think your option falls under what I described in my post as "Types of AI (hardware) regulation may be possible where the state actors implementing the regulation are aided by aligned AIs". I think this is indeed a realistic option and it may reduce existential risk somewhat. Getting the regulation in place at all, however, seems more important at this point than developing what I see as a pretty far-fetched and, at the moment, intractable way to implement it more effectively.
Hi Peter, thanks for your comment. We do think the conclusions we draw are robust given our sample size. Of course it depends on the signal: if there's a change in, e.g., awareness from 5% to 50%, a small sample size should be plenty to show that. However, if you're trying to measure a difference of only 1%, your sample size needs to be much larger. While we stand by our conclusions, we do think there would be significant value in others doing similar research, if possible with larger sample sizes.
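To illustrate the point with a rough power calculation (a sketch with illustrative numbers, not part of our methodology), here is what a standard two-proportion test implies about required sample sizes:

```python
# Rough sample-size sketch using statsmodels' power analysis for a
# two-proportion z-test (alpha = 0.05, power = 0.8). Numbers are illustrative.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

analysis = NormalIndPower()

for p1, p2 in [(0.05, 0.50), (0.05, 0.06)]:
    effect = proportion_effectsize(p1, p2)  # Cohen's h for the two proportions
    n = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.8,
                             alternative='two-sided')
    print(f"{p1:.0%} vs {p2:.0%}: ~{n:.0f} respondents per group")

# Detecting a 5% -> 50% shift needs only a handful of respondents per group,
# while detecting a 5% -> 6% shift needs several thousand.
```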
Again, thanks for your comments; we will take your input into account.
Thanks Gabriel! Sorry for the confusion. TE stands for The Economist, so this item: https://www.youtube.com/watch?v=ANn9ibNo9SQ
Thanks for your reply. I mostly agree with many of the things you say, but I still think work to reduce the number of emission rights should at least be on the list of high-impact things to do, and as far as I'm concerned, significantly higher than a few of the other paths mentioned here.
If you still want to do technology-specific work, I think offshore solar might also be impactful and neglected.
As someone who worked in sustainable energy technology for ten years (wind energy, modeling, smart charging, activism) before moving into AI xrisk, my favorite neglected topic is carbon emission trading schemes (ETS).
ETSs such as those implemented by the EU, China, and others have a waterbed effect. The total amount of emissions is capped, and trading sets the price of those emissions for all sectors under the scheme (in the EU: electricity and heavy industry, expanding to other sectors). That means that:
- extra emission reductions within a covered sector don't lower total emissions; they just free up allowances for other participants to buy and use;
- the only way to lower total emissions under the scheme is to reduce the number of emission rights (the cap) itself.
It's just crazy to think about all the good-hearted campaigning, awareness creation, hard engineering work, money, etc. that is being directed at decreasing emissions in sectors covered by an ETS. To the best of my understanding, as long as the ETS is working correctly, this effort is completely meaningless. At the same time, I knew of exactly one person in my country, the Netherlands, trying to reduce the number of ETS emission rights. This was the only person potentially achieving something genuinely useful for the climate.
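To make the waterbed effect concrete, here is a toy sketch (my own illustrative numbers, not real EU ETS data), assuming the cap binds and is not adjusted:

```python
# Toy model of the ETS waterbed effect: with a binding cap, total emissions
# are set by the cap, not by what any single covered sector does.
cap = 100  # total allowances issued by the regulator (arbitrary units)

# Unconstrained emissions each covered sector would like to emit
demand = {"electricity": 80, "heavy_industry": 60}

def total_emissions(demand, cap):
    # Sectors trade allowances until the cap is exhausted, so realised
    # emissions are the lesser of total demand and the cap.
    return min(sum(demand.values()), cap)

print(total_emissions(demand, cap))       # 100: the cap binds

demand["electricity"] -= 20               # a campaign cuts one sector's demand
print(total_emissions(demand, cap))       # still 100: others buy the freed allowances

print(total_emissions(demand, cap - 20))  # 80: only retiring allowances lowers emissions
```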
If I wanted to do something neglected in the climate space, I would try to inform all those people currently wasting their energy that what they should really do is try to reduce the number of ETS emission rights and let the market figure out the rest. (Note that several of the trajectories recommended above, such as working on nuclear power, reducing industry emissions, and deep geothermal energy (depending on the use case), are all covered by the ETS (at least in the EU), and improvements there would therefore not benefit the climate.)
If countries or regions have an ETS, successful emission reduction should really start (and basically stop) there. It's also quite a neglected area, so there's plenty of low-hanging fruit!
I don't know if everyone should drop everything else right now, but I do agree that raising awareness about AI xrisk should be a major cause area. That's why I quit my work on the energy transition about two years ago to found the Existential Risk Observatory, and this is what we've been doing since (resulting in about ten articles in leading Dutch newspapers, this one in TIME, perhaps the first comms research, a sold-out debate, and a passed parliamentary motion in the Netherlands).
I think two significant things are missing from the list of what people can do to help:
1) Please, technical people, work on AI Pause regulation proposals! There is basically only one paper right now, possibly because everyone else thought a pause was too far outside the Overton window. Now we're discussing a pause anyway, and I personally think it might be implemented at some point, but we don't have proper AI Pause regulation proposals, which is a really bad situation. Researchers (both policy and technical), please fix that, fix it publicly, and fix it soon!
2) You can start institutes or projects that aim to inform the societal debate about AI existential risk. We've done that, and I would say it has worked pretty well so far. Others could do the same thing. Funders should be able to choose from a range of AI xrisk communication projects to spend their money most effectively, and this is currently really not the case.
Hi Vasco, thank you for taking the time to read our paper!
Although we did not specify this in the methodology section, we addressed the "mean variation in likelihood" between countries and surveys throughout the research, such as in section 4.2.2. I hope this answers your question; we agree this aspect should have been specified more clearly in the methodology section.
If you have any more questions, do not hesitate to ask.
Hi Joshc, thanks, and sorry for the slow reply; it's a good idea! Unfortunately we don't really have time right now, but we might do something like this in the future. Also, if you're interested in receiving the raw data, let us know. Thanks again for the suggestion!
It's definitely good to think about whether a pause is a good idea. Together with Joep from PauseAI, I wrote down my thoughts on the topic here.
Since then, I have been thinking a bit more about the pause and comparing it to a more frequently mentioned option, namely applying model evaluations (evals) after training to see how dangerous a model is.
I think the difference between the supposedly more reasonable approach of evals and the supposedly more radical approach of a pause is actually smaller than it seems. Evals aim to detect dangerous capabilities. What will need to happen when those evals find that a model has indeed developed such capabilities? Then we'll need to implement a pause. Choosing between evals and a pause is therefore mostly a choice about timing, not between fundamentally different approaches.
With evals, however, we'll move precisely to the brink, look straight into the abyss, and plan to halt at the last possible moment. Unfortunately, we're in thick mist and can't see the abyss (this is true even with evals, since we don't know which capabilities will prove existentially dangerous, and an existential event may occur even before the evals are run).
And even if we knew where to halt: we'll need to make sure that the leading labs practically succeed in pausing themselves (there may be thousands of people working there), that the models don't get leaked, that we implement the policy that's needed, that we sign international agreements, and that we gain support from the general public. This is all difficult work that will realistically take time.
Pausing isn't as simple as pressing a button; it's a social process. No one knows how long the process of getting everyone on the same page will take, but it could be quite a while. Is it wise to start that process at the last possible moment, namely when the evals turn red? I don't think so. The sooner we start, the higher our chance of survival.
Also, there's a separate point that I think is not yet sufficiently addressed: we don't know how to implement a pause beyond a few years' duration. If hardware and algorithms keep improving, frontier models could democratize. While I believe this problem can be solved by international (peaceful) regulation, I also think this will be hard, and we will need good plans (hardware or data regulation proposals) for how to do it in advance. We currently don't have these, so I think working on them should be a much higher priority.