Great work, thanks a lot for doing this research! As you say, this is still very neglected. Also happy to see you're citing our previous work on the topic. And interesting finding that fear is such a driver! A few questions:
- Could you share which three articles you've used? Perhaps this is in the dissertation, but I didn't have the time to read that in full.
- Since it's only one article per emotion (fear, hope, mixed), perhaps some other article property (other than emotion) could also have led to the difference you find?
- What follow-up research would yo...
Congratulations on a great prioritization!
Perhaps the research that we (Existential Risk Observatory) and others (e.g. @Nik Samoylov, @KoenSchoen) have done on effectively communicating AI xrisk could be something to build on. Here's our first paper and three blog posts (the second includes a measurement of the effectiveness of Eliezer's TIME article; its numbers are actually pretty good!). We're currently working on an update of our baseline public-awareness measurement and on further research.
Best of luck and we'd love to cooperate!
Nice post! Yet another path to impact could be to influence international regulation processes, such as the AI Safety Summit, through influencing the EU and member states positions. In a positive scenario, the EU could even take a mediation role between the US and China.
It's definitely good to think about whether a pause is a good idea. Together with Joep from PauseAI, I wrote down my thoughts on the topic here.
Since then, I have been thinking a bit more about the pause, comparing it to a more frequently mentioned option, namely applying model evaluations (evals) to see how dangerous a model is after training.
I think the difference between the supposedly more reasonable approach of evals and the supposedly more radical approach of a pause is actually smaller than it seems. Evals aim to detect dangerous capabilities. What will ...
Thanks for the comment. I think the ways an aligned AGI could make the world safer against unaligned AGIs can be divided into two categories: preventing unaligned AGIs from coming into existence, or stopping already existing unaligned AGIs from causing extinction. The second is the offense/defense balance. The first is what you point at.
If an AGI were to prevent people from creating AI, this would likely be against their will. Assuming regulation is in place, a state would be the only actor that could do so legally, and it would also be the most practical one. Therefore, I ...
Hi Peter, thanks for your comment. We do think the conclusions we draw are robust given our sample size. Of course it depends on the signal: if there's a change in e.g. awareness from 5% to 50%, a small sample size should be plenty to show that. However, if you're trying to measure a signal of only a 1% difference, your sample size should be much larger. While we stand by our conclusions, we do think there would be significant value in others doing similar research, if possible with larger sample sizes.
Again, thanks for your comments, we take the input into account.
Thanks Gabriel! Sorry for the confusion. TE stands for The Economist, so this item: https://www.youtube.com/watch?v=ANn9ibNo9SQ
Thanks for your reply. I mostly agree with many of the things you say, but I still think work to reduce the number of emission rights should at least be on the list of high-impact things to do, and as far as I'm concerned, significantly higher than a few other paths mentioned here.
If you'd still want to do technology-specific work, I think offshore solar might also be impactful and neglected.
As someone who has worked in sustainable energy technology for ten years (wind energy, modeling, smart charging, activism) before moving into AI xrisk, my favorite neglected topic is carbon emission trading schemes (ETS).
ETSs such as those implemented by the EU, China, and others have a waterbed effect. The total amount of emissions is capped, and trading sets the price of those emissions for all sectors under the scheme (in the EU: electricity and heavy industry, expanding to other sectors). That means that:
I don't know if everyone should drop everything else right now, but I do agree that raising awareness about AI xrisks should be a major cause area. That's why I quit my work on the energy transition about two years ago to found the Existential Risk Observatory, and this is what we've been doing since (resulting in about ten articles in leading Dutch newspapers, this one in TIME, perhaps the first comms research, a sold out debate, and a passed parliamentary motion in the Netherlands).
Two significant things are missing from the list of what people can do to help:
1...
Hi Vasco, thank you for taking the time to read our paper!
Although we did not specify this in the methodology section, we addressed the "mean variation in likelihood" between countries and surveys throughout the research, such as in section 4.2.2. I hope this answers your question; this aspect should indeed have been specified better in the methodology section.
If you have any more questions, do not hesitate to ask.
Hi Joshc, thanks and sorry for the slow reply, it's a good idea! Unfortunately we don't really have time right now, but we might do something like this in the future. Also, if you're interested in receiving the raw data, let us know. Thanks again for the suggestion!
I hope that this article sends the signal that pausing the development of the largest AI models is good, that informing society about AGI xrisk is good, and that we should find a coordination mechanism (regulation) to make sure we can effectively stop training models that are too capable.
What I think we should do now is:
1) Write good hardware regulation policy proposals that could reliably pause the development towards AGI.
2) Campaign publicly to get the best proposal implemented, first in the US and then internationally.
This could be a path to victory.
Crossposting a comment: As co-author of one of the mentioned pieces, I'd say it's really great to see the AGI xrisk message mainstreaming. It isn't going nearly fast enough, though. Some (Hawking, Bostrom, Musk) have already spoken out about the topic for close to a decade. So far, that hasn't been enough to change common understanding. Those, such as myself, who hope that some form of coordination could save us, should give all they have to make this go faster. Additionally, those who think regulation could work should work on robust regulation proposals w...
Thanks Peter for the compliment! If there is something in particular you're interested in, please let us know and perhaps we can take it into account in future research projects!
I agree that this strategy is underexplored. I would prioritize work in this direction as follows:
Awesome initiative! At the Existential Risk Observatory, we are also focusing on outreach to the societal debate; I think this should be seen as one of the main opportunities to reduce existential risk. If you want to connect and exchange thoughts, that's always welcome.
Great idea to look into this!
It sounds a lot like what we have been doing at the Existential Risk Observatory (posts from us, website). We're more than willing to give you input insofar as that helps, and perhaps also to coordinate. In general, we think this is a really positive action and the space is wide open. So far, we have good results. We also think there is ample space for other institutes to do this.
Let's coordinate further by email; you can reach us at info@existentialriskobservatory.org. Looking forward to learning from each other!
Enough has happened to warrant a small update about the Existential Risk Observatory.
First, we made progress in our core business: informing the public debate. We have published two more op-eds (in Dutch, one with a co-author from FLI) in a reputable, large newspaper. Our pieces warn against existential risk, especially from AGI, and propose low-hanging-fruit measures the Dutch government could take to reduce risk (e.g. extra AI safety research).
A change w.r.t. the previous update, is that we see serious, leading journalists become interested in th...
Anyway, I posted this here because I think it somewhat resembles the policy of buying and closing coal mines. You're deliberately creating scarcity. Since there are losers when you do that, policymakers might respond. I think creating scarcity in carbon rights is more efficient and much easier to implement than creating scarcity in coal, but it does suffer from some of the same drawbacks.
Possibly, in the medium term. To counter that, you might want to support groups who lobby for lower carbon scheme ceilings as well.
Hey, I wasn't saying it wasn't that great :)
I agree that the difficult part is to get to general intelligence, also regarding data. Compute, algorithms, and data availability are all needed to get to this point. It seems really hard to know beforehand what kind of algorithms and data, and how much of each, one would need. I agree that basically only one source of data, text, could well be insufficient. There was a post I read on a forum somewhere (could have been here) from someone who let GPT-3 solve questions including things like 'let all odd rows of your answer be...
If you want to spend money quickly on reducing carbon dioxide emissions, you can buy emission rights and destroy them. In schemes such as the EU ETS, destroyed emission rights should lead to direct emission reduction. This has technically been implemented already. Even cheaper is probably to buy and destroy rights in similar schemes in other regions.
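As a rough sketch of the cost arithmetic, assuming a hypothetical allowance price (actual EU ETS prices vary, so check current prices before relying on this):

```python
# Rough cost sketch for buying and destroying EU ETS allowances.
# The price below is a hypothetical placeholder, not a real quote.
price_per_allowance_eur = 80.0   # one EUA covers 1 tonne of CO2
budget_eur = 1_000_000

tonnes_removed = budget_eur / price_per_allowance_eur
print(f"Allowances destroyed: {tonnes_removed:.0f} tCO2")  # 12500 tCO2
```

The same arithmetic applied to cheaper schemes in other regions would give a larger reduction per euro, as long as those schemes also have a binding cap.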
Hi AM, thanks for your reply.
Regarding your example, I think it's quite specific, as you notice too. That doesn't mean I think it's invalid, but it does get me thinking: how would a human learn this task? A human intelligence wasn't trained on many specific tasks in order to be able to do them all. Rather, it first acquired general intelligence (apparently, somewhere), and was later able to apply this to an almost infinite number of specific tasks, with typically only a few examples needed. I would guess that an AGI would solve problems in a similar way. So...
Thanks for the reply, and for trying to attach numbers to your thoughts!
So our main disagreement lies in (1). I think this is a common source of disagreement, so it's important to look into it further.
Would you say that the chance to ever build AGI is similarly tiny? Or is it just the next hundred years? In other words, is this a possibility or a timeline discussion?
Hi Ada-Maaria, glad to have talked to you at EAG and congrats for writing this post - I think it's very well written and interesting from start to finish! I also think you're more informed on the topic than most people who are AI xrisk convinced in EA, surely including myself.
As an AI xrisk-convinced person, it always helps me to divide AI xrisk into these three steps. I think superintelligence xrisk probability is the product of these three probabilities:
1) P(AGI in next 100 years)
2) P(AGI leads to superintelligence)
3) P(superintelligence destroys humanity)...
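The decomposition above can be illustrated numerically. The probabilities below are hypothetical placeholders, not estimates from this discussion:

```python
# Hypothetical probabilities for illustration only -- not actual estimates.
p_agi = 0.8        # P(AGI in next 100 years)
p_superint = 0.5   # P(AGI leads to superintelligence)
p_doom = 0.25      # P(superintelligence destroys humanity)

# xrisk probability as the product of the three conditional steps
p_xrisk = p_agi * p_superint * p_doom
print(f"P(superintelligence xrisk) = {p_xrisk:.2f}")  # 0.10
```

Note that for the product to be valid, each probability must be conditional on the previous step having occurred.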
Thanks for that context and for your thoughts! We understand the worries that you mention, and as you say, op-eds are a good way to avoid those. Most (>90%) of the other mainstream media articles we've seen about existential risk (there's a few dozen) did not suffer from these issues either, fortunately.
Thank you for the heads-up! We would love to have more information about general-audience attitudes towards existential risk, especially related to AI and other novel tech. Particularly interesting for us would be research into which narratives work best. We've done some of this ourselves, but it would be interesting to see if our results match others'. So yes, please let us know when you have this available!
High impact startup idea: make a decent carbon emissions model for flights.
Current models simply use per-flight emissions, which makes direct flights look low-emission. But in reality, some of those flights wouldn't even exist if passengers could be spread more efficiently over existing indirect flights (which is also why indirect flights are cheaper). Emission models should be relative to the counterfactual.
The startup can be for-profit. If you're lucky, better models already exist in the scientific literature. Ideal for the AI-for-good crowd.
My guess is that a few man-years of work could have a big carbon emissions impact here.
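A minimal sketch of the counterfactual idea, with made-up numbers and a deliberately naive model (the marginal-burn figure and the "flight exists anyway" probability are hypothetical):

```python
# Naive sketch of counterfactual flight emissions (all numbers hypothetical).
# Idea: a passenger's true footprint is the marginal emissions their demand
# causes, not the average emissions of the flight they happen to be on.

def average_emissions(flight_total_kg: float, passengers: int) -> float:
    """What typical calculators report: total emissions / passengers."""
    return flight_total_kg / passengers

def counterfactual_emissions(flight_total_kg: float, passengers: int,
                             p_flight_exists_anyway: float) -> float:
    """Blend of two cases: if the flight would fly anyway, only the extra
    fuel burn of one passenger counts; if the passenger's demand is what
    keeps the route running, they bear a full per-passenger share."""
    marginal_kg = 50.0  # extra fuel burn per extra passenger, hypothetical
    share_of_flight = flight_total_kg / passengers
    return (p_flight_exists_anyway * marginal_kg
            + (1 - p_flight_exists_anyway) * share_of_flight)

avg = average_emissions(100_000, 200)            # 500.0 kg per passenger
cf = counterfactual_emissions(100_000, 200, 0.9)
print(avg, round(cf, 1))  # 500.0 95.0
```

A real model would estimate `p_flight_exists_anyway` per route from load factors and schedule data; the point of the sketch is only that the average and the counterfactual numbers can differ by a large factor.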