Founder of the Existential Risk Observatory here. We've focused on informing the public about xrisk for the last four years, mostly through traditional media; perhaps that's a good addition to the social media work discussed here.
We also focused on measuring our impact from the beginning. Here are a few of our EA forum posts detailing AI xrisk comms effectiveness.
We measured not only exposure, but also the effectiveness of our interventions, using surveys. Our main metric was the conversion rate (called Human Extinction Events...
Depopulation is Bad
Population has soared from 1 to 8 billion in 200 years and is set to rise to 10 billion. There is no depopulation; there is a population boom. That population boom is partially responsible for the climate crisis, biodiversity loss, and many other problems. It would be quite healthy to have a somewhat more moderate population.
If you're right, I think that would point to xrisk space funders trusting individuals way too much and institutions way too little. Thomas is a great guy, but one person losing belief in his work (which happens all the time, mostly for private reasons and mostly independent of the actual meaning of the work) should never be a reason to defund an otherwise functioning org doing seemingly crucial work.
If the alternative theory is correct and the hit pieces are to blame, that still seems like an incorrect decision. When you're lobbying for something important, you can expect some pushback; that shouldn't be a reason to pull out immediately.
I love this post, I think this is a fundamental issue for intent-alignment. I don't think value-alignment or CEV are any better though, mostly because they seem irreversible to me, and I don't trust the wisdom of those implementing them (no person is up to that task).
I agree it would be good to implement these recommendations, although I also think they might prove insufficient. As you say, this could be a reason to pause that might be easier for the public to grasp than misalignment. (I think currently, the reason some do not support a pause is perceived...
I'm aware and I don't disagree. However, in xrisk, many (not all) of those who are most worried are also most bullish about capabilities. Conversely, many (not all) who are not worried are unimpressed with capabilities. Being aware of the concept of AGI, that it may be coming soon, and of how impactful it could be, is in practice often a first step towards becoming concerned about the risks, too. This is not true for everyone, unfortunately. Still, I would say that, at least for our chances of getting an international treaty passed, it is perhaps hopeful that the power of AGI is on the radar of leading politicians (although this may also increase risk through other paths).
Otto Barten here, director of the Existential Risk Observatory.
We reduce AI existential risk by informing the public debate. Concretely, we do media work, organize events, do research, and give policy advice.
Currently, public awareness of AI existential risk among the US public is around 15% according to our measurements. Low problem awareness is a major reason why risk-reducing regulation such as SB-1047, or more ambitious federal or global proposals, do not get passed. Why solve a problem one does not see in the first place?
Therefore, we do media work to...
Thanks for your comment.
I changed the title; the original one came from TIME. Still, we do believe there is a solution to existential risk. What we want to do is outline the contours of such a solution. A lot has to be filled in by others, including the crucial question of when to pause. We acknowledge this in the piece.
Nice study!
At first glance, the results seem pretty similar to what we found earlier (https://www.existentialriskobservatory.org/papers_and_reports/Trends%20in%20Public%20Attitude%20Towards%20Existential%20Risk%20And%20Artificial%20Intelligence.pdf), which gives confidence in both studies. The question you ask is the same as well, which is great for comparison! Your study seems a bit more extensive than ours, which is very useful.
It would be amazing to know whether a tipping point in awareness, which (non-xrisk) literature expects to occur somewhere between 10% and 25% awareness, will also occur for AI xrisk!
I sympathize with working on a topic you feel in your gut. I worked on climate and switched to AI because I couldn't get rid of a terrible feeling about humanity going to pieces without anyone really trying to solve the problem (~4 yrs ago, but I'd say this is still mostly true). If your gut feeling is about climate instead, or animal welfare, or global poverty, I think there is a case to be made that you should be working in those fields, both because your effectiveness will be higher there and because it's better for your own mental health, which is always important. I wouldn't say this cannot be AI xrisk: I have this feeling about AI xrisk, and I think many e.g. PauseAI activists and others do, too.
Skimmed it and mostly agree, thanks for writing. Especially takeover, and which capabilities are needed for it, is a crux for me, rather than human-level AI. Still, one realistically needs a shorthand for communication, and AGI/human-level AI is time-tested and relatively easily understood. For policy and other more advanced comms, and as more details become available on which capabilities are and aren't important for takeover, making messaging more detailed is a good next step.
High impact startup idea: make a decent carbon emissions model for flights.
Current models simply use per-flight emissions, which makes direct flights look low-emission. But in reality, some of these flights wouldn't even exist if people could be spread more efficiently over existing indirect flights, which is also why those are cheaper. Emission models should be relative to the counterfactual.
The startup can be for-profit. If you're lucky, better models already exist in scientific literature. Ideal for the AI for good-crowd.
My guess is that a few person-years of work could have a big carbon emissions impact here.
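To make the counterfactual idea concrete, here's a minimal sketch (all numbers and the marginal-emissions logic are illustrative assumptions, not real data or an existing model):

```python
# Illustrative sketch: attribute emissions relative to a counterfactual,
# rather than simply dividing a flight's total emissions by its passengers.
# All figures below are made-up placeholders for the sake of the example.

def naive_per_passenger(total_flight_kg_co2: float, passengers: int) -> float:
    """Conventional attribution: split total flight emissions over passengers."""
    return total_flight_kg_co2 / passengers

def counterfactual_per_passenger(
    direct_flight_kg_co2: float,
    passengers: int,
    spare_seats_on_indirect_routes: int,
    marginal_kg_co2_per_indirect_seat: float,
) -> float:
    """Counterfactual attribution: passengers who could have been absorbed by
    spare capacity on existing indirect routings only cause the (small) marginal
    emissions of filling those seats; the direct flight is only 'caused' if
    demand exceeds that spare capacity."""
    absorbed = min(passengers, spare_seats_on_indirect_routes)
    extra = passengers - absorbed
    caused = absorbed * marginal_kg_co2_per_indirect_seat
    if extra > 0:
        caused += direct_flight_kg_co2
    return caused / passengers

if __name__ == "__main__":
    print(naive_per_passenger(30_000, 150))                       # 200 kg per passenger
    print(counterfactual_per_passenger(30_000, 150, 150, 20.0))   # ~20 kg per passenger
```

The point is simply that a passenger who could have been absorbed by existing spare capacity causes far fewer marginal emissions than the naive per-seat attribution suggests.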
Great work, thanks a lot for doing this research! As you say, this is still very neglected. Also happy to see you're citing our previous work on the topic. And interesting finding that fear is such a driver! A few questions:
- Could you share which three articles you've used? Perhaps this is in the dissertation, but I didn't have the time to read that in full.
- Since it's only one article per emotion (fear, hope, mixed), perhaps some other article property (other than emotion) could also have led to the difference you find?
- What follow-up research would yo...
Congratulations on a great prioritization!
Perhaps the research that we (Existential Risk Observatory) and others (e.g. @Nik Samoylov, @KoenSchoen) have done on effectively communicating AI xrisk, could be something to build on. Here's our first paper and three blog posts (the second includes measurement of Eliezer's TIME article effectiveness - its numbers are actually pretty good!). We're currently working on a base rate public awareness update and further research.
Best of luck and we'd love to cooperate!
It's definitely good to think about whether a pause is a good idea. Together with Joep from PauseAI, I wrote down my thoughts on the topic here.
Since then, I have been thinking a bit more about the pause and comparing it to a more frequently mentioned option, namely applying model evaluations (evals) to see how dangerous a model is after training.
I think the difference between the supposedly more reasonable approach of evals and the supposedly more radical approach of a pause is actually smaller than it seems. Evals aim to detect dangerous capabilities. What will ...
Thanks for the comment. I think the ways an aligned AGI could make the world safer against unaligned AGIs can be divided in two categories: preventing unaligned AGIs from coming into existence or stopping already existing unaligned AGIs from causing extinction. The second is the offense/defense balance. The first is what you point at.
If an AGI were to prevent people from creating AI, this would likely be against their will. A state would be the only actor that could do so legally, assuming there is regulation in place, and also the only one that could do so practically. Therefore, I ...
Hi Peter, thanks for your comment. We do think the conclusions we draw are robust given our sample size. Of course it depends on the signal: if there's a change in e.g. awareness from 5% to 50%, a small sample should be plenty to show that. However, if you're trying to measure a difference of only 1%, your sample size needs to be much larger. While we stand by our conclusions, we do think there would be significant value in others doing similar research, if possible with larger sample sizes.
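To illustrate that intuition, here is a rough sketch of a standard two-proportion sample-size calculation (the alpha, power, and effect sizes below are illustrative choices, not the parameters of our study):

```python
# Rough sample size needed per group to detect a difference between two
# proportions with a two-sided test (normal approximation).
# alpha = 0.05 and power = 0.8 are conventional illustrative choices.
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(round(n))

print(n_per_group(0.05, 0.50))  # a jump from 5% to 50% awareness: roughly a dozen per group
print(n_per_group(0.15, 0.16))  # a 1-point difference: roughly 20,000 per group
```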
Again, thanks for your comments, we take the input into account.
Thanks for your reply. I mostly agree with many of the things you say, but I still think work to reduce the amount of emission rights should at least be on the list of high-impact things to do, and as far as I'm concerned, significantly higher than a few other paths mentioned here.
If you'd still want to do technology-specific work, I think offshore solar might also be impactful and neglected.
As someone who has worked in sustainable energy technology for ten years (wind energy, modeling, smart charging, activism) before moving into AI xrisk, my favorite neglected topic is carbon emission trading schemes (ETS).
ETSs such as those implemented by the EU, China, and others have a waterbed effect. The total amount of emissions is capped, and trading sets the price of those emissions for all sectors under the scheme (in the EU: electricity and heavy industry, expanding to other sectors). That means that:
I don't know if everyone should drop everything else right now, but I do agree that raising awareness about AI xrisk should be a major cause area. That's why I quit my work on the energy transition about two years ago to found the Existential Risk Observatory, and this is what we've been doing since (resulting in about ten articles in leading Dutch newspapers, this one in TIME, perhaps the first comms research, a sold-out debate, and a parliamentary motion passed in the Netherlands).
I miss two significant things on the list of what people can do to help:
1...
Hi Vasco, thank you for taking the time to read our paper!
Although we did not specify this in the methodology section, we addressed the "mean variation in likelihood" between countries and surveys throughout the research, such as in section 4.2.2. I hope this answers your question. This aspect should have been better specified in the methodology section.
If you have any more questions, do not hesitate to ask.
I hope that this article sends the signals that pausing the development of the largest AI models is good, that informing society about AGI xrisk is good, and that we should find a coordination method (regulation) to make sure we can effectively stop training models that are too capable.
What I think we should do now is:
1) Write good hardware regulation policy proposals that could reliably pause the development towards AGI.
2) Campaign publicly to get the best proposal implemented, first in the US and then internationally.
This could be a path to victory.
Crossposting a comment: as co-author of one of the mentioned pieces, I'd say it's really great to see the AGI xrisk message mainstreaming. It doesn't go nearly fast enough, though. Some (Hawking, Bostrom, Musk) have already spoken out about the topic for close to a decade. So far, that hasn't been enough to change common understanding. Those, such as myself, who hope that some form of coordination could save us should give all they have to make this go faster. Additionally, those who think regulation could work should work on robust regulation proposals w...
I agree that this strategy is underexplored. I would prioritize work in this direction as follows:
Awesome initiative! At the Existential Risk Observatory, we are also focusing on outreach to the societal debate; I think that should be seen as one of the main opportunities to reduce existential risk. If you want to connect and exchange thoughts, that's always welcome.
Great idea to look into this!
It sounds a lot like what we have been doing at the Existential Risk Observatory (posts from us, website). We're more than willing to give you input insofar as that helps, and perhaps also to coordinate. In general, we think this is a really positive action and the space is wide open. So far, we have had good results. We also think there is ample space for other institutes to do this.
Let's coordinate further by email; you can reach us at info@existentialriskobservatory.org. Looking forward to learning from each other!
Enough happened to write a small update about the Existential Risk Observatory.
First, we made progress in our core business: informing the public debate. We have published two more op-eds (in Dutch, one with a co-author from FLI) in a reputable, large newspaper. Our pieces warn against existential risk, especially from AGI, and propose low-hanging-fruit measures the Dutch government could take to reduce risk (e.g. extra AI safety research).
A change w.r.t. the previous update is that we see serious, leading journalists becoming interested in th...
Anyway, I posted this here because I think it somewhat resembles the policy of buying and closing coal mines. You're deliberately creating scarcity. Since there are losers when you do that, policymakers might respond. I think creating scarcity in carbon rights is more efficient and much easier to implement than creating scarcity in coal, but it does indeed suffer from some of the same drawbacks.
Hey I wasn't saying it wasn't that great :)
I agree that the difficult part is getting to general intelligence, also regarding data. Compute, algorithms, and data availability are all needed to get to this point. It seems really hard to know beforehand what kind and how much of algorithms and data one would need. I agree that basically only one source of data, text, could well be insufficient. There was a post I read on a forum somewhere (could have been here) from someone who let GPT-3 solve questions including things like 'let all odd rows of your answer be...
If you want to spend money quickly on reducing carbon dioxide emissions, you can buy emission rights and destroy them. In schemes such as the EU ETS, destroyed emission rights should lead to direct emission reduction. This has technically been implemented already. Even cheaper is probably to buy and destroy rights in similar schemes in other regions.
Hi AM, thanks for your reply.
Regarding your example, I think it's quite specific, as you notice too. That doesn't mean I think it's invalid, but it does get me thinking: how would a human learn this task? A human intelligence wasn't trained on many specific tasks in order to be able to do them all. Rather, it first acquired general intelligence (apparently, somewhere), and was later able to apply this to an almost infinite number of specific tasks, with typically only a few examples needed. I would guess that an AGI would solve problems in a similar way. So...
Thanks for the reply, and for trying to attach numbers to your thoughts!
So our main disagreement lies in (1). I think this is a common source of disagreement, so it's important to look into it further.
Would you say that the chance to ever build AGI is similarly tiny? Or is it just the next hundred years? In other words, is this a possibility or a timeline discussion?
Hi Ada-Maaria, glad to have talked to you at EAG and congrats on writing this post - I think it's very well written and interesting from start to finish! I also think you're more informed on the topic than most people in EA who are AI xrisk convinced, surely including myself.
As an AI xrisk-convinced person, it always helps me to divide AI xrisk into these three steps. I think superintelligence xrisk probability is the product of these three probabilities (a small worked example follows the list):
1) P(AGI in next 100 years)
2) P(AGI leads to superintelligence)
3) P(superintelligence destroys humanity)...
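As a purely illustrative sketch of how the decomposition multiplies out (placeholder numbers, not my actual estimates; each factor is read as conditional on the previous step):

```python
# Placeholder numbers only; each probability is conditional on the previous step.
p_agi = 0.5     # P(AGI in next 100 years)
p_super = 0.5   # P(AGI leads to superintelligence | AGI)
p_doom = 0.1    # P(superintelligence destroys humanity | superintelligence)

print(f"P(superintelligence xrisk) = {p_agi * p_super * p_doom:.3f}")  # 0.025
```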
Thanks for that context and for your thoughts! We understand the worries that you mention, and as you say, op-eds are a good way to avoid those. Most (>90%) of the other mainstream media articles we've seen about existential risk (there's a few dozen) did not suffer from these issues either, fortunately.
Thank you for the heads up! We would love to have more information about general audience attitudes towards existential risk, especially related to AI and other novel tech. Particularly interesting for us would be research into which narratives work best. We've done some of this ourselves, but it would be interesting to see if our results match others'. So yes please let us know when you have this available!
Hi Jamie, thanks for your comment, glad you like it!
It's hard to go into this without answering your question anyway a bit, but we appreciate the user feedback too.
We got some quick data on the project yesterday (n=15, tech audience but not xrisk, data here). We asked, among other questions: "In your own words, what is this website tracking or measuring?" Almost everyone gave a correct answer. Also from the other answers, I think the main points get across pretty well, so we're not really planning to modify too much.
The percentage that you're asking about ...