
Otto

572 karma · Joined

Comments (39)

I'm aware, and I don't disagree. However, in xrisk, many (not all) of those who are most worried are also the most bullish about capabilities. Conversely, many (not all) of those who are not worried are unimpressed by capabilities. Being aware of the concept of AGI, that it may be coming soon, and of how impactful it could be is, in practice, often a first step towards becoming concerned about the risks too. Unfortunately this is not true for everyone. Still, I would say that, at least for our chances of getting an international treaty passed, it is perhaps hopeful that the power of AGI is on the radar of leading politicians (although this may also increase risk through other paths).

Answer by Otto

Otto Barten here, director of the Existential Risk Observatory.

We reduce AI existential risk by informing the public debate. Concretely, we do media work, organize events, do research, and give policy advice.

Currently, awareness of AI existential risk among the US public is around 15%, according to our measurements. Low problem awareness is a major reason why risk-reducing regulation such as SB-1047, and more ambitious federal or global proposals, do not get passed. Why solve a problem one does not see in the first place?

Therefore, we do media work to increase awareness of AI existential risk and to propose helpful regulation. Today, we published our fourth piece in TIME Magazine, arguing that AI is an existential risk and proposing the Conditional AI Safety Treaty. According to survey-based measurements (n=50 per media item), our 'conversion rate', measuring how many readers newly connect AI to human extinction after reading our articles, is between 34% and 50%, and about half of that effect persists over time. We have published four TIME pieces and around 20 other media items in the last two years. Although we cannot cleanly separate media work from other work, we estimate that $35k gets a funder roughly two leading media pieces plus around ten supporting ones.
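For illustration, here is a minimal Python sketch of how a conversion rate like this could be computed from pre/post survey answers. The field names, the toy data, and the choice to count conversions only among readers who did not already make the connection are assumptions made for the example; this is not our actual survey pipeline.

```python
# Hypothetical sketch of a conversion-rate calculation from pre/post survey answers.
# The data structure and numbers are illustrative only, not our actual pipeline.

def conversion_rate(responses: list[dict]) -> float:
    """Share of readers who did NOT connect AI to human extinction before reading,
    but do afterwards, among all readers who did not make the connection before."""
    not_aware_before = [r for r in responses if not r["links_ai_to_extinction_before"]]
    converted = [r for r in not_aware_before if r["links_ai_to_extinction_after"]]
    return len(converted) / len(not_aware_before) if not_aware_before else 0.0

# Example with made-up answers (n=6 here; the real samples are n=50 per media item):
sample = [
    {"links_ai_to_extinction_before": False, "links_ai_to_extinction_after": True},
    {"links_ai_to_extinction_before": False, "links_ai_to_extinction_after": False},
    {"links_ai_to_extinction_before": True,  "links_ai_to_extinction_after": True},
    {"links_ai_to_extinction_before": False, "links_ai_to_extinction_after": True},
    {"links_ai_to_extinction_before": False, "links_ai_to_extinction_after": False},
    {"links_ai_to_extinction_before": True,  "links_ai_to_extinction_after": True},
]
print(f"conversion rate: {conversion_rate(sample):.0%}")  # 50% in this toy example
```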

In addition to media work, we also organize events. Our track record includes four debates featuring leading existential risk voices such as Yoshua Bengio, Stuart Russell, Max Tegmark, and Jaan Tallinn on one side, and journalists from e.g. TIME and The Economist as well as MPs on the other. Our events aim to inform leading voices in the societal debate and policymakers about existential risk, and to give experts the chance to propose helpful policy. We organized events ahead of the AI Safety Summits at Bletchley Park and in Korea (remote), and will do so again ahead of Paris. These events have helped, and will continue to help, shape the summits' narratives towards concern for existential risk. We can organize one event for around $20k, including venue, travel and hotel costs, and organization hours.

We also do policy research. In the coming year, we will focus on what exactly the optimal Conditional AI Safety Treaty should look like and how we can get it implemented. We are uniquely positioned not only to do leading research, but also to communicate it directly to a large audience, including e.g. MPs and leading journalists. We plan to write a paper on the optimal shape of the Conditional AI Safety Treaty, working together with other institutes. We can produce such a paper for around $18k.

As an organization, we are heavily funding-constrained. We have been supported by established funders such as SFF, LTFF, and ICFG in the past, but only for relatively modest amounts. Our current runway is therefore only about five months. Additional funding would mostly enable us to keep doing what we are doing (and to get even better at it!): media work, organizing events, and doing research. Within these three focus areas, we are also open to receiving earmarked funding, or additional funding to scale up our work.

For donations, best to contact us by email. Your support is much appreciated!

Thanks for your comment.

I changed the title; the original one came from TIME. Still, we do believe there is a solution to existential risk. What we want to do is outline the contours of such a solution. A lot has to be filled in by others, including the crucial question of when to pause. We acknowledge this in the piece.

Nice study!

At first glance, the results seem pretty similar to what we found earlier (https://www.existentialriskobservatory.org/papers_and_reports/Trends%20in%20Public%20Attitude%20Towards%20Existential%20Risk%20And%20Artificial%20Intelligence.pdf), which gives confidence in both studies. The question you ask is the same as well, which is great for comparison! Your study seems a bit more extensive than ours, which is very useful.

It would be amazing to know whether a tipping point in awareness, which (non-xrisk) literature expects to occur somewhere between 10% and 25%, will also occur for AI xrisk!

I sympathize with working on a topic you feel in your gut. I worked on climate and switched to AI because I couldn't get rid of a terrible feeling that humanity was going to pieces without anyone really trying to solve the problem (that was ~4 years ago, but I'd say it's still mostly true). If your gut feeling is about climate instead, or animal welfare, or global poverty, I think there is a case to be made that you should work in those fields, both because your effectiveness will be higher there and because it's better for your own mental health, which is always important. I wouldn't say this cannot be AI xrisk: I have this feeling about AI xrisk, and I think many others, e.g. PauseAI activists, do too.

Skimmed it and mostly agree, thanks for writing. Takeover, and which capabilities are needed for it, is the crux for me, rather than human-level ability per se. Still, one realistically needs a shorthand for communication, and AGI/human-level AI is time-tested and relatively easy to understand. For policy and other more advanced comms, and as more details become available on which capabilities are and aren't important for takeover, making the messaging more detailed is a good next step.

The recordings of our event are now online! 

High-impact startup idea: build a decent carbon emissions model for flights.

Current models simply use the flight's own emissions, which makes direct flights look low-emission. But in reality, some of these flights wouldn't even be operated if people could be spread more efficiently over existing indirect flights, which is also why indirect flights are cheaper. Emission models should be relative to the counterfactual (see the sketch below).

The startup could be for-profit. If you're lucky, better models already exist in the scientific literature. Ideal for the AI-for-good crowd.

My guess is that a few man-years of work could have a big impact on carbon emissions here.
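To make the counterfactual point concrete, here is a minimal Python sketch. All numbers, the seat and spare-capacity figures, and the model itself are made up purely for illustration; a real model would need actual schedules, load factors, and fuel-burn data.

```python
# Minimal sketch of the counterfactual idea from the post above. All numbers,
# routes, and the model itself are illustrative assumptions, not real data.

DIRECT_FLIGHT_KG_CO2 = 120_000   # assumed total emissions of one direct flight
SEATS = 180                      # assumed seats on the direct flight
SPARE_SEATS_ON_INDIRECT = 40     # assumed unsold seats on existing indirect routings
EXTRA_KG_PER_INDIRECT_PAX = 30   # assumed extra fuel burn per added passenger (payload weight)

def attributional_emissions_per_pax() -> float:
    """Roughly what current calculators do: split the flight's emissions over its seats."""
    return DIRECT_FLIGHT_KG_CO2 / SEATS

def counterfactual_emissions_per_pax(demand: int) -> float:
    """Marginal view: passengers who fill spare seats on flights that fly anyway cause
    only the extra fuel burn of their weight; beyond that, extra direct capacity
    (and its full emissions) is induced."""
    absorbed = min(demand, SPARE_SEATS_ON_INDIRECT)
    induced = demand - absorbed
    extra_direct_flights = induced / SEATS  # fractional flights, for simplicity
    total = absorbed * EXTRA_KG_PER_INDIRECT_PAX + extra_direct_flights * DIRECT_FLIGHT_KG_CO2
    return total / demand

if __name__ == "__main__":
    print(f"attributional: {attributional_emissions_per_pax():.0f} kg CO2 per passenger")
    for demand in (20, 100, 300):
        print(f"counterfactual at demand {demand}: "
              f"{counterfactual_emissions_per_pax(demand):.0f} kg CO2 per passenger")
```

The only point of the sketch is that attributional and counterfactual per-passenger numbers can diverge a lot when existing indirect flights have spare capacity.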

Great work, thanks a lot for doing this research! As you say, this is still very neglected. Also happy to see you're citing our previous work on the topic. And interesting finding that fear is such a driver! A few questions:

- Could you share which three articles you've used? Perhaps this is in the dissertation, but I didn't have the time to read that in full.
- Since it's only one article per emotion (fear, hope, mixed), perhaps some other article property (other than emotion) could also have led to the difference you find?
- What follow-up research would you recommend?
- Is there anything that orgs like ours (Existential Risk Observatory), or these days MIRI, which also focuses on comms, should do differently?

As a side note, we're currently conducting research on where awareness has gone since our first two measurements (which were 7% and 12% in early and mid 2023, respectively). We might also look into the existence and dynamics of a tipping point.

Again, great work, hope you'll keep working in the field in the future!

Congratulations on a great prioritization!

Perhaps the research that we (Existential Risk Observatory) and others (e.g. @Nik Samoylov, @KoenSchoen) have done on effectively communicating AI xrisk could be something to build on. Here are our first paper and three blog posts (the second includes a measurement of the effectiveness of Eliezer's TIME article; its numbers are actually pretty good!). We're currently working on a base-rate public awareness update and further research.

Best of luck and we'd love to cooperate!
