Alexander Saeri

Co-founder @ Ready Research; Research Project Manager @ The University of Queensland
619 karma · Working (6-15 years) · Melbourne VIC, Australia
aksaeri.com

Comments (37)

Yes, this was a cheeky or sarcastic comment. I wrote it to share with some colleagues unfamiliar with AI safety who were wondering what it looked like to have 'good' outcomes in AI policy & governance.

Good AI governance is pretty easy.

We just have to

  1. solve a bunch of 2,000+ year old moral philosophy questions (e.g., 'what is good?', 'what is the right action in a given circumstance?', 'what are good rules for action?'), then
  2. figure out how to technically implement the answers in a non-deterministic software / algorithmic form, then
  3. get international agreement on complex systems of regulation and governance to ensure that the technical implementation is done correctly and monitored for compliance, without compromising values of democracy, the right to privacy, free expression, etc; then
  4. ensure whatever governance arrangements we establish are sufficiently robust or flexible to respond to the transformative impacts of powerful AI on every part of society and industry;
  5. all within the next ~5-20 years, before the technical capacity of these systems outpaces our ability to affect them.

Thanks for writing this up, Emily. I think your decision to do this helped me feel more secure about taking a career break of my own - including some time set aside to do no work or career planning!

I'm glad that Australia has signed this statement.

It's worth noting that until quite recently, the idea of catastrophic misuse or misalignment risks from AI was dismissed or made fun of in Australian policy discourse. The delegate from Australia, Ed Husic, who is Minister for Industry, Science, and Resources, actually wrote an opinion piece in a national newspaper in June 2023 that dismissed concerns about catastrophic risk.

In an August 2023 public Town Hall discussion that formed part of the Australian Government's consultation on 'Safe and Responsible AI', a senior advisor to Husic's department said that trying to regulate risks from advanced AI was like the Wright Brothers trying to plan regulations for a Mars colony, and another key figure dismissed the dual-use risks from AI by likening AI to a 'kitchen knife', suggesting that both could be used for good and for harm.

So it was never certain that somewhere like Australia would sign on to a declaration like this, and I'm relieved and happy that we've done so.

I'd like to think that the work that's been happening in the Australian AI safety community had an impact on Australia's decision to agree to the declaration, including:

  • organising Australian experts to call for serious consideration of catastrophic risks from AI and to make plans to address those risks;
  • arranging more than 70 well-researched community submissions to the 'Safe and Responsible AI' consultation that called for better institutions to govern risks and concrete action to address them;
  • a rigorous submission to the process from Good Ancestors, a leading long-term-focused policy development & advocacy organisation in Australia.

The declaration needs to be followed by action, but the combination of this declaration and Australia's endorsement of the US executive order on AI safety has made me feel more hopeful about things going well.

Glad that fasting works for you! I have tried it a couple of times and found myself too hungry or uncomfortable to sleep at the times I need to (e.g., for a nap in the middle of a flight).

Great points on equipment; I agree it's necessary, and I think the bulk of a good neck pillow in carry-on luggage is justified because I can't sleep without one. I also have some comically ugly and oversized sunglasses that fit over my regular glasses and block light from all sides.

Thanks for the post!

I'm familiar with EGMs in the spaces you mentioned. I can see EGMs being quite useful if the basic ideas in an area are settled enough to agree on outcomes (e.g., the thing that the interventions are trying to create).

Right now I'm unsure what this would be. That said, I support the use of EGMs for synthesising evidence and pointing out research directions. So it could be useful to construct one or some at the level of "has anyone done this yet?"

Thanks for this guide!

One thing that I appreciated when attending a GWWC event was that expectations of responsible conduct were made clear with an explicit announcement at the beginning of the event. I thought this was a good way to create a social agreement among attendees.

I think that some people are reluctant to do this because they think it might bring the mood down, or it feels awkward to call attention to the possibility of harmful behaviour at what is supposed to be a fun or professional event. They might also not be sure exactly what to say. One idea for addressing these barriers would be to provide a basic script that organisers could say, or rewrite in their own words.

Thanks for writing up this work, Zoe. I'm pleased to see a list of explicit recommendations for effective charities to consider in framing their requests for donations.

Selfishly, I'm also pleased that our paper (Saeri et al 2022) turned up in your search!

It'd be interesting to understand your motivations for the literature review and what you might do next with these findings / recommendations.

One thing that our paper by design didn't do was aggregate individual studies (it only included systematic reviews and meta-analyses). So it's interesting to see some of the other effects out there that haven't yet been subject to a review.

I was motivated to write this story for two reasons.

First, I think that there is a lack of clear visual metaphors, stories, or other easily accessible analogies for concepts in AI and its impacts on society. I am often speaking with intelligent non-technical people - including potential users or "micro-regulators" (e.g., organisational policymakers) of AI tools - who have read about AI in the news but don't have good handles on how to think about these tools or how they interact with existing organisational processes or social understandings.

Second, this specific story was motivated by a discussion with a highly qualified non-technical user of LLMs who expressed skepticism about the capabilities of LLMs (in this case, ChatGPT 3.5) because when they prompted the LLM to provide citations in a topic area the user was an expert in, the citations in the LLM's response were wrong, misleading, or hallucinated.

One insight from our follow-up conversation was that the user had imagined that writing prompts for an LLM was similar to writing a Google search query. In their understanding, they were requesting a pre-existing record stored in the LLM's database, so for the LLM to respond with an incorrect list of records indicated that the LLM was fundamentally incapable of a 'basic' research task.
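The retrieval-vs-generation distinction behind that misunderstanding can be sketched in a few lines of code. This is a toy illustration under my own assumptions (the function and variable names are made up, and the "model" is just random choice, not a real LLM): a search engine looks up stored records and can report a miss, while a generative model always produces fluent, citation-shaped text whether or not any matching record exists.

```python
import random

# Hypothetical example: a search engine retrieves stored records,
# so a query with no match produces an explicit "no result".
database = {"Saeri et al 2022": "https://example.org/paper"}

def search(query):
    # Retrieval: either the exact stored record, or an explicit miss.
    return database.get(query, "No results found")

def generate_citation(topic):
    # Generation (crudely mimicked with random choice): assembles
    # plausible-looking text piece by piece, with no lookup step,
    # so the output can be fluent yet entirely fabricated.
    author = random.choice(["Smith", "Jones", "Lee"])
    year = random.choice(range(2015, 2023))
    return f"{author} et al. ({year}). A study of {topic}."

print(search("Saeri et al 2022"))              # returns the stored record
print(search("nonexistent paper"))             # returns an explicit miss
print(generate_citation("charity donations"))  # always fluent, possibly fabricated
```

The point of the contrast: the user's mental model matched `search`, where a wrong answer signals a broken system, but the LLM behaves like `generate_citation`, where a confident-sounding answer carries no guarantee that a matching record was ever consulted.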
