People affiliated with EA who are interested in improving AI policy and governance in Australia have been meeting to learn from each other, create a shared understanding, and investigate opportunities to coordinate our impact.
We're sharing these notes to help increase awareness in other communities and countries that these conversations are happening, as some of the themes may resonate or be of use for other groups having similar discussions. We'd be open to speaking with and collaborating with other groups, in the spirit of improving coordination in AI Safety movement building.
At a recent meeting, we had an initial discussion reflecting on the prompt:
What challenges does Australia face in transitioning to a world with advanced AI?
The structured discussion involved:
- generating reactions to the question as Miro sticky notes, including attempts to answer the question, reflections on participants' own experience prompted by the question, or critiques of the question itself (e.g., what do we mean by 'advanced'?);
- commenting on others' reactions in Miro;
- discussing the reactions and comments, with the goal of increasing our shared understanding of salient issues.
Here are some of the themes that emerged from this initial discussion. We intend to explore these themes in more detail and may use them to guide future work by this group.
Themes from an initial discussion
Note. The intention of this discussion was to elicit reactions and points of agreement / disagreement, rather than to identify or prioritise the areas that the group thought were most important to address.
Australia's economy is exposed to automation, but are there pathways to benefit from transformative AI?
One major challenge identified was to Australia's labour market and economy, particularly the risk of high unemployment in a knowledge-work-heavy economy as AI automates knowledge tasks. This may be exacerbated by Australia having less technology capacity and leadership than comparable countries: structural unemployment is a live possibility if Australia cannot develop 'home grown' AI systems, or cannot adapt effectively as international companies offer AI and automation products that replace Australian workers. At the same time, Australia is relatively agile, and with good policy and regulation, automation could increase productivity. Whether such policies can be identified and implemented before a terminal loss of tax revenue remains an open question that needs further work.
Australia as a Potential Global Thought Leader in AI
Some reactions considered or questioned Australia's role in the global AI transition. On one hand, AI governance can be considered a multi-layered coordination problem: some elements are relevant nationally or domestically, while others are international, global, or depend on so-called 'key actors' such as firms developing advanced AI. This framing suggests that people seeking the greatest impact should either focus on Australian national / domestic policy, regulation, and governance, or set Australia aside to work on international issues or with the 'bigger fish'.
An alternative view considered the track record of Australian groups in spurring global agreements or regulation in other areas: notably nuclear non-proliferation through ICAN, despite Australia never developing or holding nuclear weapons, and the Australia Group, which seeks to control the trade of precursors for biological and chemical weapons. This suggests there may be a role for Australia as a 'thought leader' or 'policy leader' in international AI governance.
Advocacy and Policy: Strategic postures
Australian government policymaking on AI currently focuses on robotics, automation, and competition, not on global catastrophic or existential risks. Strategic postures were discussed: should advocacy initially address these mainstream issues to build a track record and credibility? What do good futures look like for Australia, and how could they be shaped by the group's actions?
Navigating Diverse Risk Categories and Sources in AI Transition
Various risk categories and sources associated with the AI transition were raised. These ranged from humanity's general lack of wisdom and perverse incentives to the democratisation of AI technologies, security concerns, coordination challenges, and geopolitical tensions. The specific relevance of these to Australia and Australian policy needs further unpacking.
How could the group improve its knowledge infrastructure, coordination, and impact?
The role of the group in addressing some of these challenges was examined. Some members of the group are already actively advocating for policy change through political channels; others have made opportunistic policy submissions; and others still are considering how and in what direction they can act. Several ideas were raised for useful activities the group could undertake. Some focused on improving the group's shared understanding, such as comparing estimated timelines for advanced AI and identifying points of agreement or disagreement on goals and strategies. Others focused on building capacity or direction to act, such as mapping stakeholders, creating templates for policy submissions, or identifying and recruiting members.
We intend to continue to meet, discuss, and act to influence AI policy & governance in Australia. If you'd like to stay informed or get involved, please message or email me (email@example.com).
An initial analysis of themes was created by GPT-4 from notes typed by meeting attendees. I (Alexander Saeri) edited and significantly rewrote the initial analysis (>90% words changed), and wrote all other text.