
Convergence 2024 Impact Review home page.

Impact overview

2024 marked the first full year with the new Convergence Analysis 9-person team. This year we published 20 articles on understanding and governing transformative AI, and our research influenced regulatory frameworks internationally. In the US, we provided consultation to the Bureau of Industry and Security that directly informed their proposed rule on reporting requirements for dual-use AI; in the EU, specific recommendations of ours were incorporated into the EU AI Act GPAI Code of Practice. We led expert field-building around AI's economic impacts through the Threshold 2030 conference, and around AI scenario modeling via the AI Scenarios Network. Our work reached mainstream media and universities, and gathered over 184,000 views on social platforms.
We organized our activities into three programs: AI Clarity, AI Governance, and AI Awareness.

1. AI Clarity: advancing AI Scenario Planning

  • Published over 170 pages of AI Clarity research across 10 articles and reports.
  • Hosted the Threshold 2030 Conference together with Metaculus and FLI, convening 25 senior experts from frontier AI labs, intergovernmental organizations, academia, and leading AI safety research organizations to evaluate the economic impacts of short AI timelines.
  • Developed the AI Scenarios Network of 30+ experts, the first cross-organizational coalition of AI Scenario researchers.

2. AI Governance: producing concrete AI policy recommendations

3. AI Awareness: raising public awareness of AI risks

  • Authored Building a God, a general-audience book by Christopher DiCarlo addressing key AI issues, published in January 2025.
  • Featured in ten major media outlets and educational platforms, including Politico, Forbes, and CBS, in late 2024 and early 2025.
  • AI Awareness content was viewed over 184,000 times on TikTok.
  • Led two introductory courses on AI ethics in collaboration with Toronto Metropolitan University/Life Institute, and held numerous general-audience lectures on ‘AI and the Future of Humanity’.
  • Created and hosted 23 episodes of All Thinks Considered, with interviewees including Steven Pinker, Peter Singer, and prominent AI safety researchers such as Steve Omohundro and Robert Trager.

Convergence’s mission

Our mission is to design a safe and flourishing future for humanity in a world with transformative AI. We consider this a sociotechnical problem: beyond the technical considerations of AI, governing institutions and the public must also be involved in the solution. Our work, following our theory of change, cuts across three interrelated programs:

  • AI Clarity: We research potential AI development scenarios and their implications, to guide AI safety reasoning and discourse. We work to create new fields of inquiry, such as AI Scenario Planning and AGI Economics, through publishing guiding research, coordinating experts, and springboarding new researchers.
  • AI Governance: We conduct rapid-response research on emerging developments and neglected areas of AI governance, generating actionable recommendations for reducing harms from AI.
  • AI Awareness: We build public understanding around AI risk through strategic initiatives, including a book on AI futures and a podcast featuring discussions with thought leaders.

List of outputs

AI Clarity:

AI Governance:

AI Awareness:

Outcomes and impacts in more detail

AI Clarity

The AI Clarity program explores future scenarios and evaluates strategies for mitigating AI risks. In 2024, AI Clarity projects (1) addressed gaps in foundational knowledge around AI scenario modeling and convened its practitioners, (2) formalized theories of victory for AI safety work, (3) analyzed consensus on timelines to AGI, and (4) hosted the AGI economics field-building conference Threshold 2030, building on our prior work in AI scenarios. Beyond general field-building in AI safety and governance, we are seeding and coordinating specific high-value fields of inquiry, especially through our work on AI Scenario Planning and AGI Economics.

AI Scenario Planning

Our Scenario Planning work addressed neglected challenges that traditional AI forecasting methods struggle with, presenting a complementary approach to forecasting that supports decision-makers preparing for an uncertain future. Our field-building work established the AI Scenarios Network of 30+ researchers across organizations and produced several publications, listed below. This research also directly formed the basis for our Theories of Victory work and our broader research agenda, including the paper AI Emergency Preparedness, written with external collaborators, and the Threshold 2030 conference.

Key Outputs:

  • AI Clarity: An Initial Research Agenda (April 2024): Introduced a research agenda for exploring AI scenarios and evaluating strategies across them.
  • Scenario Planning for AI X-risk (February 2024): Explained AI scenario planning and argued for its importance as a complement to AI forecasting and for informing AI governance.
  • Transformative AI and Scenario Planning For AI X-risk (March 2024): Argued that TAI is a valuable milestone for AI scenario analysis because this characterization of AI focuses on sociotechnical impact and is well defined in existing literature.
  • Investigating the Role of Agency in AI X-risk (April 2024): Outlined four future AI scenarios and corresponding mitigations based on whether agency and power-seeking emerge from TAI by default.
  • AI Scenarios Network (July 2024): The first cross-organizational network of AI scenario researchers. The network aims to build awareness of AI scenario research and advocate for its importance to AI governance.

Theories of Victory

The lack of clearly defined success criteria for AI governance makes long-term strategic planning difficult. In 2024 we highlighted the lack of stated theories of victory in AI governance and examined practical preparedness for best- and worst-case scenarios globally. Our work on Theories of Victory and Emergency Preparedness was well received in the research community, drawing strong positive feedback from peers and good engagement on the EA Forum and SSRN. This reception led to a follow-up post, Analysis of Global AI Governance Strategies, in collaboration with Sammy Martin (Polaris Ventures). AI Emergency Preparedness was also presented at AAAI's 39th Conference.

Key Outputs:

AGI Economics

Together with Metaculus and FLI, we hosted the Threshold 2030 conference in Boston (October 2024) to study the economic impacts of near-term TAI. The conference developed practical AI impact forecasting methods and mapped areas of expert consensus and disagreement. This work established new research priorities and cross-organizational collaborations that are now informing new projects at Convergence and partner organizations. A 200-page conference report on the findings was published in February 2025.

Key Outputs:

  • Threshold 2030 Conference (October 2024): Hosted a workshop for 25 leading economists, AI policy experts, and forecasters considering a set of three plausible scenarios for the trajectory of AI development. Based on these scenarios, attendees conducted worldbuilding, economic causal modeling, and forecasting exercises.

 

AI Timelines

We evaluated forecasts, models, and arguments for and against short TAI timelines, and made technical approaches more accessible to researchers and policymakers, ultimately providing further grounds for taking short TAI timelines seriously.

Key Outputs:

AI Governance

The AI Governance program develops and evaluates policy recommendations in critical and neglected areas of AI governance. Our governance work in 2024 produced foundational research into AI governance frameworks and specific policy recommendations.

Technical Controls & Infrastructure

We developed foundational regulatory tools for frontier AI oversight using registration systems and technical attribution mechanisms. Our technical control frameworks directly influenced policy development in multiple jurisdictions:

  • The AI Model Registries report directly influenced US regulatory development through formal consultation with the Bureau of Industry and Security, while specific language from our recommendations was incorporated into the EU's GPAI Code of Practice. The report also served as a consultation document for the Paris AI Action Summit and was cited numerous times in the summit's consultation report.
  • Additionally, our chip registration policy proposal led to a collaboration with researchers from RAND, CNAS, and IAPS on a proposal for an AI chip registry that advanced to the House Foreign Affairs Committee in April 2024, via Representative Michael McCaul. The proposal was ultimately tabled and may be revisited in the future.

The Training Data Attribution report was based on research originally commissioned by FLF, which gave highly positive feedback on the commissioned work and expressed strong interest in future partnerships.

Key Outputs:

National Policy Frameworks

Our 2024 research examined emerging approaches to national AI governance in the US, China, and the EU, and our national policy frameworks gained good traction in both academic and policy spheres. Soft Nationalization was used by researchers at the US AI Safety Institute, the Harvard AI Student Team, and LawAI, and led to an invited presentation on the topic at Harvard. The State of the AI Regulatory Landscape report had the highest readership of our 2024 publications and was integrated into BlueDot Impact's AI governance curriculum. This analysis also identified model registration as a neglected area of research, directly informing our subsequent report on the topic.

Key Outputs:

  • Soft Nationalization (August 2024): Presented a nuanced approach to public control over AI labs in the US, as opposed to direct nationalization, describing the policy levers and plausible scenarios that might redistribute power from AI labs to the US government.
  • State of the AI Regulatory Landscape (May 2024): Produced a comparative overview of AI regulatory approaches in the US, China, and the EU, providing policymakers and researchers with an accessible analysis of governance across jurisdictions.
  • China's AI Industry and Regulations (March 2024): Examined China's AI industry and regulation through three pieces of legislation, highlighting their focus on algorithmic control, social stability, and technological leadership.
  • Aligning AI Safety Projects with a Republican Administration (November 2024): Analyzed how AI safety initiatives align with a Republican US administration, showing overlaps between national security interests and AI safety goals, and making recommendations for AI safety work accordingly.

Strategic Governance Research

Our publications in this area explored international coordination, public administration, and the power dynamics between public and private actors. We also led the publication of The Oxford Handbook of AI Governance, totaling 49 chapters by 75 leading contributors, including Anthony Aguirre, Anton Korinek, Allan Dafoe, Ben Garfinkel, and Jack Clark. The handbook, work on which started in 2020, has shaped a number of early conversations about AI governance.

Key Outputs:

  • The Brave New World of AI (May 2024): Analyzed emerging challenges for public administration, examining how increasing AI capabilities are transforming organizational structures and governance needs across three core areas: agents, organizations, and governance frameworks.
  • AI, Global Governance, and Digital Sovereignty (October 2024): Examined how AI systems are becoming more integral to international affairs by affecting how global governors exert power and pursue digital sovereignty.
  • The Oxford Handbook of AI Governance (October 2024): Published a comprehensive resource of 49 chapters across 9 sections, offering global, interdisciplinary perspectives on AI governance.

AI Awareness

The AI Awareness program works to increase public understanding of AI risks and how to address them, through books, teaching, and media engagement. In 2024, our work to raise public awareness of AI safety reached major platforms, with coverage in 10 leading media outlets, including Politico, Forbes, and CBS. Building a God received early feature coverage from Forbes Books, with additional major outlet features confirmed. We produced 23 episodes of the podcast All Thinks Considered featuring leading thinkers in AI and societal betterment, with content generating over 184,000 views on TikTok. We also led two courses at Toronto Metropolitan University, delivered multiple lectures on 'AI and the Future of Humanity,' and received 200+ subscriptions to our newsletter.

Key Outputs:

  • Building a God (December 2024): A 352-page guide to AI for a general audience by Christopher DiCarlo, addressing AI history, risks, benefits, and ethical implications. The book provides accessible frameworks for understanding AI issues and actionable steps for public engagement with AI safety. Building a God was authored in 2024 and published in January 2025.
  • All Thinks Considered (January 2024): Created an audio-video podcast series featuring in-depth conversations with leading thinkers in AI safety, ethics, and science. The series includes interviews with notable figures such as Robert Trager, Steve Omohundro, Peter Singer, Steven Pinker, and others, providing simple and accessible breakdowns of AI safety and socially impactful topics.

Operations

Convergence is an international AI x-risk strategy think tank spanning the UK, US, Canada, and Portugal. In 2024, we expanded our team from 8 to 9 members, with one departure and two new additions: Harry Day, our first COO, left, Michael Keough took up the mantle to lead Operations, and Gwyn Glasser joined as a new Researcher.

Convergence is funded by individual philanthropists and granting bodies concerned about x-risk, such as FLI and SFF.

2024 budget

2024 Budget: $950k.

  • Salaries and associated costs: $768k
  • Travel: $25k
  • Threshold 2030 conference: $77k
  • Other: $80k (including $40k for Building a God publicity and $15k for offices)

Funds raised in 2024: approximately $800k.

  • Fulfilled commitments from earlier funders: $300k.
  • Survival and Flourishing Fund: $87k.
  • FLI Power Concentration RFP: $280k.
  • FLI funding for Threshold 2030: $77k.
  • New individual donations: $50k.

2025 budget

2025 budget projection: $875k.

  • Salaries and associated costs: $825k.
  • Travel: $25k.
  • Other: $25k.

Funds raised in Jan-Feb 2025: $200k.

  • New individual donations: $200k

2025: January and February outcomes

As this impact review is being published in March 2025, we also outline here the major works we released in January and February 2025:

  • The Manhattan Trap (January 2025): This paper argues that the same assumptions that motivate the US's race to develop ASI also imply that such a race is extremely dangerous, and concludes that international cooperation is preferable, strategically sound, and achievable. The project has already elicited strongly positive written feedback and achieved over 400 PDF downloads within a month of publication.
  • Building a God Book Launch (January 2025): We launched Building a God, the largest initiative of the AI Awareness program so far, beginning with a set of interviews with major media outlets. We are planning a book tour starting in May 2025, including speaking appearances, interviews, and town hall discussions across the US and Canada.
  • A Global AGI Agency (January 2025): This report proposes a framework for international AGI governance. The study outlines mechanisms for democratic accountability, and notes challenges to international cooperation and risks of power concentration.
  • Threshold 2030 Full Report (February 2025): This 200-page report showcases the outcomes of the conference in detail, highlighting insights from the 25 leading economists, AI policy experts, and forecasters who explored three AI advancement scenarios for 2030. The report covers three main components: worldbuilding exercises examining AI's potential economic impacts, economic causal modeling, and forecasting exercises.
  • Pathways to Short AI Timelines (February 2025): This 171-page report outlines seven plausible paths to TAI by 2035 through the two mechanisms of compute scaling and recursive improvement. The report argues that the evidence presented for several plausible TAI scenarios motivates preparing for short timelines.

2025: Ongoing initiatives

Continuing into 2025, our largest current initiative is The AGI Social Contract; other ongoing initiatives include AI and International Security, AGI is Near, the AI Scenarios Network, and AI Awareness:

  • The AGI Social Contract. There is a lack of concrete proposals for a post-TAI economy. In 2025 we aim to continue building on the foundations laid by the Threshold 2030 conference. This work is highly neglected, with no concrete policy proposals or organizations working on improving economic outcomes in a post-AGI era, yet it will almost certainly be directly impactful in the near term, with implications for billions of human lives. We will continue to produce foundational research and build the field of post-AGI economics.
  • AI and International Security. Analyzing mechanisms to avoid risks from national securitisation and competitive escalation of AI capabilities. Currently we are facilitating research projects on this topic for a small group of junior researchers in coordination with the Supervised Program for Alignment Research (SPAR). This work follows from The Manhattan Trap, published in January 2025.
  • AI Scenarios Network. We are continuing to nurture this network of 30+ experts in AI strategy, and to build the field of AI scenario planning.
  • AGI is Near. In 2025, we may be close to the precipice of AGI. How do we actually succeed in such worlds? Since ChatGPT's release in 2022, AI has advanced rapidly, with reasoning models now demonstrating some expert-level capabilities in technical fields and problem-solving. AGI or TAI may arrive within 5 years, and some prominent AI lab leaders predict even shorter timelines. If AGI arrives within 5 years, this fundamentally changes the playing field: “standard” approaches, such as normal academic research, advocacy, and think tank operations, may no longer be effective in a world under transformation. We are currently launching exploratory research in this area.
  • AI Awareness. In 2024, our AI Awareness work confirmed that the public is curious but widely undereducated about AI developments. Consequently, public support for AI safety work is limited and individuals cannot make informed decisions about their own futures. As these technologies cross certain thresholds, we anticipate major public backlashes. We consider informing the public a neglected and potentially impactful area of AI safety work. Ongoing initiatives:
    • Building a God promotion – Engage with the media and hold a book tour starting in May 2025, with speaking appearances, interviews, and town hall discussions across major cities in the US and Canada. We will film events to create additional content for our podcast and social media platforms, and may collaborate with recognized science communicators.
    • All Thinks Considered podcast – Produce new episodes that explain complex AI concepts and developments for general audiences, through conversations with a diverse set of experts.

Funding gaps and opportunities

Convergence's 2025 budget is $875,000, with funding set to run out in June 2025. We need $440,000 to continue operations through year-end and another $440,000 to build a six-month reserve.
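These figures follow from simple runway arithmetic; as a minimal sketch, assuming the $875,000 budget is spent roughly evenly across the year:

\[
\text{monthly burn} \approx \frac{\$875\text{k}}{12} \approx \$73\text{k},
\qquad
6 \times \$73\text{k} \approx \$440\text{k}.
\]

Six further months of operations (July through December 2025) and a six-month reserve therefore each come to roughly $440,000.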

Beyond these immediate needs, we see some strong opportunities for growing our impact with additional team members:

  1. Communications Director: Our team produces research efficiently, and a dedicated communications specialist would amplify the impact of that research.
  2. Fundraising Specialist: Recruiting a fundraising specialist would allow our other staff to focus on core research activities and would diversify our funding sources beyond traditional x-risk/EA funders, likely recovering the cost of the hire and more.
  3. Expanded operations team: Our current ops team is very small, and a single hire here would unlock more productive hours across the entire organization.
  4. Expanded research team: We are confident we can effectively double our team size, allowing us to cover more neglected research areas at greater depth.

Three funding scenarios

Below are three simplified funding scenarios: (1) Base, for sustaining operations; (2) Moderate Growth, for growing the team by roughly 50%; and (3) Strategic Growth, for more than doubling the team size.

Base: $880,000

This baseline funding would enable Convergence to maintain our current team of 9 members for an additional 12 months beyond our current runway, into July 2026.

Moderate Growth: $1,850,000

With this increased funding, Convergence would add 5 team members and extend our runway into January 2027. This scenario represents a balanced near-term growth trajectory:

  • 1 Communications Director to develop comprehensive strategies to ensure our findings influence key decision-makers
  • 1 Fundraising Specialist to diversify our funding sources and reduce our reliance on EA funding
  • 1 Senior Researcher
  • 2 Researchers
  • 18 months operational runway

Strategic Growth: $4,150,000

This scenario would position Convergence to scale sustainably, adding 11 team members (more than doubling our current size) and extending our runway into January 2028.

  • 1 Communications Director to develop comprehensive strategies to ensure our findings influence key decision-makers
  • 1 Fundraising Specialist to diversify our funding sources and reduce our reliance on EA funding
  • 2 Senior Researchers
  • 5 Researchers
  • 2 Operations staff
  • 30 months operational runway

The case for funding Convergence

This year our small team has achieved significant impact on AI safety through field-defining work, regulatory influence, and cross-sector engagement, on an annual budget of $950k. In 2025 we are improving project prioritization, outreach, and efficiency to further boost our impact, and as of March 2025 we are off to a strong start to the year with five major publications released. With additional resources to address our funding gaps and opportunities, we believe we can significantly scale up our impact. If you are able to help support our projects, please get in touch at funding@convergenceanalysis.org. We accept major credit cards, PayPal, Venmo, bank transfers, cryptocurrency donations, and stock transfers.

Conclusion

In 2024 we launched a new research institute, starting the year with a new team of 8 and facing some hard challenges as a research institute for x-risk reduction and future flourishing: Can we combine efficiency with deep intellectual research? Big-picture research with actionable research? Open academic inquiry with the focus of a startup? And, in the end, how do we have a positive impact on x-risk? Judging by the outcomes of the past year, we think we've had some very promising successes.

In 2025, we are continuing our work on The AGI Social Contract and other initiatives, orienting ourselves for a world rapidly approaching transformative AI and building further on our proven research model to make a greater positive impact.

Thank you to all our collaborators and supporters; we wouldn't be where we are without your help!

We are fundraising! Please get in touch if you are interested in supporting our work or in partnering with us.

Learn more here:

