
This paper was published as a GPI working paper in December 2024.

Abstract

We propose an empirical approach to identify and measure AI-driven shocks based on the co-movements of relevant financial asset prices. For that purpose, we first calculate the common volatility of the share prices of major US AI-relevant companies. We then separate the events that shake this industry alone from those that shake all sectors of economic activity at the same time. For the sample analysed, AI shocks are identified when there are announcements of (mergers and) acquisitions in the AI industry, launches of new products, releases of new versions, and AI-related regulations and policies.

Introduction

The World Economic Forum surveys professionals on the likelihood and impact of global risks every year. According to the global risks perception survey, the top risk most likely to present a material crisis in 2024 on a global scale is extreme weather, closely followed by AI-generated misinformation and disinformation (WEF, 2024).

Artificial intelligence (AI) has the potential to revolutionize various aspects of human life, from healthcare and education to transportation and finance. Its rapid development raises significant concerns about potential risks, which can have global implications given the interconnectedness of our world. There are several reasons why AI risks are important globally.

  1. Economic disruption: AI could automate many tasks, leading to job losses in certain sectors (Acemoglu and Restrepo, 2019; WEF, 2023; Harayama et al., 2021) and to economic inequality, as the benefits of AI may not be distributed evenly, exacerbating existing economic disparities (Gmyrek et al., 2023).
  2. Social and ethical concerns: The collection and use of large amounts of personal data can raise privacy concerns (see, e.g., O’Neil, 2016; IEEE, 2020).
  3. Security and governance: AI can be used to develop more sophisticated cyberattacks, posing a threat to national security and critical infrastructure; AI development can also outpace the development of appropriate regulations and governance frameworks (see, e.g., CSET, 2023; SIPRI, 2019).
  4. Environmental impact: AI systems can be energy-intensive, contributing to climate change, and the mass production of AI hardware can deplete natural resources (Strubell et al., 2019; GPAI, 2021).
  5. Existential risks: The development of superintelligent AI systems that surpass human capabilities could pose an existential threat to humanity if not controlled or aligned with human values (see, e.g., Bostrom, 2014).

Addressing these global AI risks requires international cooperation, ethical guidelines, and ongoing research to ensure that AI is developed and used responsibly. But how can global AI risks be identified and measured empirically and consistently over time?

Some events impact the volatilities of a wide range of assets, asset classes, sectors and countries, with implications for risk management, portfolio allocation and policy-making. Engle and Campos-Martins (2023) define the magnitude of such shocks as global common volatility, or simply COVOL, a broad measure of global financial risk. Because a single factor may not be sufficient to fully capture the structure of the common variation in the volatilities of financial asset returns, Campos-Martins and Engle (2024) introduced an extension of the statistical formulation of COVOL to multiple volatility factors, which allows for clustering structures in the co-movements of global financial volatilities at the industry or regional level. In addition, this work is also motivated by Campos-Martins and Hendry (2024), who identified climate change risk drivers in the global carbon transition by analysing the common volatility of relevant share prices (e.g., of oil and gas companies).
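To fix ideas, here is a stylised sketch of the one-factor COVOL formulation; the notation is illustrative, and the precise specification and estimator are those of Engle and Campos-Martins (2023). Let $e_{it}$ denote the standardised innovation of asset $i$ on day $t$, with unit variance after fitting an asset-specific volatility model. The squared innovations are decomposed multiplicatively as

$$ e_{it}^{2} = x_{it}\,u_{it}^{2}, \qquad x_{it} = s_i\,g_t + (1 - s_i), $$

where $g_t \ge 0$ is the common volatility factor with $\mathrm{E}[g_t]=1$, $s_i \in [0,1]$ is asset $i$'s loading on that factor, and $u_{it}$ is an idiosyncratic shock with unit variance. Days with large fitted $g_t$ are days on which volatility rises across many assets at once; the multi-factor extension replaces $s_i g_t$ with a sum of factor terms, which is what allows an industry-level (e.g., AI) factor to be separated from a purely global one.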

Financial markets are difficult to predict accurately. Large unexpected outcomes are frequent and often common to a wide range of assets across markets. There is strong evidence of international co-movement of asset prices and uncertainty (see, among others, Bekaert et al. (2020), Bollerslev et al. (2014) and Miranda-Agrippino and Rey (2020)). Correlated financial volatilities cannot be explained by traditional factor pricing models (Herskovic et al., 2016), but they can be captured by COVOL (Engle and Campos-Martins, 2023). We use common factors to reduce the dimensionality of financial asset prices worldwide while extracting meaningful information about the driving forces behind AI-driven shocks. The multiplicative decomposition that we consider implies a factor structure for the covariance matrix of squared innovations, to which factor analysis can be applied. Not only should this be more efficient than principal component analysis, it also overcomes the problem of obtaining negative values for the principal components when dealing with variances. Principal component analysis also cannot handle missing values and offers no natural stopping rule for the optimal number of factors. Moreover, such numerical methods are much easier to implement and replicate than, for instance, text-based indicators.
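As a minimal sketch of that last point, the snippet below applies off-the-shelf factor analysis to demeaned squared standardised residuals. The use of scikit-learn, the placeholder data, and the single-factor choice are our illustrative assumptions; the cited papers use their own estimator for the common volatility factor and its loadings.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# e: (T x N) matrix of standardised innovations (unit variance), e.g. the
# residuals of asset-specific GARCH models fitted to daily returns of
# AI-relevant firms.  Placeholder data is used here so the sketch runs.
rng = np.random.default_rng(0)
e = rng.standard_normal((2500, 20))

# The multiplicative decomposition implies a factor structure in the
# *squared* innovations, so factor methods are applied to e**2 - 1
# (which has mean zero when the innovations have unit variance).
x = e ** 2 - 1.0

fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(x)         # (T x 1) proxy for the common volatility factor
loadings = fa.components_.ravel()    # each asset's exposure to that factor

# Trading days with the largest factor scores are candidate dates on which
# volatility moved in common across the whole set of assets.
event_days = np.argsort(-scores.ravel())[:10]
print(event_days)
```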

Textual analysis of newspapers is becoming very popular (see Caldara and Iacoviello (2022) on measuring geopolitical risk, and Ahir et al. (2022), Baker et al. (2016) and Davis (2016) on global uncertainty). Newspapers, however, tend to reflect what people worry might happen rather than what has actually happened. Studies based solely on news reports may thus not capture the materiality of news and shocks. An empirical approach that applies multiplicative volatility factors to financial asset prices provides a systematic analysis of global risks that is consistent over time and measures, in real time, big risks as perceived not only by the press but also by the public, global investors, and policy-makers. When assessing how effectively various methods detect (shifts in) geopolitical risks, Karagozoglu et al. (2022) found that measures derived from asset prices, such as global COVOL, capture changes in geopolitical risk more quickly than textual analysis.

By focussing on particular vocabulary, text-based measures may leave out many events that shake our world. The measure introduced here extends to phenomena such as major political events (e.g., Brexit), global economic occurrences (e.g., COVID-19 and the global financial crisis), climate change (Campos-Martins and Hendry, 2024) or cyberattacks. These are all events of a global nature. If the interest lies in identifying and measuring the magnitude of purely AI-driven shocks, one must control for these other events.

We propose a novel methodology to measure common movements of the AI industry and to identify those driven by unexpected increases in AI risks. The model of global COVOL is applied to the daily share prices of AI-relevant companies in the US. We establish the common events that have moved these AI equity prices at the same time and that have had the greatest impact on the industry over the last two decades.
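A hedged, end-to-end sketch of that pipeline might look as follows. The tickers, the data source (yfinance), the GARCH(1,1) filter, and the crude cross-sectional averaging used to proxy the common factor are all our illustrative assumptions rather than details of the paper, whose COVOL estimator also separates industry-specific from market-wide events.

```python
import numpy as np
import pandas as pd
import yfinance as yf
from arch import arch_model

# Hypothetical set of US AI-relevant tickers (illustrative only).
tickers = ["NVDA", "MSFT", "GOOGL", "AMD", "IBM"]
prices = yf.download(tickers, start="2005-01-01", auto_adjust=True)["Close"]
returns = 100 * np.log(prices).diff().dropna()

# Step 1: standardise each return series with an asset-specific GARCH(1,1),
# so what remains to be explained is the common movement in volatility.
std_resid = {}
for t in tickers:
    res = arch_model(returns[t].dropna(), vol="GARCH", p=1, q=1).fit(disp="off")
    std_resid[t] = res.std_resid
e = pd.DataFrame(std_resid).dropna()

# Step 2: a crude proxy for the common volatility factor, namely the
# cross-sectional mean of squared standardised residuals on each day.
g_proxy = (e ** 2).mean(axis=1)

# Step 3: flag the dates with the largest common-volatility proxy as candidate
# AI shock dates, to be matched against news on M&A, product launches,
# model releases, and AI-related regulation.
candidate_shocks = g_proxy.sort_values(ascending=False).head(20)
print(candidate_shocks)
```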

The paper is organised as follows. In Section 2, we review the related literature. The dataset is presented in Section 4 and the results, including the AI risk index, are shown in Section 4.2. Finally, Section 5 concludes the paper.

Read the rest of the paper

Comments



Executive summary: This paper proposes a new method to measure AI-related risks by analyzing stock price movements of AI companies, finding that major AI shocks correspond to acquisitions, product launches, and regulatory changes.

Key points:

  1. Traditional risk measurement methods like text analysis have limitations; stock price analysis can provide more immediate and objective risk signals
  2. The study uses "common volatility" (COVOL) analysis to identify AI-specific market movements separate from broader market trends
  3. Five key categories of AI risk identified: economic disruption, social/ethical concerns, security/governance issues, environmental impact, and existential risks
  4. Method improves upon existing approaches by filtering out non-AI related global events (like COVID-19) to isolate AI-specific shocks
  5. Approach provides real-time risk assessment reflecting views of investors, public, and policymakers rather than just media coverage


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
