It's 3 a.m. in San Francisco's densely packed SoMa neighborhood. A burst of sound echoes up the residential high-rises. Down below, a group of Waymo autonomous vehicles (AVs) are honking at each other as they maneuver to park and recharge. Across the world, 350 people watching a livestream are placing bets on where each car will end up. As the audiences for these regular productions grow, Waymo responds by committing to a fix. After a few tries, the issue is put to rest. It's a prelude to the unforeseen challenges of integrating AI into our daily lives and critical infrastructure.

The Biden administration's AI Executive Order directed the Department of Homeland Security to establish the AI Safety and Security Board (AISSB) to advise on responses to incidents involving AI in critical infrastructure. The AISSB should establish practices for tracking AI incidents in critical infrastructure with the appropriate degree of transparency. Doing so would achieve at least two benefits:

  1. Companies that leverage AI will be incentivized to improve their services.
  2. Knowledge of potential AI risks can be shared across the industry.

Regulatory and public scrutiny of companies that apply AI in critical infrastructure incentivizes them to be transparent and honest about incidents caused by the behavior of their AI systems. Compare the safety transparency practices of Waymo and Cruise, two competing AV companies. In October 2023, a pedestrian became trapped under a Cruise robotaxi. The vehicle briefly stopped but then pulled over, dragging the pedestrian 20 feet. Cruise did not initially disclose the pullover maneuver to California regulators, and when the omission was discovered, the California DMV suspended Cruise's robotaxi permits. In contrast, Waymo publishes detailed reports and raw data on all collisions involving its AVs. Waymo has been rewarded for providing a demonstrably safe service with a rapidly growing customer base.

Developing and deploying AI systems in a competitive commercial market is like racing through a minefield: each actor hopes to be first to market, but anyone moving too quickly can cause a disaster for all. Frontier AI companies should be transparent about harmful incidents caused by the behavior of their products in order to alert competitors to mutually destructive risks. For example, AV companies could collaborate to identify and guard against targeted adversarial attacks their models might encounter in operation. Such attacks could prevent an AV from detecting a stop sign that has been altered in ways imperceptible to humans, as the sketch below illustrates. There is precedent for freely sharing vehicle safety features with competitors: Swedish engineer Nils Bohlin developed the three-point seatbelt while working for Volvo in 1959, and Volvo made the patent available to other automakers free of charge. This responsible decision helped ensure the widespread adoption of seatbelts and has saved countless lives.
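To make the threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known technique for crafting the kind of imperceptible perturbations described above. It is illustrative only: the PyTorch classifier, the `load_sign_classifier` helper, and the epsilon value are assumptions for demonstration, not details of any deployed AV perception stack.

```python
# Minimal FGSM sketch (illustrative only; not from any real AV system).
# Assumes a PyTorch image classifier and a batched, labeled input image
# with pixel values in [0, 1], e.g. shape (1, 3, H, W) and label shape (1,).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`.

    `epsilon` caps the per-pixel change, keeping the perturbation
    small enough to be near-imperceptible to a human observer.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss most.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: a sign classifier misreads a perturbed stop sign.
# model = load_sign_classifier()          # assumed helper, for illustration
# adv = fgsm_perturb(model, stop_sign_img, stop_sign_label)
# print(model(adv).argmax())              # may no longer predict "stop"
```

Because epsilon caps the per-pixel change, the altered image can look unchanged to a person while the model's prediction flips. This is exactly why sharing knowledge of attack patterns across companies would help harden the whole sector.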

Full public transparency around the risks of AI deployed in critical infrastructure would increase the vulnerability of these systems by alerting malicious actors to potential attack vectors. In this sensitive context, transparency must be in service of security: some information must be withheld from the general public, and trade secrets must be protected from competitors. The AISSB should ensure these considerations do not prevent AI actors from responsibly disclosing incidents and realizing the benefits described in this article.

The honking Waymo incident shows how AI deployed in the transportation sector can behave in undesirable ways when it encounters unanticipated scenarios. The AISSB must be vigilant in tracking AI incidents in critical infrastructure so that it can anticipate and respond to more serious and harmful risks.

 

Image attribution: https://unsplash.com/photos/a-car-that-is-driving-down-the-street-Qr67ewAPBvY
