From the report:

The FDA model offers a powerful lesson in optimizing regulatory design for information production, rather than just product safety. This is urgently needed for AI, given the lack of clarity about market participants and the structural opacity of AI development and deployment.

→ The FDA has catalyzed and organized an entire field of expertise, enhancing our understanding of pharmaceuticals and creating and disseminating expertise across stakeholders, far beyond understanding incidents in isolation. AI is markedly opaque by contrast: mapping the ecosystem of companies and actors involved in AI development (and thus subject to any accountability or safety interventions) is a challenging task absent regulatory intervention.

→ This information production function is particularly important for AI, a domain where the difficulty, even impossibility, of interpretability and explainability remains a pressing challenge for the field, and where key players in the market are incentivized against transparency. Over time, the FDA's interventions have expanded the public's understanding of how drugs work by ensuring firms invest in research and documentation to comply with a mandate to do so. Prior to the existence of the agency, much of the pharmaceutical industry was largely opaque, in ways that bear similarities to the AI market.

→ Many specific aspects of information exchange in the FDA model offer lessons for thinking about AI regulation. For example, in the context of pharmaceuticals, there is a focus on multi-stakeholder communication that requires ongoing information exchange between FDA staff, expert panels, patients, and drug developers. Drug developers are mandated to submit troves of internal documentation, which the FDA reformats for the public.

→ The FDA-managed databases of adverse incidents, clinical trials, and guidance documentation also offer key insights for AI incident reporting (an active field of research). They may motivate shifts in the AI development process, encouraging beneficial infrastructure for increasing transparency of deployment and clearer documentation.

Comments

I am confused. It seems like this report does not acknowledge that the FDA should, by most reasonable perspectives, be considered a pretty major failure, responsible for enormous harms to economic productivity and innovation.

Like, I think it's reasonable to disagree with that take and to think the FDA is good, but somehow completely ignoring the question of FDA efficacy seems kinda crazy. 

The report focuses on preventing harms to people using or affected by the technology.

It uses the FDA's premarket approval mandate and other processes as examples of what could be applied to AI.

Restrictions on economic productivity and innovation are a fair point of discussion. I have my own views on this: generally, I think the market neglects the negative asymmetry around new scalable products being able to do massive harm. I'm glad the FDA exists to counteract that.

The FDA's slow response in ramping up COVID vaccines during the pandemic is questionable though, to give one example. I get the sense there are a lot of problems with bureaucracy and also industry capture at the FDA.

The report does not focus on that though.
