Summary: "This report highlights a set of approaches that, in concert, will collectively enable us to confront tech power. Some of these are bold policy reforms that underscore the need for bright-line rules and structural curbs. Others identify popular policy responses that, because they fail to meaningfully address power discrepancies, should be abandoned. Several aren’t in the traditional domain of policy at all, but acknowledge the importance of nonregulatory interventions such as collective action, worker organizing, and the role public policy can play in bolstering these efforts. We intend this report to provide strategic guidance to inform the work ahead of us, taking a bird’s eye view of the many levers we can use to shape the future trajectory of AI – and the tech industry behind it – to ensure that it is the public, not industry, that this technology serves."

I came across this report via this Vox article, which referred to it as "a roadmap that specifies exactly which steps policymakers can take [that's] refreshingly pragmatic and actionable". I skimmed the executive summary, and the general approach seems interesting: tackling the AI problem from an anti-monopoly standpoint, arguing that big tech has an outsized role in directing where these very important developments are headed. Below are their four "strategic priorities":

  1. Place the burden on companies to affirmatively demonstrate that they are not doing harm, rather than on the public and regulators to continually investigate, identify, and find solutions for harms after they occur.
  2. Break down silos across policy areas, so we’re better prepared to address cases where the advancement of one policy agenda impacts others. Firms exploit this isolation to their advantage.
  3. Identify when policy approaches get co-opted and hollowed out by industry, and pivot our strategies accordingly. 
  4. Move beyond a narrow focus on legislative and policy levers and embrace a broad-based theory of change.

They also give some tangible ways we can work towards regulation. Below are their seven specific suggestions for change:

  1. Unwind tech firms’ data advantage.
  2. Reform competition law and enforcement such that they can more capably reduce tech industry concentration.
  3. Regulate ChatGPT, Bard, and other large-scale models.
  4. Displace audits as the primary policy response to harmful AI.
  5. Future-proof against the quiet expansion of biometric surveillance into new domains like cars.
  6. Enact strong curbs on worker surveillance.
  7. Prevent “international preemption” by digital trade agreements that can be used to weaken national regulation on algorithmic accountability and competition policy.

But given that the report focuses on some topics tangential or unrelated to AI x-risk, I wonder how easy, or even possible, it will be to bootstrap specific AI x-risk protections onto this project and framework. Or might these proposed changes turn out to be convergent with a reasonable, more near-term-focused intermediary step towards AI x-risk regulation? Also, what are people's thoughts on the organization behind this (the AI Now Institute), given that it has previously been funded by DeepMind?
