In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue to slow AI development down is to make investment in AI less attractive. This could be done by increasing the legal risk associated with incorporating AI in products.

My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over AI products. In the US, companies are at the very least directly responsible for what their chatbots say, and it seems like only a matter of time until a chatbot genuinely harms a user, whether by gaslighting them or by behaving abusively.

A charity could provide legal assistance to victims of AI in seminal cases, similar to how the EFF provides legal assistance in cases related to Internet freedom.

Besides helping the affected person, this would hopefully:

  1. Signal to organizations that giving users access to AI is risky business
  2. Scare away new players in the market
  3. Scare away investors
  4. Give the AI company in question a bad rep, and sway public opinion against AI companies in general
  5. Limit the ventures large organizations would be willing to jump into
  6. Spark policy discussions (e.g. about limiting minors' access to chatbots, which would also limit profits)

All of these things would make AI a worse investment, AI companies a less attractive place to work, etc. I'm not sure it'll make a big difference, but I don't think it's less likely to move the needle than academic work on AI safety.

Comments

Stimulating a legal response to AI misuse sounds like a great direction! The legal field around AI is super-vague right now, so helping to define it properly could be a really good thing. One caveat, though: complaining about chatbot gaslighting can have the opposite effect by creating noise and drawing attention away from more important issues. The other potential problem is that if public deployments of AI are immediately punished, it would only make all AI research even more closed. It would also strengthen the protective mechanisms of the big corporations (the 'antifragility' idea).

My impression is that we need to maximize the strength of the reaction to big fuck-ups from AI use. And those fuck-ups will inevitably follow, as happens with all experimental technologies. So, maybe, focusing on stronger cases?

So, maybe, focusing on stronger cases?

Yes, and seminal cases.

Saw this morning that Eugene Volokh, a well-respected libertarian-leaning law professor who specializes in U.S. free-speech law, and others are working on a law review article about libel lawsuits against developers of LLMs. The post below explains how he asked GPT-4 about someone, got false information claiming that the person had pled guilty to a crime, and got fake quotes attributed to major media outlets:

https://reason.com/volokh/2023/03/17/large-libel-models-chatgpt-4-erroneously-reporting-supposed-felony-pleas-complete-with-made-up-media-quotes/

The linked article argues -- persuasively, in my view -- that Section 230 generally doesn't shield companies like OpenAI from liability for what their chatbots say. But that merely takes away a shield; you still need a sword (a theory of liability) on top of that.

My guess is that, in the absence of legislative action, most US courts will rely significantly on analogies. Some of those are not super-friendly to litigation. Arguably the broadest analogy is to buggy software with security holes that can be exploited and cause damage; I don't think plaintiffs have had much success with those sorts of lawsuits. If there is an intervening human actor, that also can make causation more difficult to establish. Obviously that is all at the 100,000-foot level and off the cuff! To the extent the harmed person is a user of the AI, they may have signed an agreement that limits their ability to sue (whether by waiving certain claims, limiting potential damages, or imposing onerous procedural requirements that mandate private arbitration and preclude class actions).

There are some activities at common law that are treated as ultrahazardous and which impose strict liability on the entity conducting them -- using explosives is the usual example. But I don't think there is a plausible case that using AI in an application right now is similarly ultrahazardous in a way that would justify extending those precedents to AI harm.

bob - I think this is a brilliant idea, and it could be quite effective in slowing down reckless AI development.

For this to be effective, it would require working with experienced lawyers who know the relevant national and international laws and regulations (e.g. in the US, UK, or EU) very well, who understand AI to some degree, and who are creative in seeing ways that new AI systems might inadvertently (or deliberately) violate those laws and regulations. They'd also need to be willing to sue powerful tech companies -- but these tech companies also have very deep pockets, so litigation could be very lucrative for law firms that have the guts to go after them.

For example, in the US, there are HIPAA privacy rules regarding companies accessing private medical information. Any AI system that allows or encourages users to share private medical information (such as asking questions about their symptoms, diseases, medications, or psychiatric issues when using a chatbot) is probably not going to be very well-designed to comply with these HIPAA regulations -- and violating HIPAA is a very serious legal issue.

More generally, any AI system that offers advice to users regarding medical, psychiatric, clinical psychology, legal, or financial matters might be in violation of laws that give various professional guilds a government-regulated monopoly on these services. For example, if a chatbot is basically practicing law without a license, practicing medicine without a license, practicing clinical psychology without a license, or giving financial advice without a license, then the company that created that chatbot might be violating some pretty serious laws. Moreover, the professional guilds have every incentive to protect their turf against AI intrusions that could result in mass unemployment among their guild members. And those guilds have plenty of legal experience suing interlopers who challenge their monopoly. The average small law firm might not be able to effectively challenge Microsoft's corporate legal team that would help defend OpenAI. But the American Medical Association might be ready and willing to challenge Microsoft.

AI companies would also have to be very careful not to violate laws and regulations regarding production of terrorist propaganda, adult pornography (illegal in many countries such as China, India, etc.), child pornography (illegal in most countries), heresy (e.g. violating Sharia law in fundamentalist Muslim countries), etc. I doubt that most devs or managers at OpenAI or DeepMind are thinking very clearly or proactively about how not to fall afoul of state security laws in China, Sharia laws in Pakistan, or even EU privacy laws. But lawyers in each of those countries might realize that American tech companies are rich enough to be worth suing in their own national courts. How long will Microsoft or Google have the stomach for defending their AI subsidiaries in the courts of Beijing, Islamabad, or Brussels?

There are probably dozens of other legal angles for slowing down AI. Insofar as AI systems are getting more general purpose and more globally deployed, the number of ways they might violate laws and regulations across different nations is getting very large, and the legal 'attack surface' that makes AI companies vulnerable to litigation will get larger and larger.

Long story short, rather than focusing on trying to pass new global regulations to limit AI, there are probably thousands of ways that new AI systems will violate existing laws and regulations in different countries. Identifying those, and using them as leverage to slow down dangerous AI developments, might be a very fast, clever, and effective use of EA resources to reduce X risk.

There are definitely a lot of legal angles that AI will implicate, although some of the examples you provided suggest the situation is more mixed:

  • The HIPAA rules don't apply to everyone. See, e.g., 45 C.F.R. § 164.104 (stating the entities to which the HIPAA Privacy Rule applies). If you tell me about your medical condition (not in my capacity as a lawyer), HIPAA doesn't stop me from telling whomever I would like. I don't see how telling a generalized version of ChatGPT is likely to be different.
  • I agree that professional-practice laws will be relevant in the AI context, although I think AI companies know that the real money is in providing services to licensed professionals to super-charge their work and not in providing advice to laypersons. I don't think you can realistically monetize a layperson-directed service without creating some rather significant liability concerns even apart from unauthorized-practice concerns.
  • The foreign law problem you describe is about as old as the global Internet. Companies can and do take steps to avoid doing business in countries where the laws are considered unfriendly. Going after a U.S. tech company in a foreign court often only makes sense if (a) the tech company has assets in the foreign jurisdiction; or (b) a court in a country where the tech company has assets will enforce the foreign court order.  For instance, no U.S. court will enforce a judgment for heresy.

More fundamentally, I don't think it will be OpenAI, etc. who are providing most of these services. They will license their technology to other companies who will actually provide the services, and those companies will not necessarily have the deep pockets. Generally, we don't hold tool manufacturers liable when someone uses their tools to break the law (e.g., Microsoft Windows, Amazon Web Services, a gun). So you'd need to find a legal theory that allowed imputing liability onto the AI company that provided an AI tool to the actual service provider. That may be possible but is not obvious in many cases.

Jason - thanks for these helpful corrections, clarifications, and extensions. 

My comment was rather half-baked, and you've added a lot to think about!

I'm skeptical that this would be cost-effective. Section 230 aside, it is incredibly expensive to litigate in the US. Even if you found a somewhat viable claim (which I'm not sure you would), you would be litigating opposite a company like Microsoft. It would most likely cost millions of dollars to find a good case and pursue it, and then it would be settled quietly. Legally speaking, you probably couldn't be forced to settle (though in some cases you could); practically speaking, it would be very hard if not impossible to pursue a case through trial, and you'd need a willing plaintiff. Settlement agreements often contain confidentiality clauses that would constrain the signaling value of your suit. Judgments would almost certainly be for money damages, not any type of injunctive relief.

All the big tech players have weathered high-profile, billion-dollar lawsuits. It is possible that you could scare some small AI startups with this strategy, but I'm not sure if the juice is worth the squeeze. Best case scenario, some companies might pivot away from the mass market and towards a B2B model. I don't know if this would be good or bad for AI safety.

If you want to keep working on this, you might look to Legal Impact for Chickens as a model for EA impact litigation. Their situation is a bit different though, for reasons I can expand on later if I have time.

Maybe not the most cost-effective thing in the whole world, but possibly still a great project for EAs who already happen to be lawyers and want to contribute their expertise (see organizations like Legal Priorities Project or Legal Impact for Chickens).

This also feels like the kind of thing where EA wouldn't necessarily have to foot the entire bill for an eventual mega-showdown with Microsoft or the like... we could just fund some seminal early cases and figure out what a general "playbook" should look like for creating possibly-winnable lawsuits that would encourage companies to pay more attention to alignment / safety / assessment of their AI systems. Then, other people, profit-motivated by the prospect of a big payout from a giant tech company, would surely be happy to launch their own lawsuits once we'd established enough of a "playbook" for how such cases work.

One important aspect of this project, perhaps, should be trying to craft legal arguments that encourage companies to take useful, potentially-x-risk-mitigating actions in response to lawsuit risk, rather than just coming up with whatever legal arguments will most likely result in a payout.  This could set the tone for the field in an especially helpful direction.

I think your last paragraph hits on a real risk here: litigation response is driven by fear of damages, and will push AI companies' interest in what they call "safety" toward wherever their damages exposure is greatest in the aggregate and/or poses the largest litigation-related existential risk to their company.

If only there were some sort of new technology that could be harnessed to empower millions of ordinary people who will have small legitimate legal grievances against AI companies to file their own suits as self-represented litigants, with documents that are at least good enough to make it past the initial pleading stages . . . .

(not intended as a serious suggestion)

If people do use chatbots to help with pro se litigation, then that opens a possible legal theory of liability against AI companies, namely that AI chatbots (or the companies that run them) are practicing law without a license.

Of course, this could extend to other related licensure violations, such as practicing medicine without a license.

Yes. The definition of "unauthorized practice of law" is murkier and depends more on context than one might think. For instance, I personally used -- and recommend for most people without complex needs -- the Nolo/Quicken WillMaker will-writing software.

On a more serious note, if there were 25 types of small legal harm commonly caused by AI chatbots, writing 25 books on "How to Sue a Chatbot Company For Harm X, Including Sample Pleadings" is probably not going to constitute unauthorized practice.

I haven't thought hard about how good an idea this is, but those interested might like to compare and contrast with ClientEarth.

A timely question. I have seen some recent media coverage about other possible legal theories of liability:

  • The Wall Street Journal ran an opinion piece this week about a theory of libel liability for false and defamatory information produced by AI: ChatGPT Libeled Me. Can I Sue?
  • The Economist published an article this week about the interaction of AI and copyright law, drawing an analogy to the effects of Napster on the market for recorded music and highlighting the lawsuit between Getty Images and Stability AI over data collection: A battle royal is brewing over copyright and AI

@Jason, this seems to be in your area; any thoughts?

It seems plausible to me that legal liability issues could be used to slow down AI development, at least in the West. But that doesn't mean that donating to legal assistance would be a good use of funds. My sense is that there are many plaintiffs armed with plenty of money to fund their own lawsuits, and some of those lawsuits have already happened.

What might be helpful, however, would be amicus briefs from AI alignment, development, or governance organizations, arguing that AI developers should face liability for errors in or misuse of their products. That seems like something that EA funders might want to consider?

Actually, there are many plaintiffs I'm in touch with (especially those representing visual artists, writers, and data workers) who need funds to pay for legal advice and to start class-action lawsuits (given the risk of having to pay court fees if a case is unsuccessful).

amicus briefs from AI alignment, development, or governance organizations, arguing that AI developers should face liability for errors in or misuse of their products.

Sounds like a robustly useful thing to do to create awareness of the product liability issues of buggy spaghetti code.
