
tl;dr: Lawsuits I found against OpenAI. All are US-based. The first ten focus on copyright.
 

Coders
 1. Joseph Saveri Firm:  overview, complaint

Writers
 2. Joseph Saveri Firm:  overview, complaint
 3. Authors Guild & Alter:  overview, complaint
 4. Nicholas Gage:  overview & complaint

YouTubers
 5. Millette: overview, complaint

Media
 6. New York Times:  overview, complaint
 7. Intercept Media:  overview, complaint
 8. Raw Story & Alternet:  overview, complaint
 9. Denver Post & seven others:  overview, complaint
 10. Center for Investigative Reporting: overview, complaint

Privacy
11. Clarkson Firm:  overview, complaint
12. Glancy Firm:  overview, complaint

Libel
13. Mark Walters:  overview, complaint

Mission betrayal
14. Elon Musk:  overview, complaint
15. Tony Trupia:  overview, complaint


That last lawsuit by a friend of mine has stalled. A few cases were partially dismissed.
Also, a cybersecurity expert filed a complaint with the Polish DPA (technically not a lawsuit).
For lawsuits filed against other AI companies, see this running list.

Most legal actions right now focus on data rights. In the future, I expect many more focused on workers' rights, product liability, and environmental regulations.


If you are interested in funding legal actions outside the US:

  • Three projects I'm collaborating on with creatives, coders, and lawyers.
  • Legal Priorities was almost funded last year to research promising legal directions.
  • European Guild for AI Regulation is making headway but is seriously underfunded.
  • A UK firm wants to sue for workplace malpractice during ChatGPT development. 
     

Folks to follow for legal insights:

  • Luiza Jarovsky, an academic who posts AI court cases and privacy compliance tips
  • Margot Kaminski, an academic who posts about harm-based legal approaches
  • Aaron Moss, a copyright attorney who posts sharp analysis of which suits suck
  • Andres Guadamuz, an academic who posts analysis with a techno-positive bent
  • Neil Turkewitz, a recording industry veteran who posts on law in support of artists
  • Alex Champandard, an ML researcher who revealed CSAM in the largest image dataset
  • Trevor Baylis, a creative professional experienced in suing and winning
     

Manifold also has prediction markets.


Have you been looking into legal actions? If so, I'm curious to hear your thoughts.

Comments



Thanks for making the list Remmelt!

Not sure how important this one is, but Air Canada recently had to comply with a refund policy made up by its own chatbot.

Thanks! It's also a good example of the many complaints now being prepared by individuals.

Obvious point, but it would be neat for someone to write forecasting questions for each one, if there's an easy way of doing so.

Workers' rights usually fall under the umbrella of systematic violations of rights, a term most often associated with human rights. We can use similar pointers and forecast questions/solutions. Some would overlap with data mining and fair use, which are hardly followed. It is not very hard for an average company to see the pivots created by OpenAI's crisis management team. OpenAI research leads say their recent model is trained on "a combination of data that's publicly available as well as data that OpenAI has licensed", but that they can't go into much detail on it.

The last part is no easy feat for anyone to dive into. This conversation came out less than two days ago and seemed quite intentional. We can safely assume that this is going to be the new norm for addressing lawsuits; it is admissible in formal proceedings, after all. It is worth noting that statements like "in some ways, we really see modeling reality as the first step to be able to transcend it" are carefully placed at the end. I don't think anyone would want to deal with them and get stuck in an expensive limbo beyond their control, which OpenAI can afford.

Actually, looks like there is a thirteenth lawsuit that was filed outside the US.

A class-action privacy lawsuit filed in Israel back in April 2023.

Wondering if this is still ongoing: https://www.einpresswire.com/article/630376275/first-class-action-lawsuit-against-openai-the-district-court-in-israel-approved-suing-openai-in-a-class-action-lawsuit
