
Among AI companies that employ AI alignment/policy researchers (e.g. DeepMind, OpenAI, Anthropic, Conjecture), which companies make such researchers sign a non-disparagement clause?

Also, what are the details of such non-disparagement clauses? (Do they aim to restrict such researchers indefinitely, even after they leave the company?)

Comments

Would making public the existence of such a clause violate it (or another agreement)?

The Tech Worker Handbook website has more information about Non-Disclosure Agreements (NDAs). It also cautions people against reading the website on a company device:

I do NOT advise accessing this from a company device. Your employer can, and will likely, track visits to a resource like this Handbook.

From Business Insider's review of 36 NDAs in the tech industry:

Some NDAs say explicitly that the confidentiality provisions never sunset, effectively making them lifelong agreements...

More than two-thirds of workers who shared their agreements with Insider said they weren’t exactly sure what the documents prevented them from saying—or whether even sharing them was a violation of the agreement itself.

Well, I'm told that Washington, California, New York, and New Jersey all have laws limiting what such clauses can require, but they probably only protect employees who report crimes, sexual harassment, and that sort of thing. That probably isn't very helpful in the AI alignment field, since I figure most of the risk there comes from companies developing AIs that are extremely powerful but still (for now) legal to develop.
