
Crossposted from LessWrong.

AI labs sometimes publicly ask for input on their actions, their products, and the future of AI. See OpenAI's Democratic inputs to AI program and ChatGPT Feedback Contest, and perhaps bug bounty programs for security vulnerabilities (OpenAI, Google, Meta). I'd like to collect these; please reply with other examples you're aware of. I'm also interested in ideas and recommendations for what labs should request input on and how they should do so (e.g., a bug bounty for model outputs).

Labs also seek input non-publicly. For example, labs have used external red-teaming and model evals, worked with biosecurity experts to understand how near-future AI systems could contribute to the development of biological weapons, and consulted external forecasters. Various kinds of external audits have also been proposed. I'm interested in collecting examples of and ideas for this non-public input as well, but less so.
