The OWASP Top 10 [1] is probably the most well-known and recognised reference standard for the most critical web application security risks. The organisation has now started working on a similar list for Large Language Model (LLM) applications.

I'm posting about it here because I think it would be beneficial for AI safety and alignment researchers to be involved, for two reasons:

  1. To provide AI safety and alignment expertise to the security community and standardisation process.
  2. To learn from the cybersecurity community, both about standardisation processes (they have long experience developing these kinds of standards) and about the security mindset and common classes of vulnerabilities.

I have no idea how many people within the AI safety and alignment community are aware of this initiative, but I did not find any reference to it on the Alignment Forum or here on the EA Forum, so I thought I might as well post about it.

More information available here:

- https://owasp.org/www-project-top-10-for-large-language-model-applications/

- https://github.com/OWASP/www-project-top-10-for-large-language-model-applications


  1. ^

    More information about the Open Web Application Security Project (OWASP): https://en.wikipedia.org/wiki/OWASP
