
Is there an AI ethics committee within EA that individuals can privately and securely consult about potentially sensitive AI safety issues?

Consider the following two cases:

  • #whistleblowing An ML researcher at a big tech company comes upon information about a potentially dangerous AI algorithm. The researcher recognizes it as a large capability jump that does not seem to be accompanied by appropriate safety measures. Informing the police is not an option: there is high uncertainty about the implications (no imminent danger), so this probably would not be recognized as a crime or threat. Publishing the information is also not an option, as the algorithm is a proprietary business secret. Whistleblowing is possible, but the algorithm might itself be an info hazard.
  • #infohazard An ML researcher comes up with an idea in the AI space that could have a clearly positive and potentially large impact on humanity, but is uncertain about its further implications and side effects. The researcher would like to discuss the idea's viability and possible negative side effects with the community, but is afraid the idea could be an info hazard.

In both cases, humanity would greatly benefit if the researcher could consult an independent group of individuals who are proficient in AI safety and trusted by the community not to reveal or exploit info hazards and business secrets. Even without any legal authority to render judgment, such a group would be valuable simply as a safe venue to discuss potential negative implications and whether something is in fact an info hazard. Does something like this exist in EA? If not, who would be good candidates for such a group?
