I'm very convinced of the Importance and Neglectedness of AI risks.

What are the best resources for getting convinced of the Tractability?

I'm not concerned about many AI Safety projects having ~0 impact; I'm concerned about projects having negative impact (e.g., Thoughts on the impact of RLHF research).

I'm also concerned about many projects having negative impact, but think there are some with robustly positive impact:

  1. Making governments and the public better informed about AI risk, including e.g. what x-safety cultures at AI labs are like, and the true state of alignment progress. Geoffrey Irving is doing this at UK AISI and recruiting, for example.
  2. Trying to think of important new arguments/considerations, for example a new form of AI risk that nobody has considered, or new arguments for some alignment approach being likely or unlikely to succeed. (But take care not to be overconfident or to cause others to be overconfident.)