n99

6 karma · Joined Jun 2022

Comments (2)

I can't speak for the donors, but trying only to prevent AGI doesn't seem like a good plan. We don't know what's required for AGI. It might turn out to be easy to build, in which case robustly preventing it would require restrictions broad enough to cause a lot of collateral damage (to narrow AI and to computing in general). Doing some alignment research is nowhere near as costly, and aligned AI could be useful.

If you ask me a question that I don't want to answer, and saying "I don't think I should answer that" would itself reveal information I don't want to reveal, then I will probably lie.

We could also decline to answer some questions that aren't actually revealing, so that people can't tell which refusals are hiding something. The cost of concealing a few innocuous things seems much lower than the benefit of being trusted.