Joined Jun 2022


PhD student in philosophy of science, AI, x-risks, and cognitive science. 


Between pure reason and effectiveness
Cognitive tools for x-risks research


Five types of people on AI risks:

  1. Wants AGI as soon as possible, ignores safety.
  2. Wants AGI, but primarily cares about alignment.
  3. Doesn't understand AGI, or doesn't think it will happen within their lifetime; worries instead about robots taking people's jobs.
  4. Understands AGI, but thinks the timelines are long enough not to worry about it right now.
  5. Doesn't worry about AGI; considers getting locked into our choices and "normal accidents" to be more important, risky, and scary.
Slowing down AI progress?

Thank you, that's great. I'd be keen to start a project on this. If you're interested, please DM me and we can start brainstorming and form a group.