Eleni_A

384 karmaJoined Jun 2022

Bio

PhD-ing. I think and write about AI safety, cognitive science, history and philosophy of science/technology. 

Sequences
5

Machine Learning For Scientific Discovery
AI Safety Field-Building
Alignment Theory Series
Between pure reason and effectiveness
HPS FOR AI SAFETY

Comments
18

Topic contributions
1

Helpful post, Zach! I think it's more useful and concrete to ask about specific capabilities rather than about AGI/TAI etc., and I'm pushing myself to ask such questions (e.g., when do you expect LLMs that can emulate Richard Feynman-level text?). I also like the generality vs. capability distinction. We already have a generalist (Gato), but we don't consider it to be an AGI (I think).

Answer by Eleni_A · Jan 19, 2023

The quick answer is that doing alignment-related work does not depend on a Philosophy PhD, or any graduate degree, tbh. I'd say, start thinking about what your interests are more specifically; there may then be different paths to impact with or without the degree.
