First of all, I am currently studying Philosophy, Neuroscience and Cognition in my Bachelor's degree in Magdeburg, Germany. After attending EAxOxford last week, and with plans to go to EA London, I am thinking hard about my career planning.
As part of this, I am considering switching to a more technical Bachelor's degree: Artificial Intelligence and Cognitive Science.
One thing that would heavily influence this decision is whether a philosophical/neuroscience/cognitive science/psychology (conceptual) perspective on AI safety would benefit the field more than yet another technical perspective from somebody who studied something technical like Artificial Intelligence.
80,000 Hours says in its technical AI safety career plan that it is possible to contribute to AI safety research from a neuroscience perspective (specifically, computational neuroscience). However, beyond being possible, is it a good idea?
I've seen an EA Forum post on this topic from five years ago supporting the view that it is. I wonder whether anything has changed in the field since then: are there any updates? What do people in the field think?
What would help me the most:
- What is your personal take on this? Do you believe a philosophical/neuroscience/cognitive science perspective would be useful to the field of AI Safety?
- Do you know any people or arguments that strongly suggest this is not the case?
- Do you know anyone who has tried to connect neuroscience to AI safety and either succeeded or failed?
If you know somebody who might have an interesting perspective on this, please let me know!