
Hello! This post (thesis in the title) is a bit of a hot take of mine, so I’m happy to hear pushback and for it to be wrong. It’s also written in a casual style, so that I actually ever publish it. What follows are some reflections I had after trying for several years to answer: “What should social scientists who are primarily interested in reducing risks from AI be doing, specifically?” My main takeaways are the headings (also on the sidebar), so to skim, just read those.

Note: This post is specifically aimed at social science-type EAs (and also neuroscientists[1]) who are most interested in contributing to AI safety. That description is meant as a statement of interests (people who like social science) and goals (contributing to AI safety); it’s not meant to apply outside of the set of people who self-identify into this cluster. I wrote the post because I happen to fall into this cluster (PhD in computational cognitive science, interested in AI safety) and have a lot of latent thoughts about career exploration within it. In particular, EAs with social science BAs will sometimes ask me what options I think they should pursue, and I’d like to pass them this doc about how I’m currently thinking about the space. Some final notes: if you’re not interested in dedicating your career to AI safety, I salute you, and you are not the target of this post! All opinions are my own, and I expect people in the community to disagree with me.

(Note: I previously attempted to figure out research questions at the intersection of AI safety and social science, and this post reflects thoughts I've had since assembling that compilation and working with some students last fall.)

Many thanks to comments from Abby Novick Hoskin, Aaron Scher, TJ, Nora Ammann, Noemi Dreksler, Linch Zhang, Lucius Caviola, Joshua Lewis, Peter Slattery, and Michael Keenan for making this post better; thank you for raising disagreements and agreements! 

 


 

(1) “AI Safety Needs Social Scientists” 

I think this article was describing a new paradigm, one that opens a limited number of roles (0-2 per year?) for people who are approximately top computational / quantitative PhD-level cognitive(/neuro) scientists. This is great, but people often take the article to mean something broader than that based on the title, and I think that’s a misleading interpretation.

In 2019, Geoffrey Irving and Amanda Askell (then at OpenAI) published an article called “AI Safety Needs Social Scientists”. This was great, and the purpose to my eye seemed to be introducing a new paradigm in AI safety that would require hiring for a new role. Specifically, it seemed they were looking to hire approximately PhD-level researchers who’d done a lot of experiments with humans in the past, who could collaborate with machine learning (ML) researchers to help run integrated human / AI experiments. Note, however, that that’s a pretty specific ask: “social science” includes fields like anthropology, sociology, psychology, political science, linguistics, and economics. If I were advertising this position, I’d be looking for “computational / quantitative PhD-level cognitive(/neuro) scientists”, which are academic labels that imply: a researcher who does empirical human experiments, who knows how to program and regularly does data analysis in e.g. Python, who is likely to be familiar with ML paradigms and used to being an interdisciplinary researcher who occasionally publishes in computer science (CS) journals. 

I happen to be one of those! I’m a computational cognitive scientist who did very large-scale human experiments by academia’s lights– I had thousands of people in one of my experiments. I did my PhD with one of the world’s top computational cognitive scientists, at a known university, and I collaborated with computer scientists at UC Berkeley’s CHAI. As far as I know, I’m basically the target population for this job. So: who was hiring for this type of role? 

OpenAI was hiring at the time. Geoffrey Irving moved to DeepMind and seems maybe interested in continuing to pursue this work, but I don’t think they’ve been hiring for this particular role since the OpenAI article. Ought was hiring for similar things at the time. DeepMind is hiring for a role like this (someone to help with human experiments), though I hear they’re done with applications now and they’re hiring for a single job. The number of these types of jobs could grow in the future, both within these specific organizations and through other orgs, but these jobs are likely to be limited and competitive. Which is to say: if you’re a good fit for the role, go for it! But while these roles exist, they are very competitive and rare (0-2 per year?).

Note: someone who applied for the DeepMind job tells me that they did not get asked any questions about human subject research at the early stages, and were instead asked about their background knowledge of neural networks, and to do coding and algorithm interviews. This sounds like what I would expect for this kind of role, but not what most people would expect by default.

That’s what the “AI Safety Needs Social Scientists” article means to me, but I notice that it’s very commonly used among EA posts to mean that… well, AI safety needs social scientists, meaning the whole span of social scientists, at all of the levels. But if you’re a Master’s level economist who hasn’t interfaced with programming languages before, there’s not a job like this for you in AI safety, because there just aren’t that many jobs like this, and it’s a very specific set of skills and expertise. No harm no foul to the article title, since it’s way more pithy than “AI safety needs a few top computational / quantitative PhD-level cognitive(/neuro) scientists for new paradigms like Debate”, but I think people should keep that in mind when referencing that article. 

 

(2) Can social scientists contribute directly to AI safety research? 

[Yes, if they can go straight into technical AI safety research.] and [Yes, if they happen to be a polymath type of person who can contribute to theoretical AI safety research or technical meta-safety research like forecasting.]

Okay, say you’re not gunning for one of the “Run human experiments in collaboration with ML researchers” jobs above. What then? 

[Yes, if they can go straight into technical AI safety research.] 

If you’ve got the skills and interest to apply to one of the OpenAI / DeepMind jobs above, there’s a good chance you’re already collaborating on or working on technical AI safety research. There are a number of interdisciplinary researchers who do what I would call technical AI safety research with a social science focus. Cooperative AI Foundation’s research comes most prominently to mind, and computational cognitive scientists and human-robot interaction (HRI) researchers often have a social science flavor to their technical work. So if you’re basically in a CS department or collaborate a lot with a CS department, but have social science interests, you’re probably going to be able to figure out how to directly do technical AI safety research that interests you.

[Yes, if they happen to be a polymath type of person who can contribute to theoretical AI safety research or technical meta-safety research like forecasting.]

What if you’re not a computer scientist / programming-type social science person, but you spend a lot of time thinking about math, and are really interested in theoretical safety research? What if you’re a polymath-type person who’s in social science because you think it’s interesting, but you’re also interested in theory and pretty mathy on the whole? That’s awesome, and I’m very down with this type of person going directly into theoretical AI safety research or technical meta-safety research like forecasting, since we probably need many more people going into theoretical AI safety. To the extent that the PIBBSS summer research fellowship is attempting to capture this type of person (polymath people, mathematically-inclined theoretical AI alignment researchers who happen to be in social science), I think that’s great.

There’s a tricky subquestion here about whether, if you’re a polymath/generalist-type person, your theoretical work is going to be *better* than if you were a normal math/theory person. This question has implications like “what populations should we try to recruit from”, and I’m personally more on the “just recruit from the math/theory populations, rather than trying to pull interdisciplinary people” end than, for example, the PIBBSS founders are. (I would be more convinced empirically if more of the best researchers had social science backgrounds, and more convinced theoretically if it didn’t seem like pulling people whose main interests are nearer the core of the problem would result in better work near the core of the problem.) But to some extent I don’t think this question matters– at this point, I want all of the people inclined towards theoretical AI safety that we can get (that’s the real funnel), and I don’t care much about their background, so recruitment from any population that has some high probability of great, interested people seems like a fine choice to me.

I am very, very pro people with social science backgrounds going into direct AI safety roles– technical or theoretical. I think that’s what we’re ultimately aiming to do here– community building, for example, doesn’t ground out in anything unless there are people who end up actually doing direct work at some point. This is the best role one can have, in my opinion, with respect to reducing AI existential risks. If you’re one of the social science people who can do direct work in AI safety, I’d stop reading this post here, and I send you all the kudos. 

 

(3) “I still want to do research, but I don’t want to do direct AI safety research. What about meta research directed at AI safety?” 

Meta-research seems good, but I don’t think it can absorb many people who don’t have a clear, decently visionary path to impact in mind.

As an interesting side point, I looked at the above two types of jobs, and I thought: man, running experiments with humans where I’d basically be trying to replicate AI experiments as closely as possible, or doing technical research with a social science flavor, or attempting to do theoretical research where social science is a side quest… those all sound *not fun* to me. They’re not the reason I was drawn to psychology– they’re not really about understanding how humans work, which is the core of why I’m here. Meta-work, on the other hand– that’s understanding humans. That sounds *interesting*.

If you want to do meta-research aimed at helping AI safety, you’ve got some great examples! There’s the classic study: Grace et al. 2017, published by AI Impacts. GovAI and I think Rethink Priorities run surveys of AI researchers, policymakers, and other stakeholders in AI. I also suspect researchers like Lucius Caviola, and other people in the “EA Psychologists” sphere, might end up doing research on AI safety researchers or AI researchers. My work interviewing AI researchers about AI safety arguments is another example of meta-research aimed at AI safety, though it also had a community-building component alongside the research. (There’s impact from the knowledge gained (research), and also impact from engaging each individual researcher on the arguments (community-building).)

I think there’s great work to be done in the meta-research-directed-at-AI-safety space, and there are some excellent people (5-20?) doing it! People can make an academic job out of this, get funded to do the work independently, or do it through an org like GovAI. However, I worry about how many more people this space can absorb. When I tried to collect a list of questions in this space (some more direct theoretical research, some more meta-research), I found it surprisingly difficult, and moreover wasn’t as enthusiastic about the research questions as I was about doing other community-building activities. For example, take surveys, a classic data-collection meta-research technique. To my mind we just don’t need *that* many surveys– the first ones collect a lot of information, and they need to be repeated because things change quickly, but people don’t want to be surveyed constantly, and it often seems like surveys are especially useful when the people running them have a specific plan for what they’re going to do with the information. This could be my lack of creativity, however, and my bias towards thinking that EA could do with more implementation and less research in some places.

As some reviewers of this post pointed out, as AI safety grows we’ll probably want more people doing meta-research than we currently have funding/positions for. Relatedly, having a few academic researchers in the space makes the research more acceptable, so more people can then join (so there’s a community-building component). Another argument is that we’ll always want really good researchers in the space, and to get really good researchers, you probably need a bunch of not-really-good researchers to try. And a final argument is that many people are more optimistic than I am about the range of meta-research roles available right now!

All that said, this post is quite near-term, object-level, “what jobs are available now”-focused, and I’m personally a bit pessimistic about whether I’d put 30 more top people in this space if I had somewhere else to direct them with respect to AI safety goals. But regardless of optimism / pessimism here, I encourage people thinking about meta-research to also consider other alternatives, which might both have better impact and be a better fit for them individually (or might not). As in all things, if you think you’re particularly suited to a meta-research role, then we certainly need more really excellent, visionary people in this space who have a clear path to impact in mind, and I encourage you to go for it. Otherwise, consider further options!

 

(~) Interlude: Inside views

For any of the roles listed in this post, I think that to have impact, people need an internal understanding of AI safety risks (an inside-view model) and a theory of change for how their work will reduce those risks.

At this point, I want to go on a short rant about inside-view models and theory of change. For basically every career I describe in this post, I want the people involved to learn about AI (the AGISF Curriculum main readings are great!) and have internal, “inside-view” models of the arguments and why they personally care or are worried about risks from advanced AI. I also want everyone to have a theory of change– an understanding of how the work that they’re doing is going to lead to some “win condition”, some goal they are aiming towards. The best version of this is doing the work that is *most likely* to lead to your win condition (which can include things like status or learning new skills in addition to reducing existential risk!) compared to all of the other things you could do. This seems very important to me across the board, whether you’re doing technical, theoretical, or policy research, support roles, or community building. The more senior you are in any project / the more ambitious your projects / the larger you expect your impact to be in any of these roles– the more important I think this is.

A reviewer pointed out that it seemed like I was maybe implying that you only needed a theory of change for specific roles in this doc, so I wanted to make it clear that I think it’s very important to have inside view models: especially in pre-paradigmatic spaces, but in any role where you’re expecting to be ambitious or enact leadership. Even at lower levels where you’re not a leader, I think you probably should have enough of a model to choose your employer carefully on impact grounds (in addition to all of the other considerations that go into choosing an employer), since some organizations will produce much more of the impact you want to have than others will. 

 

(4) “Eh, who cares about ‘using’ my background, let’s go into AI governance or policy.” 

This is an attitude I’d love to see, and I think people should definitely look into whether they’re a good fit for this. I don’t know that governance research can absorb many people; policy roles scale better; both seem to have high potential impact to me.

If you care about AI safety, why not go into AI governance or policy? 

You don’t get to “use” your background, but hey, I spent 4 years studying neuroscience and 5 years studying computational cognitive science, and I’m not using most of that domain knowledge, and I’m confident that’s the best decision for my goals. There are a lot of generalizable skills you learn just by trying at things, getting older, and living in the world– those will come with you wherever you go.

One reason I do pay attention to people’s backgrounds, though, is that they often indicate where people’s *interests* lie. I’m obsessed with studying people. I can’t really change that much (I’ve tried) – it’s where the dice fell. And I think AI governance and policy are particularly good things for people to look into, because I’d guess that the type of person who likes social science has some overlap with the type of person who likes policy or governance.

I spent a while looking into policy and it’s not quite my thing, but here’s a starter pack:

I will now issue some broad, personal assessments that aren’t very expert, and mostly stem from reading the above (which you should do instead of trusting me if you’re interested, since it’s been a bit since I looked into this). 

Governance and longtermist AI policy roles seem needed and also very difficult to fill, because they’re in the pre-paradigmatic stage. There were some early calls for “deconfusion” research whose current state I don’t know; some of the work seems hard to get traction on, and I suspect nothing scales well at the moment from the research side. That said, we certainly need more people in it, so if you think you’re a good fit for this kind of “deconfusion” work, or whatever is currently needed in governance that I don’t know about, go forth and try! I’d keep in mind that there are probably very few positions here at the moment, and I don’t think anyone really knows what they’re doing, so you’re likely to be doing a lot of “figuring things out”.

Policy is less pre-paradigmatic, and it seems like there’s encouragement and space for many EAs to go into policy. There are a bunch of personal considerations you should keep in mind for that: How extroverted are you / what’s your orientation towards talking to lots of different types of people? How do you feel about living in DC? How good are you at working on things that are not directly related to the things you care about, with the long-term plan of some bets working out? Stuff like that. Definitely worth looking into to see if you’re a good fit; it seems like there can be a lot of impact here as well.

So my overall take is that if you’re a social science person, I’d check out whether governance or policy would be a good fit, and charge ahead if so. Seems like we need many more people in both and they have high potential for impact. Otherwise, keep looking? 

 

(5) Operations directed at AI safety! 

I’m pro. This is again sort of “moving on from one’s background” in order to secure more impact. Ops can absorb lots of people, which is great. And I like that it makes good things happen (possibly great things, depending on who you work for), which could often be better than doing research that’s not very directed.

See heading. Here’s the 80,000 Hours Job Board link for [operations + assistant / AI], so take a look at some options there! If you’re in the California Bay Area, there was at some point also demand for personal assistants, including for some AI people, and it’s possibly worth trying to get creative here.

 

(6) Other support roles? 

What about coaching (mental health, productivity, 80K)? That may be interesting to social scientists. Organizations like Nonlinear? All great things to look into– I’d try to have as clear a story as possible about how your work would reduce AI risk, while keeping a job that interests you. (Note: I think these are very difficult, competitive jobs to do well. But I mention them as options that may result in more impact with respect to AI safety while being connected to more personal interests.)

See heading.

 

(7) Finally, consider AI safety community building

My personal favorite option. It can absorb a lot of people (as long as community builders *cause direct object level work* to happen; let’s not pyramid-scheme!), has high potential impact, and it may be interesting to social science-type people. 

Unlike many of you, I spent a very long time (5 years?) wandering around the “how do I contribute to AI safety” space as a social scientist before I ran into the concept of community building, and it was obviously the best fit for me out of the things I investigated. There are three reasons why I like it a lot: it has high potential for impact, it fits really well with my interest in understanding people, and it has a lot of flexibility. To elaborate, community building doesn’t refer to one thing, so you can find or make up your own role within it, while still having an impact. (For example: mentoring, teaching, coaching, recruiting, making websites, writing, presenting, tabling, making new programs, running ops for these programs, etc. can all fall under “community building”, and you can choose to do only what you want to.)

I think community building is also a little more generous with impact than e.g. research. To make strong contributions in research, one often has to be exceptional at research, whereas in community-building, one can often create impact by being more average. In both cases one has to make sure one’s aimed at the right problems, of course– I’m reminded here of a post from Holden Karnofsky on the importance of not working too hard and instead being focused on the right problems:

If effective altruists are going to have outsized impact on the world, I think it will be mostly thanks to the unusual questions they’re asking and the unusual goals they’re interested in, not unusual dedication/productivity/virtue. I model myself as "Basically like a hedge fund guy but doing more valuable stuff," not as "A being capable of exceptional output or exceptional sacrifice."

As always, I think that if one wants to make a real impact, one needs an inside-view model and theory of change. And to not act as a player within a pyramid scheme, one has to make sure that one’s community building efforts do in fact result in people *doing object level work* in the end. I personally try to keep track of how many people I counterfactually cause to go into *technical AI safety*– would they have gone anyway, did I speed them up, by how much, etc. However, keeping those in mind, I like that a lot of community building seems to have relatively low downside risk for not-super-ambitious projects, where you’ll learn a bunch of things and maybe it’ll be useful. And then you can scale up to larger, more ambitious projects, which seem awesome and impactful.

Some AI safety-oriented community-building resources (more existential-risk oriented, but they all have AI safety content), if you haven’t seen these yet:


Additional point: this is hard

One of the points I haven’t especially emphasized is that I think all of these positions are difficult to do well. It’s very difficult to be really good at anything, at all, and then on top of that to do EA things– to have an inside view and do ambitious things without running into downside risks, towards a huge and nigh-intractable goal. There’s a reason most of the roles here are very competitive. I think all of the roles here would absorb more “very good people”, the type who will form their own agendas and move forward with new things. I imply things like “doing AI safety via debate is hard, try community building!” but community-building is also hard to do well. (It is less competitive and has a broader range of skill requirements though.)

So this post isn’t saying that it’ll be easy. It’s just saying: for a notably hard thing (reducing risks from AI, which ranges from “basically intractable” to simply “hard” depending on who you talk to), where you’re going to need excellence in some capacity, consider that you’ve got options to explore. People tend to do well at things they’re interested in, so here are some things that I looked into that you might also look into, if you’re social science-leaning, and interested in reducing existential risks from AI. 

And… veering off sideways, but careers are super tricky. It’s really hard to get everything you’d want in a job (stability, flexibility, status, respect, great colleagues, great working environment, money, impact, interesting problems, etc.), and I only really succeeded when I “made up my own job” to some degree, since I couldn’t find something that already existed. To rattle off additional thoughts quickly: all of the options in this post seem good to me; some I think are more competitive / have less demand than others; the options have different impact profiles, and it can be hard to tell what they are, but one can try; there are no shoulds, only options and tradeoffs; and I want you to do what’s good for you. Here are some artifacts of my job search, but yours will be different, and creativity is likely needed across the board. Sending you much empathy and good luck.

My favorite critique of this post: this post is near-term

This post is very near-term, very object-level, very “what jobs are available right now”. That makes sense, since it’s a post that was born out of me looking for a job. It also makes sense given that a subset of my intended audience is young people to whom I’m trying to show my impression of the current job landscape, so they can plan where they’re going. However, my favorite critique of this post so far is from TJ (though I also heard versions of it echoed by three others), who says: “[This post] conflates job security with true demand” and “[This post] assumes that EA job market, and related task decomposition, is efficient.” Perhaps we NEED 100 great social science researchers aimed at meta-research, or at any of the given roles, regardless of what’s going to be funded. I’d certainly want more excellent people in all of these roles, though my prioritization differs based on what I think is most needed at the moment. In any case, the near-term focus of this post is real and something to keep track of.

Conclusion

To summarize: I’m not convinced that most people with a social science background will be able to contribute to AI safety (if that’s their goal) if they pursue a mainline social science path. (2) My expectation is that social scientists are generally not well positioned to help with technical / theoretical AI safety research unless they expand their default skillsets to be more typically-shaped for these roles (which I encourage them to do!). I highly encourage social scientists to try for direct safety research if they can and are interested. (3) Meta-research is a good option, but I think there aren’t many positions for researchers who don’t have a clear and somewhat visionary path to impact in mind. Outside of research, I suggest social scientists interested in reducing existential risk from advanced AI consider a wider option space including (4) AI governance and policy, (5) operations, (6) other support roles, and (7) community building. (Interlude) Further, I think that those looking to reduce existential risk from AI, regardless of their specific role, will substantially increase their potential impact by having an inside view of the problem and a story for why the thing they’re doing will help. (1) As a final point, I think “AI Safety Needs Social Scientists” as a slogan refers to something far more specific than the interpretation usually taken from that article, and people should be aware that there’s less of a “track”, and fewer jobs, for social scientists than that interpretation implies.

More personally, this document developed from me progressively testing my fit for a number of roles that were aimed at reducing AI risks while still being in touch with my social science-flavored interests. During the process, I amassed opinions about the relative demand / impacts of the roles, and those feel like the hot-takes part of this document. I’m very happy to hear details / be disagreed with / have more nuanced discussion take place in the comment section, and am grateful to everyone who’s already left thoughtful comments for me to integrate! I hope that this post encourages more people with social science backgrounds to investigate other opportunities that may be good fits for them, and thanks for reading.

  1. ^

     Sidenote: I think this post also applies to neuroscientists who are interested in contributing to AI safety (who may or may not classify themselves as social scientists). 

    Neuroscience was historically informative to AI and vice versa before the deep learning paradigm took off (see e.g. dopamine reward prediction error in reinforcement learning). However, under the deep learning paradigm, I think having a neuroscience background is not particularly relevant to AI safety unless the person additionally has a lot of AI / computational background, which is the stance I also take towards social science backgrounds in this post. 

    (I expect this stance to be just as or more controversial than the stance around social science backgrounds’ contributions to AI safety. There was a lot of ambient debate about this during my PhD, and I’ve heard things like it during recent interviews with neuroscientists.)

    However, I did want to mention my favorite example of where AI knowledge and neuro knowledge came together to be relevant to theoretical / technical AI safety research (see heading 2 in this post) within the deep learning paradigm, which is mechanistic interpretability! I thought it was very cool to see circuit-level neural electrophysiology methods combined with AI neural networks. That’s the best example I know that feels counterexample-flavored to the overall gist of my post, though I do still think it works within my claims. (Steven Byrnes is probably another example.)

    Final note: cellular/molecular neuroscience, circuit-level neuroscience, cognitive neuroscience, and computational neuroscience are some of the divisions within neuroscience, and the skills in each of these subfields have different levels of applicability to AI. My main point is I don’t think any of these without an AI / computational background will help you contribute much to AI safety, though I expect that most computational neuroscientists and a good subset of cognitive neuroscientists will indeed have AI-relevant computational backgrounds. One can ask me what fields I think would be readily deployed towards AI safety without any AI background, and my answer is: math, physics (because of its closeness to math), maybe philosophy and theoretical economics (game theory, principal-agent problems, etc.)? I expect everyone else without exposure to AI will have to reskill if they’re interested in AI safety, with that being easier if one has a technical background. People just sometimes seem to expect pure neuroscience (absent computational subfields) and social science backgrounds to be unusually useful without further AI grounding, and I’m worried that this is trying to be inclusive when it’s not actually the case that these backgrounds alone are useful.

    Update: A reviewer has pointed out that this resource exists: “‘Brain enthusiasts’ in AI Safety”! I basically agree with all of the content, and in my mind this fits into my heading (2), though I’m probably significantly more pessimistic about their bull case.

Comments

Spicy takes, but I think these are good points people should consider! 

I'm also doing a PhD in Cognitive Neuroscience, and I would strongly agree with your footnote that: 

"Final note: cellular/molecular neuroscience, circuit-level neuroscience, cognitive neuroscience, and computational neuroscience are some of the divisions within neuroscience, and the skills in each of these subfields have different levels of applicability to AI. My main point is I don’t think any of these without an AI / computational background will help you contribute much to AI safety, though I expect that most computational neuroscientists and a good subset of cognitive neuroscientists will indeed have AI-relevant computational backgrounds."

A bunch of people in my program have gone into research at DeepMind. But these were all people who specifically focused on ML and algorithm development in their research. There's a wide swath of cognitive neuroscience, and other neuro sub-disciplines you list, where you can avoid serious ML research. I've spoken to about a dozen EA neuroscientists who didn't focus on ML and have become pretty pessimistic about how their research is useful to AI development/alignment. This is a bummer for EAs who want to use their PhDs to help with AI safety. So please take this into consideration if you're an early stage student considering different career paths!

Thanks for this, Vael! As I said previously, here are some areas of agreement and potential disagreement.

Agreement
 

I generally agree that most people with social science PhDs should look outside of AI safety research, and I like your suggestions for where to look.

I think that the top 10% or so of social science researchers should probably try to do AI safety related research, particularly people who thrive in academic settings or lack movement building skills.

Overall, I’d encourage anyone who was equally good at AI safety SS research and AI safety movement building to choose the latter option. AI safety movement building feels like it probably has higher expected impact to me than AI safety SS research, because I think that movement building is the highest impact ‘instrumental’ cause area, at least until vastly more people know about, understand, and work on the key concepts, arguments, and needs of EA.

Potential disagreement (shared as a total non-expert, to be clear)
 

My intuition is that AI safety research is still relatively undersupplied by social science researchers compared to the ideal. I think the area could, and ideally should, absorb a lot of social science research people over the next 30 years if funding and interest scale as I expect. Maybe 5,000+, if I consider all the organisations and geographies involved. Ideally, as many as possible of these people would be EA aware and aligned.

What might this look like? For instance, in academia, I see work to i) understand if/where creating evidence and interventions might be useful (e.g., interviewing/surveying technical researchers, policymakers and organisational leaders and/or mapping their key behaviours to influences), and to ii) prioritise what is important (e.g., ranking the malleability of different interventions/behaviours), then doing the related research. I also foresee a range of theoretical work around how we can port over concepts and theory from areas such as communication, psychology and sociology to describe, understand and optimise how humans and machines interact. I also expect that there will be a lot of value in coordination to support that work. I expect to see a global distribution of research labs, researchers and projects.

In government and private settings, I foresee social science researchers hired to do more ‘context bound’ work with clear connections to immediate policy decisions: government-embedded research teams who need to understand technical, political and social factors in order to craft effective national policies of different types; organisationally embedded teams engaged to create organisational policies that get value from internal AI, or from engagement with other organisations’ platforms; and lots of work in the military and defence sector.

In support of these areas, I see a lot of social science background people being useful by working as knowledge brokers to effectively translate and communicate ideas between researchers and different types of practitioners (e.g., as marketers, community builders, user researchers, or educators), and also by providing various research support structures (training and curating potential students and research assistants, setting up support infrastructure like panels of potential technical/policy research participants, starting/organising conferences and journals, etc.).

I also suspect that there’s going to be lots needed that I have left out.

Overall, from a behavioural science research perspective (e.g., who needs to do what differently / what is the most important behaviour here / how do we ensure that behaviour happens), there are a lot of different behaviours, audiences, contexts, and interactions, and little to no understanding of the key behaviours, actors, or ideal interventions. If this is life-on-earth-threatening-in-the-near-future stuff, then there is a lot of work to be done across a huge range of areas!

Not sure if this is an actual disagreement, so let me know. It’s useful for me to write up and share regardless, as it underpins some of my movement building plans. Feedback is welcome.

One potential career option if you are interested in both AI safety and also the psychology of judgment and decision making: work in the EA psychology lab with me and Lucius Caviola. We currently have open positions for research assistants and postdocs.  We have job postings here: https://www.eapsychology.org/jobs. What’s more, I have it on Vael’s personal authority that they endorse this use of social science for helping with AI safety. The brief theory of change is something like the following: if the world ends because of AI, there’s a good chance that some people, somewhere along the line, made some pivotal judgment errors that could have been avoided with a better understanding of the kind of judgment errors that are most relevant to AI, AI policy, and AI alignment. We are conducting research on such judgment errors, among other x-risk and EA relevant topics. If you are interested in this kind of thing and are in a career stage where a postdoc or research assistantship would be useful, please apply! 

Thanks for a great post Vael! 

^ Yeah, endorsed! This is work in (3)-- if you've got the skills and interests, going to work with Josh and Lucius seems like an excellent opportunity, and they've got lots of interesting projects lined up.

Thanks for writing this! I think it sells short the option of AI governance research, though, with the claim "You don’t get to “use” your background." I think the good news for many social scientists considering AI governance paths is that there probably are ways to use your background, depending on your background. The field has tons of open questions and could use more economists, political scientists, legal scholars, anthropologists, sociologists, psychologists, and historians to make progress on them!

Thanks levin! I realized before I published that I hadn't gotten nearly enough governance people to review this,  and indeed was hoping I'd get help in the comment section.

I'd thus be excited to hear more. Do you have specific questions / subareas of governance that are appreciably benefited by having a background in "economics, political science, legal studies, anthropology, sociology, psychology, and history" rather than a more generic "generalist"-type background (which can include any of the previous, but doesn't depend on any of them)?

I view the core of this post as trying to push back a bit on inclusive "social scientists are useful!" framings, and instead diving into more specific instances of what kind of jobs and roles are available today that demand specific skills, or alternatively pointing out where I think background isn't actually  key and excellent generalist skills are what are sought.

Great questions, and I guess I agree that generalist skills are probably more important (with one implication being that I'd be less excited about people getting PhDs in these fields than my comment might have implied).

Just as an example, since I'm quite new to the field as well: the project I'm currently working on includes a sub-question that I think an actual economist would be able to make much faster progress on: how does the availability of research talent to top technology firms affect their technological progress?

My impression is that since a lot of important research projects on e.g. ideas for new treaties, historical analogies, military-strategic options seem to similarly break down into sub-questions that vary on how domain-knowledge-demanding they are, social scientists might be able to have an unusual impact working on the more demanding of these sub-questions.

I really enjoyed this post, thank you for writing it. I'm commenting from an AI law and policy centric view, so this comment is mainly aimed at that angle.

I agree with much of your post, but I want to highlight that there is a need for social scientists in some areas of AI Safety research. I have worked on a few projects for the UK government around AI Safety, helping to build legal, regulatory, and mitigation strategies in the AI Safety field. This is often part of an interdisciplinary team. A few of us are usually sociologists, which, with me having a mixed CompSci and Law background, was initially a big change. They were massively useful. I think the importance of understanding human society and how it functions is often woefully underestimated in the AI Safety field. It may or may not have a place in the purely hard-line technical AI Safety area (I'd be the wrong person to ask), but in terms of governance and policy, specialisms such as sociology and economics are very important. If anything, there's a bit of a lack of people with expertise in those areas who also have adequate knowledge of AI. So if there is someone who is, for example, a sociology PhD with a big interest in AI, there are definitely opportunities available.

The hard part is finding them. One of the weird niggles of AI Policy/Governance is that it's heavily network-based, in that you have to build and maintain relationships as a core resource. This means someone starting out without someone there to guide/help can face a real challenge. Another of the downsides is that sometimes (quite rarely) the work/research/projects are secret or under NDA, so people don't always get to talk about the work they did in as much detail when applying for fellowships, jobs, etc.

This is why I think orgs which run fellowships in this area are important - they're a jumpstart on the network element and can help better guide people to new specialisms.



 
