Hi everyone,

FHI is hiring researchers. The job details are below; please send them to anyone you think might be qualified and interested!

---

The Future of Humanity Institute at the University of Oxford invites applications for four research positions. We seek outstanding applicants with backgrounds that could include computer science, mathematics, economics, technology policy, and/or philosophy.

The Future of Humanity Institute is a leading research centre in the University of Oxford looking at big-picture questions for human civilization. We seek to focus our work where we can make the greatest positive difference. Our researchers regularly collaborate with governments from around the world and key industry groups working on artificial intelligence. To read more about the institute’s research activities, please see http://www.fhi.ox.ac.uk/research/research-areas/.

1. Research Fellow – AI – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121242). We are seeking expertise in the technical aspects of AI safety, including a solid understanding of present-day academic and industrial research frontiers, of machine learning development, and of the relevant academic and industry stakeholders and groups. The fellow is expected to have the knowledge and skills to advance the state of the art in proposed solutions to the “control problem.” This person should have a technical background, for example in computer science, mathematics, or statistics. Candidates with a very strong machine learning or mathematics background are encouraged to apply even if they have no prior experience with AI safety topics, provided they are willing to switch to this subfield. Applications are due by noon on 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1M11RbY.

2. Research Fellow – AI Policy – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121241). We are looking for someone with expertise relevant to assessing the socio-economic and strategic impacts of future technologies, identifying key issues and potential risks, and rigorously analysing policy options for responding to these challenges. This person might have a background in economics, political science, another social science, or risk analysis. Applications are due by noon on 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1OfWd7Q.

3. Research Fellow – AI Strategy – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121168). We are looking for someone with a multidisciplinary science, technology, or philosophy background and outstanding analytical ability. The post-holder will investigate, understand, and analyse the capabilities and plausibility of theoretically feasible but not yet fully developed technologies that could impact AI development, and relate such analysis to broader strategic and systemic issues. The academic background of the post-holder is unspecified, but could involve, for example, computer science or economics. Applications are due by noon on 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1jM5Pic.

4. Research Fellow – ERC UnPrEDICT Programme, Future of Humanity Institute (Vacancy ID# 121313). The Research Fellow will work on the new European Research Council-funded UnPrEDICT (Uncertainty and Precaution: Ethical Decisions Involving Catastrophic Threats) programme, hosted by the Future of Humanity Institute at the University of Oxford. This is a research position for a strong generalist, and will focus on topics related to existential risk, model uncertainty, the precautionary principle, and other principles for handling technological progress. In particular, the research fellow will help to develop decision procedures for navigating empirical uncertainties related to existential risk, including information hazards and situations where model or structural uncertainty is the dominant form of uncertainty. The research could take a decision-theoretic approach, although this is not strictly necessary. We also expect the candidate to engage with research on specific existential risks, possibly including developing a framework to evaluate uncertain risks in the context of nuclear weapons, climate risks, dual-use biotechnology, and/or the development of future artificial intelligence. The successful candidate must demonstrate evidence of, or the potential for producing, outstanding research in the areas relevant to the project, the ability to integrate interdisciplinary research in philosophy, mathematics, and/or economics, and familiarity with both normative and empirical issues surrounding existential risk. Applications are due by noon on 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1HSCKgP.

Candidates should normally have completed a Ph.D. by the time of appointment, but exceptions will be made for particularly promising candidates.

Alternatively, please visit http://www.fhi.ox.ac.uk/vacancies/ or https://www.recruit.ox.ac.uk/ and search using the above vacancy IDs for more details.


Comments (6)

This is only tangentially related, but: the 80,000 Hours guide to AI risk research said that the field is talent- rather than funding-constrained. Is that true for FHI as well? Like Michael said, you are only hiring people with PhDs, and you will have too many applicants to be able to respond to each one, so I'm still reluctant to pursue it as a career route.

Overall we are far more talent-constrained than funding-constrained. The UK x-risk ecosystem has managed to get something like $10m from non-EA sources, and recruitment is now our biggest bottleneck. As mentioned below, we do hire people without PhDs for our 'postdoc' positions.

Why are you hiring postdoctoral candidates only? Do you believe that having a PhD is typically necessary for strong research skills? Just looking at AI safety, since that's what these positions focus on: MIRI has researchers without graduate degrees, and FLI made some grants to non-PhDs, so it's not obvious that PhDs matter much here.

Hiring postdocs only substantially limits your hiring pool, so I'm curious why you have this restriction.

We do hire people without PhDs—appropriate edits done :)

There is something unusual I might offer, though I'm not sure I'm a match: I see risks other people don't. What should I do with that?

Writing a blog post is one option... I'd read it :)
