EDIT: I'm no longer actively checking this post for questions, but I'm likely to check back periodically.

Hello, I work as a Project Manager at the Centre for the Governance of AI (GovAI), part of the Future of Humanity Institute (FHI), University of Oxford, where I put time into e.g. recruitment, research management, policy engagement, and operations.

FHI and GovAI are hiring for a number of roles. Happy to answer questions about them:

  • GovAI is hiring a Project Manager, to work alongside me. Deadline September 30th.
  • FHI is hiring researchers, across three levels of seniority and all our research groups (including GovAI). Deadline October 19th.
  • The Future of Humanity Foundation, a new organisation aimed at supporting FHI, is hiring a CEO. Deadline September 28th.
  • We’re likely to open applications for our GovAI Fellowship over the next month or so: a 3-month research stint aimed at helping people get up to speed with AI governance research and test their fit for it, likely starting in January or July 2021.

Relevant things folks at GovAI have published in 2020:

A little more about me:

  • At GovAI, I’ve been especially involved e.g. with our research on public and ML researcher views on AI governance and forecasting (led by Baobao Zhang), implications of increased data efficiency (led by Aaron Tucker), the NeurIPS Broader Impact Statement Requirement (led by Carolyn Ashurst & Carina Prunkl), our submission on the EU Trustworthy AI White Paper (led by Stefan Torges), and on what we can learn about AI governance from the governance of previous powerful technologies.
  • Before coming to GovAI in 2018, I worked as the Executive Director of EA Sweden, e.g. running a project promoting representation for future generations (more info here). I’ve also worked as a management consultant at EY Stockholm, and I ran the Giving What We Can: Cambridge group (now EA Cambridge) for a year.
  • I was encouraged to write down some of my potentially unusual views to spur discussion. Here are some of them:
    • There should be more EA community-building efforts focused on professionals, say people with a few years of experience.
    • I think EAs tend to underestimate the value of specialisation. For example, we need more people to become experts in a narrow domain / set of skills and then make those relevant to the wider community. Most of the impact you have in a role comes when you’ve been in it for more than a year.
    • There is a vast array of important research that doesn’t get done because people don’t find it interesting enough.
    • People should stop using “operations” to mean “not-research”. I’m guilty of this myself, but it clumps together many different skills and traits, probably leading to people undervaluing them.
    • Work on EU AI policy is plausibly comparable in impact to work on US policy on the current margin, in particular over the next few years as the EU Commission's White Paper on AI is translated into concrete policy.
    • I think the majority of AI risk is structural – as opposed to stemming from malicious use or accidents – e.g. technological unemployment leading to political instability, competitive pressures, or decreased value of labour undermining liberal values.
    • Some forms of expertise which I'm excited to have more of at GovAI include: institutional design (e.g. how should the next Facebook Oversight Board-esque institution be set up), transforming our research insights into policy proposals (e.g. answering questions like what EU AI Policy we should push for, how a system to monitor compute could be set up), AI forecasting, along with relevant bits of history and economics.

There is something I would really like to know, although it is only tangentially related to the above: how is taking a postdoctoral position at FHI seen compared with other "standard academia" paths? How could it affect future research career options? I am personally more interested in the technical side, but feel free to comment on whatever you find interesting.

And since you mention it: 
What EU AI policy should we push for? And through what mechanisms do you see EU AI policy having a positive impact, compared with the US for example? What ways do you see for technical people to influence AI governance?

Many thanks in advance!

Thanks, Pablo. Excellent questions!

how is taking a postdoctoral position at FHI seen compared with other "standard academia" paths? How could it affect future research career options?

My guess is that for folks who are planning on working on FHI-esque topics in the long term, FHI is a great option. Even if you treat the role as a postdoc, staying for, say, 2 years, I think you could be well set up to continue doing important research at other institutions. Examples of this model include Owain Evans, Jan Leike, and Miles Brundage. Though all of them went on to do research in non-academic orgs, I think their time at FHI would have set them up decently for that.

The other option is planning on staying at FHI for the longer term. For people who turn out to be a good fit, I think that's often an excellent option. Even though these roles are fixed-term for 2 years, we're able to extend contracts beyond that if the person ends up being a good fit, provided we have the funding (which I expect us to have for the next couple of years).

What EU AI policy should we push for?

My overall answer is that I'm not quite sure, but that it seems important to figure out! More reasoning below.

And through what mechanisms do you see EU AI policy having a positive impact, compared with the US for example?

Earlier this year, the EU Commission published its White Paper on AI, laying out a legislative agenda that's likely to be put into practice over the next couple of years. There's reason to think that this legislation will be very influential globally, due to it plausibly being subject to the Brussels Effect. The Brussels Effect (Wikipedia, recent excellent book) is the tendency for EU legislation to have a large effect globally, as it's adopted by companies and/or governments elsewhere. We've seen this effect at work with regard to GDPR, food safety, etc. In brief, the reason this happens is that the EU has a large domestic market that is immovable, which means companies are strongly incentivised to operate in that market. Further, the EU tends to put in place legislation that is much stricter than that of other jurisdictions. As such, companies need to comply with the legislation at least in the EU market. In some cases, it won't make sense for companies to provide different products to different markets, and so they adhere to the EU standard globally. Once this occurs, there's pressure for those companies to push for EU-level standards in other markets, and legislators in other jurisdictions may be tempted to put in place the EU-level standard. However, there are a number of ways one could imagine the Brussels Effect not being at work in the case of AI, which I'd be very keen to see someone start researching.

As such, I believe that the EU is much more likely than the US to determine what legislation tech/AI companies need to adhere to. The US could do the same thing, but it probably won't, because US legislators have a much smaller appetite for legislation than the EU.

What ways do you see for technical people to influence AI governance?

There are lots! I think technical folks can help solve technical problems that will help AI governance, for example following the Cooperative AI agenda, working on verification of AI systems, or otherwise solving bits of the technical AI safety problem (which would make the governance problem easier). They can also be helpful by working more directly on AI governance/policy issues. A technical background will be very useful as a credential (policy makers will take you more seriously) and it seems likely to improve your ability to think through the problems. I also think technical folks are uniquely positioned to help steer the AI researcher community in positive directions, shaping norms in favour of safe AI development and deployment.

Your paragraph on the Brussels effect was remarkably similar to the main research proposal in my FHI research scholar application that I hastily wrote, but didn't finish before the deadline.

The Brussels Effect strikes me as one of the best levers available to Europeans looking to influence global AI governance. It seems to me that better understanding how international law such as the Geneva Conventions came to be will shed light on the importance of diplomatic third parties in negotiations between superpowers.

I have been pursuing this project on my own time, figuring that if I didn't, nobody would. How can I make my output the most useful to someone at FHI wanting to know about this?

That's exciting to hear! Is your plan still to head into EU politics for this reason? (not sure I'm remembering correctly!)

To make it maximally helpful, you'd work with someone at FHI in putting it together. You could consider applying for the GovAI Fellowship once we open up applications. If that's not possible (we do get a lot more good applications than we're able to take on), getting plenty of steer / feedback seems helpful (feel free to send it past me). I would recommend spending a significant amount of time making sure the piece is clearly written, such that someone can quickly grasp what you're saying and whether it will be relevant to their interests.

In addition to Markus' suggestion that you could consider applying to the GovAI Fellowship, you could also consider applying for a researcher role at GovAI. Deadline is October 19th.

(I don't mean to imply that the only way to do this is to be at FHI. I don't believe that that's the case. I just wanted to mention that option, since Markus had mentioned a different position but not that one.)

Thanks for doing this!

There should be more EA community-building efforts focused on professionals, say people with a few years of experience.

Could you say more about which fields / career paths you have in mind?

There is a vast array of important research that doesn’t get done because people don’t find it interesting enough.

You already give some examples later but, again, which fields do you have in mind? Do you think there are any systemic reasons for why certain areas of research aren't considered to be 'interesting' in EA / longtermism (assuming those are the people you have in mind here)?

Could you say more about which fields / career paths you have in mind?

No fields or career paths in particular. But there are some strong reasons for reaching out to people who already have, or are on a good track to having, impact in a field/career path we care about. These people will need a lot less training to be able to contribute, and they will already have been selected for their ability to contribute to the field.

The issue that people point to is that it seems hard to change people's career plans or research agendas once they are already established in a field. E.g. students are much more moveable. I think this is true, but I also think we haven't worked hard enough to find ways of doing this. For example, we currently have a problem where someone with over a decade of experience might not be comfortable with EA groups, because they tend to skew very young.

Things we could try more:

  • Find senior researchers working on topics adjacent to those that seem important and who seem interested in the questions we're asking. Then get them involved by connecting them with an interesting research agenda, excellent collaborators, and potentially funding. This is something that a lot of the academic institutions in the longtermist space (which I know better than other EA causes) are trying out. I'm excited to see how it goes.
  • Introduce more specialist community building. Perhaps this can be done by e.g. hosting more conferences on specific EA-related topics, where researchers already involved in a field can see how their research fits into the topic (this is something e.g. GPI seems to be doing a good job of).

You already give some examples later but, again, which fields do you have in mind?

Some categories that spring to mind:

  • Collecting important data series. One example is Ben Todd's recent work on retention in the EA movement. Another example is data regarding folks' views on AI timelines. I'm part of a group working on a follow-up survey to Grace et al. 2018, where we've resurveyed folks who responded to these questions in 2016. To my knowledge, this is the first time the same people have been asked the same HLMI timeline question twice, with some time in between, even though the first surveys of people's beliefs on the question were done around 2005 (link).
  • Doing rigorous historical case studies. These tend to be fairly informative to my worldview (for better or worse), but I don't know of anyone within the EA community who spends a significant amount of time on them.
  • Thoroughly trying to understand an important actor, e.g. becoming a China specialist with deep understanding of longtermist or animal welfare issues.

Do you think there are any systemic reasons for why certain areas of research aren't considered to be 'interesting' in EA / longtermism (assuming those are the people you have in mind here)?

I think one systematic reason is that people are excited about work that is conceptually hard. A lot of people in the community tend to have philosophical, mathsy minds. This means that e.g. empirical research gets left by the wayside, even though it's not by any means inherently uninteresting work.

Thanks for doing this!

People should stop using “operations” to mean “not-research”. I’m guilty of this myself, but it clumps together many different skills and traits, probably leading to people undervaluing them.

Could you say more about the different skills and traits relevant to research project management?

Thanks, Jia!

Could you say more about the different skills and traits relevant to research project management?

Understanding the research: Probably the most important factor is that you're able to understand the research. This entails knowing how it connects to adjacent questions/fields and having well thought-out models of the importance of the research. Ideally, the research manager is someone who could contribute, at least to some extent, to the research they're helping manage. This requires a decent amount of context, often from having spent a significant amount of time reading the relevant research and talking to the relevant people.

Common sense & wide expertise: One way in which you can help as a research manager is often to suggest how the research relates to work by others, and so having decently wide intellectual interests is useful. You also want to have a decent amount of common sense to help make decisions about things like where something should be published and what ways a research project could go wrong.

Relevant epistemic virtues: Just like a researcher, it seems important to have incorporated epistemic virtues like calibration, humility, and other truth-seeking behaviours. As a research manager, you might be the main person that needs to communicate these virtues to new potential researchers.

People skills: Seems very important. Being able to do things like helping people become better researchers by getting to know what motivates them, what tends to block them, etc. Also being able to deal with potential conflicts and sensitive situations that can arise in research collaborations.

Inclination: I think there's a certain kind of inclination that's helpful for research management. You're excited about dabbling in a lot of different questions, more so than really digging your head down and figuring out one question in depth. You're perhaps better at providing ideas, structure, conceptual framing, and feedback than at doing the nitty-gritty of producing all the research yourself. You also probably need to be fine with being more of a background figure, letting the researchers shine.

Probably there are a bunch more useful traits I haven't pointed to.

Any insights into what constitutes good research management on the levels of (a) a facilitator helping a lab to succeed, and (b) an individual researcher managing himself (and occasional collaborators)?

Thanks Misha!

Not sure I've developed any deep insights yet, but here are some things I find myself telling researchers (and myself) fairly often:

  • Consider a wide range of research ideas. It's easy to get stuck in a local optimum. We often have people write out at least 5 research ideas and rate them on criteria like "fit, importance, excitement, tractability", e.g. when they join as a GovAI Fellow. You should also have a list of research ideas that you periodically look through.
  • Think about what output you're aiming at from the start. It will determine the style, what literature you need to read, length, etc. Reworking a piece from one style to another can often take up to 20 hours.
  • Make outlines often and early. This will help you be clear about what your argument is. Also, start writing earlier than you feel comfortable with. Often, things feel much clearer in your head than they actually are.
  • If you have a high barrier to start writing, find ways to lower it. You can write outlines, dictate, say it out loud to a friend, or set a timer during which you're only allowed to write, not edit.
  • Discuss your ideas with others often. I find that a lot of good thinking, in particular on how to frame something, comes from discussion. Just the act of putting your thoughts into words helps.
  • Clear writing is hugely valuable. A lot of research gets less attention than it deserves because it takes too much effort to parse. Trying to write clearly will also often highlight complexities or problems with your argument that can be hidden in unclear prose.
  • Don't reinvent the wheel. There have been a lot of smart people trying to figure out things about the world before you. Make a proper effort to find their writings and learn from them.
  • Quality over quantity. The impact of research is probably heavy-tailed, and we have okay means to tell the quality of a piece ex ante.
  • If you're just starting out, find ways to work closely with people you can learn from. To do this, it's much easier to support some senior person's current agenda, rather than convincing them to help you on your research ideas. That way, they will very clearly get value out of working with you, and you'll get an inside view into their research intuitions. For example, I think working as a research assistant to a great researcher is one of the best ways you can learn the ropes.
  • As a facilitator of research, there's also a bunch of common sense things you can do: Set deadlines for people, organise talks, connect people with the right people.
  • There's a lot I could say about feedback, but I'll just say some things on how to ensure you actually get the feedback you need. Often this is difficult as the people whose feedback is most valuable to you have a lot of things pulling at their time.
    • Make it easy. Give people instructions about what parts they should focus on, what questions you want them to think about etc. You can also pull out specific questions where you think they have expertise and just send those along.
    • Be accommodating. Some people find it much less costly to give feedback after having listened to you give a talk or prefer giving feedback over a call. Others prefer doing everything via text.

I found this answer very interesting - thanks!

On feedback, I also liked and would recommend these two recent posts:

Thanks Markus.

I read the US public opinion on AI report with interest, and have thought about replicating it in Australia. Do you think having local primary data is relevant for influence?

Do you think the marginal value lies in primary social science research or in aggregation and synthesis (eg rapid and limited systematic review) of existing research on public attitudes and support for general purpose / transformative technologies?

Thanks Alexander. Would be interested to hear how that project proceeds.

I read the US public opinion on AI report with interest, and have thought about replicating it in Australia. Do you think having local primary data is relevant for influence?

I think having more data on public opinion on AI will be useful primarily for understanding the "strategic landscape". In scenarios where AI doesn't look radically different from other tech, it seems likely that the public will be a powerful actor in AI governance. The public was a powerful actor in the history of e.g. nuclear power, nuclear weapons, GMOs, and perhaps the industrial revolution not happening sooner (The Technology Trap makes this argument). Understanding the public's views is therefore important to understanding how AI governance will go. It also seems important to understand how one can shape or use public opinion for the better, though I'm pessimistic about that being a high leverage opportunity.

Do you think the marginal value lies in primary social science research or in aggregation and synthesis (eg rapid and limited systematic review) of existing research on public attitudes and support for general purpose / transformative technologies?

Following on from the above, I think the answer is yes. I'd be particularly keen for this work to try to answer some counterfactual history questions: What would need to have been different for GMO/nuclear to have been more accepted? Was it possible to see the public's resistance in advance?

What will the typical week of the new GovAI Project Manager be like?

It's a little hard to say, because it will largely depend on who we end up hiring. Taking into account the person's skills and interests, we will split up my current work portfolio (and maybe add some new things into the mix as well). That portfolio currently includes:

  • Operations: Taking care of our finances (including some grant reporting, budgeting, fundraising) and making sure we can spend our funds on what we want (e.g. setting up contracts, sorting out visas). It also includes things like setting up our new office and maintaining our website. A lot of our administrative / operations tasks are supported by central staff at FHI, which is great.
  • Team management: Making sure everyone on the team is doing well and helping improve their productivity. This includes organising team meetings and events, and having regular check-ins with everyone.
  • Recruitment: Includes taking our various hiring efforts to fruition, such as those that are currently ongoing, but also helping onboard and support folks once they join. I've for example spent time supervising a few of our GovAI Fellows as well as Summer Research Fellows. It also includes being on the lookout for and cultivating relationships with folks we might want to hire in the future, by bringing them over for visits, having them do talks etc.
  • Outreach: This can include giving talks and organising various events. Currently we're running a webinar series that I think the new PM would be well-suited to take over responsibility for. In the future, this could mean organising conferences as well.
  • Research management: This includes a lot of activities usually done in collaboration with the rest of the team, ranging from just checking in on research and making sure it's progressing as planned, to giving in-depth feedback and steering, to deciding where and how something should be published, to in some cases co-authoring pieces. This work requires a lot of context and understanding of the field.
  • Policy Engagement: We're starting to put more work into policy engagement, but it's still in its early stages. There's a lot of room to do more. Currently, this primarily consists of scanning for opportunities that seem particularly high value and engaging in those. In the future, I'd like us to become more proactive, e.g. defining some clear policy goals and figuring out how to increase the chance they're realised.
  • Strategy: Working with Allan and the rest of the team to decide what we should be spending our time on.

I think the most likely thing is that the person will start by working on things like operations, team management, recruitment, and helping organise events. As they absorb more context and develop a better understanding of the AI governance space, they'll take on more responsibility in other areas such as policy engagement, research management, recruitment, strategy, or other new projects we identify.

Hi Markus! I like the list of unusual views.

I think EAs tend to underestimate the value of specialisation. For example, we need more people to become experts in a narrow domain / set of skills and then make those relevant to the wider community. Most of the impact you have in a role comes when you’ve been in it for more than a year.

I would've expected you to cite the threshold for specialisation as longer than a year; as stated, I think most EAs would agree with the last sentence. Do you think that the gains from specialisation keep accumulating after a year, or do you think that someone switching roles every three years will achieve at least half as much as someone who keeps working in the same role? (This might also depend on how narrowly you define a "role".)

Thanks for the question, Lukas.

I think you're right. My view is probably stronger than this. I'll focus on some reasons in favour of specialisation.

I think your ability to carry out a role keeps increasing for several years, but the rate of improvement presumably tapers off with time. However, the relationship between skill in a role and your impact is less clear. It seems plausible that there could be threshold effects and the like, such that even though your skill doesn't keep increasing at the same rate, the impact you have in the role could keep increasing at the same or an even higher rate. This seems, for example, to be the case with research. It's much better to produce the very best piece on one topic than to produce 5 mediocre pieces on different topics. You could imagine that the same thing happens with organisations.

One important consideration - especially early in your career - is how staying in one role for a long time affects your career capital. The fewer competitive organisations there are in the space where you're aiming to build career capital and the narrower the career capital you want to build (e.g. because you are aiming to work on a particular cause or in a particular type of role), the less frequently changing roles makes sense.

There's also the consideration of what happens when we coordinate. In the ideal scenario, more coordination in terms of careers should mean people try to build narrower career capital, which means they'd hop around less between different roles. I liked this post by Denise Melchin from a while back on this topic.

It's also plausible that you get a lot of the gains from specialisation not from staying in the same role, but primarily in staying in the same field or in the same organisation. And so, you can have your growth and still get the gains from specialisation by staying in the same org or field but growing your responsibilities (this can also be within one and the same role).

Thanks, that's helpful.

The fewer competitive organisations there are in the space where you're aiming to build career capital and the narrower the career capital you want to build (e.g. because you're unsure about cause prior or because the roles you're aiming at require wide skillsets), the less frequently changing roles makes sense.

Is this a typo? I expect uncertainty about cause prio and requirements of wide skillsets to favor less narrow career capital (and increased benefits of changing roles), not narrower career capital.

It is indeed! Editing the comment. Thanks!

Thanks for doing this! I'll take my chances and ask this question here, even though it is not strictly speaking related to the current job postings: Do you know if FHI has notified the longlisted applicants for the position as a Research Scholar yet? (The deadline for that position was September 14.)

Unfortunately, I'm not on that selection committee, and so don't have that detailed insight. I do know that there were quite a lot of applications this year, so it wouldn't surprise me if the tight deadlines originally set end up slipping a little.

I'd suggest you email: fhijobs@philosophy.ox.ac.uk
