AMA: Markus Anderljung (PM at GovAI, FHI)

That's exciting to hear! Is your plan still to head into EU politics for this reason? (not sure I'm remembering correctly!)

To make it maximally helpful, you'd work with someone at FHI in putting it together. You could consider applying for the GovAI Fellowship once we open up applications. If that's not possible (we do get a lot more good applications than we're able to take on), getting plenty of steer / feedback seems helpful (you can feel free to send it my way). I would recommend spending a significant amount of time making sure the piece is clearly written, so that someone can quickly grasp what you're saying and whether it will be relevant to their interests.

AI Governance: Opportunity and Theory of Impact

It definitely seems true that if I want to specifically figure out what to do with scenario a), studying how AI might affect structural inequality shouldn't be my first port of call. But it's not clear to me that this means we shouldn't have the two problems under the same umbrella term. In my mind, it mainly means we ought to start defining sub-fields with time.

AI Governance: Opportunity and Theory of Impact
A first guess at what might be meant by AI governance is "all the non-technical stuff that we need to sort out regarding AI risk". Wonder if that's close to the mark?

A great first guess! It's basically my favourite definition, though negative definitions probably aren't all that satisfactory either.

We can make it more precise by saying (I'm not sure what the origin of this one is, it might be Jade Leung or Allan Dafoe):

AI governance has a descriptive part, focusing on the context and institutions that shape the incentives and behaviours of developers and users of AI, and a normative part, asking how we should navigate a transition to a world of advanced artificial intelligence.

It's not quite the definition we want, but it's a bit closer.

AMA: Markus Anderljung (PM at GovAI, FHI)

It's a little hard to say, because it will largely depend on who we end up hiring. Taking into account the person's skills and interests, we will split up my current work portfolio (and maybe add some new things into the mix as well). That portfolio currently includes:

  • Operations: Taking care of our finances (including some grant reporting, budgeting, fundraising) and making sure we can spend our funds on what we want (e.g. setting up contracts, sorting out visas). It also includes things like setting up our new office and maintaining our website. A lot of our administrative / operations tasks are supported by central staff at FHI, which is great.
  • Team management: Making sure everyone on the team is doing well and helping improve their productivity. This includes organising team meetings and events, and having regular check-ins with everyone.
  • Recruitment: This includes seeing our various hiring efforts through to fruition, such as those that are currently ongoing, but also helping onboard and support folks once they join. For example, I've spent time supervising a few of our GovAI Fellows as well as Summer Research Fellows. It also includes being on the lookout for and cultivating relationships with folks we might want to hire in the future, by bringing them over for visits, having them do talks, etc.
  • Outreach: This can include doing talks and organising various events. Currently we're running a webinar series that I think the new PM would be well-suited to take over responsibility for. In the future, this could mean organising conferences as well.
  • Research management: This includes a lot of activities usually done in collaboration with the rest of the team, ranging from just checking in on research and making sure it's progressing as planned, to giving in-depth feedback and steering, to deciding where and how something should be published, to in some cases co-authoring pieces. This work requires a lot of context and understanding of the field.
  • Policy Engagement: We're starting to put more work into policy engagement, but it's still in its early stages. There's a lot of room to do more. Currently, this primarily consists of scanning for opportunities that seem particularly high value and engaging in those. In the future, I'd like us to become more proactive, e.g. defining some clear policy goals and figuring out how to increase the chance they're realised.
  • Strategy: Working with Allan and the rest of the team to decide what we should be spending our time on.

I think the most likely thing is that the person will start by working on things like operations, team management, recruitment, and helping organise events. As they absorb more context and develop a better understanding of the AI governance space, they'll take on more responsibility in other areas such as policy engagement, research management, recruitment, strategy, or other new projects we identify.

AMA: Markus Anderljung (PM at GovAI, FHI)

Unfortunately, I'm not on that selection committee, and so don't have that detailed insight. I do know that there were quite a lot of applications this year, so it wouldn't surprise me if the tight deadlines originally set end up slipping a little.

I'd suggest you email:

AMA: Markus Anderljung (PM at GovAI, FHI)

Probably there are a bunch more useful traits I haven't pointed to.

AMA: Markus Anderljung (PM at GovAI, FHI)

Thanks, Jia!

Could you say more about the different skills and traits relevant to research project management?

Understanding the research: Probably the most important factor is that you're able to understand the research. This entails knowing how it connects to adjacent questions / fields and having well thought-out models of the importance of the research. Ideally, the research manager is someone who could contribute, at least to some extent, to the research they're helping manage. This requires a decent amount of context, often gained by spending a significant amount of time reading the relevant research and talking to the relevant people.

Common sense & wide expertise: One way in which you can help as a research manager is often to suggest how the research relates to work by others, and so having decently wide intellectual interests is useful. You also want to have a decent amount of common sense to help make decisions about things like where something should be published and what ways a research project could go wrong.

Relevant epistemic virtues: Just like a researcher, it seems important to have incorporated epistemic virtues like calibration, humility, and other truth-seeking behaviours. As a research manager, you might be the main person that needs to communicate these virtues to new potential researchers.

People skills: Seems very important. Being able to do things like helping people become better researchers by getting to know what motivates them, what tends to block them, etc. Also being able to deal with potential conflicts and sensitive situations that can arise in research collaborations.

Inclination: I think there's a certain kind of inclination that's helpful for research management. You're excited about dabbling in a lot of different questions, more so than putting your head down and figuring out one question in depth. You're perhaps better at providing ideas, structure, conceptual framing, and feedback than at doing the nitty-gritty of producing all the research yourself. You also probably need to be fine with being more of a background figure, letting the researchers shine.

AI Governance: Opportunity and Theory of Impact

I'll drop in my 2c.

AI governance is a fairly nascent field. As the field grows and we build up our understanding of it, people will likely specialise in sub-parts of the problem. But for now, I think there's benefit to having this broad category, for a few reasons:

  • There's a decent overlap in expertise needed to address these questions. By thinking about the first, I'll probably build up knowledge and intuitions that will be applicable to the second. For example, I might want to think about how previous powerful technologies such as nuclear weapons came to be developed and deployed.
  • I don't think we currently know what problems within AI governance are most pressing. Once we do, it seems prudent to specialise more.

This doesn't mean you shouldn't think of problems of type a and b separately. You probably should.

AMA: Markus Anderljung (PM at GovAI, FHI)

Thanks for the question, Lukas.

I think you're right. My view is probably stronger than this. I'll focus on some reasons in favour of specialisation.

I think your ability to carry out a role keeps increasing for several years, but the rate of improvement presumably tapers off with time. However, the relationship between skill in a role and your impact is less clear. It seems plausible that there could be threshold effects and the like, such that even though your skill doesn't keep increasing at the same rate, the impact you have in the role could keep increasing at the same or an even higher rate. This seems, for example, to be the case with research: it's much better to produce the very best piece on one topic than to produce 5 mediocre pieces on different topics. You could imagine that the same thing happens with organisations.

One important consideration - especially early in your career - is how staying in one role for a long time affects your career capital. The fewer competitive organisations there are in the space where you're aiming to build career capital and the narrower the career capital you want to build (e.g. because you are aiming to work on a particular cause or in a particular type of role), the less frequently changing roles makes sense.

There's also the consideration of what happens when we coordinate. In the ideal scenario, more coordination in terms of careers should mean people try to build more narrow career capital, which means that they'd hop around less between different roles. I liked this post by Denise Melchin from a while back on this topic.

It's also plausible that you get a lot of the gains from specialisation not from staying in the same role, but primarily from staying in the same field or in the same organisation. And so, you can have your growth and still get the gains from specialisation by staying in the same org or field while growing your responsibilities (this can also happen within a single role).
