EDIT: I'm no longer actively monitoring this post for questions, but I'll still check in periodically.
Hello, I work as a project manager at the Centre for the Governance of AI (GovAI), part of the Future of Humanity Institute (FHI) at the University of Oxford, where I put time into e.g. recruitment, research management, policy engagement, and operations.
FHI and GovAI are hiring for a number of roles. Happy to answer questions about them:
- GovAI is hiring a Project Manager, to work alongside me. Deadline September 30th.
- FHI is hiring researchers, across three levels of seniority and all our research groups (including GovAI). Deadline October 19th.
- The Future of Humanity Foundation, a new organisation aimed at supporting FHI, is hiring a CEO. Deadline September 28th.
- Over the next month or so, we’re likely to open applications for our GovAI Fellowship: a 3-month research stint aimed at helping people get up to speed with AI governance research and test their fit for it, likely starting in January or July 2021.
Relevant things folks at GovAI have published in 2020:
- AI Governance: Opportunity and Theory of Impact, Allan Dafoe
- The Windfall Clause, Cullen O’Keefe & others
- A Guide to Writing the NeurIPS Impact Statement, Carolyn Ashurst & others
- The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?, Toby Shevlane & Allan Dafoe
- Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society, Carina Prunkl & Jess Whittlestone (CFI)
A little more about me:
- At GovAI, I’ve been especially involved with, e.g., our research on public and ML researcher views on AI governance and forecasting (led by Baobao Zhang), the implications of increased data efficiency (led by Aaron Tucker), the NeurIPS Broader Impact Statement Requirement (led by Carolyn Ashurst & Carina Prunkl), our submission on the EU Trustworthy AI White Paper (led by Stefan Torges), and what we can learn about AI governance from the governance of previous powerful technologies.
- Before coming to GovAI in 2018, I worked as the Executive Director of EA Sweden, where I e.g. ran a project promoting representation for future generations (more info here). I’ve also worked as a management consultant at EY Stockholm, and I ran the Giving What We Can: Cambridge group (now EA Cambridge) for a year.
- I was encouraged to write down some of my potentially unusual views to spur discussion. Here are some of them:
- There should be more EA community-building efforts focused on professionals, say people with a few years of experience.
- I think EAs tend to underestimate the value of specialisation. For example, we need more people to become experts in a narrow domain or set of skills and then make that expertise relevant to the wider community. Most of the impact you have in a role comes once you’ve been in it for more than a year.
- There is a vast array of important research that doesn’t get done because people don’t find it interesting enough.
- People should stop using “operations” to mean “not-research”. I’m guilty of this myself, but the term lumps together many different skills and traits, probably leading people to undervalue them.
- Work on EU AI policy is plausibly comparably impactful to work on US policy on the current margin, particularly over the next few years as the EU Commission's White Paper on AI is translated into concrete policy.
- I think the majority of AI risk is structural – as opposed to stemming from malicious use or accidents – e.g. technological unemployment leading to political instability, competitive pressures, or decreased value of labour undermining liberal values.
- Some forms of expertise I'm excited to have more of at GovAI include: institutional design (e.g. how the next Facebook Oversight Board-esque institution should be set up), transforming our research insights into policy proposals (e.g. answering questions like what EU AI policy we should push for, or how a system to monitor compute could be set up), AI forecasting, and relevant bits of history and economics.
Thanks for the question, Lukas.
I think you're right. My view is probably stronger than this. I'll focus on some reasons in favour of specialisation.
I think your ability to carry out a role keeps increasing for several years, but the rate of improvement presumably tapers off over time. However, the relationship between skill in a role and your impact is less clear. It seems plausible that there could be threshold effects and the like, such that even though your skill doesn't keep increasing at the same rate, the impact you have in the role could keep increasing at the same or an even higher rate. This seems to be the case with research, for example: it's much better to produce the very best piece on one topic than to produce five mediocre pieces on different topics. You could imagine the same thing happening within organisations.
One important consideration, especially early in your career, is how staying in one role for a long time affects your career capital. The fewer competitive organisations there are in the space where you're aiming to build career capital, and the narrower the career capital you want to build (e.g. because you're aiming to work on a particular cause or in a particular type of role), the less sense it makes to change roles frequently.
There's also the consideration of what happens when we coordinate. In the ideal scenario, more coordination on careers should mean people try to build narrower career capital, which means they'd hop around less between different roles. I liked this post by Denise Melchin from a while back on this topic.
It's also plausible that you get much of the gain from specialisation not from staying in the same role, but from staying in the same field or the same organisation. If so, you can keep growing and still get the gains from specialisation by staying in the same org or field while expanding your responsibilities (this can also happen within a single role).