MichaelA

Senior Research Manager @ Rethink Priorities; also guest fund manager @ the EA Infrastructure Fund
Working (0-5 years experience)
11886 · Oxford, UK · Joined Dec 2018

Bio

I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. You can give me anonymous feedback here.

With Rethink, I'm mostly focused on co-leading our AI Governance & Strategy team. I also do some nuclear risk research, give input on Rethink's Generalist Longtermism team's work, and do random other stuff.

Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis.

I also post to LessWrong sometimes.

If you think you or I could benefit from us talking, feel free to message me! You might also want to check out my post "Interested in EA/longtermist research careers? Here are my top recommended resources".

Sequences (3)

Moral uncertainty
Risks from Nuclear Weapons
Improving the EA-aligned research pipeline

Comments (2457)

Topic Contributions (793)

...and while I hopefully have your attention: My team is currently hiring for a Research Manager! If you might be interested in managing one or more researchers working on a diverse set of issues relevant to mitigating extreme risks from the development and deployment of AI, please check out the job ad!

The application form should take <2 hours. The deadline is the end of the day on March 21. The role is remote and we're able to hire in most countries.

People with a wide range of backgrounds could turn out to be the best fit for the role. As such, if you're interested, please don't rule yourself out due to thinking you're not qualified unless you at least read the job ad first!

Riesgos Catastróficos Globales

Our mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world. 

There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising. [Quote from Introducing the new Riesgos Catastróficos Globales team]

Epoch

We’re a team of researchers investigating and forecasting the development of advanced AI.

International Center for Future Generations

The International Center for Future Generations is a European think-and-do-tank for improving societal resilience in relation to exponential technologies and existential risks.

As of today, their website lists their priorities as:

  • Climate crisis
  • Technology [including AI] and democracy
  • Biosecurity

I appreciate you sharing this additional info and reflections, Julia. 

I notice you mention being friends with Owen, but, as far as I can tell, the post, your comment, and other comments don't highlight that Owen was on the board of (what's now called) EV UK when you learned about this incident and tried to figure out how to deal with it, and EV UK was the umbrella organization hosting the org (CEA) that was employing you (including specifically for this work).[1] This seems to me like a key potential conflict of interest, and like it may have warranted someone outside CEA being looped in to decide what to do about this incident. At first glance, I feel confused about this not having been mentioned in these comments. I'd be curious to hear whether you explicitly thought about that when you were thinking about this incident in 2021?

That is, if I understand correctly, in some sense Owen had a key position of authority in an organization that in turn technically had authority over the organization you worked at. That said, my rough impression from the outside is that, prior to November 2022, the umbrella organization in practice exerted little influence over what the organizations it hosted did. So this conflict of interest was probably in practice weaker than it would've looked on paper. But it still seems noteworthy.

More generally, this makes me realise that it seems like it would be valuable for the community health team to:

  • have a standard protocol for dealing with reports/incidents related to leadership or board members at CEA itself, EV UK, and EV US
    • And perhaps also to other staff at those orgs, and senior staff at any funder providing these orgs with (say) >10% of their funding (which I'd guess might just be Open Phil?)
  • have that protocol try to reduce reliance on the community health team's own judgment/actions in those cases
    • Probably meaning finding someone similarly suited to this kind of work but who sits outside of those lines of authority, who can deal with the small minority of cases that this protocol applies to. Or perhaps multiple people, each handling a different subset of cases.

(I'm not saying this should extend to the other orgs EV UK / EV US host, e.g. GWWC or 80k, just CEA and the umbrella orgs themselves.)

I'd be curious to hear whether such a thing is already in place, and if so what it looks like.

Caveats in a footnote. [2]

(I wrote this just in a personal capacity. I didn't run this by anyone.)

  1. ^

    I'm not sure if this terminology is exactly right. I'm drawing on the post CEA Disambiguation.

  2. ^

    • I'm certainly not an expert on how these sorts of things should be handled.
    • I think your team has a tricky job that has to involve many tradeoffs.
    • I think it's probably disproportionately common for the times when your actions were followed by bad outcomes (even if that wasn't caused by your action, or was you making a good bet but getting unlucky) to become visible and salient.
    • I think there are likely many considerations I'm missing.
    • I didn't saliently notice worries or ideas about how the community health team should handle various conflicts of interest prior to November 2022, and didn't saliently notice the question of what to do about incidents relating to senior staff at CEA / EV UK / EV US until this morning, and of course things tend to be easier to spot in hindsight. (OTOH I just hadn't spent much time thinking about the community health team at all, since it wasn't very relevant to my job.)

Harvard AI Safety Team (HAIST), MIT AI Alignment (MAIA), and Cambridge Boston Alignment Initiative (CBAI)

These are three distinct but somewhat overlapping field-building initiatives. More info at Update on Harvard AI Safety Team and MIT AI Alignment and at the things that post links to.

Labour for the Long Term

Is Britain prepared for the challenges ahead?
We face significant risks, from climate change to pandemics, to digital transformation and geopolitical tensions. We need social-democratic answers to create a fair and resilient future.

Our vision
A leading role for the UK
Many long-term issues have an important political dimension in which the UK can play a leading role. Building on the work of previous Labour governments, we see a future where the UK can play a larger role in areas such as reducing international tensions and becoming a world leader in green technology.

Policy Foundry

an Australia-based organisation dedicated to developing high-quality and detailed policy proposals for the greatest challenges of the 21st century. [source]

The Collective Intelligence Project

We are an incubator for new governance models for transformative technology.

Our goal: To overcome the transformative technology trilemma.

Existing tech governance approaches fall prey to the transformative technology trilemma. They assume significant trade-offs between progress, participation, and safety.

Market-forward builders tend to sacrifice safety for progress; risk-averse technocrats tend to sacrifice participation for safety; participation-centered democrats tend to sacrifice progress for participation.

Collective flourishing requires all three. We need CI R&D so we can simultaneously advance technological capabilities, prevent disproportionate risks, and enable individual and collective self-determination.

Just remembered that Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration was written and published after I initially drafted this, so the post Will and I wrote doesn't draw on or reference it, but it's of course relevant too.
