Senior Research Manager @ Rethink Priorities; also guest fund manager @ the EA Infrastructure Fund
Working (0-5 years experience)
11479 karma · Oxford, UK · Joined Dec 2018


I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. You can give me anonymous feedback here.

With Rethink, I'm mostly focused on co-leading our AI Governance & Strategy team. I also do some nuclear risk research, give input on Rethink's Generalist Longtermism team's work, and do random other stuff.

Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis.

I also post to LessWrong sometimes.

If you think you or I could benefit from us talking, feel free to message me! You might also want to check out my post "Interested in EA/longtermist research careers? Here are my top recommended resources".


Moral uncertainty
Risks from Nuclear Weapons
Improving the EA-aligned research pipeline


Topic Contributions

Rethink Priorities' AI Governance & Strategy team (which I co-lead) has room for more funding. There's some info about our work and the work of RP's other x-risk-focused team* here and elsewhere in that post. One piece of public work by us so far is Understanding the diffusion of large language models: summary. We also have a lot of work that's unfortunately not public, either because it's still in progress or e.g. due to information hazards. I could share some more info via a DM if you want.

We also have yet to release a thorough public overview of the team, but we aim to do so in the coming months.

(*That other team - the General Longtermism team - may also be interested in funding, but I don't want to speak for them. I could probably connect you with them if you want.)

Thanks - I only read this linkpost and Haydn's comment quoting your summary, not the linked post as a whole, but this seems to me like probably useful work.

One nitpick: 

It seems likely to me that the US is currently much more likely to create transformative AI before China, especially under short(ish) timelines (next 5-15 years) - 70%.

I feel like it'd be more useful/clearer to say "It seems x% likely that the US will create transformative AI before China, and y% likely if TAI is developed in short(ish) timelines (next 5-15 years)". Because:

  • At the moment, you're saying it's 70% likely that the US will be "much more likely", i.e. giving a likelihood of a qualitatively stated (hence kind-of vague) likelihood. 
  • And that claim itself seems to be kind-of but not exactly conditioned on short timelines worlds. Or maybe instead it's a 70% chance of the conjunction of "the US is much more likely (not conditioning on timelines)" and "this is especially so if there are short timelines". It's not really clear which one. 
    • And if it's the conjunction, that seems less useful than knowing what odds you assign to each of the two claims separately (see the sketch below).
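To make the distinction concrete, here's a minimal sketch in probability notation (the values x and y and the event shorthand are placeholders, not figures from the post):

\[
P(\text{US develops TAI before China}) = x,
\qquad
P(\text{US develops TAI before China} \mid \text{TAI arrives within 5--15 years}) = y
\]

versus the conjunction reading of the original sentence:

\[
P\big(\text{``US is much more likely to develop TAI first''} \wedge \text{``this is especially so under short timelines''}\big) = 0.7
\]

The first formulation gives a number for each claim; the second gives a single number for a bundle of two claims, one of which is itself a vague qualitative likelihood.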

Thanks, this seems right to me.

Are the survey results shareable yet? Do you have a sense of when they will be? 

Also Cavendish Labs:

Cavendish Labs is a 501(c)(3) nonprofit research organization dedicated to solving the most important and neglected scientific problems of our age.

We're founding a research community in Cavendish, Vermont that's focused primarily on AI safety and pandemic prevention, although we’re interested in all avenues of effective research.


(I wrote this comment in a personal capacity, intending only to reflect my own views / knowledge.)


In 2021, the EA Infrastructure Fund (which is not CEA, though both are supported and fiscally sponsored by Effective Ventures) made a grant for preparatory work toward potentially creating a COVID-related documentary.[1] I was the guest fund manager who recommended that grant. When I saw this post, I guessed the post was probably related to that grant and to things I said, and I’ve now confirmed that.

This post does not match my memory of what happened or what I intended to communicate, so I'll clarify a few things:

  • The EA Infrastructure Fund is not CEA, and I’m just one of its (unpaid, guest) fund managers. So what I said shouldn’t be treated as “CEA’s view”. 
  • The EAIF provided this grant in response to an application the grantseekers made, rather than “commissioning” it. 
  • When evaluating this grant, I consulted an advisor from the COVID forecasting space and another from the biosecurity space. They both flagged one of the people mentioned in the title of this post as seeming maybe unwise to highlight in this documentary. 
    • But I don’t recall this having been about that person having disagreed with authorities. 
      • Instead, my notes confirm that one of the advisors mentioned only that that person’s work hadn’t been very impactful, since it was principally used just by the EA and rationality communities for their own benefit (rather than having impact at a larger scale). 
      • And my rough memory (but I lack clear notes in this case) is that the other advisor basically thought this person’s work had made overly strong claims overly confidently and that this didn’t seem like good epistemics to highlight. (As opposed to the worry being that the person was overly divisive or controversial.)
    • I passed a summarized version of those thoughts on to the grantseekers, along with various other bits of tentative feedback/advice.
    • I didn’t make the grant conditional on excluding anyone. And if I recall correctly, I didn’t feel very confident about whether and how to feature specific people in the documentary, and I explicitly communicated that I was giving various bits of input merely as input and that I wanted the grantees to make their own decisions. 
      • (Though this post makes me think that I failed to adequately emphasize that this was just input, and that therefore in this case it may have been better to not give the input at all. I now intend to adjust my future communications in light of that.) 
  • I don’t immediately recognize the other two names in the title of this post, and couldn’t find any comments from me about those people in my main grant evaluation notes doc or a few other related docs. So I don’t know why they’re mentioned in the title or screenshot. 

[1] The grant is described in one of EAIF’s public payout reports. But it doesn’t seem productive to name the grantees here. 

(EDIT: I wrote this and hit publish before seeing that Rachel had also commented shortly beforehand. Her comment does not match my memory of events in a few ways beyond those I noted in this comment. I might say more on that later, but I'd guess it's not very productive to discuss this further here. Regardless, as noted in my comment, it does seem to me that in this case I failed to adequately emphasize that my input was intended just as input, and I regret that.)

Minor (yet longwinded!) comment: FWIW, I think that:

  • Rohin's comment seems useful
  • Stephen's and your rebuttal also seems useful
  • Stephen's and your rebuttal does seem relevant to what Rohin said even with his caveat included, rather than replying to a strawman
  • But the phrasing of your latest comment[1] feels to me overconfident, or somewhat like it's aiming at rhetorical effect rather than just sharing data and inferences, or somewhat soldier-mindset-y
    •  In particular, personally I dislike the use of "110%", "maximally", and maybe "emphatically".
    • My intended vibe here isn't "how dare you" or "this is a huge deal". 
      • I'm not at all annoyed at you for writing that way, I (think I) can understand why you did (I think you're genuinely confident in your view and feel you already explained it, and want to indicate that?), and I think your tone in this comment is significantly less important than your post itself. 
    • But I do want to convey that I think debates and epistemics on the Forum will typically be better if people avoid adding such flourishes/absolutes/emphatic-ness in situations like this (e.g., where the writing shouldn't be optimized for engagingness or persuasion but rather collaborative truth-seeking, and where the disagreed-with position isn't just totally crazy/irrelevant). And I guess what I’d prefer pushing toward is a mindset of curiosity about what’s causing the disagreement and openness to one’s own view also shifting.

(I should flag that I didn't read the post very carefully, haven't read all the comments, and haven't formed a stable/confident view on this topic. Also I'm currently sleep-deprived and expect my reasoning isn't super clear unfortunately.)

  1. ^

    I also think the comment is overconfident in substance, but that's something that happens often in productive debates, and I think that cost is worth paying and hard to totally avoid if we want productive debates to happen.


(Update: I've now made this entry.)

Publication norms

I haven't checked how many relevant posts there are, but I'd guess 2-10 quite relevant and somewhat notable posts? 

Related entries

proliferation | AI governance | AI forecasting | [probably some other things]

Also the Forecasting Research Institute

The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good. 

[...] our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.

[...] Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forecasting space.

Also School of Thinking

School of Thinking (SoT) is a media startup.

Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT's website. 

Our priority is to produce content in English and Italian, but we will cover more languages down the line. We have been funded by the Effective Altruism Infrastructure Fund (EAIF) and the FTX Future Fund.
