MichaelA

Senior Research Manager @ Rethink Priorities; also guest fund manager @ the EA Infrastructure Fund
Working (0-5 years experience)
11,441 karma · Oxford, UK · Joined Dec 2018

Bio

I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. You can give me anonymous feedback here.

With Rethink, I'm mostly focused on co-leading our AI Governance & Strategy team. I also do some nuclear risk research, give input on Rethink's Generalist Longtermism team's work, and do random other stuff.

Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis.

I also post to LessWrong sometimes.

If you think you or I could benefit from us talking, feel free to message me! You might also want to check out my post "Interested in EA/longtermist research careers? Here are my top recommended resources".

Sequences (3)

Moral uncertainty
Risks from Nuclear Weapons
Improving the EA-aligned research pipeline

Comments: 2,442

Topic Contributions: 790

Also Cavendish Labs:

Cavendish Labs is a 501(c)(3) nonprofit research organization dedicated to solving the most important and neglected scientific problems of our age.

We're founding a research community in Cavendish, Vermont that's focused primarily on AI safety and pandemic prevention, although we’re interested in all avenues of effective research.


(I wrote this comment in a personal capacity, intending only to reflect my own views / knowledge.)

Hi,

In 2021, the EA Infrastructure Fund (which is not CEA, though both are supported and fiscally sponsored by Effective Ventures) made a grant for preparatory work toward potentially creating a COVID-related documentary.[1] I was the guest fund manager who recommended that grant. When I saw this post, I guessed it was probably related to that grant and to things I said, and I’ve now confirmed that.

This post does not match my memory of what happened or what I intended to communicate, so I'll clarify a few things:

  • The EA Infrastructure Fund is not CEA, and I’m just one of its (unpaid, guest) fund managers. So what I said shouldn’t be treated as “CEA’s view”. 
  • The EAIF provided this grant in response to an application the grantseekers made, rather than “commissioning” it. 
  • When evaluating this grant, I consulted an advisor from the COVID forecasting space and another from the biosecurity space. They both flagged one of the people mentioned in the title of this post as seeming maybe unwise to highlight in this documentary. 
    • But I don’t recall this having been about that person having disagreed with authorities. 
      • Instead, my notes confirm that one of the advisors solely mentioned that that person’s work hadn’t been very impactful, since it was principally used just by the EA and rationality communities for their own benefit (rather than having impact at a larger scale). 
      • And my rough memory (but I lack clear notes in this case) is that the other advisor basically thought this person’s work had made overly strong claims overly confidently and that this didn’t seem like good epistemics to highlight. (As opposed to the worry being that the person was overly divisive or controversial.)
    • I passed a summarized version of those thoughts on to the grantseekers, along with various other bits of tentative feedback/advice.
    • I didn’t make the grant conditional on excluding anyone. And if I recall correctly, I didn’t feel very confident about whether and how to feature specific people in the documentary, and I explicitly communicated that I was giving various bits of input merely as input and that I wanted the grantees to make their own decisions. 
      • (Though this post makes me think that I failed to adequately emphasize that this was just input, and that therefore in this case it may have been better to not give the input at all. I now intend to adjust my future communications in light of that.) 
  • I don’t immediately recognize the other two names in the title of this post, and couldn’t find any comments from me about those people in my main grant evaluation notes doc or a few other related docs. So I don’t know why they’re mentioned in the title or screenshot. 

[1] The grant is described in one of EAIF’s public payout reports. But it doesn’t seem productive to name the grantees here. 

(EDIT: I wrote this and hit publish before seeing that Rachel had also commented shortly beforehand. Her comment does not match my memory of events in a few ways beyond what I noted in this comment. I might say more on that later, but I'd guess it's not very productive to discuss this further here. Regardless, as noted in my comment, it does seem to me that in this case I failed to adequately emphasize that my input was intended just as input, and I regret that.)

Minor (yet long-winded!) comment: FWIW, I think that:

  • Rohin's comment seems useful
  • Stephen's and your rebuttal also seems useful
  • Stephen's and your rebuttal does seem relevant to what Rohin said even with his caveat included, rather than replying to a strawman
  • But the phrasing of your latest comment[1] feels to me overconfident, or somewhat like it's aiming at rhetorical effect rather than just sharing data and inferences, or somewhat soldier-mindset-y
    • In particular, personally I dislike the use of "110%", "maximally", and maybe "emphatically".
    • My intended vibe here isn't "how dare you" or "this is a huge deal". 
      • I'm not at all annoyed at you for writing that way, I (think I) can understand why you did (I think you're genuinely confident in your view and feel you already explained it, and want to indicate that?), and I think your tone in this comment is significantly less important than your post itself. 
    • But I do want to convey that I think debates and epistemics on the Forum will typically be better if people avoid adding such flourishes/absolutes/emphatic-ness in situations like this (e.g., where the writing shouldn't be optimized for engagingness or persuasion but rather for collaborative truth-seeking, and where the disagreed-with position isn't just totally crazy/irrelevant). And I guess what I’d prefer pushing toward is a mindset of curiosity about what’s causing the disagreement and openness to one’s own view also shifting.

(I should flag that I didn't read the post very carefully, haven't read all the comments, and haven't formed a stable/confident view on this topic. Also I'm currently sleep-deprived and expect my reasoning isn't super clear unfortunately.)

  1. ^

    I also think the comment is overconfident in substance, but that's something that happens often in productive debates, and I think that cost is worth paying and hard to totally avoid if we want productive debates to happen.


(Update: I've now made this entry.)

Publication norms

I haven't checked how many relevant posts there are, but I'd guess 2-10 quite relevant and somewhat notable posts? 

Related entries

proliferation | AI governance | AI forecasting | [probably some other things]

Also the Forecasting Research Institute

The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good. 

[...] our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.

[...] Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forecasting space.

Also School of Thinking

School of Thinking (SoT) is a media startup.

Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT's website. 

Our priority is to produce content in English and Italian, but we will cover more languages down the line. We have been funded by the Effective Altruism Infrastructure Fund (EAIF) and the FTX Future Fund.

Sometime after writing this, I saw that Asya Bergal wrote an overlapping list of downsides here:

"I do think projects interacting with policymakers have substantial room for downside, including:

  • Pushing policies that are harmful
  • Making key issues partisan
  • Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with
  • “Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project"

Types of downside risks of longtermism-relevant policy, field-building, and comms work [quick notes]

I wrote this quickly, as part of a set of quickly written things I wanted to share with a few Cambridge Existential Risk Initiative fellows. This is mostly aggregating ideas that are already floating around. The doc version of this shortform is here, and I'll probably occasionally update that but not this.

"Here’s my quick list of what seem to me like the main downside risks of longtermism-relevant policy work, field-building (esp. in new areas), and large-scale communications.

  1. Locking in bad policies
  2. Information hazards (primarily attention hazards)
  3. Advancing some risky R&D areas (e.g., some AI hardware things, some biotech) via things other than infohazards
    • e.g., via providing better resources for upskilling in some areas, or via making some areas seem more exciting
  4. Polarizing / making partisan some important policies, ideas, or communities 
  5. Making a bad first impression in some communities / poisoning the well
  6. Causing some sticky yet suboptimal framings or memes to become prominent 
    • Ways they could be suboptimal: inaccurate, misleading, focusing attention on the wrong things, unappealing
    • By “sticky” I mean that, once these framings/memes are prominent, it’s hard to change that
  7. Drawing more attention/players to some topics, and thereby making it less the case that we’re operating in a niche field and can have an outsized influence

Feel free to let me know if you’re not sure what I mean by any of these, or if you think it’d be worthwhile for us to chat more about these things. 

Also bear in mind the unilateralist's curse.

None of this means people shouldn’t do policy stuff or large-scale communications. Some policy work should definitely happen already, and more should happen over time. These are just things to be aware of so you can avoid doing harmful things and can tweak net-positive things to be even more net positive by patching the downsides.

See also Hard-to-reverse decisions destroy option value and Adding important nuances to "preserve option value" arguments"

EU AI Act and/or NIST AI Risk Management Framework

These are quite separate, but I mention them together because they're both specific pieces of upcoming AI policy that I think many experts think are pretty important. It's pretty unclear to me whether we should have entries for these two specific things and for things like this in general. 

  • There are several posts focused on or touching on each of these things, and it seems nice to have a way to collect them. 
  • But maybe if we had entries for each piece of policy that's roughly this important within each major EA cause area, that'd be dozens of entries, which might be too many?