I’m Michael Aird, a Senior Research Manager at Rethink Priorities, Research Scholar at the Future of Humanity Institute, and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. You can give me anonymous feedback at this link.

With Rethink, I'm currently mostly working on nuclear risk research and AI governance & strategy research.

Previously, I did longtermist macrostrategy research for Convergence Analysis and then for the Center on Long-Term Risk. More on my background here.

I also post to LessWrong sometimes.

If you think you or I could benefit from us talking, feel free to message me or schedule a call. For people interested in doing EA-related research/writing, testing their fit for that, "getting up to speed" on EA/longtermist topics, or writing for the Forum, I also recommend this post.



Research idea: Evaluate the IGM economic experts panel

I don't have a particularly informed/informative stance on how useful it'd be, unfortunately. (I guess I can at least say it'd be nice if someone spent a few hours thinking further about it, but that's true of many things.)

Research idea: Evaluate the IGM economic experts panel

Thanks for this post.

Here's a copy of some quick notes I wrote earlier (and don't plan to take any next steps on), in case this is useful to anyone:

  • "I see results from the IGM economics expert panels cited somewhat often, and it seems like that's probably a really useful thing to have.
  • Idea 1: Does anyone know if there are similar things for other disciplines? 
    • E.g., historians, psychologists, international relations scholars?
    • And if there isn't, might it be highly valuable per unit effort to set that up?
    • Obviously lots of individual experts comment on individual things, and there'll sometimes be surveys or the like. But it seems like it could make sense for this sort of thing to be institutionalised in the way the IGM panels are, rather than being ad hoc.
  • Idea 2: Does anyone know if it's possible to submit questions to the IGM panel or other panels (if they exist), or to in some other way get them to respond to questions on topics of particular interest to EAs?
  • (If people think this sort of thing doesn't exist yet but might be worth thinking more about, I might write a quick EA Forum post/question post/shortform on this.)"

Replies from colleagues of mine included:

  • “In the US, the National Academy of Sciences, Engineering and Medicine does expert opinion in a more formal way for relevant issues e.g., https://www.nap.edu/catalog/25665/heritable-human-genome-editing”
  • “You hear a lot in the US from the US Army Corps of Engineers.”
  • “PhilPapers seems like a natural place to expand polls of philosophers.”
  • “I don't know about historians, but wouldn't it be cool if macrohistory/predictive history were more of a thing and they took surveys of their members?”
List of EA funding opportunities

Thanks again for doing that! 

Just in case other commenters were wondering: JJ usefully started this, and then we all mutually agreed to have Effective Thesis take over maintenance of the Airtable, so I've now added to the top of the post an update linking to the latest version of the Airtable. 

Concrete Biosecurity Projects (some of which could be big)

These projects have reasonably good feedback loops (at least compared to most longtermist interventions), making this area a promising proving ground for meta-EA interventions, especially around entrepreneurship. [emphasis added]

Do you mean that these projects would be a promising proving ground for people (esp. entrepreneurial types) who might want to later also do interventions seeking to build/strengthen the EA movement? If so, why?

I would've thought that (a) the biosecurity projects would provide similarly good or better feedback loops for other "object-level" entrepreneurial longtermist projects, and (b) entrepreneurial community building projects themselves would have good feedback loops so might be better proving grounds for further work of the same type?

Or maybe the idea is that meta-EA interventions aimed at getting more skilled longtermist-aligned entrepreneurs could focus on funnelling them into doing the sorts of projects you propose in this post, and then how well that goes could be used to evaluate those meta-EA interventions?

Concrete Biosecurity Projects (some of which could be big)

By "this space", I meant the longtermist biosecurity/biorisk space. As far as I'm aware, the concern was along the lines of "These new people might not be sufficiently cautious about infohazards, so them thinking more about this area in general could be bad", rather than it being tailored to specific projects/areas/focuses the new people might have (and in particular, it wasn't because the people proposed thinking up new biothreats). 

(But I acknowledge that this remains vague, and also this is essentially second-hand info, so people probably shouldn't update strongly in light of it.)

Concrete Biosecurity Projects (some of which could be big)

Thanks, this post seems super useful!

This version of a ‘sentinel system’ is going to be neglected by traditional public health authorities and governments because they won’t be searching for engineered threats designed to elude pathogen-specific detection tools

This sounds a bit too fatalistic to me. It does seem basically guaranteed that these authorities and governments will spend less attention and resources on this than longtermists would ideally like. I imagine you're also right that they'll very much neglect this, like building something far short of what longtermists would like. But that latter point doesn't seem ~guaranteed, and it seems like something we might have a real shot at changing? E.g., via getting aligned people in positions of power, via lobbying, via more public advocacy.

But this isn't my area; I could just be wrong. Also I imagine this is probably just a superficial phrasing thing, not an important disagreement.

Concrete Biosecurity Projects (some of which could be big)

FWIW, I know of a case from just last month where an EA biosecurity person I respect indicated that they or various people they knew had substantial concerns about the possibility of other researchers (who are known to be EA-aligned and are respected by various longtermist stakeholders) entering the space, due to infohazard concerns.

(I'm not saying I think these people should've been concerned or shouldn't have been. I'm also not saying these people would have confidently overall opposed these researchers entering the space. I'm just registering a data point.)

Concrete Biosecurity Projects (some of which could be big)

FWIW, I don't actually know what you mean/believe here and whether it's different to what the post already said, because:

  • The post said "fraction of longtermist effort" but you're saying "share of highly-engaged EAs". Maybe you're thinking the increased share should mostly come from highly engaged EAs who aren't currently focused on longtermist efforts? That could then be consistent with the post.
  • You said "feels reasonable", which doesn't make it clear whether you think this actually should happen, it probably should happen, it's 10% likely it should happen, it shouldn't happen but it wouldn't be unreasonable for it to happen, etc.
A guided cause prioritisation flowchart

I think "Speeding up sustainable progress" is presented here substantially too positively, or more specifically that some very important counterpoints aren't raised but should be. More discussion can be found at https://forum.effectivealtruism.org/tag/speeding-up-development . And I think (from memory) the Greaves & MacAskill paper cited either doesn't mention or argues against a focus on speeding up development?

A guided cause prioritisation flowchart

Thanks for making this! The idea, reasoning, and initial draft all seem promising/reasonable to me. 

Some quick thoughts:

  • I'd think of this project as essentially a distillation project, and I think good executions of such work on important topics where not much distillation has occurred yet tend to be very valuable. I think EA-aligned cause prioritization is an important topic, and that a fair bit of distillation has occurred on this topic (e.g., 80k articles) but that a flowchart approach could probably add a bunch of value if done well and then endorsed/promoted by some people with big followings in EA. So I think my biggest uncertainties would be how well it's executed and how useful the other ways you could use your time are, rather than the basic idea.
  • A related idea would be something like a "cause prioritization quiz". 
    • This could just be an additional/alternative way of presenting the same ideas and info (like the quiz could basically walk you through one path along the flowchart, one step at a time, rather than showing you the whole thing). 
    • 80k made a "problem quiz" that was like this. It can be found here.
      • I'm not sure if they still feel that this is useful
      • I know they took down their "career quiz", but I'm not sure if that was because of the quiz format or just because their views on lots of topics changed since then and they didn't want to take the time to update the quiz in light of that