MichaelA

I’m Michael Aird, a Staff Researcher at Rethink Priorities, Research Scholar at the Future of Humanity Institute, and guest manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. You can give me anonymous feedback at this link.

With Rethink, I'm currently mostly working on nuclear risk research. I might in future work on topics related to what I'm calling "Politics, Policy, and Security from a Broad Longtermist Perspective".

Previously, I did longtermist macrostrategy research for Convergence Analysis and then for the Center on Long-Term Risk. More on my background here.

I also post to LessWrong sometimes.

If you think you or I could benefit from us talking, feel free to message me or schedule a call. For people interested in doing EA-related research/writing, testing their fit for that, "getting up to speed" on EA/longtermist topics, or writing for the Forum, I also recommend this post.

Sequences

Moral uncertainty
Risks from Nuclear Weapons
Improving the EA-aligned research pipeline


Comments

An estimate of the value of Metaculus questions

I think this is cool.

Maybe "Changing decisions" should be "Changing other decisions"? Since I think influencing the forecasting event occurs via influencing decisions about that event? 

List of EA funding opportunities

Yeah, I think complementing this with an Airtable would indeed be handy, and I'd be in favour of someone making such an Airtable based on this post (and then maybe giving me edit access as well, so I can help maintain it) :)

An estimate of the value of Metaculus questions

“For Metaculus, another constraint is to have questions that interest forecasters. Interestingness is necessary to build a community around forecasting that may later have a large instrumental value.”

Nitpick: I think interestingness is very helpful but not necessary. Other potential incentives to forecast include the opportunity to be impactful/helpful, status/respect on Metaculus and maybe elsewhere, Metaculus points, getting feedback that improves epistemics, sense of mastery, and money (e.g. tournament prizes).

An estimate of the value of Metaculus questions

A complementary approach to estimating the value of Metaculus questions (focusing just on effects on decisions, not on improving epistemics, vetting potential researchers, etc.) would be to actually ask a bunch of people whether they look at Metaculus questions, whether those questions seem decision-relevant and valuable to them, and whether Metaculus questions have influenced their decisions. This could be similar to the impact survey Rethink does and the impact survey I did last year. See also https://forum.effectivealtruism.org/posts/EYToYzxoe2fxpwYBQ/should-surveys-about-the-quality-impact-of-research-outputs-1

I think Metaculus indeed intends to do something like this soon-ish (iirc, it was mentioned in the recent job ad for an EA Question Author).

An estimate of the value of Metaculus questions

On the other hand, we also have questions such as “Will Israel recognize Palestine by 2070?” or “When will Hong Kong stop being a Special Administrative Region of China?”. These events seem so large as to essentially be non-influenceable, and thus I’d tend to think that their Metaculus questions are not valuable [3].

Key point of this comment: It seems to me a mistake to think forecasting questions are usually useful only if it's feasible to influence whether the asked-about event happens. I think there are just many ways in which our actions can be improved by knowing more about the world's past, present, and likely future states. 

As an analogy, I'd find a map more useful if it notes where booby traps are even if I can't disable the traps (since I can sidestep them), and I'm better able to act in the world if I'm aware that China exists and that its GDP is larger than that of most countries (even though I can't really influence that), and a huge amount of learning is about things that happened previously and yet is still useful.

Your footnote nods to that idea, but treats it as if it's just a special case.

For example, the first of those questions could be relevant to decisions like how much to invest in reducing Israel-Palestine tensions or the chance of a major nuclear weapons buildup by Israel or Iran. 

I also think people influenced directly or indirectly by Metaculus could take actions with substantial leverage over major events, so I'd focus less on "large" and more on "neglectedness / crowdedness". E.g., EAs seem to be some of the biggest players for extreme AI risk, extreme biorisk, and possibly nuclear risk, which are all actually very large in terms of complexity and impact, but are sufficiently uncrowded that a big impact can still be made. 

(Though I do of course agree that questions can differ hugely in decision-relevance, that considering who will be directly or indirectly influenced by the questions matters, and that those questions you highlighted are probably less impactful than e.g. many AI risk or nuclear risk questions on Metaculus.)

An estimate of the value of Metaculus questions

Thanks for this post!

I've been thinking a lot about related topics lately. I haven't written anything very polished yet, but here are some rough things you or readers of this post may find interesting (in descending order of predicted worth-checking-out-ness):

Less directly relevant: 

An estimate of the value of Metaculus questions

I agree with this. I'm planning to write one or more posts of vaguely that type, but I don't know precisely when, and it seems very unlikely I'll cover this 100%. So if someone is interested in doing that, maybe contact me (michael AT rethinkpriorities DOT org), and perhaps we could collaborate or I could give some useful pointers?

Open Philanthropy is seeking proposals for outreach projects

If anyone stumbles upon this later, I imagine they may also be interested in Open Phil's call for course development grant applications: https://www.openphilanthropy.org/focus/other-areas/open-philanthropy-course-development-grants 

This program aims to provide grant support to academics for the development of new university courses (including online courses). At present, we are looking to fund the development of courses on a range of topics that are relevant to certain areas of Open Philanthropy’s grantmaking that form part of our work to improve the long-term future (potential risks from advanced AI, biosecurity and pandemic preparedness, other global catastrophic risks), or to issues that are of cross-cutting relevance to our work. We are primarily looking to fund the development of new courses, but we are also accepting proposals from applicants who are looking for funding to turn courses they have already taught in an in-person setting into freely-available online courses.

Applications are open until further notice and will be assessed on a rolling basis.

Propose and vote on potential EA Wiki entries

(Just wanted to send someone a link to a tag for Social media or something like that, then realised it doesn't exist yet, so I guess I'll bump this thread for a second opinion, and maybe create this in a few days if no one else does)
