
MichaelA

Senior Research Manager @ Rethink Priorities; also guest fund manager @ the EA Infrastructure Fund
12149 karma · Joined Dec 2018 · Working (0-5 years) · Oxford, UK

Bio

I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. You can give me anonymous feedback here.

With Rethink, I'm mostly focused on co-leading our AI Governance & Strategy team. I also do some nuclear risk research, give input on Rethink's Generalist Longtermism team's work, and do random other stuff.

Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis.

I also post to LessWrong sometimes.

If you think you or I could benefit from us talking, feel free to message me! You might also want to check out my post "Interested in EA/longtermist research careers? Here are my top recommended resources".

Sequences (4)

Nuclear risk research project ideas
Moral uncertainty
Risks from Nuclear Weapons
Improving the EA-aligned research pipeline

Comments: 2473

Topic Contributions: 793

Tentative suggestion: Maybe try to find a way to include info about how much karma the post has near the start of the episode description, in the podcast feed?

Reasoning:

  • This could help in deciding what to listen to, at least for the "all audio" feed. (E.g. I definitely don't have time for even just all AI-related episodes in there.) 
  • It could also lead to herd-like behavior or to ignoring good content that didn't get lots of karma right away. But I think that's outweighed by the above benefit.
  • OTOH this may just be infeasible to do in a non-misleading way, if you put things in the feed soon enough after they're posted that the karma hasn't really stabilized yet* and if it's hard to automatically update the description to reflect karma scores later (see the sketch after this list).
    • *My rough sense is that karma scores are pretty stable after something like 3-7 days - stable enough that something like "karma after 5 days was y" is useful info - but that if you can only show karma scores after e.g. 1 day then that wouldn't be very informative. 
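
(Purely as an illustration of feasibility, not something the Forum team has committed to: a minimal sketch of how an episode description could be regenerated once a post's karma has had ~5 days to stabilize. The GraphQL endpoint, query shape, and field names below are assumptions about the EA Forum API and would need checking against the real schema.)

```python
# Illustrative sketch only: fetch a post's karma once it has had time to
# stabilize (e.g. ~5 days after posting) and prepend it to the podcast
# episode description. The endpoint, query shape, and field names are
# assumptions and may need adjusting against the actual Forum schema.

import requests

FORUM_GRAPHQL_URL = "https://forum.effectivealtruism.org/graphql"  # assumed endpoint


def fetch_post_karma(post_id: str) -> int:
    """Return the post's current karma; 'baseScore' is an assumed field name."""
    query = """
    query($id: String) {
      post(input: {selector: {_id: $id}}) {
        result { baseScore }
      }
    }
    """
    resp = requests.post(
        FORUM_GRAPHQL_URL,
        json={"query": query, "variables": {"id": post_id}},
    )
    resp.raise_for_status()
    return resp.json()["data"]["post"]["result"]["baseScore"]


def build_episode_description(post_id: str, original_description: str,
                              days_waited: int = 5) -> str:
    """Prefix the episode description with 'karma after N days' info."""
    karma = fetch_post_karma(post_id)
    return f"Karma after {days_waited} days: {karma}. " + original_description
```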

Thanks! This seems valuable.

One suggestion: Could the episode titles, or at least the start of the descriptions, say who the author is? 

Reasoning:

  • I think that's often useful context for the post, and also useful info for deciding whether to read it (esp. for the feed where the bar is "just" >30 karma). 
  • I guess there are some upsides to nudging people to decide just based on topic or the start of the episode rather than based on the author's identity. But I think that's outweighed by the above points.

Confido Institute

Hi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.

We design tools, workshops and materials to support this mission. This is the first in a series of EA Forum posts. We will tell you more about our mission and our other projects in future articles.

In this post, we are pleased to announce that we have just released the Confido app, a web-based tool for tracking and sharing probabilistic predictions and estimates. You can use it in strategic decision making when you want a probabilistic estimate on a topic from different stakeholders, in meetings to avoid anchoring, to organize forecasting tournaments, or in calibration workshops and lectures. We offer very high data privacy, so it is also used in government settings. See our demo or request your Confido workspace for free.

The current version of Confido is already used by several organizations, including the Dutch government, several policy think tanks and EA organizations.

Confido is under active development and there is a lot more to come. We’d love to hear your feedback and feature requests. To see news, follow us on Twitter, Facebook or LinkedIn or collaborate with us on Discord. We are also looking for funding. [emphasis added]

Epistea

We are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.

Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems), and HAAISS (Human Aligned AI Summer School).

This seems like a useful topic to raise. Here's a pretty quickly written & unsatisfactory little comment: 

  • I agree that there's room to expand and improve the pipeline to valuable work in AI strategy/governance/policy. 
  • I spend a decent amount of time on that (e.g. via co-leading RP AI Governance & Strategy team, some grantmaking with EA Infrastructure Fund, advising some talent pipeline projects, and giving lots of career advice).
  • If you think you could benefit from me pointing you to some links or people to talk to, or from us having a chat (e.g. if you're running a talent pipeline project or strongly considering doing so), feel free to DM me. 
    • (But heads up that I'm pretty busy so may reply slowly or just with links or suggested people to talk to, depending on how much value I could provide via a chat but not via those other quicker options.)

One specific thing I'll mention in case it's relevant to some people looking at this post: The AI Governance & Strategy team at Rethink Priorities (which I co-lead) is hiring for a Compute Governance Researcher or Research Assistant. The first application stage takes 1hr, and the deadline is June 11. @readers: Please consider applying and/or sharing the role! 

We're hoping to open additional roles sometime around September. One way to be sure you'll be alerted if and when we do is to fill in our registration of interest form.

Nonlinear Support Fund: Productivity grants for people working in AI safety

Get up to $10,000 a year for therapy, coaching, consulting, tutoring, education, or childcare

[...]

You automatically qualify for up to $10,000 a year if:

  • You work full time on something helping with AI safety
    • Technical research
    • Governance
    • Graduate studies
    • Meta (>30% of beneficiaries must work in AI safety)
  • You or your organization received >$40,000 of funding to do the above work from any one of these funders in the last 365 days
  • Your organization does not pay for these services already
  • (Only if you're applying for therapy or child care) You make less than $100,000. There are no income qualifiers for any other services we provide.

As long as you meet these criteria, you will qualify. Funding is given out in the order the applications were received. (For more details about how the fund works and why we made it, read here.)
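
(Illustration only, not an official Nonlinear tool: a minimal sketch of how the qualification rules above compose. The function and parameter names are hypothetical; the thresholds are taken directly from the criteria listed above.)

```python
# Minimal sketch of the qualification rules listed above. Not an official
# tool; names are hypothetical. Thresholds ($40,000 funding bar, $100,000
# income test for therapy/childcare only) come from the post.

INCOME_TESTED_SERVICES = {"therapy", "childcare"}  # only these have an income cap


def qualifies(works_full_time_on_ai_safety: bool,
              funding_from_listed_funder_last_365_days: float,
              org_already_pays_for_service: bool,
              service: str,
              annual_income: float) -> bool:
    """Return True if the applicant meets the stated criteria for `service`."""
    if not works_full_time_on_ai_safety:
        return False
    if funding_from_listed_funder_last_365_days <= 40_000:
        return False
    if org_already_pays_for_service:
        return False
    if service in INCOME_TESTED_SERVICES and annual_income >= 100_000:
        return False
    return True
```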

What services can you apply for?

  • Therapy (only if you make less than $100,000)
  • Coaching
  • Consultants* (e.g. management, operations, IT, marketing, etc)
  • Childcare (only if you make less than $100,000)
  • Tutors (e.g. ML, CS, English, etc) 
  • Meditation classes 
  • Mental health apps
  • Books (either for your mental health or relevant to your work)
  • Anything educational that helps you do better at your work (e.g. classes, books, workshops, etc)

*Consultant can mean a lot of things. In the context of the Nonlinear Support Fund, it refers to people who give you advice on how to do better at your work. It does not refer to people who are more like contract hires, who actually go and do the work for you. 

SaferAI

SaferAI is developing technology to audit and mitigate potential harms from general-purpose AI systems such as large language models.
