I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. See here for Rethink Priorities' job openings or expression of interest forms, here for a list of EA-aligned funding sources you could apply to, and here for my top recommended resources for people interested in EA/longtermist research careers. You can give me anonymous feedback here.
With Rethink, I'm mostly focused on co-leading our AI Governance & Strategy team. I also do some nuclear risk research, give input on Rethink's Generalist Longtermism team's work, and do random other stuff.
Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis.
I also post to LessWrong sometimes.
If you think you or I could benefit from us talking, feel free to message me! You might also want to check out my post "Interested in EA/longtermist research careers? Here are my top recommended resources".
Thanks! This seems valuable.
One suggestion: Could the episode titles, or at least the start of the descriptions, say who the author is?
Reasoning:
Hi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.
We design tools, workshops, and materials to support this mission. This is the first in a series of EA Forum posts; we will tell you more about our mission and our other projects in future articles.
In this post, we are pleased to announce that we have just released the Confido app, a web-based tool for tracking and sharing probabilistic predictions and estimates. You can use it in strategic decision making when you want probabilistic estimates on a topic from different stakeholders, in meetings to avoid anchoring, to organize forecasting tournaments, or in calibration workshops and lectures. We offer very high data privacy, so it is also used in government settings. See our demo or request your Confido workspace for free.
The current version of Confido is already used by several organizations, including the Dutch government, several policy think tanks and EA organizations.
Confido is under active development and there is a lot more to come. We’d love to hear your feedback and feature requests. To see news, follow us on Twitter, Facebook or LinkedIn or collaborate with us on Discord. We are also looking for funding. [emphasis added]
We are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.
Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems), and HAAISS (Human Aligned AI Summer School).
This seems like a useful topic to raise. Here's a pretty quickly written & unsatisfactory little comment:
One specific thing I'll mention in case it's relevant to some people looking at this post: The AI Governance & Strategy team at Rethink Priorities (which I co-lead) is hiring for a Compute Governance Researcher or Research Assistant. The first application stage takes 1hr, and the deadline is June 11. @readers: Please consider applying and/or sharing the role!
We're hoping to open additional roles sometime around September. One way to be sure you'd be alerted if and when we do is to fill in our registration of interest form.
Nonlinear Support Fund: Productivity grants for people working in AI safety
Get up to $10,000 a year for therapy, coaching, consulting, tutoring, education, or childcare
[...]
You automatically qualify for up to $10,000 a year if:
- You work full time on something helping with AI safety:
  - Technical research
  - Governance
  - Graduate studies
  - Meta (>30% of beneficiaries must work in AI safety)
- You or your organization received >$40,000 of funding to do the above work from any one of these funders in the last 365 days:
  - Open Philanthropy
  - The EA Funds (Infrastructure or Long-term)
  - The Survival and Flourishing Fund
  - Longview Philanthropy
- Your organization does not pay for these services already
- (Only if you're applying for therapy or childcare) You make less than $100,000. There are no income qualifiers for any other services we provide.
As long as you meet these criteria, you will qualify. Funding is given out in the order applications are received. (For more details about how the fund works and why we made it, read here.)
What services can you apply for?
- Therapy (only if you make less than $100,000)
- Coaching
- Consultants* (e.g. management, operations, IT, marketing, etc)
- Childcare (only if you make less than $100,000)
- Tutors (e.g. ML, CS, English, etc)
- Meditation classes
- Mental health apps
- Books (either for your mental health or relevant to your work)
- Anything educational that helps you do better at your work (e.g. classes, books, workshops, etc)
*Consultant can mean a lot of things. In the context of the Nonlinear Support Fund, it refers to people who give you advice on how to do better at your work. It does not refer to people who are more like contract hires, who actually go and do the work for you.
SaferAI is developing the technology that will make it possible to audit and mitigate potential harms from general-purpose AI systems such as large language models.
Tentative suggestion: Maybe try to find a way to include info about how much karma the post has near the start of the episode description in the podcast feed?
Reasoning: