Hi everyone! I’ll be doing an Ask Me Anything (AMA) here. Feel free to drop your questions in the comments below. I will aim to answer them by Monday, July 24.
Who am I?
I’m Peter. I co-founded Rethink Priorities (RP) with Marcus A. Davis in 2018. Previously, I worked as a data scientist in industry for five years. I’m an avid forecaster. I’ve been known to tweet here and blog here.
What does Rethink Priorities do?
RP is a research and implementation group that works with foundations and impact-focused non-profits to identify pressing opportunities to make the world better, figure out strategies for working on those problems, and do that work.
We focus on:
- Wild and farmed animal welfare (including invertebrate welfare)
- Global health and development (including climate change)
- AI governance and strategy
- Existential security and safeguarding a flourishing long-term future
- Understanding and supporting communities relevant to the above
What should you ask me?
Anything!
I oversee RP’s work related to existential security, AI, and surveys and data analysis research, but I can answer any question about RP (or anything).
I’m also excited to answer questions about the organization’s future plans and our funding gaps (see here for more information). We're pretty funding constrained right now and could use some help!
We also recently published a personal reflection on what Marcus and I have learned in the last five years as well as a review of the organization’s impacts, future plans, and funding needs that you might be interested in or have questions about.
RP’s publicly available research can be found in this database. If you’d like to support RP’s mission, please donate here or contact Director of Development Janique Behman.
To stay up-to-date on our work, please subscribe to our newsletter or engage with us on Twitter, Facebook, or LinkedIn.
I have trouble understanding what “AGI” specifically refers to, and I don’t think it’s the best way to think about risks from AI. As you may know, in addition to being co-CEO at Rethink Priorities, I take forecasting seriously as a hobby, and people actually pay me to forecast for some reason, which makes me a professional forecaster. So I think a lot in terms of concrete resolution criteria for forecasting questions, and my thinking on these questions has been meaningfully bottlenecked by not knowing what those concrete resolution criteria are.
That being said, being a good thinker also involves figuring out how to operate in some sort of undefined grey space, so I should be comfortable enough with compute trends, algorithmic progress, etc. to give some sort of answer. So for the type of AI that I struggle to define but am worried about – the kind capable of autonomously causing existential risk, which AI researcher Caroline Jeanmaire refers to as the “minimal menace” – I am willing to tentatively put the following distribution on that:
(Though of course that's just my own distribution; I'm not saying that Caroline or others would agree.)
To be clear, that’s an unconditional distribution, so it includes the possibility of us never producing “minimal menace” AI because we go extinct from something else first. It also includes the possibility of AI development being severely delayed by war or other disasters, the possibility of policy delaying AI development, and so on.
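For anyone who wants a concrete handle on what “unconditional” means here, a minimal sketch in Python is below: it mixes “never happens” worlds (extinction from something else, indefinite delay) with a spread of arrival years. The 10% never-probability, the lognormal shape, and the 2023 base year are all illustrative placeholders, not the actual figures from my forecast.

```python
# Minimal sketch of an *unconditional* arrival-time distribution.
# All parameters are hypothetical placeholders, not an actual forecast.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Placeholder: 10% of simulated worlds never produce "minimal menace" AI
# (e.g. extinction from another cause, or development delayed indefinitely).
never = rng.random(N) < 0.10

# Placeholder: in the remaining worlds, years-until-arrival follows a lognormal.
years_until = rng.lognormal(mean=np.log(20), sigma=0.8, size=N)
arrival_year = np.where(never, np.inf, 2023 + years_until)

# Cumulative probabilities read off the mixture, including the "never" mass.
for y in (2030, 2050, 2075):
    print(f"P(arrival by {y}) = {np.mean(arrival_year <= y):.0%}")
```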
I'm still actively refining this view, so it may well change soon. But this is my current best guess.