

I am the co-founder of and a researcher at the quantitative long-term strategy organization Convergence (see here for our growing list of publications). Over the last decade I have worked with MIRI, CFAR, EA Global, and Founders Fund, and have done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. I have an MS degree in computer science and BS degrees in computer science, mathematics, and physics.


Other perspectives that are arguably missing, or extensions that could be made, are:

Here is also an additional post analyzing the ITN framework: https://forum.effectivealtruism.org/posts/fR55cjoph2wwiSk8R/formalizing-the-cause-prioritization-framework

Update from Convergence Analysis

In July, we published the following research posts:

  • Improving the future by influencing actors' benevolence, intelligence, and power: This post outlines a framework for coming up with, and assessing the expected value of, actions to improve the long-term future, and discusses nine implications of this framework. We were excited to see this framework already drawn on in two new forum posts by other authors (1, 2).
  • Moral circles: Degrees, dimensions, visuals: This post gives an overview of the classic concept of moral circles, discusses two important complexities that that conception overlooks or fails to make explicit, and shows how they can be represented visually. This post also led to several researchers getting in contact with Michael Aird (its author) and then being connected with each other.
  • Crucial questions for longtermists: This post introduces a collection of “crucial questions for longtermists”: important questions about the best strategies for improving the long-term future. This collection is intended to serve as an aid to thought and communication, a kind of research agenda, and a kind of structured reading list. In August, we followed up with a post focusing specifically on Crucial questions about optimal timing of work and donations.

Additionally, our Researcher/Writer Michael Aird published:

Thanks for writing the post! I think we need a lot more strategy research, with cause prioritization being one of the most important types, and that is why we founded Convergence Analysis (theory of change and strategy, our site, and our publications). Within our focus on x-risk reduction, we do cause prioritization, describe how to do strategy research, and have been working to fill the EA information hazard policy gap. We are mostly focused on strategy research as a whole, which lays the groundwork for cause prioritization. Here are some of our articles:

We’re a small and relatively new group, and we’d like to see more people and groups doing this type of research, and to see this field get more support and grow. There is a vast amount to do, and immense opportunity to do good with this type of research.

Following Sean here I'll also describe my motivation for taking the bet.

After Sean suggested the bet, I felt as if I had to take him up on it for the group epistemic benefit; my hand was forced. Firstly, I wanted to get people to take nCoV seriously and to think thoroughly about it (for the present case and for modelling possible future pandemics): from an inside-view model perspective, the numbers I was getting were quite worrisome. I felt that if I didn't take him up on the bet, people wouldn't take the issue as seriously, nor take explicitly modeling things themselves as seriously either. I was trying to socially counter what sometimes feels like a learned helplessness people have with respect to analyzing things or solving problems. Also, the EA community is especially clear-thinking, and I think a place like the EA Forum is a good medium for problem solving around things like nCoV.

Secondly, I generally think that holding people in some sense accountable for their belief statements is a good thing (up to some caveats); it improves the collective epistemic process. In general I prefer exchanging detailed models in discussion over vague intuitions mediated by a bet, but exchanging intuitions is still useful. I also generally would rather make bets about things that are less grim, and wouldn't have suggested this bet myself, but I do think it is important that we make predictions about things that matter, and some of those things are rather grim. With grim bets, though, we should definitely pay attention to how they might appear to parts of the community and make clearer what the intent and motivation behind the bet is.

Third, I wished to bring more attention and support to the issue, in the hope that it would cause people to take sensible personal precautions and that perhaps some of them could influence how things progress. I do not entirely know who reads this, and some readers may have influence, expertise, or cleverness they can contribute.

Nice find! Hopefully it gets updated soon as we learn more. What is your interpretation of it in terms of the mortality rate in each age bracket?

Sure, I'll take the modification to option (i). Thanks Sean.

Hmm... I will take you up on a bet at those odds and with those resolution criteria. Let's make it 50 GBP of mine vs 250 GBP of yours. Agreed?

I hope you win the bet!

(note: I generally think it is good for the group epistemic process for people to take bets on their beliefs but am not entirely certain about that.)
