I do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). I'm currently in the Bahamas as part of the FTX EA Fellowship.

I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.

I have also been running a Forecasting Newsletter since April 2020, and have written a search tool which aggregates predictions from many different platforms. I also enjoy winning bets against people too confident in their beliefs.

Otherwise, I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. A good fraction of my research is available on the EA Forum.

I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term."

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018 and 2019, and SPARC during 2020; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

You can share feedback anonymously with me here.



Topic Contributions


A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

I imagine that marginally increased mortality wouldn't be the most important factor here: the vast majority of prisoners would prefer to be outside prison, even if this leads to a (I presume small) increase in mortality.

So I imagine this would have an effect, but not a very large one.

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

I've now edited the post to reference mean(QALYs)/mean($). You can find this by ctrl+f-ing for "EDIT 22/06/2022" and looking under the charts.

Note that I've used mean($)/mean(QALYs) ($8k) rather than 1/mean(QALYs/$) ($5k), because that seems to me to be more the quantity of interest, but I'm not hugely certain of this.
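The gap between the two estimators comes from Jensen's inequality: averaging the ratio is not the same as taking the ratio of averages. A quick sketch with made-up lognormal samples (the distributions and parameters here are mine, purely for illustration, not the post's actual numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lognormal samples for the cost and QALY distributions
# (parameters are invented for illustration, not taken from the post).
cost = rng.lognormal(mean=np.log(1e6), sigma=1.0, size=100_000)   # dollars
qalys = rng.lognormal(mean=np.log(200), sigma=1.0, size=100_000)  # QALYs

a = cost.mean() / qalys.mean()   # mean($)/mean(QALYs)
b = 1 / (qalys / cost).mean()    # 1/mean(QALYs/$)

print(f"mean($)/mean(QALYs) = ${a:,.0f}/QALY")
print(f"1/mean(QALYs/$)     = ${b:,.0f}/QALY")
```

For independent heavy-tailed samples like these, 1/mean(QALYs/$) comes out systematically lower than mean($)/mean(QALYs), which is the same direction as the $5k vs $8k gap above.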

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Why did you take the mean $/QALY instead of mean QALY/$ (which expected value analysis would suggest)? When I do that I get $5000/QALY as the mean.

Because I didn't think about it, and I liked having numbers which were more interpretable: e.g., 3.4 * 10^-6 QALYs/$ is to me less interpretable than $290k/QALY, and same with 7.7 * 10^-4 QALYs/$ vs $1300/QALY.

Another poster reached out and mentioned he was writing a post about this particular mistake, so I thought I'd leave the example up.

Critiques of EA that I want to read

I think I have a handful of critiques I want to make about EA that I am fairly certain would negatively impact my career to voice, even though I believe they are good faith criticisms, and I think engaging with them would strengthen EA.

This seems suboptimal, particularly if more people feel like that. But it does seem fixable: I'm up for receiving things like this anonymously at this link, waiting for a random period, rewording them using GPT-3, and publishing them. Not sure what proportion of the problem that would fix, though.

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Yeah, see this Open Philanthropy post. Or think about the difference in value between an additional dollar given to someone living on $500/year vs. an additional dollar given to someone living on $50k/year, given log utility.
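Under log utility the marginal value of a dollar is inversely proportional to consumption, so the ratio of values is just the inverse ratio of incomes. A minimal sketch using the numbers from the comment (the helper function is mine, for illustration):

```python
def marginal_utility(consumption):
    # With log utility u(c) = log(c), marginal utility is u'(c) = 1/c.
    return 1 / consumption

# An extra dollar to someone on $500/year vs. someone on $50,000/year:
ratio = marginal_utility(500) / marginal_utility(50_000)
print(ratio)  # ~100: the same dollar does about 100x more good at $500/year
```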

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

So here is the thing: Chloe and her team's virtues and flaws are amplified by virtue of them being in charge of millions. So I think that having good models here requires mixing speculative judgments about personal character with cost-effectiveness estimates.

At this point I can either:

  1. Not develop good models of the world
  2. Develop ¿good? models but not share them
  3. Develop them and share them

Ultimately I went with option 3, though I stayed in option 2 for roughly three months. It's possible this wasn't optimal. I think the deciding factor was having two cost-effectiveness estimates which each ranged over 2-3 orders of magnitude and yet were non-overlapping. I could have just published those alone, but I don't think they can stand alone, because the immediate answer is that Open Philanthropy knows something I don't; so the rest of the post is in part an exploration of whether that's the case.

Chloe and Jesse are competent and committed people working in a cause area that does not meet the 1000x threshold currently set by GiveWell top charities. If it were easy to cross that bar, these charities would not be the gold standard for neartermist, human-focused giving. Open Phil chose to bet on CJR as a cause area, conduct a search, and hire Chloe anyway.

I don't disagree with the meat of this paragraph. Though note that Jesse Rothman is not working on criminal justice reform any more, I think (see the CEA teams page).

I genuinely believe policy- and politics-focused EAs could learn a lot from the CJR team’s movement building work. Their strengths in political coordination and movement strategy are underrepresented in EA.

I imagine this is one of the reasons why CEA hired Jesse Rothman, and why he chose to be hired, to work on EA groups.

I bought the idea that we could synthesize knowledge from different fields and coordinate to solve the world’s most pressing problems. That won’t happen if we can’t respectfully engage with people who think or work differently from the community baseline.

We can’t significantly improve the world without asking hard questions. We can ask hard questions without dismissing others or assuming that difference implies inferiority.

Yes, but sometimes you can't answer the hard questions without being really unflattering. For instance, assume for a moment that my cost-effectiveness estimates are roughly correct. Then there were moments where Chloe could have taken the step of saying "you know what, actually donating $50M to GiveDirectly or to something else would be more effective than continuing my giving through Just Impact". This would have been pretty heroic, and the fact that she failed to be heroic is at least a bit unflattering.

I'm not sure how this translates to your "assuming inferiority" framing. People routinely fail to be heroic. Maybe it's too harsh and unattainable a standard. On the other hand, maybe holding people and organizations to that standard will help them become stronger, if they want to. I think that's what I implicitly believe.

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

So I don't actually have to assume that criminals are rational actors; I can also assume that actions which are high expected value will spread through mimesis. See the short post Unconscious Economics for an elaboration of this point.

But you are right that it smuggles in many assumptions.

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Thanks Josh, I particularly appreciate your quantified estimates of likelihood/impact.

Per this article, incarceration rates in the U.S. declined 20% (per person) between 2008 and 2019, so your estimates here seem somewhat pessimistic to me.

Not sure how that follows; what matters for counterfactual/Shapley impact is a further reduction that wouldn't have happened in the absence of funding. If OP donates $5B and the imprisonment rate goes down another 20%, but it would have gone down 20% (resp. 15%) anyway, the impact is 0 (resp. 5 percentage points).
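In other words (hypothetical numbers from the comment above; the helper function is just illustrative):

```python
def counterfactual_impact(observed_reduction, baseline_reduction):
    # Impact is only the part of the reduction that would not
    # have happened anyway without the funding.
    return observed_reduction - baseline_reduction

# Imprisonment rate drops 20%, but would have dropped 20% (resp. 15%) anyway:
print(round(counterfactual_impact(0.20, 0.20), 4))  # 0.0
print(round(counterfactual_impact(0.20, 0.15), 4))  # 0.05 (5 percentage points)
```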

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

This is coming from my own understanding of what is politically feasible. I think that policies which make crime worth it in expectation are likely to be politically very unpopular and/or lead to more crime. So I think that restorative justice approaches would be stronger with a punitive component, even if they reduce that punitive component overall.
