2. Yes, this makes a lot of sense and probably more closely captures GW's intent behind the adjustments.
Re: versions of this model, what's broken, etc. (mostly for the benefit of other readers, since I think Nuño knows all of this already):
The version linked in this post still works fine for me, even when I'm not logged into my account. There is a newer version from November 2022 that is broken; that was the version used in the GiveWell Change Our Minds contest entry with Sam Nolan and Hannah Rokebrand (here). The main contest entry notebook is not broken because it uses a downloaded CSV of the results for each CEA rather than directly importing the results from the respective CEA notebooks (I believe Sam did this because of performance issues, but I guess it had an unintended benefit).
Since the GiveWell contest entry was submitted, I haven't made any updates to the code or anything else related to this project, and I don't intend to (although others are of course very welcome to fork it, etc.). Readers curious about the rough methods can check out the notebook linked in this blog post, which still displays properly (and is probably a bit easier to follow than the November 2022 version, because it does much less). Readers curious about the end results of the analysis can read our main submission document, either on Observable or on the EA Forum.
Do share whatever you end up doing around worldview diversification! I'd be curious to read it; I've spent some time thinking about these issues, especially in the global health context.
"treating the pre-post data as evidence in a Bayesian update on conservative priors"
Is there any way we can get more details on this? I recently wrote a blog post using Bayesian updates to correct for post-decision surprise in GiveWell's estimates, which changed New Incentives' ranking from 2nd to last in cost-effectiveness among the Top Charities. I'd imagine (though I haven't read the studies) that the uncertainty in the StrongMinds CEA is/should be much larger.
For that reason, I would have guessed that StrongMinds would not fare well post-Bayesian adjustment, but it's possible you just used a different (reasonable) prior than I did, or there's some other consideration I'm missing.
Also, even risk-neutral evaluators really should be using Bayesian updates (formally or informally) to correct for post-decision surprise. (I don't think you necessarily disagree with me on this, but it's worth emphasizing that valuing GW-tier levels of confidence doesn't imply risk aversion.)
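For readers who want the gist of the shrinkage argument, here's a minimal sketch of the kind of normal-normal Bayesian update I have in mind. The prior, point estimates, and standard deviations below are invented for illustration and are not the numbers from my blog post or from GiveWell's CEAs; the point is just that, at the same point estimate, the noisier estimate gets pulled much further back toward a conservative prior.

```python
import numpy as np

def normal_posterior(prior_mean, prior_sd, estimate, estimate_sd):
    """Conjugate normal-normal update: shrink a noisy cost-effectiveness
    estimate toward a conservative prior. Noisier estimates get shrunk
    harder, which is what corrects for post-decision surprise."""
    prior_prec = 1.0 / prior_sd**2
    est_prec = 1.0 / estimate_sd**2
    post_var = 1.0 / (prior_prec + est_prec)
    post_mean = post_var * (prior_prec * prior_mean + est_prec * estimate)
    return post_mean, float(np.sqrt(post_var))

# Purely illustrative numbers (say, in multiples of cash-transfer value).
prior_mean, prior_sd = 1.0, 1.0                            # conservative prior
tight = normal_posterior(prior_mean, prior_sd, 8.0, 2.0)   # well-evidenced estimate
noisy = normal_posterior(prior_mean, prior_sd, 8.0, 6.0)   # same point estimate, much noisier

print(f"tight evidence -> posterior mean {tight[0]:.2f}")  # ~2.4
print(f"noisy evidence -> posterior mean {noisy[0]:.2f}")  # ~1.2
```

The qualitative takeaway is what matters: two interventions with the same headline cost-effectiveness can end up with very different posteriors once the width of the evidence base is taken into account.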
Is there any website or spreadsheet where we can see how different ethical views affect the ranking of charities we should donate to? Specifically, one where we can plug our own parameters into the model, as we can with the GiveWell CEA, but for more complicated positions (like the ones expressed in this post)?
If not, would such a project potentially be in the cards for HLI?
This was fascinating. Thanks so much for writing it, and for including your refactored model for public viewing. I'm especially looking forward to the future post on uncertainty analysis.
As of now, I'm fairly optimistic about the potential of recent projects like Squiggle and Causal to help take uncertainty analysis beyond the pessimistic-to-optimistic interval presented in some of GiveWell's older CEAs. I'd be interested in learning, from your future post, about the tools that health economists currently use to analyse uncertainty, and your views on how EAs should carry out uncertainty analysis going forward.
FWIW I think it's a bad solution; why not instead quantify the uncertainty in the ex ante CEA? See this GiveWell Change Our Minds submission as an example. I don't think the uncertainty intervals there are uninformatively large, although there is a rather strong assumption that the GiveWell models capture the right structure of the problem. Once the uncertainty is quantified, we could run something like the Bayesian adjustment I demonstrate in this PDF to (in theory!) eliminate the positive bias of more uncertain estimates, and then compare the posterior distribution to an analogous distribution for AMF or another relevant benchmark.
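To make the "quantify it" suggestion concrete, here is a toy Monte Carlo sketch (not the method from the linked submission or PDF; every parameter, distribution, and number below is invented for illustration). It propagates parameter uncertainty through a stylized ex ante CEA and compares the resulting distribution against a hypothetical benchmark distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Stylized ex ante CEA: cost-effectiveness = effect * duration / cost.
# All distributions and numbers are made up for illustration only.
effect   = rng.lognormal(mean=np.log(0.3), sigma=0.5, size=N)     # benefit per person-year
duration = np.clip(rng.normal(2.0, 0.5, size=N), 0.1, None)       # years of benefit
cost     = np.clip(rng.normal(150.0, 30.0, size=N), 50.0, None)   # dollars per person
ce = effect * duration / cost * 1000                               # benefit per $1,000 donated

# Hypothetical benchmark distribution (an AMF-like comparator).
benchmark = rng.normal(4.0, 1.0, size=N)

lo, hi = np.percentile(ce, [5, 95])
print(f"ex ante CE: mean {ce.mean():.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
print(f"P(program beats benchmark) = {(ce > benchmark).mean():.2%}")
```

From there, the same kind of shrinkage step sketched earlier in this thread could be applied before making the comparison, so that the more uncertain of the two distributions isn't favored simply because it has a fatter upper tail.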
Conceptually, the difference between the ex ante and ex post CEA isn't categorical. It is a matter of degree: the degree of uncertainty about the model and its parameters. This difference could be captured with an adequate explicit treatment of uncertainty in the CEA.