Governance, financing, and supply chain interventions can be randomised at state or district level
While I agree this is true in theory, is it practical? I imagine the sample size needed to power such a study would be prohibitive for all but the largest organisations; even then the answer wouldn't be definitive (eg due to generalisation concerns), and there would be lots of measurement issues (eg residents of one district crossing into a neighbouring district with better-funded healthcare).
If you tell me I'm wrong, I'd definitely bow to your experience and knowledge in this field, but this isn't obviously true to me.
Thanks for this post @alex lawsen, I continually revisit this as inspiration and to remember the usefulness of this process when I am making hard decisions, especially for my career.
From your blog, I know you are a big user of LLMs. I was wondering if you had successfully used them to replace, or complement, this process? When I feed one my Google Doc, I find the output is too scattergun or vague to be useful, compared to sharing the same Doc with friends.
If you've succeeded with LLMs, would you please share which prompts and models have worked well for you?
I feel like the overall takeaway is very different, though. I've not fully understood the details of either argument, so this is a little vibes-based. You seemed to be arguing that below-subsistence wages were fairly likely, while here it seems that even falling wages require a weird combination of conditions.
What have I misunderstood?
This seems like good work, but the headline and opening paragraph aren't supported when you've shown the distribution might be log-normal. Log-normal and power-law distributions often have quite different consequences for how important it is to move to very extreme percentiles, so this difference can matter for lots of decisions relevant to EA.
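To illustrate the point (with hypothetical parameters chosen for the sketch, not values fitted to your data), here's roughly how differently the two families behave once you push out into the extreme tail:

```python
import numpy as np
from scipy import stats

# Illustrative only: made-up parameters, not fitted to the post's dataset.
# Compare how fast the extreme percentiles grow under each family.
lognorm = stats.lognorm(s=1.0, scale=1.0)  # log-normal with sigma = 1
pareto = stats.pareto(b=1.5)               # Pareto with tail index alpha = 1.5

for name, dist in [("log-normal", lognorm), ("pareto", pareto)]:
    p99 = dist.ppf(0.99)      # 99th percentile
    p9999 = dist.ppf(0.9999)  # 99.99th percentile
    print(f"{name}: 99.99th / 99th percentile ratio = {p9999 / p99:.1f}")
```

With these (arbitrary) parameters the Pareto's ratio comes out several times larger than the log-normal's, which is exactly the kind of gap that changes how much moving to extreme percentiles matters.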
Naively, this seems like a great fit for EAIF, which is looking to fund more projects. Is there something I'm missing?
Could you please expand on why you think a Pareto distribution is appropriate here? Tail probabilities are often quite sensitive to these assumptions, and it can be tricky to determine whether something is truly power-law distributed.
When I looked at the same dataset, albeit processing the data quite differently, I found that a truncated or cutoff power-law appeared to be a good fit. This gives a much lower value for extreme probabilities using the best-fit parameters. In particular, there were too few of the most severe pandemics in the dataset (COVID-19 an...
I don't see how you can say both that it will "almost never" be the case that NYC will "hit 1% cumulative incidence after global 1% cumulative incidence", and also that it would surprise you if you could get to a point where your monitored cities lead global prevalence.
Sorry, this is poorly phrased by me. I meant that it would surprise me if there's much benefit from adding a few additional cities.
The best work on global-scale analysis of epidemics is probably by GLEAM. I doubt full agent-based modelling at small scales gives you much beyond massively complicating the model.
Sorry, I answered the wrong question, and am slightly confused about what this post is trying to get at. I think your question is: will NYC hit 1% cumulative incidence after global 1% cumulative incidence?
I think this is almost never going to be the case for fairly indiscriminately-spreading respiratory pathogens, such as flu or COVID.
The answer is yes only if NYC's cumulative incidence is lower than the global mean region (weighted by population). Due to connectedness, I expect NYC to always be hit pretty early, as you point out, definitely before most rural c...
This effect should diminish as the pandemic progresses, but at least in the <1% cumulative incidence situations I'm most interested in it should remain a significant factor.
1% cumulative incidence is quite high, so I think that by the time you're this far along you're fine. E.g. we've estimated London hit this point for COVID around 22 Mar 2020, when it was pretty much everywhere.
This seems intuitively in the right ballpark (within an order of magnitude of GiveWell), but I'd caution that, as far as I can tell, the World Bank and Bernstein et al. numbers are basically made up.
I've previously written about how to identify higher-impact opportunities. In particular, we need to be careful about the counterfactuals here, because a lot of the money for pandemic preparedness comes from governments who would otherwise spend it on even less cost-effective things.
Glad you found it useful.
Your question is hard to answer because tackling GCBRs is extremely interdisciplinary and I'm not familiar enough with the field at large to identify clear gaps. Personal fit probably matters more than the specific field. Most work on pandemic preparedness is applicable to GCBRs, and there are lots of research groups working on that.
Sorry that's not a very helpful answer. If you've got more specific questions I'm happy to try and help.
Why are you ballparking $10b when all of the examples given are many multiples of that? $100b seems like a better estimate.
I also suspect we're targeting easy-to-eradicate diseases: those without animal reservoirs that would cause resurgences, and where there are effective interventions. Therefore, I'd suggest this is a lower bound.
Your objections seem reasonable but I do not understand their implications due to a lack of finance background. Would you mind helping me understand how your points affect the takeaway? Specifically, do you think that the estimates presented here are biased, much more uncertain than the post implies, or something else?
Sure, the claim hides a lot of uncertainties. At a high level the article says “A implies X, Y and Z”, but you can’t possibly derive all of that information from the single number A. Really, what the article should say is “X, Y and Z are consistent with the value of A”, which is a very different claim.
I don’t specifically disagree with X, Y and Z.
I mean it in the sense that they will have to sell substantially below market value if they want to sell it quickly.
This kind of property tends to have huge bid-ask spreads, and the usual thing to do is to continue operating the property while looking for a buyer (my guess is they would eventually succeed in selling it at market value, but it would take a while).
Interesting read, but I'm left unconvinced that traditional pharma is moving much slower than optimal. That would seem to imply they're leaving a lot of money on the table (quicker approval = longer selling the drug before the patent expires).
I have three speculative ideas about why this might be: cost of the process, ability to scale the process, and risk (e.g. the amount of resources wasted if a drug fails at some stage of development).
As the article points out, pharma can do this when the incentives are right (COVID vaccines), which implies there's a reason not to do it normally.
You need a step beyond this though. Not just that we are coming up with harder moral problems, but that solving those problems is important to future moral progress.
Perhaps a structure as simple as the one that has worked historically will prove just as useful in the future, or, as you point out has happened in the past, wider societal change (not progress in moral philosophy as an academic discipline) will be the major driver. In either case, all this complex moral philosophy is not the important factor for practical moral progress across society.
In your argument for 3, I think I accept the part that moral philosophising hasn't happened much historically. However, I can't really find the argument that it probably will in the future. Could you spell it out a bit more explicitly, or highlight where you think the case is being made, please?
Great and interesting post though; I love seeing people rigorously exploring EA ideas and fitting them into the wider academic literature.
Thank you Ricardo, this is an insightful analysis. I'd like to see more EA Forum posts with this level of investigation behind them. In particular, the balance of more longtermist and less global health funding contrasts with other analyses on the forum.
I think your write-up could be improved more than the underlying analysis. To make this more accessible to others, and your work higher impact, I'd recommend the following.
Do you know of anything else that feels similar to this? People in public areas collecting biological samples from volunteers (perhaps lightly compensated).
Afraid not. The closest I can think of is collecting samples from healthy volunteers without any benefit to them, but not in public areas. In particular, I'm thinking of swabbing in primary health settings (eg RGCP/UKHSA run something like this in England, I can't remember if it only includes those with respiratory symptoms) and testing blood donations (normally serological testing looking for antibo...
Thank you for that very detailed reply Jeff, I learnt a lot about how to think about costing this.
The easiest way to collect a pooled sample is to walk around some building and sample everyone. This gets you a big sample pretty cheaply, but it's not a great one if you want to understand the surrounding city, because many people in the building are likely to get sick on a similar timeframe.
I agree this is true for an office block, but I would think you can do much better without much cost. For example, if you use a high-traffic commuter train sta...
am I practicing my handwriting in 1439?
I'm not sure what the question is here; I find your metaphor opaque. I guess it's a reference to the invention of the printing press around then, which in some sense made handwriting pointless. But being able to produce legible handwriting seems pretty useful up until at least this century, perhaps until smartphones became widespread.
Thank you for this write-up, very interesting. I'm excited to see more investigations of different surveillance systems' potential.
Hopefully, the SIREN 2.0 study, running this winter, will generate some more data to answer this question.
A few questions now that I've had time to consider this post a bit more. Apologies if these are very basic; I'm pretty unfamiliar with metagenomics.
First, how do you relate relative abundance to detection probability? I would have thought the total number of reads of the pathogen of interest also matters. That is, if you tested...
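For concreteness, here's the kind of relationship I have in mind (a Poisson sketch under my own assumptions, not a model taken from the post): at a fixed relative abundance, detection probability still depends heavily on total sequencing depth.

```python
import math

# Assumption (mine, not the post's): with relative abundance r and N total
# reads, pathogen read counts are roughly Binomial(N, r), well approximated
# by Poisson(N * r). Detection then depends on N, not on r alone.
def detection_prob(rel_abundance, total_reads, min_reads=1):
    """P(at least min_reads pathogen reads), Poisson approximation."""
    lam = rel_abundance * total_reads  # expected number of pathogen reads
    # P(X >= min_reads) = 1 - P(X < min_reads)
    below = sum(math.exp(-lam) * lam**k / math.factorial(k)
                for k in range(min_reads))
    return 1.0 - below

# Same relative abundance, very different detection probabilities:
for n in (1e5, 1e6, 1e7):
    print(f"N = {n:.0e}: P(detect) = {detection_prob(1e-6, n):.3f}")
```

If something like this is right, then quoting relative abundance alone seems insufficient, which is what prompted my question.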
Tl;dr: epidemic and statistical modelling PhD looking for roles in biosecurity, global health, and quantitative generalist roles.
Skills & background: I am about to submit a biostatistics PhD (University of Cambridge, UK), focusing on statistical methods to estimate the incidence of COVID-19 in England and survival analysis. I have experience providing scientific advice to the UK government on the pandemic. Broad Bayesian statistical skillset, as well as skills in engaging critically with literature. View my past posts for less academic samples of my wo...
Feel free to message me if you're interested in going deeper into what a typical viral load might look like. I can generate trajectories, based on the data from the ATACCC study. Note that this is in viral RNA copies, not Ct values - they did the conversion as part of that study.
Thank you Ben! The 80% CI[1] is an output from the model.
The rough outline is:
The argument AI safety work is more cost-effective than AMF when considering only the next few generations is pretty weak.