Incremental Institutional Review Board Reform
Epistemic Institutions, Values and Reflective Process
Institutional Review Boards (IRBs) regulate biomedical and social science research. In addition to slowing and deterring life-saving biomedical research, IRBs interfere with controversial but useful social science research: e.g., Scott Atran was deterred from studying jihadi terrorists; Mark Kleiman was deterred from studying the California prison system; and a Florida State University IRB cited public controversy as a reason to deter research. We would like to see a group focused on advocating for plausible reforms to IRBs that would allow more social science research to be performed. Some plausible examples:
Concrete steps toward these goals could be:
Replacing Institutional Review Boards with Strict Liability
Biorisk, Epistemic Institutions, Values and Reflective Process
Institutional Review Boards (IRBs) regulate biomedical and social science research. As a result of their risk-averse nature, important biomedical research is slowed or deterred entirely: e.g., the UK human challenge trial was delayed by several months by a protracted ethics review process, and an enrollment delay in a thrombolytics trial cost thousands of lives. In the US, a plausible challenge to IRB legality can be mounted on First Amendment grounds. We would be interested in funding a civil rights challenge to IRB legality, with the eventual goal of FDA guidance on control groups and strict liability replacing IRBs as the means of research regulation. This would have substantial overlap with our project idea of rapid countermeasure development against new pathogens.
Slowing AI Contingency Planning
AI Governance
AI progress has been especially rapid over the last four years. Because of visible success on diverse tasks by OpenAI, DeepMind, and others, it is likely that even more money and talent will flow into accelerating AI progress in the future. However, there is substantial controversy over whether AI safety/alignment technology is advancing as quickly as capabilities. Given that, we are interested in funding work on (1) identifying nonviolent ways to reversibly slow AI progress and (2) further research into whether and when such an intervention would be net good.
Glad you didn't see any factual errors in the posts!
Re #1: Yeah, you're totally right that "bioethicists" is the wrong target. I'll try to use "institutionalized research ethics" going forward; it is much more explicit about what the problem is and more fair to bioethicists.
Re #2: Sort of agreed. I tend to think the public doesn't like weird ideas in general, but there was a recent paper showing higher public support for challenge trials than for traditional trials. So I'm not sure what counts as weird to the public as a whole. It might be the case that the public has surprisingly EA-ish ideas on medical ethics, at least on this specific issue. Not sure.
I'm the author of the blog posts and tweets (@willyintheworld). You raise a bunch of good points, and you're 100% right that when I write "bioethicists" on Twitter I should really write "institutionalized research ethics". Not doing so is sloppy of me. I think I do a better job showing the institutional dynamics bioethicists work under in my blog posts, so I think those hold up okay. But I'll look at those posts again and see if I think they need some edits.
Mostly agree with: "worth some eyebrow-raising if it turns out that the ingroup defense is something along the lines of 'well, by bioethicists, we mean research ethicists, and by research ethicists we mean research bureaucrats, and by research bureaucrats, we mean research bureaucracy.'"
Your survey data on actual bioethicists' opinions was slightly surprising to me, so I should update on that.
My criticism of bioethics is aimed at bioethics-as-practiced-by-institutions, which does seem bad and does deserve criticism, but you're right that the causal story here is definitely not [bioethicists are the sole reason big institutions are risk-averse], so blaming only them doesn't make sense. My own posts basically argue that institutions use IRBs as a means of reputation/PR control, so in some sense I should exonerate bioethicists per se and focus on the institutional dynamics and laws that led to that equilibrium.
Incidentally, this does lead me to two points of possible disagreement (not sure of your views):
Interesting, though not super important, piece of information: rabies is ~100% fatal once symptoms present, but there is evidence that even without vaccination, some humans have been exposed and survived; they just didn't realize it.
I was about to post this. There are now two effective antivirals for COVID-19, developed relatively quickly, which makes me update towards antiviral development being a little easier and more promising than I thought.
In addition, the historically most successful antivirals, against HIV and hepatitis C, target chronic diseases. Herpes and CMV also have antiviral treatments and are somewhat more acute (though herpes is a chronic disease with acute flare-ups), but COVID-19 is more acute than either.
So my skepticism about the prospects for effective antivirals against acute illnesses is lower than before.
Hey, I'm working with Josh on an advance market commitment (AMC) project, so I can answer this.
I don’t think it is actually a pessimistic paper for the pro-AMC case. The top-line result of “only 6 cents of additional R&D spending per dollar” is just part of the story. My summary of that paper:
I think the takeaway is that if AMCs act like the instrument Finkelstein uses here, we shouldn't expect an AMC to stimulate a lot more private pharma investment, but they could still be very cost-effective if they result in an efficacious vaccine or speed up rollout. Notably, speeding up rollout is basically what Finkelstein found happened with the hepatitis B vaccine.
So AMCs could still be very cost-effective if the vaccine developed is effective and/or rollout is sped up, as in the GAVI pneumococcal AMC case.
Another factor is that Finkelstein examined the effects of increased revenue on already existing vaccines, while the proposed AMCs would mostly be focused on new vaccines.
My guess is that if Finkelstein found a big dynamic benefit from more R&D in the flu vaccine case from just a moderate increase in vaccine efficacy, then going from zero efficacy (no vaccine) to moderate/substantial efficacy (a new vaccine with ~75% efficacy) would yield large dynamic benefits. But I might be misunderstanding this part; I'm not super confident in it.
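To make the cost-effectiveness point concrete, here's a toy back-of-the-envelope sketch in Python. Every number in it (the commitment size, the years of rollout acceleration, the annual death toll, the share of deaths averted) is a hypothetical assumption chosen for illustration, not a figure from Finkelstein's paper or any actual AMC; the point is only that a small R&D crowd-in per dollar doesn't preclude a very good cost per death averted if rollout is accelerated.

```python
# Toy back-of-the-envelope sketch: all numbers below are hypothetical
# illustrations, not estimates from Finkelstein's paper or the GAVI AMC.

amc_commitment = 1_000_000_000   # hypothetical $1B advance market commitment
rd_per_dollar = 0.06             # "6 cents of additional R&D per dollar" style crowd-in
induced_rd = amc_commitment * rd_per_dollar
print(f"Induced private R&D: ${induced_rd:,.0f}")  # only $60M -- unimpressive on its own

# Now suppose (hypothetically) the AMC pulls vaccine rollout forward by 2 years
# for a disease causing 500,000 deaths per year, with 60% of those deaths averted
# once the vaccine is deployed.
years_accelerated = 2
annual_deaths = 500_000
share_averted = 0.6
deaths_averted = years_accelerated * annual_deaths * share_averted
print(f"Deaths averted via faster rollout: {deaths_averted:,.0f}")
print(f"Cost per death averted: ${amc_commitment / deaths_averted:,.0f}")  # roughly $1,700
```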
We'll also be doing some more reading into the literature on AMC impact this week and next, so we'll post about it next week.
It seems unlikely but not impossible, given how strong status quo bias is among humans. The NIMBY movement, and reactionary and conservative politics in general, give plenty of examples of politics that call for less change or none at all.
Humans have had periods of tens or hundreds of thousands of years in which technology doesn't seem to have changed much, as far as we can tell from the archaeological record, so stagnation isn't unprecedented.
Another reason to doubt the story that infertility is driving declining birth rates is that some populations living in similar environments have maintained very high fertility rates.
Ultra-Orthodox Jews live alongside other city dwellers in the US and have fairly high levels of obesity (implying a food environment similar to the average Westerner's, whereas the Amish, who live as farmers, might be exempt from that environment), yet they have high fertility rates.
Also, there are some factors, like much better treatment of STDs, that should, all else being equal, reduce infertility rates; historically, STDs were a major cause of infertility.
Also, the relationship between sperm count and conception rates is not linear. IIRC, above about 20 million/mL, higher sperm counts don't mean higher conception rates. So a 25% reduction in sperm count might not have much effect on conception rates for most men above that threshold, if that decline is even real.
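To illustrate the threshold logic, here's a toy sketch; the plateau value and the baseline counts are rough illustrative assumptions, not clinical data:

```python
# Toy illustration of the plateau argument: numbers are illustrative assumptions.
PLATEAU = 20  # million sperm/mL -- rough threshold above which more sperm doesn't raise conception rates

def still_above_plateau(count_millions_per_ml: float) -> bool:
    """True if a count remains in the plateau region where extra sperm doesn't help."""
    return count_millions_per_ml >= PLATEAU

for baseline in [30, 60, 90]:            # hypothetical baseline counts (million/mL)
    reduced = baseline * 0.75            # the claimed ~25% decline
    print(f"{baseline} -> {reduced:.1f} million/mL; still above plateau: {still_above_plateau(reduced)}")
```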
(Apologies for the lack of citations, on mobile, will link later)