I really appreciate you looking into this topic. I think you want much, much bigger error bars on these, however. Interventions like this are known to have massive selection effects and difficulties in determining causality; giving point estimates sweeps under the rug the main thing I'm interested in regarding whether these interventions work. For example, ACE had a similar problem when it was beginning. For one of its charities, it relied on survey data to look for an effect and gave estimates of intervention effectiveness based on that data, but the interesting question was basically whether we should believe the type of conclusion they drew from the surveys at all. In the end, of course, the answer was no.
I didn't read the whole post, but the reasoning in the summary and early sections seemed centered around point estimates and taking data at face value. The type of analysis that would convince me to change my actions here would be a reliability analysis, seeking to show any place within this domain that has extremely clear support for a real effect. By default this basically doesn't exist for social interventions, in my experience, so the conclusions are unfortunately driven more by the vagaries of the input data than by the underlying reality.
A similar position to David's might be that bioethics institutions are bad for the world, while being agnostic about academia. I don't know much about academic bioethicists and you might be right that their papers as a whole aren't bad for the world. But bioethics think tanks and NGOs seem terrible to me: for example, here's a recent report I found pretty appalling (short version, 300-page version).
Looks like a great idea, very glad someone is pursuing the roll-up-your-sleeves method here.
I think the best addition you could make to this is a business plan: basically, how much would it cost to replicate how many studies; how would you best choose studies for replication to maximize efficiency and impact; how much money, and how long, until you were replicating 1% or 10% of top studies; etc. I'd also personally like to see a different version of "what has been achieved" that didn't lean as much on collaborations and the work of collaborators, as I find these basically meaningless.
This seems like a really great thing to try at small scale first. It seems important to have a larger vision but to make Little Bets to start, as Peter Sims or Cal Newport would say. You don't want to start with 30+ people with serious expertise at a 90% likelihood of conversion, because you want to anneal into a good structure rather than bake your early mistakes into the lasting organizational culture. (Maybe you had already planned on this, but it seems worth clarifying, as this is one of the most common mistakes made by EAs.)
Single issue lobbying group called "2%", perhaps. Or 5% if NGDP.
Some other possible takeaways that I would lean toward:
I definitely agree re antitrust; it seems like a slam-dunk. If I have time after this case, I was thinking of slowly reaching out to see whether I could elicit an American version from someone, or of finding out why that's not on the table. I've been made quite aware of how much I don't know about ongoing projects in this space.
I did email ~20 of them about drafting amicus briefs and didn't get any takers; plausibly they would be willing to give some lesser form of help if you had ideas for what to ask for.
Good idea, I'll forward this. I'm focusing on US/Western profs for now because A) many Indian institutes are already involved, and India's profs seem to know about the case, and Sci-Hub's lawyers are much better connected there, and B) I think international/Western backing is an important source of clout diversification. Many Indian Supreme Court cases actually cite American amici as an important legal source.
I think backups of Sci-Hub would be a good idea if you can find any legal avenues to create them. I'm not sure if that's very tractable, and it doesn't appear to be all that neglected (though these are probably mostly in illegal jurisdictions).
Re scientific progress, I agree that it's not obviously a good thing. But after thinking about this extensively with little resolution, my conclusion is roughly this: we cannot reasonably learn enough to resolve the uncertainty, we can't coordinate on acting as if scientific progress is a negative thing, and acting as such would hamstring us in many ways. So I think we should basically treat "generally advancing science" as a fine/good thing, while circumscribing areas like AI capabilities and gain-of-function research as specifically bad. That gives better results and a more reasonable stance.