Thanks for the thoughtful response!
I want to start with the recognition that everything I remember hearing from you in particular around this topic, here and elsewhere, has been extremely reasonable. I also very much liked your paper.
My experience has been that multiple discussions around disease have been shut down prematurely in some in-person EA spaces, or else turned into extended discussions of infohazards, even when I'm careful. At some point it started to feel more like a meme than anything. There are cases where infohazards were raised as a genuine, relevant concern, but there are also a lot of EAs and rationalists who seem to have a better grasp of the infohazard meme than of anything topical in this space. Some of the sentiment you're pointing to is largely a response to that, and it was one of the motivations for writing a post focused on clear heuristics and guidelines. I suspect this sort of thing happening repeatedly carries its own kind of reputational risk, which could stand some critical examination.
I think there are good reasons for the apparent consensus you present that particularly effective EA biorisk work requires extraordinarily credentialed people.* You did a good job of laying that out here. The extent to which political sensitivity and the delicate art of reputation management play into this is something I was partially aware of but had perhaps under-weighted. I appreciate you spelling it out.
The military seems to have every reason to adopt discretion as a default. The media and general public also have a certain tendency to panic in actively damaging directions around topics like epidemiology, which may feed into a need for reputation-related discretion in those areas as well. The response to an epidemic can have a huge, and sometimes negative, impact on how a disease progresses, so a certain level of caution in these fields seems warranted.
I want to quickly note that I tend to be relatively unconvinced that mature, bureaucratic hierarchies are evidence of a field being covered competently. But I would update considerably in your direction if your experience agrees with something like the following:
Is it your impression that whenever you (or talented friends in this area) come up with a reasonably implementable good idea, you tend to discover, after searching around, that someone else has already found and tried it?
And if not, what typically seems to have gone wrong? Is there a step that usually falls apart?
Here are some possible bottlenecks I could think of, and I'm curious whether one of them sounds more right to you than the others:

- Is it hard to search for what's already been done, to the point that there are dozens of redundant projects?
- Is there simply too much to do, with each project a rather large undertaking? (A million good ideas, each of which would take 10 years to test.)
- Does it seem too challenging for people to find some particular kind of collaborator?
- A resource inadequacy?
- Is the field riddled with untrustworthy contributions, just waiting for a replication crisis? (That would certainly do a lot to justify the unease and skepticism about newcomers that you described above.)
- Does it mostly look like good ideas tend to die a bureaucratic death?
- Does it seem as if, structurally, it's almost impossible for people to remain motivated by the right things?
- Or is the field just... noisy, for lack of a better word? Hard to measure for real effect or success.
*It does alienate me, personally. I try very hard to stand as a counterargument to "credentialism required": someone who tries to get mileage out of engaging with conversations and small biorisk-related interventions as a high-time-investment hobby alongside an analysis career. Officially, all I have backing me up on this is a biology-related BS degree, a lot of thought, enthusiasm, and a tiny dash of motivating spite. If there weren't at least a piece of me fighting against some of the strong-interpretation implications of this conclusion, this post would never have been written. But I do recognize some level of validity to the reasoning.
> Now that we've gone over some of the considerations, here's some of the concrete topics I see as generally high or low hazard for open discussion.
>
> These things may be worth specialists discussing among themselves, but are likely to do more harm than good in an open thread.
Thanks! For me, this does a bit to clear up why buy-in is perceived as such a key bottleneck.
(And secondarily, supporting the idea that other fairly-high-ROI areas are likely to center around facilitating collaboration and consolidating resources among people with a lot of pre-existing experience/expertise/buy-in.)