The COVID-19 pandemic was likely due to a lab leak in Wuhan. The question is still up for public debate, but it will likely be settled when the US intelligence community reports on its attempts to gather information about what happened at the Wuhan Institute of Virology between September and December of 2019 and about suspicious activities there around that time.
However, even in the remote chance that this particular pandemic didn't happen as a downstream consequence of gain-of-function research, we had good reason to believe that the research was extremely dangerous.
Marc Lipsitch, who should be on the radar of the EA community given that he spoke at EA Boston in 2017, wrote in 2014 about the risks of gain-of-function research:
A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.
If we make the conservative assumption of 20 full-time people working on gain-of-function research and take his lower bound of risk per full-time worker-year (0.05%), that gives us a 1% chance per year of gain-of-function research causing a pandemic. These are conservative numbers, and there's a good chance that the real risk is higher than that.
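The arithmetic behind that 1% figure can be sketched out directly from Lipsitch's per-worker-year numbers. The worker count of 20 is the conservative assumption made above, not a figure from his paper:

```python
# Lipsitch (2014): risk of a lab-created influenza pandemic per
# full-time worker-year, using the NIAID data.
risk_lower = 0.0005  # 0.05% per full-time worker-year (lower bound)
risk_upper = 0.006   # 0.6% per full-time worker-year (upper bound)

workers = 20  # assumed number of full-time gain-of-function researchers

# Annual probability that the field as a whole causes a pandemic,
# treating each worker-year as an independent contribution.
annual_lower = risk_lower * workers
annual_upper = risk_upper * workers

print(f"annual pandemic risk: {annual_lower:.1%} to {annual_upper:.1%}")
# prints "annual pandemic risk: 1.0% to 12.0%"
```

Even the lower bound of 1% per year is alarming once it compounds over a decade or two of ongoing research.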
When the US moratorium on gain-of-function research was lifted in 2017, the big EA organizations that promise their donors to act against Global Catastrophic Risks were asleep and didn't react. That's despite those organizations counting pandemics as a possible Global Catastrophic Risk.
Looking back, it seems like this was easy mode, given that a person in the EA community had already done the math. Why didn't the big EA organizations listen more?
Given that they got this so wrong, why should we believe that their other analyses of Global Catastrophic Risks aren't also extremely flawed?
Generally, if you don't have strong faith in the numbers, the way to deal with that is to study the question further. I was under the impression that understanding global catastrophic risks is part of why we have organizations like FLI.
Even if they didn't accept the numbers, the task for an organization like FLI would be to make its own estimate.
To go a bit into history: the reason the moratorium existed in the first place was that, within the span of a few weeks in 2014, some 75 US scientists at the CDC were potentially exposed to anthrax and FDA employees found forgotten vials of smallpox in storage. Those incidents were needed to weaken the opposition enough to get the moratorium passed.
When the evidence for harm is so strong that it forces the hand of politicians, it seems to me a reasonable expectation that organizations whose mission it is to think about global catastrophic risks analyze the harm and take a public position on what they think the risk is. If that's not what organizations like FLI are for, what are they for?