
Matthew Stork

155 karma · Joined October 2021

Comments (18)

Thanks for that additional context, Mats. I did want to follow up on US vs. ex-US P1 trials.

  1. At this point, Pfizer and Moderna have consistently gotten mRNA vaccines into US trials in ~2 months after receiving a new sequence. Are the timelines to starting a trial in Australia/South Africa/elsewhere considerably shorter?

  2. As a comment, big pharma does perform vaccine trials in the global South, although this is more about seasonality than speed. Pfizer performed a major P1/P2 trial for their RSV vaccine in Australia, because the timing of RSV season in Australia matched well with the development timelines for the drug.

  3. One of the main arguments I see in favor of US or EU P1 trials is that the FDA/EMA are the most sophisticated regulatory agencies, and will provide more useful feedback on your development plan than other agencies. This is particularly relevant if you intend to eventually market a drug in the US, since the FDA only accepts foreign trial data if the study was conducted according to FDA requirements. Since Alvea wasn't really targeting the US market, this is less of a consideration. Still, what do you make of this argument?

  4. I'd also mention that once you get to the P3/BLA stage, filing in a country with a less sophisticated regulatory agency can be a burden. My own experience is that FDA/EMA/Japan are more willing to let sponsors deviate from official guidelines if there is a good scientific justification. Meanwhile, other agencies lack the expertise to assess drugs on the merits and will fall back on saying that you must follow the guidelines. For instance, I've been in the maddening position of having agencies in small countries repeatedly cite FDA guidance to argue that we need to make some change to our plans, despite the fact that our drug was already approved by the FDA without said change!

Thanks for the thoughtful response, Max, and I appreciated your separate write-up on this subject.

I do want to highlight, though, that in his write-up Kyle listed the improved regulatory environment for updated mRNA vaccines as a reason why Alvea decided not to proceed with your first vaccine candidate. That's really what I was addressing with my comment.

As to the second point about lack of efficacy for plasmid vaccines, I think there were different opinions on this within Alvea. One high-ranking person I spoke to gave me the opinion that poor performance of a competitor COVID plasmid vaccine was due to competitor incompetence rather than an issue with plasmid vaccines themselves. That sort of comment is why I mentioned my concern that there may have been a degree of contempt among some in Alvea for big pharma.

Totally agree as to the "practical" efficacy of the adenovirus vaccines though!

One thing to clarify here: Alvea holds the record for time from company founding to clinical trial. Established companies have had shorter timelines from program conception to a clinical trial. For instance, Pfizer and Moderna went from initial receipt of the COVID sequence to a clinical trial in roughly two months for their COVID vaccines.

Still an impressive achievement for Alvea though; they were nearly as fast as the pharma giants despite starting from nothing.

While I was sad to hear about Alvea's winddown, you do have a lot to be proud of. As you and others have highlighted, the extreme speed to the clinic for your initial vaccine candidate was unprecedented. 

That said, I did want to provide a couple of critiques, with the intention of helping guide future EA endeavors in the biotech space. For some context for folks: I'm a mid-career professional working in biologics manufacturing, with experience in both plasmids and vaccines. I also had a number of conversations with Alvea folks at various stages of their clinical development. I'm going to focus here just on the initial push for the first candidate, which is the stage I'm most familiar with.

  1. There weren't enough Alvea team members with significant domain expertise in biotech and vaccines. Your speed to the clinic demonstrated that a team of smart and motivated generalists can achieve operational excellence. You were right to mistrust the industry veterans who said those timelines were impossible*. However, I would argue that there are other areas where industry "common wisdom" was correct, and listening to it would have been beneficial:
    1. Competition from mRNA vaccines. Pfizer had already announced development of an Omicron-specific vaccine in Dec 2021, with a trial starting in January 2022. The original COVID vaccine was approved under an EUA eight months after initiation of clinical trials. I therefore don't think it should have been surprising to see an updated anti-Omicron vaccine approved in Sept 2022, eight months after initiation of that trial.
    2. Low efficacy of plasmid vaccines. I won't get into the biology here, but plasmid vaccines have generally been shown to have poor efficacy compared to other modalities.
  2. Relatedly, I think the "sprint" mindset made it difficult to recruit experienced professionals. Speaking just for myself, as someone with a mortgage and a child, it's very difficult to justify dropping everything to work on a short-term, high-risk project.
  3. Over-indexing on the belief that "FDA and big pharma are too slow and EAs could do better". To be fair, FDA and big pharma usually are too slow! However, I think as EAs, we need to be realistic about what our marginal contribution can actually be. This is the purpose of assessing neglectedness in the "importance, neglectedness, tractability" framework. Making COVID vaccines is one of the least neglected cause areas out there, given the billions invested in this space by both industry and the public sector. If I had a main critique of the initial vaccine push, it was along the lines of "how do you think you are going to out-compete Pfizer/Moderna?" (Full disclosure: I'm an ex-Pfizer employee myself.) This also plays into why domain expertise is useful. In a highly crowded space, it's particularly important to understand the competition.

Anyway, these critiques still don't take away from what you accomplished. I would particularly love to hear more at some point about how you managed to get your external partners to meet your timelines, for very self-interested reasons!

* I do want to say that this was not a universal sentiment among industry veterans. Checking back to our initial emails, at least from a manufacturing perspective I never argued that your goals were unachievable. This plays a bit into my third critique as well.

I would define race science as the field trying to prove the superiority of one race over another, for the purpose of supporting a racial hierarchy.

So IQ differences between races = race science

Susceptibility to different diseases != race science

Differences in 100m dash times != race science (countries don't choose their leaders based on sprint times).

Sure, I provided David Reich as an example of a population geneticist doing good work that I believe is worthwhile.

I disagree; I don't think there is value in race science at all, since race isn't a particularly good way of categorizing people. At the moment, there are plenty of good scholars working in population genetics (David Reich at Harvard is a good example). None of the scholars I'm aware of use race as a primary grouping variable, since it's not particularly precise.

To be clear, the "one topic" is race science, not general intelligence.

I think this is the best steelman of a certain position that prioritizes epistemic integrity. I also think this position is wrong.

The only acceptable approach to race science is to clearly and vigorously denounce assertions that one race is somehow superior or inferior, and to state that it is a priority to address any apparent disparities between races. Responding to inquiries on this subject with some version of "I'm not an expert in intelligence research, etc." comes across as "mealy-mouthed," to use Rohit's words. Bostrom himself used a version of this argument in his apology, and it just doesn't fly.

This doesn't require sacrificing epistemic integrity. Rohit's suggested apology is pretty good in this regard:

"We still have IQ gaps between races, which doesn't make sense. It's closing, but not fast enough. We should work harder on fixing this."

EDIT: Overall, my main point is that Rohit is broadly correct in asserting that it's a huge problem if the EA community ends up somehow having a position on the IQ and race question. It's obviously a massive PR problem; how do you recruit people to join an organization that has been branded as being racist? Even more important though, if the question of IQ and race plays a non-trivial role in your determination of how to do the most good, then you have massively screwed up somewhere in your thought process.

EDIT 2: Removed some comments that prompted a discussion on topics that really just aren't relevant in my opinion. I think we should avoid getting caught up arguing about the specifics of Bostrom's claims, but part of my comment seems to have prompted discussion in that direction so I've removed it.

Agree with your post and want to add one thing. Ultimately this was a failure of EA ideas more so than of the EA community. SBF used EA ideas as a justification for his actions. Very few EAs would condone his amoral stance w.r.t. business ethics, but business ethics isn't really a central part of EA ideas. In the end, I think the main failure was EAs failing to adequately condemn naive utilitarianism.

I think back to the old Scott Alexander post about the rationalist community: Yes, We Have Noticed The Skulls | Slate Star Codex. I think he makes a valid point that the rationalist community has tried to address the obvious failure modes of rationalism. This is also true of the EA community, in that there has absolutely been some criticism of galaxy-brained naive utilitarianism. However, there is a certain defensiveness in Scott's post, an annoyance that people keep bringing up past failure modes even though rationalists try really hard not to fail that way again. I suspect this same defensiveness may have played a role in EA culture. Utilitarianism has always been criticized for the potential that it could be used to justify...well, SBF-style behavior. EAs can argue that we have newer and better formulations of utilitarianism / moral theory that don't run into that problem, and this is true (in theory). However, I do suspect that this topic was undervalued in the EA community, simply because we were super annoyed at critics who kept harping on the risks of naive utilitarianism even though clearly no real EA actually endorses naive utilitarianism.
