
jeberts

Comms director @ 1Day Sooner
483 karma · Joined Sep 2022 · Working (0-5 years) · Washington, DC, USA

Bio

My name is Jake. I live in DC. I used to work in foreign affairs, primarily China-watching, and then as an investigative researcher (think due diligence, political mudslinging, corporate accountability, etc.). Then I got dysentery as part of a human challenge trial. I tweeted about it like a maniac, went viral, and now I'm here. Life is funny.

Feel free to reach out via Twitter DM or LinkedIn, or email me at jake dot eberts @ 1daysooner dot org.

Unless it's very obviously about 1Day Sooner stuff, assume what I post here is my own personal opinion.

Comments (11)

Topic contributions (2)

Whoops, link fixed (here it is again). That article is part of a supplement dedicated to HCV challenge/CHIM.

Speaking in my personal capacity, I agree — I'd love for insurance/that sort of compensation to be the norm. That does not happen enough in medical research, challenge or otherwise. 

I can see why an insurance agency would be very wary. Establishing causation of cancer in general is hard. Even if someone were screened and in perfect liver health during the CHIM, that doesn't mean they won't later adopt common habits (e.g. smoking or excessive drinking) that are risk factors for liver cancer. 

Relatedly, another article in Clinical Infectious Diseases reviewed liver cancer risks due to CHIM, concluding that "[a]lthough it is difficult to precisely estimate HCC risk from an HCV CHIM, the data suggest the risk to be very low or negligible." This was based on analysis of three separate cohorts/datasets of people who had previously been infected with hepatitis C in other contexts. Still, the risk cannot be discounted entirely, and there are risks other than liver cancer that our FAQ document discusses, too.

Perhaps a workaround could be to establish some sort of trust that pays out to any former CHIM participant who develops liver cancer not obviously traceable to something like alcohol abuse disorder, and have this fund liquidate its assets after a certain number of decades. That would be very novel, expensive, and probably legally complicated, and I don't think it's been raised before.

Thanks for reading!

The donation-equivalent aspect is pretty interesting. A study probably would not allow a participant to decline the payment, so in practice the donation equivalent might just be however much money from the study one chooses to donate to effective causes (minus taxes; trial income is usually treated as taxable income, which is probably bad policy). I might be misunderstanding your point, though.

I'll reiterate (this probably should have been worded more clearly in the post): one of the arguments we make here is that, assuming all participants who make it into the study are about equally useful, we think EAs are also more likely to be effective as pre-participants. This is because the study is still under consideration: there are decisions about the study's design that could make it go faster, and informed advocacy from earnest pre-participants could be very persuasive for regulators and ethicists who might otherwise reject certain design decisions on paternalistic grounds. The community and shared worldview of EA make us think EAs will, on average, be more engaged when it comes to voicing their views on study design.

This interactive model app, based on the paper we mention in footnote 4, lets you tinker with a bunch of variables related to challenge model development and vaccine deployment. Based on that, and after a conversation with the lead author, we get about 200 years of life saved for every day sooner the model is developed. (The app isn't that granular/to-the-day yet, but it is supposed to be updated soon.) So pushing for study decisions that condense things even by a month or two could be huge.
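For a rough sense of scale, here's a back-of-envelope sketch (my own, in Python, not output from the app) that just multiplies the ~200 life-years-per-day figure above by a one- or two-month acceleration:

    # Back-of-envelope illustration, assuming ~200 life-years saved per day
    # of acceleration (the approximate figure from the model discussed above).
    LIFE_YEARS_PER_DAY_SOONER = 200

    for days_sooner in (30, 60):
        saved = days_sooner * LIFE_YEARS_PER_DAY_SOONER
        print(f"{days_sooner} days sooner ≈ {saved:,} life-years saved")

On that assumption, even a two-month acceleration works out to roughly 12,000 life-years, which is why the study-design margins matter so much.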

Part of our work has included pushing for higher compensation in general, both because we believe it can make recruitment easier (and faster) and because we think that pay should be more commensurate with the social value generated. I and a few other former human challenge volunteers wrote this paper, published in Clinical Infectious Diseases, calling for US$20,000 in compensation as a baseline. That's far higher than the norm for challenge studies; the highest I've seen is under $8,000.

Re: why EAs specifically, we delve into that a bit in footnote 9. In short, the study is still at a stage where it can be modified to substantially increase the potential QALYs/DALYs saved. The voices of prospective participants could be very, very persuasive to researchers, regulators, and ethicists when they consider study design. Non-EAs are certainly capable of advocating for and supporting changes as well, but we think EAs are much more likely to a) grasp the case for certain changes and b) be willing to advocate for them.

No one should feel like they're obligated to be in a study as an EA (or as a "normie," though I dislike that dichotomy). There are certainly people whose time is better spent elsewhere, EA or not. But not everyone on the forum necessarily works for an EA organization, and there are certainly people who feel they have spare capacity and time they'd like to commit to this sort of thing.

I agree with this! People get filtered out of the studies for reasons completely beyond their control, even if they really want to join. You just can't help it if your white blood cell count is a tad too low or you have a slight fever the day of study admission. 

Shoutout to the 130-ish people in the UK who volunteered to be infected with malaria in two separate studies at various stages of the R21 development process! Those studies helped identify Matrix-M as the ideal adjuvant, and also provided insight into the optimal dose/vaccination schedule.

I feel motivated, as a former due diligence/investigative research guy, to expand briefly on where my frustration came from. I think it's hard to overstate how stunning a failure of due diligence this was in the first round.

Due diligence for corporate work involves much more than Googling, but, like, the first step is often just Googling. When you Google Nya Dagbladet, the Swedish Wikipedia page pops up. (The English one did not exist last year.) 

Skimming the page as it existed circa fall 2022 through Google Translate should have immediately raised several red flags, even for people not familiar with Swedish politics. These flags would obviously be taken with a grain of salt, because it's Wikipedia, but it stuns me that they were ignored at first. The immediately apparent flags include:

  • The links to the far-right party Nationaldemokraterna/National Democrats 
  • The use of at least one columnist noted for antisemitic conspiracy theories (this guy)
  • The "ethnopluralist" label
  • Irresponsible and misleading reporting related to vaccines (this was added to the page after the letter of intent was signed, so presumably it was not visible at the time)

Some of those flags don't immediately check out — e.g., the ethnopluralist label is cited to the paper's about page but is not actually there (nor was it in archived versions of the website). But unless we assume the Wikipedia page is a straight-up hit job — which is unlikely, and would be ruled out by checking even a few of the references — proper due diligence research would have started with a very, very heavy level of scrutiny.

But it sounds like what happened is that they merely checked the Nya Dagbladet website and proposal and didn't see anything suspicious (again, a due diligence failure, though the website is not quite as blatant at first glance as, say, Breitbart News), and wrote off the evidence of far-right ties and views because the "quality of public discourse worldwide has degraded so badly" that you can't be sure.

The baseline Wikipedia + sources check took about twenty minutes to do, including typing this up here. Strong due diligence work is really important. I get that they ultimately did not give the grant, but to me, it's very disturbing that it even made it past the first half-hour sniff test.

If you're not already aware of the University of Chicago's Scav, I'd highly recommend poaching some ideas from them if you ever need inspiration. (E.g., Item 10 from 2021: "A collection of baseball cards for members of the Los Angeles Biblically Accurate Angels baseball team," or Item 262 from 2015: "a series of cartoons [drawn] on at least 30 tissues such that when they are rapidly pulled out of a tissue box, they create an animation.")

It's great that you know the results. While relatively minor in the grand scheme of things, it's frustrating that trials, at least here in the US, don't often share results with participants, even though it's theoretically as simple as a mass email along the lines of "here's what we learned" — presumably an email they're already sending to colleagues, funders, etc., in some form. I had to ask the people running the Shigella trial for my data (not available yet, but I really wanna see if I got the placebo or not)!
