All of JoshuaBlake's Comments + Replies

The people who are concerned about existential risk from near-term AGI don’t think it’s only a justified worry if you account for lives in the distant future. They think it’s a justified worry even if you only account for people who are already alive right now.

The argument that AI safety work is more cost-effective than AMF when considering only the next few generations is pretty weak.

2
Yarrow Bouchard 🔸
This is, of course, sensitive to your assumptions.
6
DavidNash
Doesn't that still depend on how much risk you think there is, and how tractable you think interventions are? I think it's still accurate to say that those concerned with near term AI risk think it is likely more cost effective than AMF.

Governance, financing, and supply chain interventions can be randomised at state or district level

While I agree this is true in theory, is it practical? I imagine the size needed to power such a study is prohibitive except for the largest organisations, the answer still wouldn't be definitive (eg due to generalisation concerns), and there would be lots of measurement issues (eg residents in one district crossing to another district with better funded healthcare).

If you tell me I'm wrong, I'd definitely bow to your experience and knowledge in this field but this isn't obviously true.

6
NickLaing
I don't have great experience and knowledge here AT ALL, as a caveat. Never "bow" to anything I say; my takes are often more on the "loose" than "rock solid" end of things :D.

I think if we can randomise things like scorecard studies and IMCI across hundreds of health facilities (done a number of times), then I don't see why we can't do the same with supply chain or governance interventions. The community health worker movement has done some impressive large-scale RCTs like this one. Perhaps 1-3 million dollars could make these studies happen without too much trouble: give 10 randomised districts the governance/financing intervention and 10 not, then just see if healthcare outputs improve.

I actually think it's easier than many other types of studies because:
1. I think it's good enough to measure outcomes in terms of facility-level outputs, so we don't necessarily need community-level morbidity/mortality data.
2. Outcome measures (no. of patients treated, correct diagnosis) would be super easy and not expensive to measure compared with other studies. In many cases routinely collected DHIS data should be enough to answer the primary outcome question, so we don't even necessarily need much expense on data collection (a big study cost).

I would say from an RCT perspective, if people crossed to another district because healthcare was getting that much better, that would be a strong sign that the intervention is working insanely well. If it was a financing-type intervention, you could make it close to cost-neutral between the intervention and control groups. People are NOT very mobile in places like Uganda at least. Where I work in rural places, transport is often (if not usually) the biggest healthcare cost people incur.

I think the biggest reasons these studies haven't happened more (there are some) are less practicality and more...
1. Most governance, financing and supply chain interventions are funded by bilateral aid not philanthropy, so they don't usually think a

Do you have any advice for people like myself considering whether to go to you, 80k, or both for 1-1 career advice?

6
Probably Good
The best advice we can give on this currently is to simply apply to both!

Thanks for this post @alex lawsen, I continually revisit this as inspiration and to remember the usefulness of this process when I am making hard decisions, especially for my career.

From your blog, I know you are a big user of LLMs. I was wondering if you had successfully used them to replace, or complement, this process? When I feed one my Google Doc, I find the output is too scattergun or vague to be useful, compared to sharing the same Doc with friends.

If you've succeeded using LLMs, would you please share what prompts and models have worked well for you?

Thanks, that's very useful for me trying to follow but not that deep in the models!

I feel like the overall takeaway is very different though. I've not fully understood the details in either argument, so this is a little vibes-based. You seemed to be arguing that below-subsistence wages were fairly likely, while here it seems that even falling wages require a weird combination of conditions.

What have I misunderstood?

5
Matthew_Barnett
I think the conditions that support eventual below subsistence wages are fairly plausible, which is why I argued that the overall outcome is plausible. It appears Phillip Trammell either believes these conditions are less likely than I do, or decided to temporarily suspend judgement about their likelihood for the purposes of writing this post. Either way, while I agree the emphasis of our posts is different, I think the posts are still consistent with each other in a minimal sense.

This seems like good work, but the headline and opening paragraph aren't supported when you've shown it might be log-normal. Log-normal and power-law distributions often have quite different consequences for how important it is to move to very extreme percentiles, and hence this difference can matter for lots of decisions relevant to EA.
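To illustrate why this distinction matters, here is a minimal Python sketch (with illustrative parameters, not fitted to the post's data) comparing how much the extreme percentiles of a log-normal and a power-law (Pareto) distribution run ahead of the merely-high ones:

```python
import numpy as np
from scipy import stats

# How much does moving from the 99th to the 99.99th percentile matter?
# Compare a log-normal and a Pareto with illustrative (not fitted) parameters.
lognorm = stats.lognorm(s=1.0, scale=1.0)
pareto = stats.pareto(b=1.5)

for name, dist in [("log-normal", lognorm), ("Pareto", pareto)]:
    q99, q9999 = dist.ppf([0.99, 0.9999])
    print(f"{name:10s}: 99.99th / 99th percentile = {q9999 / q99:.1f}x")
# The ratio is ~4x for this log-normal but ~22x for this Pareto, so the value
# of reaching very extreme percentiles depends heavily on which tail you assume.
```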

While Sam Bowman is interesting and seems ideologically similar to many EAs, it's not obvious to me why he's on the 80k podcast. Does 80k think UK housebuilding is one of the world's most pressing problems? Is there something else I'm missing?

6
JWS 🔸
The underlying idea here is the Housing Theory of Everything. A lossy compression of the idea is that if you fix the housing crisis in Western economies, you'll unlock positive outcomes across economic, social, and political metrics, through which you can then have high positive impact. A sketch, for example, might be that you want the UK government to do lots of great stuff in AI Safety, but UK state capacity in general might be completely borked until it sorts out its housing crisis.
2
Nithin Ravi🔸
I've never seen a BOTEC of DALYs/$ for land use reform so unsure how it compares to other interventions, but my impression is that it is in the EA discourse because OP funds it. I lean towards it not being an effective use of funds, but I have low confidence in that. https://www.openphilanthropy.org/focus/land-use-reform/
5
Chris Szulc
Thank you for sharing this. I have applied there recently and either did not reach EAIF's bar or failed to present the achievements and plans in a compelling way. Feedback seems uncommon in the EA grants acquisition landscape, so I can only guess and try again in 2025.

Type of estimate: Geometric mean of forecaster probabilities

This is a bit odd. Should probabilities be odds?

4
NunoSempere
Yes, should be odds, thanks
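For readers less familiar with the distinction, here is a minimal Python sketch (with made-up forecasts) of the two pooling methods; the geometric mean of odds and the geometric mean of probabilities can diverge noticeably when forecasts are far from 50%:

```python
import numpy as np

def geo_mean_of_odds(probs):
    """Aggregate forecaster probabilities via the geometric mean of odds."""
    probs = np.asarray(probs, dtype=float)
    odds = probs / (1 - probs)               # convert each probability to odds
    pooled_odds = np.exp(np.mean(np.log(odds)))  # geometric mean of odds
    return pooled_odds / (1 + pooled_odds)   # convert back to a probability

forecasts = [0.5, 0.9, 0.99]                 # hypothetical forecaster probabilities
print(geo_mean_of_odds(forecasts))                # ~0.91
print(np.exp(np.mean(np.log(forecasts))))         # geometric mean of probabilities, ~0.76
```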

Could you please expand on why you think a Pareto distribution is appropriate here? Tail probabilities are often quite sensitive to the assumptions here, and it can be tricky to determine if something is truly power-law distributed.

When I looked at the same dataset, albeit processing the data quite differently, I found that a truncated or cutoff power-law appeared to be a good fit. This gives a much lower value for extreme probabilities using the best-fit parameters. In particular, there were too few of the most severe pandemics in the dataset (COVID-19 an... (read more)

2
Vasco Grilo🔸
Thanks for the relevant points, Joshua. I strongly upvoted your comment. I did not mean to suggest a Pareto distribution is appropriate, just that it is worth considering.

Agreed. In my analysis of conflict deaths, for the method where I used fitter:

---

I did not get what you would like me to add to my tail distribution plot. However, I added here the coefficients of determination (R^2) of the regressions I did.

Focussing on the annual deaths as a fraction of the global population is useful because it being 1 is equivalent to human extinction. In contrast, total epidemic/pandemic deaths as a fraction of the global population in the year in which the epidemic/pandemic started being equal to 1 does not imply human extinction. For example, a pandemic could kill 1 % of the population each year for 100 years, but the population would remain constant if births equalled the pandemic deaths plus other deaths.

However, I agree interventions should be assessed based on standard cost-effectiveness analyses. So I believe the quantity of most interest which could be inferred from my analysis is the expected annual epidemic/pandemic deaths. These would be 2.28 M (= 2.87*10^-4*7.95*10^9), multiplying:
* My annual epidemic/pandemic deaths as a fraction of the global population based on data from 1900 to 2023. Earlier years are arguably not that informative.
* The population in 2021.

The above expected death toll would rank as 6th in 2021. For reference, based on my analysis of conflicts (historical data from 1900 to 2000, also adjusted for underreporting) and the population in 2021, I get an expected death toll from conflicts of 3.83 M (= 4.82*10^-4*7.95*10^9), which would rank 5th, just above.

Here is a graph with the top 10 actual causes of death and the expected conflict and epidemic/pandemic deaths:
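As an illustration of the kind of comparison being discussed (pure power law vs truncated power law), here is a sketch using the `powerlaw` package on a hypothetical array of per-event death tolls, not the actual dataset; the likelihood-ratio test and the implied tail probability are the quantities of interest:

```python
import numpy as np
import powerlaw  # heavy-tailed distribution fitting (Alstott et al.)

# Hypothetical per-event death tolls; a stand-in for the real epidemic dataset.
rng = np.random.default_rng(0)
deaths = 1e3 * (1 + rng.pareto(a=1.5, size=500))

fit = powerlaw.Fit(deaths)
alpha, xmin = fit.power_law.alpha, fit.power_law.xmin
print(f"pure power law: alpha={alpha:.2f}, xmin={xmin:.0f}")

# Likelihood-ratio test: R > 0 favours the first candidate, R < 0 the second.
R, p = fit.distribution_compare("power_law", "truncated_power_law")
print(f"power_law vs truncated_power_law: R={R:.2f}, p={p:.3f}")

# Tail probability implied by the *pure* power-law fit (Pareto CCDF).
x = 1e8
print("pure power-law P(X > 1e8):", (x / xmin) ** (1 - alpha))
# A truncated power law with the same body puts far less mass out here,
# which is why the choice of tail model matters so much for extreme risks.
```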

If you were willing to start putting doses in arms without any safety or efficacy testing at all, that's when you could start.

I think even this is pretty optimistic because there was very little manufacturing capacity at that point.

8
Will Bradshaw
I didn't say a lot of arms. 😛 But it's a fair point. Of course, in the absence of testing, Moderna could have ramped up production much faster. But I'm not sure they would have even if they were allowed to - that's a pretty huge reputational risk.

I don't see how you can say both that it will "almost never" be the case that NYC will "hit 1% cumulative incidence after global 1% cumulative incidence" but also that it would surprise you if you can get to where your monitored cities lead global prevalence?

Sorry, this is poorly phrased by me. I meant that it would surprise me if there's much benefit from adding a few additional cities.

2
Jeff Kaufman 🔸
Possibly! That would certainly be a convenient finding (from my perspective) if it did end up working out that way.

The best stuff looking at global-scale analysis of epidemics is probably by GLEAM. I doubt full agent-based modelling at small scales gives you much beyond massively complicating the model.

Sorry, I answered the wrong question, and am slightly confused what this post is trying to get at. I think your question is: will NYC hit 1% cumulative incidence after global 1% cumulative incidence?

I think this is almost never going to be the case for fairly indiscriminately-spreading respiratory pathogens, such as flu or COVID.

The answer is yes only if NYC's cumulative incidence is lower than the population-weighted global mean. Due to connectedness, I expect NYC to always be hit pretty early, as you point out, definitely before most rural c... (read more)

4
Jeff Kaufman 🔸
That's one of the main questions, yes. The core idea is that our efficacy simulations are in terms of cumulative incidence in a monitored population, but what people generally care about is cumulative incidence in the global (or a specific country's) population.

Thanks! The tool is neat, and it's close to the approach I'd want to see.

I don't see how you can say both that it will "almost never" be the case that NYC will "hit 1% cumulative incidence after global 1% cumulative incidence" but also that it would surprise you if you can get to where your monitored cities lead global prevalence?

This effect should diminish as the pandemic progresses, but at least in the <1% cumulative incidence situations I'm most interested in it should remain a significant factor.

1% cumulative incidence is quite high, so I think by the time you're that far along you're probably fine. E.g. we've estimated London hit this point for COVID around 22 Mar 2020, when it was pretty much everywhere.

[This comment is no longer endorsed by its author]
2
Jeff Kaufman 🔸
I'm not sure what you mean by this? (Yes, 1% cumulative incidence is high -- I wish the NAO were funded to the point that we could be talking about whether 0.01% or 0.001% was achievable.)

Hmm... That cutoff is really making it hard to assess what's going on here, IMO. Everything is kinda clustered close to the line, making me suspect the selection effect is important.

This seems unnecessarily precise and long. I don't think many people would get a different takeaway from the current title but it has over twice as many words.

You could do a funnel-type plot where your y-axis is EAs/capita and your x-axis is 1/sqrt(population), which is sort of what you'd expect the standard deviation to look like.

2
OscarD🔸
OK this is what we get, using the 25 EAs cutoff for the red line.
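A rough sketch of the kind of plot being described, with made-up country data and the "25 EAs" cutoff drawn as the red line (all numbers are placeholders, not the actual survey data):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical country data (population, number of EAs) -- purely illustrative.
pop = np.array([5e6, 1e7, 4e7, 6e7, 8e7, 3e8, 1.4e9])
eas = np.array([120, 150, 250, 400, 350, 1800, 900])

x = 1 / np.sqrt(pop)   # roughly proportional to the sampling standard deviation
y = eas / pop          # EAs per capita

plt.scatter(x, y)

# The 25-EA cutoff: since pop = 1/x^2, the cutoff in EAs/capita is 25 * x^2.
xs = np.linspace(0, x.max() * 1.05, 200)
plt.plot(xs, 25 * xs**2, color="red", label="25 EAs cutoff")

plt.xlabel("1 / sqrt(population)")
plt.ylabel("EAs per capita")
plt.legend()
plt.show()
```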

This seems intuitively in the right ballpark (within an order of magnitude of GiveWell), but I'd caution that, as far as I can tell, the World Bank and Bernstein et al. numbers are basically made up.

I've previously written about how to identify higher impact opportunities. In particular, we need to be careful about the counterfactuals here because a lot of the money on pandemic preparedness comes from governments who would otherwise spend on even less cost effective things.

2
Vasco Grilo🔸
Thanks for the comment, Joshua! Because we do not know the relative reduction in the expected annual deaths caused by their proposed measures, right? I guess their values are optimistic, such that GiveWell's top charities are more than 4.12 times as cost-effective.

Glad you found it useful.

Your question is hard to answer because tackling GCBRs is extremely inter-disciplinary and I'm not familiar enough with the field at large to identify clear gaps. Personal fit probably matters more than the specific field. Most work on pandemic preparedness is applicable to GCBRs, and there are lots of research groups working on that.

Sorry that's not a very helpful answer. If you've got more specific questions I'm happy to try and help.

My top line summary is: in several areas, EV were operating below the standard the commission would expect, but have rectified the issues to the commission's satisfaction.

Is this about OCB?

The timings and description line up very closely with the public record, so I'm almost certain it must be.

Since the launch of GPT-4, the next relevant data point will be GPT-5. All the rest is ~noise. 

Yes, and the longer until this release, the more we should update towards longer timelines.

1
defun 🔸
The prediction on Metaculus has barely moved for a year.

Why are you ballparking $10b when all of the examples given are many multiples of that? $100b seems like a better estimate.

I also suspect we're targeting easy-to-eradicate diseases: those without animal reservoirs that will cause resurgences, and where there are effective interventions. Therefore, I'd suggest this is a lower bound.

1
Matt_Sharp
I'm also confused as to why $10bn per disease is suggested, given the much higher costs of the listed examples.  However, it seems plausible that costs per disease will substantially decrease as we learn more about biology and how to successfully run eradication campaigns. For example, developing a new vaccine technology against one virus could make it much easier and cheaper to develop vaccines against related viruses.

Can I get alerts when new jobs get added matching my criteria?

6
Probably Good
Currently we aren't able to send job alerts for the board, but it's a feature we'll definitely explore adding in the future. Thanks for the feedback!  

Your objections seem reasonable but I do not understand their implications due to a lack of finance background. Would you mind helping me understand how your points affect the takeaway? Specifically, do you think that the estimates presented here are biased, much more uncertain than the post implies, or something else?

Sure, the claim hides a lot of uncertainties. At a high level the article says “A implies X, Y and Z”, but you can’t possibly derive all of that information from the single number A. Really, what the article should say is “X, Y and Z are consistent with the value of A”, which is a very different claim.

I don’t specifically disagree with X, Y and Z.

Thanks - I think this type of careful empirical analysis, and its distillation, is some of the best content on the forum. I found your section on varying parameters particularly helpful for quantifying how sensitive the approach is to these non-empirical inputs.

Based on the abstract, that study is based on a survey where they asked people about hypothetical future scenarios. Those surveys are generally considered pretty inaccurate (most people forecast their future decisions poorly) so I wouldn't put much weight on it.

So the question is basically whether (upkeep costs + opportunity cost of money - benefit from events) is more or less than the discount from selling quickly?

4
Habryka [Deactivated]
Yep, I think that would be a reasonable calculation. 

What do you mean by take a huge loss? I'm not sure paper losses are relevant here.

I mean it in the sense that they will have to sell substantially below market value if they want to sell it quickly.

This kind of property tends to have huge bid-ask spreads, and the usual thing to do is to continue operating the property while looking for a buyer (my guess is they would succeed at selling it eventually at market value, but it would take a while).

Interesting read, but I'm left unconvinced that traditional pharma is moving much slower than optimal. That would seem to imply that they're leaving a lot of money on the table (quicker approval = longer selling the drug before the patent expires).

I have three speculative ideas on why this might be. Cost of the process, ability to scale the process, and risk (e.g. amount of resources wasted if a drug fails at some stage in development).

As the article points out, pharma can do this when the incentives are right (COVID vaccines) which implies there's a reason to not do it normally.

You need a step beyond this though. Not just that we are coming up with harder moral problems, but that solving those problems is important to future moral progress.

Perhaps a structure as simple as the one that has worked historically will prove just as useful in the future, or, as you point out has happened in the past, wider societal changes (not progress in moral philosophy as an academic discipline) will be the major driver. In either case, all this complex moral philosophy is not the important factor for practical moral progress across society.

3
Rafael Ruiz
Fair! I agree with that, at least up to this point in time. But I think there could be a time where we have picked most of the "social low-hanging fruit" (cases like the abolition of slavery, universal suffrage, universal education), so there's not a lot of easy social progress left to do. At that point, comparatively, investing in the "moral philosophy low-hanging fruit" will look more worthwhile.

Some important cases of philosophical moral problems that might have great axiological moral importance, at least under consequentialism/utilitarianism, could be population ethics (totalism vs averagism), our duties towards wild animals, and the moral status of digital beings. I think figuring them out could have great importance.

Of course, if we always just keep them as interesting philosophical thought experiments and we don't do anything about promoting any outcomes, they might not matter that much. But I'm guessing people in the year 2100 might want to start implementing some of those ideas.

Bear in mind that even if FTX can pay everyone back now, that does not mean they were solvent at the point they were put into bankruptcy.

1
bern
Agree. In fact, SBF himself described FTX International as insolvent on his substack. Although I think people may be using the term "solvency" in slightly different ways in discussions around FTX.

I think that in FTX's case, illiquidity effectively amounted to insolvency, and that it's uncertain how much they could have sold their illiquid assets for. If for some reason you were to trust SBF's own estimate of $8b, their total assets would have (just) covered their total liabilities.

Sullivan & Cromwell's John Ray said in December 2022 "We’ve lost $8bn of customer money" and I think most people have interpreted this as FTX having a net asset value of minus $8b. Presumably, though, Ray was referring either to the temporary shortfall in liquid funds or to the accounting discrepancy that was uncovered that summer/fall.

SBF also claimed that he could have raised enough liquidity to make customers substantially whole given a few more weeks, but was under extreme pressure to declare bankruptcy. I think there's a good chance this is accurate, in part because most of the pressure came from Sullivan & Cromwell and a former partner of the firm, who are now facing a class action lawsuit for their alleged role in the fraud.

(If anyone has evidence that FTX's liabilities did in fact exceed its assets by $8b at the time of the bankruptcy, I would be interested in seeing it.)
8
Ben Millwood🔸
My understanding (for whatever it's worth) is that most of the reason why a full repayment looks feasible now is a combination of:
* Creditors are paid back the dollar value of their assets at the time of bankruptcy. Economically it's a bit like everyone was forced to sell all their crypto to FTX at bankruptcy date, and then the crypto FTX held appreciated a bunch in the meantime.
* FTX held a stake in Anthropic, and for general AI hype reasons that's likely to have appreciated a lot too.

I think it's reasonable to think of both of these as luck, and certainly a company relying on them to pay their debts is not solvent.
2
Nathan Young
They almost certainly were not. (99%)

and even if they were solvent at the time, that does not mean they were not fraudulent.

If I took all my customers money, which I had promised to safekeep, and went to the nearest casino and put it all on red, even if I won it would still be fraud.

In your argument for 3, I think I accept the part that moral philosophising hasn't happened much historically. However, I can't really find the argument that it probably will in the future. Could you perhaps spell it out a bit more explicitly, or highlight where you think the case is being made please?

Great and interesting post though, I love seeing people rigorously exploring EA ideas and fitting them into the wider academic literature.

3
Rafael Ruiz
Sure! So I think most of our conceptual philosophical moral progress until now has been quite poor. If looked at under the lens of moral consistency reasoning I outlined in point (3), cosmopolitanism, feminism, human rights, animal rights, and even longtermism all seem like slight variations on the same argument ("There are no morally relevant differences between Amy and Bob, so we should treat them equally").

In contrast, I think the fact that we are starting to develop cases like population ethics, infinite ethics, and complicated variations of thought experiments (there are infinite variations of the trolley problem we could conjure up), which really test the limits of our moral sense and moral intuitions, hints at the fact that we might need a more systematic, perhaps computerized approach to moral philosophy. I think the likely path is that most conceptual moral progress in the future (in the sense of figuring out new theories and thought experiments) will happen with the assistance of AI systems. I can't point to anything very concrete, since I can't predict the future of moral philosophy in any concrete way, but I think philosophical ethics might become very conceptually advanced and depart heavily from common-sense morality. I think this has been an increasing gap since the Enlightenment. Challenges to common-sense morality have been slowly increasing. We might be at the early beginning of that exponential takeoff.

Of course, many of the moral systems that AIs will develop we will consider to be ridiculous. And some might be! But in other cases, we might be too backwards or morally tied to our biologically and culturally shaped moral intuitions and taboos to realize that it is in fact an advancement. For example, the Repugnant Conclusion in population ethics might be true (or the optimal decision in some sense, if you're a moral anti-realist), even if it goes against many of our moral intuitions.

The effort will take place in separating the wheat from the chaff

Thank you Ricardo, this is an insightful analysis. I'd like to see more EA Forum posts with this level of investigation invested into them. In particular, the balance of more longtermist and less global health funding is in contrast with other analyses on the forum.

I think your write-up could be improved more than the underlying analysis. To make this more accessible to others, and your work higher impact, I'd recommend the following.

  • Include your most important takeaways, and less information on your methods (eg the link to the notebook) in the tl;dr. Ve
... (read more)

This seems weird. We don't write 0156 for the year 156. I think this is likely to cause confusion.

This would surprise me. Surveillance is a very expensive ongoing cost, and the actions you should take upon detecting a new microbe which could potentially be a pathogen are unclear. Have you got a more detailed version of why you think this?

3
PandemicRiskMan
“Surveillance is a very expensive ongoing cost”

1. Do you have any estimates for that? The cost of annual surveillance is surely a pittance compared to the cost of a Covid-19 pandemic every 20-30 years. We have an insurance industry for this reason. One would also expect the cost of that surveillance to fall over time, and the quality of the info it provides to improve too.
2. The cost of developing, trialling, manufacturing, and distributing 8bn doses of an unproven vaccine to every corner of the world every time a novel PPP is discovered (!!) would surely be more than the annual surveillance costs. It also offers worse health outcomes.

Plus vaccines are our last line of defence. If we aim to defend ourselves there, then it is only a question of time before we lose. Global vaccination is, to be frank, an insane approach to pandemic risk management. This, thankfully, is starting to be understood by some prominent epidemiologists: https://www.oecd-forum.org/posts/entering-the-age-of-pandemics-we-need-to-invest-in-pandemic-preparedness-even-while-covid-19-continues

“the actions you should take upon detecting a new microbe which could potentially be a pathogen are unclear”

First you alert the world to the fact that there is an unidentified / novel pathogen circulating. Then you implement your national pandemic prevention plans. Remember, this is not a scientific matter. Making decisions in uncertain environments is risk management, not science. Real world planning, preparation, resource management, and tactical decision-making in uncertain environments are required to protect humanity from pandemics, but they are not scientific skills. They are not taught to, or by, scientists, so the methods of science are of little value in a crisis. A scientist can tell you what a hurricane is, whereas a risk manager can tell you what to do about it. That's the key difference. But, that's also a major hurdle that we'll need to overcome, as scientists are very influential but a

Do you know of anything else that feels similar to this? People in public areas collecting biological samples from volunteers (perhaps lightly compensated).

Afraid not. The closest I can think of is collecting samples from healthy volunteers without any benefit to them, but not in public areas. In particular, I'm thinking of swabbing in primary health settings (eg RCGP/UKHSA run something like this in England, I can't remember if it only includes those with respiratory symptoms) and testing blood donations (normally serological testing looking for antibo... (read more)

Thank you for that very detailed reply Jeff, I learnt a lot about how to think about costing this.

The easiest way to collect a pooled sample is to walk around some building and sample everyone. This gets you a big sample pretty cheaply, but it's not a great one if you want to understand the containing city because it's likely that many people in the building will get sick on a similar timeframe.

I agree this is true for an office block, but I would think you can do much better without much cost. For example, if you use a high-traffic commuter train sta... (read more)

4
Jeff Kaufman 🔸
Definitely! Right after writing to you I started thinking about this, estimating costs, and talking to coworkers; sorry for not posting back! I do think something along these lines could work well.

My main update since then is that if you do it at a transit station you probably need to compensate people, but also that a small amount of compensation doesn't sink this. Giving people $5 or a candy bar for a swab is possible, and if a team of two people at a busy transit station can get 50-200 swabs in an hour that's your biggest sample acquisition cost. I still think $1k is practical for the sequencing.

I'm trying to come up with examples of people doing something similar, which we'd want for presenting this to the IRB. Two examples so far:
* XpresCheck for COVID tracking at airports (site, [consent brochure](https://www.xprescheck.com/xpresresources/CDC_COVID_Testing_Brochure.pdf))
* Various companies that sample for bone marrow compatibility testing (ex: Be The Match)

Do you know of anything else that feels similar to this? People in public areas collecting biological samples from volunteers (perhaps lightly compensated).

am I practicing my handwriting in 1439?

I'm not sure what the question is here, I find your metaphor opaque. I guess this is a reference to the invention of the printing press around then, which in some sense makes handwriting pointless. But, being able to have legible handwriting seems pretty useful up until at least this century, perhaps until widespread smartphones.

Thank you for this write-up, very interesting. I'm excited to see more investigations of different surveillance systems' potential.

Hopefully, the SIREN 2.0 study, running this winter, will generate some more data to answer this question.

A few questions now I've had time to consider this post a bit more. Apologies, if these are very basic, I'm pretty unfamiliar with metagenomics.

First, how do you relate relative abundance to detection probability? I would have thought the total number of reads of the pathogen of interest also matters. That is, if you tested... (read more)

4
Jeff Kaufman 🔸
Lots of great questions!

Thanks for pointing this out; I hadn't seen it and it's super relevant. I don't see what sample type they're using in the press release, but any kind of ongoing metagenomics to look at respiratory viruses is great!

It depends on your detection method, but modeling it as needing some number of cumulative reads hitting the pathogen is a good first approximation. If you think it would take N reads of the pathogen to flag it, then if you know RA(1%) and the exponential growth rate you can make a back-of-the-envelope estimate of how much sequencing you'd need on an ongoing basis to flag it before X% of people had ever been sick. For example, if you need 100 reads to flag, it doubles weekly, and RAi(1%) is 1e-7, then to flag at a cumulative incidence of 1% (and current weekly incidence of 0.5%) you'd need 100/1e-7 = 1e9 reads a week. (I chose 1% cumulative incidence and weekly doubling to make the mental math easier. At 1% CI half the people got sick this week and half in previous weeks, and the cumulative infection rate across all past sequencing should sum to 1%, so we can use RAi(1%) directly. Though I might have messed this up since I'm doing it in my head lying in bed.)

If you collected a large enough sample volume and sequenced deeply though, yes.

It doesn't, for three reasons:
* Sequencing in bulk is a lot cheaper per read. You might pay $13k for 10B read pairs, or $1k for 100M. But that's just ~10x.
* Some components (lab time, kits) vary in proportion to the number of samples and don't go up much as your samples are bigger.
* It's only your sequencing costs that vary with relative abundance, and while with wastewater I expect the cost of sequencing to dominate, that's not the case for any other sample type I can think of (maybe air?)

If you're sampling from individuals the cost of getting the samples is likely quite high (we were recently quoted $80/person from a contractor, and while I think we can do better if you want 1k peopl
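Spelling out that back-of-the-envelope calculation (same numbers as in the example above):

```python
# Back-of-the-envelope: weekly sequencing depth needed to flag a pathogen
# before 1% of people have ever been infected (numbers from the example above).
reads_to_flag = 100      # cumulative pathogen reads needed to raise a flag
ra_at_1pct = 1e-7        # RAi(1%): relative abundance at 1% cumulative incidence

# With weekly doubling, the cumulative sequencing up to the flag point sees
# roughly RAi(1%) of its reads coming from the pathogen, giving:
reads_per_week = reads_to_flag / ra_at_1pct
print(f"{reads_per_week:.0e} reads per week")  # 1e+09
```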

Tl;dr: epidemic and statistical modelling PhD looking for roles in biosecurity, global health, and quantitative generalist roles.

Skills & background: I am about to submit a biostatistics PhD (University of Cambridge, UK), focusing on statistical methods to estimate the incidence of COVID-19 in England and survival analysis. I have experience providing scientific advice to the UK government on the pandemic. Broad Bayesian statistical skillset, as well as skills in engaging critically with literature. View my past posts for less academic samples of my wo... (read more)

Feel free to message me if you're interested in going deeper into what a typical viral load might look like. I can generate trajectories, based on the data from the ATACCC study. Note that this is in viral RNA copies, not Ct values - they did the conversion as part of that study.

2
Jeff Kaufman 🔸
Thanks! I'm most interested in viral load in the sense of the relative abundance you get with untargeted shotgun sequencing (since you need sequencing (or something similarly general) to detect novel threats and/or avoid having a trivially-bypassable detection system) but there's not much literature on this.

Do you have timezone requirements?

2
technicalities
No

I don't have a strong opinion here. I would guess having the information out and findable is the most important. My initial instinct is directly or linked from the fund page or applicant info.

As someone considering applying to LTFF, I found even rough numbers here very useful. I would have guessed success rates 10x lower.

If it is fairly low-cost for you (e.g.: can be done as an automated database query), publishing this semi-regularly might be very helpful for potential applicants.

4
Linch
Thanks for the feedback! Do you have thoughts on what platform would be most helpful for you and other (potential) applicants? Independent EAF shortform, a point attached somewhere as part of our payout reports, listed on our website, or somewhere else?

We will be publishing more posts, including information about our other ideas, in the coming weeks.

I can't find these posts on the forum (I checked the post history of both of this post's authors). Could you please point me towards them?

Thank you Ben! The 80% CI[1] is an output from the model.

Rough outline:

  1. Start with an uninformative prior on the rate of accidental pandemics.
  2. Update this prior based on the number of accidental pandemics and amount of "risky research units" we've seen; this is roughly equivalent to Laplace's rule of succession in continuous time.
  3. Project forward the number of risky research units by extrapolating the exponential growth.
  4. If you include the uncertainty in the rate of accidental pandemics per risky research unit, and random variation, then it turns out the number of
... (read more)
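For readers wanting something more concrete, here is a minimal sketch of what such a continuous-time analogue of Laplace's rule might look like (a Gamma posterior on the rate per risky research unit, then projection under exponential growth); all numbers are placeholders, not the post's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder "observed" history (NOT the post's data).
events_observed = 1          # accidental pandemics seen in the window
units_observed = 10_000.0    # cumulative risky research units over the window

# Steps 1-2: an uninformative Gamma prior plus a Poisson likelihood gives a
# Gamma posterior on the rate per risky research unit; with events k and
# exposure E this is Gamma(k + 1, E), the continuous-time analogue of
# Laplace's rule of succession.
rate_samples = rng.gamma(events_observed + 1, 1 / units_observed, size=100_000)

# Step 3: project risky research units forward under exponential growth.
growth = 0.05                               # assumed annual growth rate
years = np.arange(1, 31)
future_units = 150.0 * (1 + growth) ** years    # placeholder current level

# Step 4: combine parameter uncertainty with Poisson (random) variation.
counts = rng.poisson(rate_samples * future_units.sum())
print("mean events over 30 years:", counts.mean())
print("P(at least one event):", (counts > 0).mean())
```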
2
Ben Stewart
Awesome, thanks!