I mean it in the sense that they will have to sell substantially below market value if they want to sell it quickly.
This kind of property tends to have huge bid-ask spreads, and the usual thing to do is to continue operating the property while looking for a buyer (my guess is they would eventually succeed at selling it at market value, but it would take a while).
Interesting read, I'm left unconvinced that traditional pharma is moving much slower than optimal. That would seem to imply that they're leaving a lot of money on the table (quicker approval = longer selling the drug before patent expires).
I have three speculative ideas on why this might be. Cost of the process, ability to scale the process, and risk (e.g. amount of resources wasted if a drug fails at some stage in development).
As the article points out, pharma can do this when the incentives are right (COVID vaccines) which implies there's a reason to not do it normally.
You need a step beyond this though. Not just that we are coming up with harder moral problems, but that solving those problems is important to future moral progress.
Perhaps a structure as simple as the one that has worked historically will prove just as useful in the future, or, as you point out has happened in the past, wider societal changes (not progress in moral philosophy as an academic discipline) are the major driver. In either case, all this complex moral philosophy is not the important factor for practical moral progress across society.
Bear in mind that even if FTX can pay everyone back now, that does not mean they were solvent at the point they were put into bankruptcy.
and even if they were solvent at the time, that does not mean they were not fraudulent.
If I took all my customers' money, which I had promised to safekeep, and went to the nearest casino and put it all on red, even if I won it would still be fraud.
In your argument for 3, I think I accept the part that moral philosophising hasn't happened much historically. However, I can't really find the argument that it probably will in the future. Could you perhaps spell it out a bit more explicitly, or highlight where you think the case is being made please?
Great and interesting post though, I love seeing people rigorously exploring EA ideas and fitting them into the wider academic literature.
Thank you Ricardo, this is an insightful analysis. I'd like to see more EA Forum posts with this level of investigation invested into them. In particular, the balance of more longtermist and less global health funding is in contrast with other analyses on the forum.
I think your write-up has more room for improvement than the underlying analysis. To make this more accessible to others, and your work higher impact, I'd recommend the following.
This seems weird. We don't write 0156 for the year 156. I think this is likely to cause confusion.
This would surprise me. Surveillance is a very expensive ongoing cost, and the actions you should take upon detecting a new microbe which could potentially be a pathogen are unclear. Have you got a more detailed version of why you think this?
Do you know of anything else that feels similar to this? People in public areas collecting biological samples from volunteers (perhaps lightly compensated).
Afraid not. The closest I can think of is collecting samples from healthy volunteers without any benefit to them, but not in public areas. In particular, I'm thinking of swabbing in primary health settings (e.g. RCGP/UKHSA run something like this in England; I can't remember if it only includes those with respiratory symptoms) and testing blood donations (normally serological testing looking for antibo...
Thank you for that very detailed reply Jeff, I learnt a lot about how to think about costing this.
The easiest way to collect a pooled sample is to walk around some building and sample everyone. This gets you a big sample pretty cheaply, but it's not a great one if you want to understand the surrounding city, because it's likely that many people in the building will get sick on a similar timeframe.
I agree this is true for an office block, but I would think you can do much better without much cost. For example, if you use a high-traffic commuter train sta...
am I practicing my handwriting in 1439?
I'm not sure what the question is here, I find your metaphor opaque. I guess this is a reference to the invention of the printing press around then, which in some sense makes handwriting pointless. But, being able to have legible handwriting seems pretty useful up until at least this century, perhaps until widespread smartphones.
Thank you for this write-up, very interesting. I'm excited to see more investigations of different surveillance systems' potential.
Hopefully, the SIREN 2.0 study, running this winter, will generate some more data to answer this question.
A few questions now that I've had time to consider this post a bit more. Apologies if these are very basic; I'm pretty unfamiliar with metagenomics.
First, how do you relate relative abundance to detection probability? I would have thought the total number of reads of the pathogen of interest also matters. That is, if you tested...
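To illustrate what I mean, here's a toy model (my own simplification, not anything from the post): treat the number of pathogen reads as roughly Poisson, with rate equal to total reads times relative abundance. Under that assumption, detection probability depends on the product of abundance and sequencing depth, not abundance alone.

```python
import math

# Toy model (my own assumption, not from the post): if a pathogen makes up
# a fraction `relative_abundance` of the nucleic acid and we sequence
# `total_reads` reads, the expected number of pathogen reads is their
# product, and the number observed is approximately Poisson-distributed.

def p_detect(relative_abundance, total_reads, min_reads=1):
    """Probability of seeing at least `min_reads` pathogen reads."""
    lam = relative_abundance * total_reads
    # P(fewer than min_reads) under Poisson(lam)
    p_below = sum(math.exp(-lam) * lam**k / math.factorial(k)
                  for k in range(min_reads))
    return 1 - p_below

# Same relative abundance, 10x the sequencing depth:
print(p_detect(1e-6, 1_000_000))   # ≈ 0.63
print(p_detect(1e-6, 10_000_000))  # ≈ 0.99995
```

So under this sketch, quoting relative abundance alone isn't enough to pin down detection probability without also knowing the read depth.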
Tl;dr: epidemic and statistical modelling PhD looking for roles in biosecurity, global health, and quantitative generalist roles.
Skills & background: I am about to submit a biostatistics PhD (University of Cambridge, UK), focusing on statistical methods to estimate the incidence of COVID-19 in England and survival analysis. I have experience providing scientific advice to the UK government on the pandemic. Broad Bayesian statistical skillset, as well as skills in engaging critically with literature. View my past posts for less academic samples of my wo...
Feel free to message me if you're interested in going deeper into what a typical viral load might look like. I can generate trajectories, based on the data from the ATACCC study. Note that this is in viral RNA copies, not Ct values - they did the conversion as part of that study.
I don't have a strong opinion here. I would guess having the information out and findable is the most important. My initial instinct is directly or linked from the fund page or applicant info.
As someone considering applying to LTFF, I found even rough numbers here very useful. I would have guessed success rates 10x lower.
If it is fairly low-cost for you (e.g.: can be done as an automated database query), publishing this semi-regularly might be very helpful for potential applicants.
We will be publishing more posts, including information about our other ideas, in the coming weeks.
I can't find these posts on the forum (I checked the post history of both of this post's authors). Could you please point me towards them?
Thank you Ben! The 80% CI[1] is an output from the model.
Rough outline is:
Link-commenting my Twitter thread of immediate reactions and a summary of the paper. Some light editing for readability. I'd be interested in feedback on whether this slightly unusual format for a forum comment is helpful or interesting to people.
Overall take: this is a well done survey, but all surveys of this sort have big caveats. I think this survey is as good as it is reasonable to expect a survey of AI researchers to be. But, there is still likely bias due to who chooses to respond, and it's unclear how much we should be deferring to this group. It would be good to ...
Good post, aligns with a lot of my (anecdotal) experience in a related but different field (biostatistics, still doing computational work but not ML, and much more mature as a field).
Under communication: I think you're missing actively reading papers (or listening to presentations). Each time you read a paper, ask yourself whether it was easy to understand the idea, and why or why not. A big problem in research writing IMO is that the reader often is not reading the paper for the main reason you wrote it. Perhaps they care more about your methodology than your result...
I agree.
On reflection, in the everything-is-Gaussian case a prior doesn't help much. Here, your posterior mean is a weighted average of the prior and likelihood means, with the weights depending only on the variances of the two distributions. So if the likelihood mean increases with constant variance, your posterior mean increases linearly. You'd probably need a bias term or something in your model (if you're doing this formally).
This might actually be an argument in favour of GiveWell's current approach, assuming they'd discount more as the study estimate becomes increasingly implausible.
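To make the Gaussian point concrete, here's a minimal sketch of the conjugate normal-normal update (illustrative numbers of my own, not GiveWell's actual model):

```python
# Conjugate normal-normal update: the posterior mean is a precision-weighted
# average of the prior mean and the observed (study) mean.
# All numbers below are hypothetical, purely for illustration.

def posterior_mean_var(prior_mean, prior_var, obs_mean, obs_var):
    """Posterior of a Gaussian mean given known variances."""
    w = prior_var / (prior_var + obs_var)  # weight on the observation
    post_mean = (1 - w) * prior_mean + w * obs_mean
    post_var = 1 / (1 / prior_var + 1 / obs_var)
    return post_mean, post_var

# With observation variance held fixed, the posterior mean is linear in the
# observed mean: a study estimate twice as extreme produces an update twice
# as large, rather than being discounted proportionally more.
m1, _ = posterior_mean_var(0.0, 1.0, 10.0, 4.0)
m2, _ = posterior_mean_var(0.0, 1.0, 20.0, 4.0)
print(m1, m2)  # m2 is exactly 2 * m1
```

This is why the plain Gaussian model never "discounts more as the estimate becomes increasingly implausible"; you'd need heavier-tailed likelihoods or an explicit bias term to get that behaviour.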
I don't think this evaluation is especially useful, because it only presents one side of the argument: why spreadsheets are bad, but not their advantages or how errors typically occur in programming languages.
The bottom line you present (quoted below) is in fact not very action-relevant. It's not even strong enough to support the claim that switching is worth the cost, IMO.
...We are far from certain that writing cost-effectiveness analyses in an ordinary programming language would reduce the error rate compared to spreadsheets - quantitative estimates of the error ra
Pascal's mugging should be addressed by a prior which is more sceptical of extreme estimates.
GiveWell are approximating that process here:
We’re reluctant to take this estimate at face value because (i) this result has not been replicated elsewhere and (ii) it seems implausibly large given the more muted effects on intermediate outcomes (e.g., years of schooling).
Could you expand on this please? Isn't this going to be roughly equivalent to "we kept our GitHub repo private"?
I agree with your point 2. To be Bayesian: if your prior is much more uncertain than your likelihood, the likelihood dominates the posterior.
Isn't 1 addressed by Noah's submission? That you will rank noisily-estimated interventions higher.
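A quick simulation of that point (toy numbers of my own, not from Noah's submission): when every intervention has the same true effect but some are estimated with much noisier studies, ranking by point estimate systematically puts a noisy one on top.

```python
import random

random.seed(0)

# Toy setup: 10 interventions, all with the same true effect of 1.0.
# Half are measured precisely (sd = 0.1), half noisily (sd = 1.0).
# We then check how often the top-ranked *estimate* comes from a noisy study.

def top_pick_is_noisy(n=10, true_effect=1.0, low_sd=0.1, high_sd=1.0):
    estimates = []
    for i in range(n):
        sd = high_sd if i < n // 2 else low_sd
        estimates.append((random.gauss(true_effect, sd), sd))
    best = max(estimates)  # rank interventions by estimated effect
    return best[1] == high_sd

trials = 10_000
share_noisy = sum(top_pick_is_noisy() for _ in range(trials)) / trials
print(f"top-ranked estimate came from a noisy study in {share_noisy:.0%} of trials")
```

The noisy studies win the ranking the vast majority of the time despite having identical true effects, which is exactly the selection effect a sceptical prior is meant to correct.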
It would be much more reasonable imo to say “Ord’s estimate is much higher than my own prior, and I didn’t see enough evidence to justify such a large update”.
Except for the use of Bayesian language, how is that different to the following passage?
We saw in Parts 9-11 of this series that most experts are deeply skeptical of Ord’s claim, and that there are at least a dozen reasons to be wary. This means that we should demand especially detailed and strong arguments from Ord to overcome the case for skepticism.
OK, I can't commit right now, but I'll look out for whether you're advertising again in February (or feel free to get in touch with me). Good luck, great project!
What's the timeline you're after for a firm commitment? I might be interested, but I need to prioritise what I'm doing over the next 2-3 months, so I would not be able to commit immediately.
What would be the best thing(s) to read for those of us who know ~nothing about Zach and his views/philosophy?
I'm planning to publish some forum posts as I get up to speed in the role, and I think those will be the best pieces to read to get a sense of my views. If it's helpful for getting a rough sense of timing, I'm still working full-time on EV at the moment, but will transition into my CEA role in mid-February.
Optimistically, Gavi and partners do their thing and we get a nice efficient rollout across the relevant areas. But I have very limited knowledge of this space. I don't know what the bottlenecks or process here are.
Thanks - this is definitely a relevant example, especially the health facilities. I have updated towards more uncertainty here.
The food security effects seem to be the impact of interventions, rather than pure fear, which is the mechanism Gopal et al. suggest.
...The reduced mobility of farmers and other agricultural workers, but also the difficulty of getting products to harbours due to the quarantine zone, prevented affected countries from being able to produce and sell their goods [4,8,9]. The epidemic killed and drove out many farmers, leading to the abandonment of field
For a one-off, you should be able to copy and paste into a Google Doc or Word document then export. The formatting might be a bit iffy but a small amount of manual fixing should sort that.
A more involved alternative, but perhaps more reliable, would be to run the Markdown version through pandoc.
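For reference, a minimal pandoc invocation (filenames are placeholders; pandoc must be installed, and PDF output additionally needs a LaTeX distribution):

```shell
# Convert the Markdown export to Word or PDF.
pandoc post.md -o post.docx   # Markdown -> Word
pandoc post.md -o post.pdf    # Markdown -> PDF (requires LaTeX)
```

Pandoc infers the output format from the file extension, so the `-o` flag is usually all you need.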
I downvoted because it isn't directly relevant to the dispute. High spending in longtermist EA communities is a question that has been frequently discussed on this forum without reaching a consensus. I don't think restarting that argument here is productive.
Maybe! I'm hoping it at least saves people some energy. It's too late for me, but I confess I'm ambivalent myself about the point of all this. Spot-checking some high level claims is at least tractable, but are there decisions that depend on the outcome? What I care about isn't whether Nonlinear accurately represented what happened or what Ben said. I was unlikely to ever cross paths with Nonlinear or even Ben beforehand. I want people to get healthy professional experience, and I want the EA community to have healthy responses to internal controversy and ...
Is this post meant to be a provocative start of a discussion or the argument in its entirety? If the latter, it really needs some attempt to be more precise about tractability. How much of the problem will marginal funding solve?
he's accepted the position without even knowing why they did what they did at a high level
I don't think this is correct, from the same statement:
Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models.
I think you're missing other features of study design. Notably: feasibility, timeliness, and robustness. Adaptive designs generally require knowledge of outcomes to inform the randomisation of future enrolees. But this is often not known, especially if the outcome you're measuring takes a long time to observe.
EDIT: the paper also points out various other practical issues with adaptive designs in section 3. These include making it harder to do valid statistical inference on your results (statistical inference is essentially assessing your uncertainty, such as ...
Great post! A particular issue is that E(cost/effect) is infinite or undefined if you have a non-zero probability that the effect is 0. This is very commonly the case.
Another interesting point, highlighted by your log normal example, is that higher variance will tend to increase the difference between E(1/X) and 1/E(X).
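A quick Monte Carlo sketch of that point (toy numbers of my own): for a lognormal X with mu = 0, the ratio E(1/X) / (1/E(X)) equals exp(sigma^2), so the gap grows quickly with variance.

```python
import random

random.seed(1)

# For X ~ lognormal(mu=0, sigma):
#   E(X)   = exp(sigma^2 / 2)
#   E(1/X) = exp(sigma^2 / 2)   (since 1/X is lognormal(0, sigma) too)
# so E(1/X) / (1/E(X)) = exp(sigma^2): always > 1, and growing in sigma.

def gap(sigma, n=200_000):
    """Monte Carlo estimate of E(1/X) / (1/E(X)) for lognormal X."""
    xs = [random.lognormvariate(0.0, sigma) for _ in range(n)]
    mean_inv = sum(1 / x for x in xs) / n  # E(1/X)
    inv_mean = 1 / (sum(xs) / n)           # 1/E(X)
    return mean_inv / inv_mean

print(gap(0.5), gap(1.0))  # roughly exp(0.25) ≈ 1.28 and exp(1) ≈ 2.72
```

So doubling sigma doesn't just double the discrepancy between E(1/X) and 1/E(X); it multiplies it, which is why high-variance cost-effectiveness estimates are especially sensitive to which way round you take the expectation.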
It seems likely that these roles will be extremely competitive to hire for. Most applicants will have similar values (ie: EA-ish). Considering the size of the pool, it seems likely that the top applicants will be similar in terms of quality. Therefore, why do you think there's a case that someone taking one of these roles will have high counterfactual impact?
Empirically, in hiring rounds I've previously been involved in for my team at Open Phil, it has often seemed to be the case that if the top 1-3 candidates just vanished, we wouldn't make a hire. I've also observed hiring rounds that concluded with zero hires. So, basically I dispute the premise that the top applicants will be similar in terms of quality (as judged by OP).
I'm sympathetic to the take "that seems pretty weird." It might be that Open Phil is making a mistake here, e.g. by having too high a bar. My unconfident best-guess would be that our bar h...
Afraid I don't have good ideas here.
Intuitively, I think there should be a way to take advantage of the fact that the outcomes are heavily structured. You have predictions on the same questions and they have a binary outcome.
OTOH, if in 20% of cases the worse forecaster is better on average, that suggests that there is just a hard bound on how much we can get.
I am so excited to see this, as it looks like it might address many uncertainties I have but have not had a chance to think deeply about. Do you have a rough timeline on when you'll be posting each post in the series?
Thanks, Joshua! We'll be posting these fairly rapidly. You can expect most of the work before the end of the month and the rest in early November.
As it stands I struggle to justify GHD work at all on cluelessness grounds. GiveWell-type analyses ignore a lot of foreseeable indirect effects of the interventions e.g. those on non-human animals. It isn't clear to me that GHD work is net positive.
Would you mind expanding a bit on why this applies to GHD and not other cause areas please? E.g.: wouldn't your concerns about animal welfare from GHD work also apply to x-risk work?
So the question is basically whether (upkeep costs + opportunity cost of money - benefit from events) is more or less than the discount from selling quickly?
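As a toy illustration with entirely made-up numbers (none of these figures come from the actual property):

```python
# Hypothetical comparison: hold the property for a year while seeking a
# buyer at market value, vs. sell immediately at a discount.
# Every number here is an assumption for illustration only.

market_value = 10_000_000                # assumed sale price if we wait
quick_sale_discount = 0.15               # assumed haircut for a fast sale
upkeep = 200_000                         # assumed annual running costs
opportunity_cost = 0.05 * market_value   # return forgone on locked-up capital
event_benefit = 150_000                  # assumed value of hosting events

cost_of_holding = upkeep + opportunity_cost - event_benefit
cost_of_quick_sale = quick_sale_discount * market_value

print(cost_of_holding, cost_of_quick_sale)  # 550000.0 vs 1500000.0
```

Under these made-up numbers, a year of holding costs far less than the quick-sale haircut, so waiting for a market-value buyer would look better; with a smaller discount or higher upkeep the comparison flips.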