Linch

"To see the world as it is, rather than as I wish it to be."

I work for the EA research nonprofit Rethink Priorities. Despite my official title, I don't really think of the stuff I do as "research." In particular, when I think of the word "research", I think of people who are expanding the frontiers of the world's knowledge, whereas often I'm more interested in expanding the frontiers of my knowledge, and/or disseminating it to the relevant parties.

I'm also really interested in forecasting.

People may or may not also be interested in my comments on Metaculus and Twitter:

Metaculus: https://pandemic.metaculus.com/accounts/profile/112057/

Twitter: https://twitter.com/LinchZhang

Comments

What are your questions for World Malaria Day with Rob Mather (AMF), Maddy Marasciulo (Malaria Consortium), and Alekos Simoni (Target Malaria)?

Questions for both:

How has the covid-19 pandemic affected your ability to raise money for malaria prevention from both EA and non-EA sources?  

How has it affected your ability to deliver {bednets, chemoprevention} to beneficiaries? 

Non-pharmaceutical interventions in pandemic preparedness and response

Do you have thoughts on pandemic prevention NPIs (eg vector control)? Many of these are technically non-pharmaceutical interventions, though of course they look very different from mask mandates or social distancing orders!

Non-pharmaceutical interventions in pandemic preparedness and response

Thanks a lot for this! Like willbradshaw I agree that this post is "well-written, thoughtful, well-linked and thorough!"

What are some objections to anything I’ve written here?

If I were to nitpick, I think my biggest objection is that your approach to tackling the problem of NPIs for pandemic preparedness and response appears extremely atheoretical. I think this is fine for a scoping study that tries to estimate the scale of the problem, and fine (perhaps even highly underrated!) for clinical studies. But I think we can get decent results at lower cost with a bit of simple theory.

I believe this because I think the human body in general, and the immune system in particular, is woefully complicated, so it makes sense that we cannot place much faith in biologically plausible mechanisms for treatments, which forces us to place correspondingly greater faith in end-to-end RCTs (and to be in a state of radical cluelessness otherwise). But there are other parts of epidemiology that are simpler and better understood, such that for transmission we can be reasonably confident in our ability to dice up the problem and isolate the specific confusing subcomponents.

For example, suppose we are worried about a potential respiratory disease pandemic, and we want to figure out whether intervention X (say installing MERV filters for offices) has a sufficiently large impact on an (un)desired endpoint (eg symptomatic disease, hospitalizations). One approach might just be: 

Sounds plausible, but we can't know much with confidence in the absence of end-to-end empirical results. What we can do is run an RCT where we install MERV filters for the treatment group and not for the control group, with a sample size large enough to power for differences that are big enough for us to care about, and compare results after the study's natural endpoint.
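To make "sufficiently large sample size" concrete, here is a rough sketch of a standard two-proportion power calculation (the attack rates and effect size are made-up numbers for illustration, not anything from the post):

```python
import math
from statistics import NormalDist

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for detecting a
    difference between two proportions at the given alpha/power."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided test
    z_beta = z(power)
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_control - p_treatment) ** 2)

# Hypothetical: 10% symptomatic attack rate without filters, and we only
# care about detecting a 30% relative reduction (down to 7%).
print(n_per_arm(0.10, 0.07))  # roughly 1,350 people per arm
```

Needing on the order of a thousand-plus people per arm to detect a modest effect is a big part of why such trials end up expensive and slow.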

I think this is good, but potentially quite expensive/time-consuming (which is really bad in a fast-moving pandemic!). One way we can potentially do better:
 

Well, disease transmission isn't magic, and we're reasonably confident in the very high-level theory of respiratory diseases. So we can at least decompose the problem into two parts: 

  1. Treat human bodies as a black-box function that takes in some combination of scary microbe-laden particles and outputs some probability of undesired endpoints.
  2. Model the world as something that sends scary microbe-laden particles at human bodies, and figure out which interventions reduce those particles to a level that the function in (1) maps to a negligibly small probability of undesired endpoints.

My decomposition isn't particularly interesting, but I think it's reasonably clean. With it, we can 

Tackle 1) with human challenge trials that vary microbe dose/frequency/timing, to understand the plausible ranges of parameters for how large a dose needs to be to be bad.

Tackle 2) with some combination of 

  1. computational fluid dynamics simulations
  2. lab experiments on how much people breathe each other's air, and how fast air needs to cycle to reduce that
  3. field experiments on the effect of MERV filters on closely analogous particles (at the physical level)
  4. prior knowledge of the transmission patterns of other similar diseases
  5. ???

Now my decomposition is still quite high-level, and I'm not sure that my suggested operationalizations here aren't dumb. But hopefully what I'm gesturing at makes sense?
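As a toy illustration of how the two pieces could compose (every number below is made up, and the exponential dose-response and well-mixed-room models are standard simplifications I'm assuming, not anything endorsed above):

```python
import math
import random

def p_infect(dose, d50=300.0):
    """(1) Black-box dose-response: exponential model, where d50 is the
    inhaled dose giving ~50% infection probability (made-up number)."""
    return 1 - math.exp(-math.log(2) * dose / d50)

def daily_dose(emission_rate=500, room_volume=100, ach=2, hours=8,
               breathing_rate=0.5):
    """(2) Well-mixed room: steady-state particle concentration times air
    breathed. ach = air changes per hour; better filtration raises it."""
    concentration = emission_rate / (room_volume * ach)  # particles per m^3
    return concentration * breathing_rate * hours        # particles inhaled

def infection_rate(ach, days=30, trials=5_000):
    """Monte Carlo over `days` office days: fraction ever infected."""
    p_day = p_infect(daily_dose(ach=ach))
    return sum(any(random.random() < p_day for _ in range(days))
               for _ in range(trials)) / trials

# Tripling effective air changes (2 -> 6) sharply cuts the attack rate.
print(infection_rate(ach=2), infection_rate(ach=6))
```

The point isn't that these particular models are right; it's that once the problem is decomposed this way, each piece can be estimated by a much cheaper experiment than an end-to-end RCT.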

A Biosecurity and Biorisk Reading+ List

Thanks a lot for this list!

This is a bit of a tangent, but one implicit assumption I find interesting in your list, and when other EA biosecurity-focused people talk about existential biosecurity (eg, this talk by Kevin Esvelt), is that there's relatively little focus on what I consider "classical epidemiology."

This seems in contrast to the implicit beliefs of both a) serious EAs who haven't thought as much about biosecurity (weak evidence here: the problem/speaker selection of 80,000 Hours podcasts) and b) public health people who are less aware of EA (weak evidence here: undergrad or grad students in public health whom I sometimes talk to or advise).

Putting numbers to this vague intuition, I would guess that your reading list suggests an optimal biosecurity-focused portfolio with ~5-20% of its focus on classical epidemiology, whereas many EA students would put the weighting closer to ~30-60%.

I'm interested in whether you agree with my distinction here and consider it a fair characterization. If so, do you think it's worthwhile to have a writeup explaining why (or why not!) many EA-aligned students overweight epidemiology in their portfolio of considerations for important ways to reduce existential biorisk?

EDIT: Relevant Twitter poll.

What are your main reservations about identifying as an effective altruist?

If I recall correctly, this was not your position several years ago, when we talked about this more (circa 2015 or 2016). Which is not too surprising -- I mean, I sure hope I changed a lot in the intervening years!

But assuming my memory of this is correct, do you recall when you made this shift, and the core reasons for it? I'm interested in whether there's a short/fast way to retrace your intellectual journey so that other people might make the relevant updates.

How much does performance differ between people?

I have indeed made that comment somewhere. It was one of the more insightful/memorable comments she made when I interviewed her, but tragically I didn't end up writing down that question in the final document (maybe due to my own lack of researcher taste? :P)

That said, human memory is fallible etc so maybe it'd be worthwhile to circle back to Liv and ask if she still endorses this, and/or ask other poker players how much they agree with it. 

How much does performance differ between people?

Thanks for this. I do think there's a bit of sloppiness in EA discussions about heavy-tailed distributions in general, and the specific question of differences in ex ante predictable job performance in particular. So it's really good to see clearer work/thinking about this.

I have two high-level operationalization concerns here: 

  1. Whether performance is ex ante predictable seems to be a larger function of our predictive ability than of the world. As an extreme example of what I mean, if you take our world on November 7, 2016 and run high-fidelity simulations 1,000,000 times, I expect 1,000,000/1,000,000 of those simulations to end up with Donald Trump winning the 2016 US presidential election. Similarly, with perfect predictive ability, I think the correlation between ex ante predicted work performance and ex post actual performance approaches 1 (up to quantum randomness). This may seem like a minor technical point, but I think it's important to be careful of the reasoning here when we ask whether claims are expected to generalize from domains with large and obvious track records and proxies (eg past paper citations to future paper citations), or even domains where the ex ante proxy may well have been defined ex post (Math Olympiad records to research mathematics), to domains of effective altruism where we're interested in something like counterfactual/Shapley impact*.
  2. There are counterfactual credit assignment issues for pretty much everything EA is concerned with, whereas if you're just interested in individual salaries or job performance in academia, a simple proxy like $s or citations is fine. Suppose Usain Bolt were 0.2 seconds slower at running 100 meters. Does anybody actually think this would result in huge differences in the popularity of sports, or in the percentage of economic output attributable to the "run really fast" fraction of the economy, never mind our probability of spreading utopia throughout the stars? But nonetheless Usain Bolt likely makes a lot more money and has a lot more prestige than the 2nd/3rd fastest runners. Similarly, academics seem to worry constantly about getting "scooped" but rarely worry about scooping others, so a small edge in intelligence or connections or whatever can be leveraged into a huge difference in citations while being basically irrelevant to counterfactual impact. In EA research, by contrast, it matters a lot whether being "first" means you're 5 years ahead of the next-best candidate or 5 days.
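The point in (1) can be sketched numerically: hold the world fixed (true performance is deterministic) and vary only how noisily we observe it, and the measured ex ante "predictability" moves from ~1 toward 0. This is a made-up Gaussian toy model purely to illustrate the claim:

```python
import math
import random

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

def predictability(obs_noise_sd, n=20_000):
    """True performance is fixed by the world; our prediction observes it
    through Gaussian noise. Nothing about the world changes across calls;
    only the quality of the predictor changes the measured correlation."""
    true = [random.gauss(0, 1) for _ in range(n)]
    predicted = [t + random.gauss(0, obs_noise_sd) for t in true]
    return corr(predicted, true)

# A perfect predictor measures r ~= 1; a noisy one r ~= 1/sqrt(1 + sd^2).
print(predictability(0.0), predictability(3.0))
```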

Griping aside, I think this is a great piece and I look forward to perusing it and giving more careful comments in the coming weeks!

*ETA: In contrast, if it's the same variable(s) that we can use to ex ante predict a variety of good outcomes of work performance across domains, then we can be relatively more confident that this will generalize to EA notions. Eg, fundamental general mental ability, integrity, etc. 

Some global catastrophic risk estimates

I think there's a somewhat higher chance of users being alive than that, because of the big correlated stuff that EAs care about.
