I think vaccine resistance imposes a ceiling, but the expense and sheer difficulty of distributing mRNA vaccines in cold storage is also a major problem (it's why Covax refused donations for a bit), so a room temperature shelf stable vaccine is likely to be quite valuable.
I think that still ends up net good if your biases are decorrelated from existing grantmaker biases?
15% does not sound too bad.
15% seems to me like very bad odds for a multi-year training program, especially given it doesn't count people who start a PhD program and then drop out.
I talked to ACE (Jacy Reese/Anthis in particular) in 2015 about ACE dramatically overstating the effectiveness of leaflets. Jacy was extremely responsive on the call, but nothing changed until two years later, when a dramatically more inflammatory article got wide distribution.
the latter, in part because of the former.
Offline, someone suggested the Marine Chronometer as a physics measurement device that straightforwardly created a lot of value by enabling long distance navigation at sea.
Last week we announced a prize for the best example of an evaluation. The winner of the evaluations prize is David Manheim, for his detailed suggestions on quantitative measures in psychology. I selected this answer because, although IAT was already on my list, David provided novel information about multiple tests that saved me a lot of work in evaluating them. David has had involvement with QURI (which funded this work) in the past and may again in the future, so this feels a little awkward, but ultimately his was the best suggestion, so it didn't feel right to withhold the prize from him.
Honorable mentions to Orborde on financial stress tests, which was a very relevant suggestion that I was unfortunately already familiar with, and alexrjl on rock climbing route grades, which I would never have thought of in a million years but has less transferability to the kinds of things we want to evaluate.
How useful was this prize? I think running the contest was more useful than $50 of my time; however, it was not as useful as it could have been, because the target moved after we announced the contest. I went from writing about evaluations as a whole to specifically evaluations that worked, and I'm sure if I'd asked for examples of that, they would have been provided. So possibly I should have waited to refine my question before asking for examples. On the other hand, the project was refined in part by looking at a wide array of examples (generated here and elsewhere), and it might have taken longer to home in on a specific facet without the contest.
But "I will use evidence-based thinking" isn't a policy, and is completely unverifiable.
B) Has at least implied that he wants to use EA thinking in the role
My default belief is that a politician implying something he knows the listener wants to hear is not evidence that he believes or will act on that implication. Do you disagree with that, in general or for Hsiung in particular?
I got nervous when I heard people were applying for forgiveness, so I looked into it. Here's what I found: