Topic contributions

Should you "trust literatures, not papers"?
I replicated the literature on meritocratic promotion in China, and found that the evidence is not robust.

Do vaccinated children have higher income as adults?
I replicate a paper on the 1963 measles vaccine, and find that the paper's design cannot answer the question.

I've written up my replication of Cook (2014) on racial violence and patenting by Black inventors.

Bottom line: I believe the conclusions, but I don't trust the results.

New replication: I find that the results in Moretti (AER 2021) are caused by coding errors.
The paper studies agglomeration effects for innovation, but the results supporting a causal interpretation don't hold up.

Angus Deaton writes that in academia and policy circles, “Past development practice is seen as a succession of fads, with one supposed magic bullet replacing another—from planning to infrastructure to human capital to structural adjustment to health and social capital to the environment and back to infrastructure—a process that seems not to be guided by progressive learning.”

This framing is weird. Obviously these factors have a positive causal effect on growth. But why would you expect a single silver bullet? Conditions change over time, so the binding constraints on growth change as well.

I'd expect this article to be pretty solid, but errors in top journals do happen.

Premises sometimes attributed to critics of longtermism:

  • person-affecting views
  • supporting a non-zero pure discount rate

I think non-longtermists don't hold these premises; rather, they object to longtermism on tractability grounds.
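To make the discount-rate premise concrete, here is a minimal sketch (hypothetical numbers; `present_value` is my own illustrative helper, not from any source above) of why longtermist conclusions are so sensitive to a pure rate of time preference: even a small non-zero rate collapses the present weight of far-future welfare.

```python
def present_value(benefit, years, rate):
    """Discount a future welfare benefit back to today at a pure
    rate of time preference (standard exponential discounting)."""
    return benefit / (1 + rate) ** years

# 100 welfare units delivered 500 years from now:
pv_zero = present_value(100, 500, 0.0)   # zero rate: full weight, 100.0
pv_small = present_value(100, 500, 0.01) # even 1%/yr leaves under 1 unit
```

With a zero pure discount rate the far future carries its full weight; at 1% per year the same benefit is worth less than one unit today, which is why the choice of rate, rather than any empirical dispute, often drives the disagreement.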

What AI safety work should altruists do? For example, AI companies are self-interestedly working on RLHF, so there's no need for altruists to work on it. (Stronger still: working on RLHF may be actively harmful because it advances capabilities.)

Tweet-thread promoting Rotblat Day on Aug. 31, to commemorate the spirit of questioning whether a dangerous project should be continued.
