Do vaccinated children have higher income as adults?
I replicate a paper on the 1963 measles vaccine, and find that it is unable to answer the question.
https://twitter.com/michael_wiebe/status/1750197740603367689
I've written up my replication of Cook (2014) on racial violence and patenting by Black inventors.
Bottom line: I believe the conclusions, but I don't trust the results.
https://twitter.com/michael_wiebe/status/1749831822262018476
New replication: I find that the results in Moretti (AER 2021) are caused by coding errors.
The paper studies agglomeration effects for innovation, but the results supporting a causal interpretation don't hold up.
https://twitter.com/michael_wiebe/status/1749462957132759489
Angus Deaton writes that in academia and policy circles, “Past development practice is seen as a succession of fads, with one supposed magic bullet replacing another—from planning to infrastructure to human capital to structural adjustment to health and social capital to the environment and back to infrastructure—a process that seems not to be guided by progressive learning.”
This framing is weird. Obviously these factors have a positive causal effect on growth. But why would you expect a magic bullet? Conditions change over time, so the constraints on growth will change as well.
- person-affecting views
- supporting a non-zero pure discount rate
I think non-longtermists don't hold these premises; rather, they object to longtermism on tractability grounds.
What AI safety work should altruists do? For example, AI companies are self-interestedly working on RLHF, so there's no need for altruists to work on it. (And, more strongly, working on RLHF is actively harmful because it advances capabilities.)
Tweet-thread promoting Rotblat Day on Aug. 31, to commemorate the spirit of questioning whether a dangerous project should be continued.
On Rotblat Day, people post what signs they would look for to determine whether their work is counterproductive.
How about May 7, the day of the German surrender?
I found the opening paragraph a bit confusing. Suggested edits:
An evaluation by the National Academy of Sciences estimates PEPFAR has saved millions of lives (PEPFAR itself claims 25 million).
The dominant conceptual apparatus economists use to evaluate social policies—comparative cost-effectiveness analysis, which focuses on a specific goal like saving lives, and ranks policies by lives saved per dollar—suggested America’s foreign aid budget could’ve been better spent on condoms and awareness campaigns, or even malaria and diarrheal diseases.
As already mentioned by others, these two claims are consistent.
Those papers don't look very convincing. Check out:
https://vincentbagilet.github.io/inference_pollution/
This paper looks to be the best:
https://www.aeaweb.org/articles?id=10.1257/aer.20180279
This is a relevant question if you're thinking about how hard you should try to drive engagement on a forecasting question.
What is the 'policy relevance' of answering the title question? Ie. if the answer is "yes, forecaster count strongly increases accuracy", how would you go about increasing the number of forecasters?
I don't think you can learn much from observational data like this about the causal effect of the number of forecasters on performance. Do you have any natural experiments that you could exploit? (ie. some 'random' factor affecting the number of forecasters, that's not correlated with forecaster skill.) Or can you run a randomized experiment?
It sounds like you're doing subsampling. Bootstrapping is random sampling with replacement.
If, for example, we kept increasing the size of the sample we draw, then eventually the variance would be guaranteed to go to zero (when the sample size equals the total number of forecasters and there is only one possible sample we can draw).
With bootstrapping, there are n^n possible (ordered) draws when the bootstrap sample size equals the actual sample size n. (And you could choose a bootstrap sample size m larger than n.)
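A minimal simulation of the difference (the forecaster scores and sample sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=500)  # hypothetical per-forecaster scores
n = len(scores)

def var_of_sample_mean(data, k, replace, n_draws=10_000):
    """Variance of the sample mean across repeated draws of size k."""
    means = [rng.choice(data, size=k, replace=replace).mean() for _ in range(n_draws)]
    return np.var(means)

# Subsampling (without replacement): at k = n there is only one possible
# sample, so the variance collapses to exactly zero.
print(var_of_sample_mean(scores, n, replace=False))  # 0.0

# Bootstrapping (with replacement): at k = n there are n^n possible ordered
# draws, so the variance stays positive (roughly var(scores)/n).
print(var_of_sample_mean(scores, n, replace=True))
```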
Imagine two cities. In one, it is safe for women to walk around at night and in the second it is not. I think the former city is better even if women don’t want to walk around at night, because I think that option is valuable to people even if they do not take it. Preference-satisfaction approaches miss this.
Don't people also have preferences for having more options?
I'm surprised the Nigerian business plan competition was not included. (Chris Blattman writeup from 2015 here: "Is this the most effective development program in history?".)
I say "They were arguably right, ex ante, to advocate for and participate in a project to deter the Nazi use of nuclear weapons." Actions in 1939-42 or around 1957-1959 are defensible.
Given this, is it accurate to call Einstein's letter a 'tragedy'? The tragic part was continuing the nuclear program after the German program was shut down.
- 2 August 1939: Einstein-Szilárd letter to Roosevelt advocates for setting up a Manhattan Project. [...]
- June 1942: Hitler decides against an atomic program for practical reasons.
Is it accurate to say that the US and Germans were in a nuclear weapons race until 1942? So perhaps the takeaway is "if you're in a race, make sure to keep checking that the race is still on".
How much would I personally have to reduce X-risk to make this the optimal decision? Well, that’s simple. We just calculate:
- 25 billion * X = 20,000 lives saved
- X = 20,000 / 25 billion
- X = 0.0000008
- That’s 0.00008% in x-risk reduction for a single individual.
I'm not sure I follow this exercise. Here's how I'm thinking about it:
Option A: spend your career on malaria.
Option B: spend your career on x-risk.
How much would I personally have to reduce X-risk to make this the optimal decision?
Shouldn't this exercise start with the current P(extinction), and then calculate how much you need to reduce that probability? I think your approach is comparing two outcomes: save 25B lives with probability p, or save 20,000 lives with probability 1. Then the first option has higher expected value if p>20000/25B. But this isn't answering your question of personally reducing x-risk.
Also, I think you should calculate marginal expected value, ie., the value of additional resources conditional on the resources already allocated, to account for diminishing marginal returns.
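To make that comparison concrete, a small sketch using only the numbers from the quoted exercise:

```python
# Numbers from the quoted exercise.
lives_if_extinction_averted = 25e9  # lives saved by preventing extinction
lives_saved_on_malaria = 20_000     # career counterfactual for Option A

# Option B beats Option A in expected value iff p * 25e9 > 20_000.
p_threshold = lives_saved_on_malaria / lives_if_extinction_averted
print(p_threshold)           # 8e-07, i.e. 0.0000008
print(f"{p_threshold:.5%}")  # 0.00008%
```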
Adding to the causal evidence, there's a 2019 paper that uses wind direction as an instrumental variable for PM2.5. They find that IV > OLS, implying that observational studies are biased downwards:
...Comparing the OLS estimates to the IV estimates in Tables 2 and 3 provides strong evidence that observational studies of the relationship between air pollution and health outcomes suffer from significant bias: virtually all our OLS estimates are smaller than the corresponding IV estimates. If the only source of bias were classical measurement error, ...
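For intuition, a toy version of that design; the data-generating process and every variable name here are invented, not taken from the paper:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
wind = rng.normal(size=n)        # instrument: wind-direction index
confounder = rng.normal(size=n)  # unobserved (e.g., local economic activity)
pm25 = wind + confounder + rng.normal(size=n)                  # pollution
health = -2.0 * pm25 + 3.0 * confounder + rng.normal(size=n)   # outcome; true effect = -2

# OLS is biased toward zero because `confounder` moves both pm25 and health.
ols = sm.OLS(health, sm.add_constant(pm25)).fit()

# Manual 2SLS: first stage, then regress the outcome on fitted pollution.
# (Coefficients are correct; for valid standard errors use a 2SLS routine.)
stage1 = sm.OLS(pm25, sm.add_constant(wind)).fit()
iv = sm.OLS(health, sm.add_constant(stage1.fittedvalues)).fit()

print(ols.params[1], iv.params[1])  # OLS ~ -1 (attenuated) vs IV ~ -2 (true effect)
```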
Related, John von Neumann on x-risk:
...Finally and, I believe, most importantly, prohibition of technology (invention and development, which are hardly separable from underlying scientific inquiry), is contrary to the whole ethos of the industrial age. It is irreconcilable with a major mode of intellectuality as our age understands it. It is hard to imagine such a restraint successfully imposed in our civilization. Only if those disasters that we fear had already occurred, only if humanity were already completely disillusioned about technological civilization...
It sounds like you're arguing that we should estimate 'good done/additional resources' directly (via Fermi estimates), instead of indirectly using the ITN framework. But shouldn't these give the same answer?
And even when you can multiply the three quantities together, I feel like speaking in terms of importance, neglectedness and tractability might make you feel that there is no total ordering of interventions (“some have higher importance, some have higher tractability, whether you prefer one or the other is a matter of personal taste”)
I don't follow this. If you multiply I*T*N and get 'good done/additional resources', how is that not an ordering?
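For reference, the standard decomposition (as popularized by 80,000 Hours): the intermediate units cancel, leaving a single cost-effectiveness number per intervention, and scalars are totally ordered.

```latex
\frac{\text{good done}}{\text{extra resources}}
= \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
```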
There seems to be an "intentions don't matter, results do" lesson that's relevant here. Intending to solve AI alignment is secondary, and doesn't mean that you're making progress on the problem.
And we don't want people saying "I'm working on AI" just for the social status, if that's not their comparative advantage and they're not actually being productive.
Yes, that's exactly it! Even if a lot of people think that AI is the most important problem to work on, I would expect only a small minority to have a comparative advantage there. I worry that students are setting themselves up for burnout and failure by feeling obligated to work on what's been billed by some as the most pressing/impactful cause area, and that it's getting in the way of people exploring different roles and figuring out and building their actual comparative advantage.
Hm, then I find necessitarianism quite strange. In practice, how do we identify people who exist regardless of our choices?
The longtermist claim is that because humans could in theory live for hundreds of millions or billions of years, and we have the potential to get the risk of extinction almost to 0, the biggest effects of our actions are almost all in how they affect the far future. Therefore, if we can find a way to predictably improve the far future, this is likely to be, certainly from a utilitarian perspective, the best thing we can do.
I don't find this framing very useful. The importance-tractability-crowdedness framework gives us a sophisticated method for evaluating...
Because of this heavy tailed distribution of interventions
Is it actually heavy-tailed? It looks like an ordered bar chart, not a histogram, so it's hard to tell what the tails are like.
What do you think of the Bayesian solution, where you shrink your EV estimate towards a prior (thereby avoiding the fanatical outcomes)?
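Concretely, under a normal-normal model (all numbers here are invented):

```python
def posterior_mean(estimate, se, prior_mean=0.0, prior_sd=1.0):
    """Shrink a noisy EV estimate toward the prior; noisier estimates shrink more."""
    w = prior_sd**2 / (prior_sd**2 + se**2)  # weight placed on the data
    return w * estimate + (1 - w) * prior_mean

# A fanatical claim (astronomical EV, astronomical uncertainty) gets shrunk to ~0:
print(posterior_mean(1e10, se=1e10))  # ~1e-10
# A modest, precisely estimated EV survives nearly intact:
print(posterior_mean(2.0, se=0.1))    # ~1.98
```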
The three groups have completely converged by the end of the 180 day period
I find this surprising. Why don't the treated individuals stay on a permanently higher trajectory? Do they have a social reference point, and since they're ahead of their peers, they stop trying as hard?
Is the difference between actualism and necessitarianism that actualism cares about both (1) people who exist as a result of our choices, and (2) people who exist regardless of our choices; whereas necessitarianism cares only about (2)?
I wonder if we can back out what assumptions the 'peace pact' approach is making about these exchange rates. They are making allocations across cause areas, so they are implicitly using an exchange rate.
I get the weak impression that worldview diversification (partially) started as an approximation to expected value, and ended up being more of a peace pact between different cause areas. This peace pact disincentivizes comparisons between giving in different cause areas, which then leads to getting their marginal values out of sync.
Do you think there's an optimal 'exchange rate' between causes (eg. present vs future lives, animal vs human lives), and that we should just do our best to approximate it?
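As a toy illustration of backing out an implied rate (the functional form and all numbers are invented): if the funder allocates optimally under diminishing returns, marginal value per dollar is equalized across causes, so the observed allocation pins down an implicit exchange rate.

```python
import numpy as np

# Made-up diminishing-returns value functions: value_i(x) = a_i * sqrt(x).
a_human, a_animal = 1.0, 5.0    # invented productivity parameters
x_human, x_animal = 80.0, 20.0  # invented observed allocation

mv_human = a_human / (2 * np.sqrt(x_human))     # marginal human-units per dollar
mv_animal = a_animal / (2 * np.sqrt(x_animal))  # marginal animal-units per dollar

# If the allocation is optimal, the funder is indifferent at the margin, so the
# implied exchange rate is the ratio of marginal returns:
print(mv_animal / mv_human)  # ~10 animal-units per human-unit
```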
If we don't kill ourselves in the next few centuries or millennia, almost all humans that will ever exist will live in the future.
The idea is that, after a few millennia, we'll have spread out enough to reduce extinction risks to ~0?
Even without considering that, if we stay at ~140 million births per year, in 800 years 50% of all humans will have been born in our future.
And in ~7 millennia 90% of all humans will have been born in our future.
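The arithmetic, assuming the common ~117 billion estimate for humans born to date:

```python
humans_born_to_date = 117e9  # rough demographic estimate of humans ever born
births_per_year = 140e6

# 50% of all humans born in our future <=> future births equal past births:
print(humans_born_to_date / births_per_year)      # ~836 years
# 90% <=> future births are 9x past births:
print(9 * humans_born_to_date / births_per_year)  # ~7,500 years, i.e. ~7 millennia
```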
Should you "trust literatures, not papers"?
I replicated the literature on meritocratic promotion in China, and found that the evidence is not robust.
https://twitter.com/michael_wiebe/status/1750572525439062384