Teacher for 7 years; about to start working as a Researcher at CEARCH: https://exploratory-altruism.org/
I'll be constructing cost-effectiveness analyses of various cause areas, identifying the most promising opportunities for impactful work.
Feedback on CEARCH's work:
How to make better estimates with scarce data.
I think Gandhi's point nods to the British Empire's policy of heavily taxing salt as a way of extracting wealth from the Indian population. For a time this meant that salt became very expensive for poor people, and many probably died premature deaths linked to salt deficiency.
However, I don't think anyone would suggest taxing salt at that level again! Like any food tax, the health benefits of a salt tax would have to be weighed against the costs of making food more expensive. You certainly wouldn't want the tax so high that poor people can't afford enough salt.
Thanks again!
I had been portraying the choice between point estimates and interval estimates as a difficult trade-off, but interval estimates are probably the obvious choice in most cases.
So I've redone the "Should we always use interval estimates?" section to be less about pros and cons and more about exploring the importance of communicating uncertainty in your results. I have used the Ord example you mentioned.
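For concreteness, here's a minimal sketch of the kind of interval estimation I have in mind, done by Monte Carlo simulation in Python. The model and the lognormal inputs are made up purely for illustration; they aren't anyone's actual CEA parameters:

```python
# Minimal sketch of interval estimation via Monte Carlo simulation.
# All inputs are made up for illustration; nothing here reflects a real CEA.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Express each uncertain input as a distribution, not a point estimate.
benefit = rng.lognormal(mean=np.log(100), sigma=0.5, size=n)   # QALYs gained
cost = rng.lognormal(mean=np.log(1_000), sigma=0.3, size=n)    # dollars spent

qalys_per_dollar = benefit / cost

# Communicate the uncertainty: a central estimate plus a 90% interval,
# instead of a single point estimate.
lo, mid, hi = np.percentile(qalys_per_dollar, [5, 50, 95])
print(f"QALYs per dollar: {mid:.3f} (90% interval: {lo:.3f} to {hi:.3f})")
```

The point of the printout is that the interval, not the median, is the headline result.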
Thanks for your feedback, Vasco. It's led me to make extensive changes to the post:
Overall I agree that interval estimation is better suited to the Drake equation than to GiveWell CEAs. But I'd summarise my reasons as follows:
Thanks again, and do let me know what you think!
My attempt to summarise why the model predicts that preventing famine in China and other countries will have a negative effect on the future:
Or as the author puts it in a discussion linked above:
To be blunt for the sake of transparency, in this model, the future would improve if the real GDP of China, Egypt, India, Iran, and Russia dropped to 0, as long as that did not significantly affect the level of democracy and real GDP of democratic countries. However, null real GDP would imply widespread starvation, which is obviously pretty bad! I am confused about this, because I also believe worse values are associated with a worse future. For example, they arguably lead to higher chances of global totalitarianism or great power war.
I agree with the author that the conclusion is confusing. Even concerning.
I'd suggest that the conclusion is out of sync with how most people feel about saving lives in poor, undemocratic countries. We don't typically hesitate to tackle neglected tropical diseases just because doing so boosts the populations of dictatorships.
Perhaps urgency can be captured by ensuring we compare counterfactual impacts.
For an urgent, "now or never" cause, we can be confident that any impact we make wouldn't have happened otherwise.
For something non-urgent, there is a chance that if we leave it, somebody else will solve it or it will go away naturally. Hence we should discount the expected value of working on it: the counterfactual impact of working on non-urgent causes, which is what really matters, is lower than the apparent impact. A toy version of this discount is sketched below.
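Here's that toy calculation in Python; the numbers are purely hypothetical:

```python
# Toy illustration of counterfactual discounting (hypothetical numbers).
apparent_impact = 1000.0   # e.g. QALYs gained if we solve the problem
p_solved_anyway = 0.6      # chance the problem gets solved without us

# Only the share of impact that wouldn't have happened otherwise counts.
counterfactual_impact = apparent_impact * (1 - p_solved_anyway)
print(counterfactual_impact)  # 400.0

# An urgent, "now or never" cause has p_solved_anyway close to 0,
# so its counterfactual impact is close to its apparent impact.
```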
The s-risk people I'm familiar with are mostly interested in worst-case s-risk scenarios that involve vast populations of sentient beings over vast time periods. It's hard to form estimates for the scale of such scenarios, and so the importance is difficult to grasp. I don't think estimating the cost-effectiveness of working on these s-risks would be as simple as measuring in suffering-units instead of QALYs.
Tobias Baumann, for example, mentions in his book and a recent podcast that possibly the most important s-risk work we can do now is simply preparing to be ready in some future time when we will actually be able to do useful stuff. That includes things like "improving institutional decision-making" and probably also moral circle expansion work like curtailing factory farming.
I think Baumann also said somewhere that he can be reluctant to mention specific scenarios too much, because it may lead to a complacent feeling that we have dealt with the threats: in reality, the greatest s-risk danger is probably something we don't even know about yet.
I hope the above is a fair representation of Baumann's and others' views. I mostly agree with them, although it is a bit suspect not to be able to specify what the greatest concerns are.
I could do a very basic cause-area sense-check of the form:
The greatest s-risks involve huge populations
SO
They probably occur in an interstellar civilisation
AND
Are likely to involve artificial minds (which could probably exist at a far greater density than people)
HENCE
Work on avoiding the worst s-risks is likely to involve influencing whether/how we become a spacefaring civilisation and whether/how we develop and use sentient minds.
Thanks for the reply and sorry for the long delay! I decided to dive in and write a post about it.
Can we see it yet?