Stan Pinsent

Researcher @ CEARCH
283 karma · Joined Oct 2022 · Working (6-15 years) · London, UK
https://stanpinsent.wordpress.com/

Bio


Teacher for 7 years; about to start working as a Researcher at CEARCH: https://exploratory-altruism.org/

I'll be constructing cost-effectiveness analyses of various cause areas, identifying the most promising opportunities for impactful work.

How others can help me

Feedback on CEARCH's work:

  • Strengths/weaknesses of approach
  • Strengths/weaknesses of how the approach and results are communicated
  • Would you use our research to guide decisions? Why/why not?

 

How to make better estimates with scarce data.

Comments

I think Gandhi's point nods to the British Empire's policy of heavily taxing salt as a way of extracting wealth from the Indian population. For a time this meant that salt became very expensive for poor people, and many probably died early deaths linked to lack of salt.

However, I don't think anyone would suggest taxing salt at that level again! As with any food tax, the health benefits of a salt tax would have to be weighed against the costs of making food more expensive. You certainly wouldn't want the tax so high that poor people can't afford enough salt.

Thanks again!

I think I have been trying to portray the point-estimate/interval-estimate trade-off as a difficult decision, but probably interval estimates are the obvious choice in most cases.

So I've re-done the "Should we always use interval estimates?" section to be less about pros/cons and more about exploring the importance of communicating uncertainty in your results. I have used the Ord example you mentioned.

Thanks for your feedback, Vasco. It's led me to make extensive changes to the post:

  • More analysis on the pros/cons of modelling with distributions. I argue that sometimes it's good that the crudeness of point-estimate work reflects the crudeness of the evidence available. Interval-estimate work is more honest about uncertainty, but runs the risk of encouraging overconfidence in the final distribution.
  • I include the lognormal mean in my analysis of means. You have convinced me that the sensitivity of lognormal means to heavy right tails is a strength, not a weakness! But the lognormal mean appears to be sensitive to the size of the confidence interval you use to calculate it, which means subjective methods are required to pick that size, introducing bias (see the sketch below).
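For illustration, here is a minimal Python sketch of that sensitivity (the bounds 1 and 100 and the coverage levels are made up for the example): treating the same pair of bounds as a 50%, 80%, 90% or 95% interval gives noticeably different implied means.

```python
# Minimal sketch: the implied lognormal mean depends on the coverage you assume
# for a fixed pair of bounds. Bounds and coverage levels are placeholders.
import math
from scipy.stats import norm

lower, upper = 1.0, 100.0  # hypothetical lower/upper bounds on some quantity

for coverage in (0.50, 0.80, 0.90, 0.95):
    z = norm.ppf((1 + coverage) / 2)                    # z-score of the central interval
    mu = (math.log(lower) + math.log(upper)) / 2        # log of the median
    sigma = (math.log(upper) - math.log(lower)) / (2 * z)
    mean = math.exp(mu + sigma ** 2 / 2)                # mean of Lognormal(mu, sigma)
    print(f"treated as a {coverage:.0%} CI -> implied mean ~ {mean:.1f}")
```

The narrower the coverage you assume for the same bounds, the fatter the implied tail and the larger the mean, which is where the subjective choice creeps in.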

Overall I agree that interval estimation is better suited to the Drake equation than to GiveWell CEAs. But I'd summarise my reasons as follows:

  • The Drake Equation really seeks to ask "how likely is it that we have intelligent alien neighbours?", but point-estimate methods answer the question "what is the expected number of intelligent alien neighbours?". With such high variability the expected number is virtually useless, but the distribution of this number lets us estimate how likely it is that we have any alien neighbours at all (sketched after this list). GiveWell CEAs probably have much less variation, and hence a point-estimate answer is relatively more useful.
  • Reliable research on the numbers that go into the Drake equation often doesn't exist, so it's not too bad to "make up" interval estimates to go into it. We know much more about the charities GiveWell studies, so made-up distributions (even those informed by reliable point estimates) are much less permissible.
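To make that contrast concrete, here is a hedged sketch (the distribution for N below is an invented placeholder, not a real Drake-equation output) of how a fitted distribution answers the probability question directly, while the expected number is dominated by the tail:

```python
# Sketch: with very high variability, E[N] and P(N >= 1) tell different stories.
# The median and sigma are placeholders chosen purely for illustration.
from scipy.stats import lognorm

median, sigma = 0.01, 3.0              # hypothetical distribution for N, the number
dist = lognorm(s=sigma, scale=median)  # of alien neighbours (scipy: scale = exp(mu))

print("expected number E[N]:", dist.mean())  # ~0.9, inflated by the heavy right tail
print("P(N >= 1):", dist.sf(1.0))            # ~0.06: most likely we have no neighbours
```

With these placeholder numbers the point estimate E[N] looks non-trivial, yet the distribution shows that the question we actually care about ("do we have any neighbours?") most likely has the answer no.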

Thanks again, and do let me know what you think!

My attempt to summarize why the model predicts that preventing famine in China and other countries will have a negative effect on the future:

  • Much of the value of the future hinges on whether values become predominantly democratic or antidemocratic
  • The more prevalent antidemocratic values (or populations) are after a global disaster, the likelier it is that such values will become predominant
  • Hence preventing deaths in antidemocratic countries can have a negative effect on the future.

Or as the author puts it in a discussion linked above:

To be blunt for the sake of transparency, in this model, the future would improve if the real GDP of China, Egypt, India, Iran, and Russia dropped to 0, as long as that did not significantly affect the level of democracy and real GDP of democratic countries. However, null real GDP would imply widespread starvation, which is obviously pretty bad! I am confused about this, because I also believe worse values are associated with a worse future. For example, they arguably lead to higher chances of global totalitarianism or great power war.

I agree with the author that the conclusion is confusing. Even concerning.

I'd suggest that the conclusion is out-of-sync with how most people feel about saving lives in poor, undemocratic countries. We typically don't hesitate to tackle neglected tropical diseases on the basis that doing so boosts the populations of dictatorships.

Perhaps the value of urgency can be captured by ensuring we compare counterfactual impacts.

For an urgent, "now or never" cause, we can be confident that any impact we make wouldn't have happened otherwise.

For something non-urgent, there is a chance that, if we leave it, somebody else will solve it or it will go away naturally. Hence we should discount the expected value of working on it: the counterfactual impact of working on non-urgent causes, which is what really matters, is lower than the apparent impact.
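As a toy illustration of that discount (the numbers are invented), the counterfactual value is just the apparent value scaled by the chance the problem would not have been solved anyway:

```python
# Toy example: discounting a non-urgent cause by the chance it gets solved without us.
apparent_impact = 1000      # hypothetical value of solving the problem (arbitrary units)
p_solved_anyway = 0.6       # hypothetical chance someone else solves it, or it resolves naturally

counterfactual_impact = apparent_impact * (1 - p_solved_anyway)
print(counterfactual_impact)  # 400: what our work actually adds, versus the apparent 1000
```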

Answer by Stan Pinsent, Mar 27, 2023

The s-risk people I'm familiar with are mostly interested in worst-case s-risk scenarios that involve vast populations of sentient beings over vast time periods. It's hard to form estimates for the scale of such scenarios, and so the importance is difficult to grasp. I don't think estimating the cost-effectiveness of working on these s-risks would be as simple as measuring in suffering-units instead of QALYs.

Tobias Baumann, for example, mentions in his book and a recent podcast that possibly the most important s-risk work we can do now is simply preparing to be ready in some future time when we will actually be able to do useful stuff. That includes things like "improving institutional decision-making" and probably also moral circle expansion work like curtailing factory farming.

I think Baumann also said somewhere that he can be reluctant to mention specific scenarios too much because it may lead to a complacent feeling that we have dealt with the threats: in reality, the greatest s-risk danger is probably something we don't even know about yet.

I hope the above is a fair representation of Baumann's and others' views. I mostly agree with them, although it is a bit shady not to be able to specify what the greatest concerns are. 

I could do a very basic cause-area sense-check of the form:

The greatest s-risks involve huge populations

SO

They probably occur in an interstellar civilisation

AND

Are likely to involve artificial minds (which could probably exist at a far greater density than people)

HENCE

Work on avoiding the worst s-risks is likely to involve influencing whether/how we become a spacefaring civilisation and whether/how we develop and use sentient minds.

Thanks. I'll read up on the power law distribution and at the very least put in a disclaimer: I'm only checking which is better out of normal/lognormal.

Thanks for the reply and sorry for the long delay! I decided to dive in and write a post about it.

  • I check when using distributions is much better than using point estimates: it's when the ratio between the upper and lower confidence bounds is high, i.e. in situations of high uncertainty like the probability-of-life example you mentioned (a rough sketch of this comparison follows the list)
  • I test your intuition that using lognormal is usually better than normal (and end up agreeing with you)
  • I check whether the lognormal distribution can be used to find a more reliable mean of two point estimates, but conclude that it's no good
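A rough sketch of the kind of comparison behind the first point (the inputs, their bounds, and the 90% coverage are all assumptions for illustration, not the post's actual numbers):

```python
# Sketch: the gap between a point-estimate product and the mean of the propagated
# distribution grows as the ratio between the upper and lower bounds grows.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def lognormal_from_90ci(lower, upper, n=500_000):
    """Samples from the lognormal whose central 90% interval is [lower, upper]."""
    z = norm.ppf(0.95)
    mu = (np.log(lower) + np.log(upper)) / 2
    sigma = (np.log(upper) - np.log(lower)) / (2 * z)
    return rng.lognormal(mu, sigma, n)

for lower, upper in [(8, 12), (1, 100)]:             # low vs high upper/lower ratio
    product = np.prod([lognormal_from_90ci(lower, upper) for _ in range(3)], axis=0)
    point_estimate = np.sqrt(lower * upper) ** 3      # product of the three medians
    print(f"bounds {lower}-{upper}: point-estimate product {point_estimate:.0f}, "
          f"mean of the distribution {product.mean():.0f}")
```

With tight bounds the two answers barely differ; with a two-orders-of-magnitude ratio between the bounds, the distribution's mean is many times the naive product.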

Great summary. You must have an incredibly organised system for keeping track of your reading and what you take from each post!

I suspect this has given me most of the benefit of hours of unguided reading at a fraction of the time cost.
