Background in philosophy, international development, statistics. Doing a technical AI PhD at Bristol.
Financial conflict of interest: technically the British government through the funding council.
It's a common view. Some GiveWell staff hold this view, and indeed most of their work involves short-term effects, probably for epistemic reasons. Michael Plant has written about the EA implications of person-affecting views, and emphasises improvements to world mental health.
Here's a back-of-the-envelope estimate for why person-affecting views might still be bound to prioritise existential risk though (for the reason you give, but with some numbers for easier comparison).
Dominic Roser and I have also puzzled over Christian longtermism a bit.
Great comment. I count only 65 percentage points - is the remaining 35% "something else happened"?
Or were you not conditioning on long-termist failure? (That would be scary.)
IKEA is an interesting case: it was bequeathed entirely to a nonprofit foundation with a very loose mission and no owner(?)
Not a silly question IMO. I thought about Satoshi Nakamoto's bitcoin - but if they're dead, then it's owned by their heirs, or failing that by the government of whatever jurisdiction they were in. In places like Britain I think a combination of "bona vacantia" (unclaimed estates go to the government) and "treasure trove" (old treasure also) cover the edge cases. And if all else fails there's "finders keepers".
A nice example of the second part, value dependence, is Ozy Brennan's series reviewing GiveWell charities.
Why might you donate to GiveDirectly?
You need a lot of warmfuzzies in order to motivate yourself to donate.
You think encouraging cash benchmarking is really important, and giving GiveDirectly more money will help that.
You want to encourage charities to do more RCTs on their programs by rewarding the charity that does that most enthusiastically.
You care about increasing people’s happiness and don’t care about saving the lives of small children, and prefer a certainty of a somewhat good outcome to a small chance of a very good outcome.
You believe, in principle, that we should let people make their own decisions about their lives.
You want an intervention that definitely has at least a small positive effect.
You have just looked at GDLive and are no longer responsible for your actions.
Collating predictions made by particularly big pundits and getting calibration curves for them. Bill Gates is getting a lot of attention now for warning of a pandemic in 2015; but what's his average? (Admittedly he's a bad example, since I expect his advisors to be world-class and to totally suppress his variance.)
If this could be hosted somewhere with a lot of traffic, it could reinforce good epistemics.
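The calibration curve itself is simple to compute. A minimal sketch, with hypothetical data: each pair is a pundit's stated probability and whether the event actually happened; predictions are grouped into probability bins, and the mean stated probability in each bin is compared with the observed frequency (a well-calibrated pundit's "70%" claims should come true about 70% of the time).

```python
# Hypothetical (stated probability, outcome) pairs for one pundit.
predictions = [(0.9, True), (0.8, True), (0.7, False), (0.6, True),
               (0.3, False), (0.2, False), (0.9, True), (0.5, False)]

def calibration(preds, n_bins=5):
    """Group predictions into probability bins; for each bin, return
    (mean stated probability, observed frequency of the event)."""
    bins = {}
    for p, outcome in preds:
        b = min(int(p * n_bins), n_bins - 1)  # bin index 0..n_bins-1
        bins.setdefault(b, []).append((p, outcome))
    curve = []
    for b in sorted(bins):
        stated = [p for p, _ in bins[b]]
        hits = [o for _, o in bins[b]]
        curve.append((sum(stated) / len(stated), sum(hits) / len(hits)))
    return curve

for stated, observed in calibration(predictions):
    print(f"stated {stated:.2f} -> observed {observed:.2f}")
```

Plotting stated against observed gives the calibration curve; perfect calibration is the diagonal, and systematic over- or under-confidence shows up as deviation from it.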
A case study of the Scientific Revolution in Britain as intervention by a small group. This bears on one of the most surprising facts: the huge distance, 1.5 centuries, between the scientific and industrial revolutions. Could also shed light on the old marginal vs systemic argument: a synthesis is "do politics - to promote nonpolitical processes!"
Who am I?
Gavin Leech, a PhD student in AI at Bristol. I used to work in international development, official statistics, web development, and data science.
Things people can talk to you about
Stats, forecasting, great books, development economics, pessimism about philosophy, aphorisms, why AI safety is eating people, fake frameworks like multi-agent mind. How to get technical after an Arts degree.
Things I'd like to talk to others about
The greatest technical books you've ever read. Research taste, and how it is transmitted. Non-opportunistic ways to do AI safety. How cluelessness and AIS interact; how hinginess and AIS interact.
Get in touch
firstname.lastname@example.org . I also like the sound of this open-letter site.
Suggested project for someone curious:
There are EA profiles of interesting influential (or influentially uninfluential) social movements - the Fabians, the neoliberals, the General Semanticists. But no one has written about the biggest: the scientific revolution in Britain as intentional intervention, a neoliberal-style coterie.
A small number of the most powerful people in Britain - the Lord Chancellor, the king's physicians, the chaplain of the Elector Palatine / bishop of Chester, London's greatest architect, and so on - apparently pushed a massive philosophical change, founded some of the key institutions for the next 4 centuries, and thereby contributed to most of our subsequent achievements.
This was a nice introduction to the age.
To my knowledge, most of the big names (Bentham, Sidgwick, Mill, Hare, Parfit) were anti-speciesist to some degree; the unusual contribution of Singer is the insistence on equal consideration for nonhumans. The idea just wasn't taken up by their audiences for 100+ years afterward.
My understanding of multi-level U is that it permits not using explicit utility estimation, rather than forbidding its use. (U as not the only decision procedure, since it is often too expensive.) It makes sense to read (naive, ideal) single-level consequentialism as the opposite: forbidding or discouraging not using U estimation. Is this a straw man? Possibly - I'm not sure I've ever read anything by a strict estimate-everything single-level person.
I read it as 'getting some people who aren't economists, philosophers, or computer scientists'. (:
(Speaking as a philosophy+economics grad and a sort-of computer scientist.)