Background in philosophy, international development, statistics. Doing a technical AI PhD at Bristol.

Financial conflict of interest: technically the British government through the funding council.

technicalities's Comments

How can I apply person-affecting views to Effective Altruism?

It's a common view. Some GiveWell staff hold this view, and indeed most of their work involves short-term effects, probably for epistemic reasons. Michael Plant has written about the EA implications of person-affecting views, and emphasises improvements to world mental health.

Here's a back-of-the-envelope estimate for why person-affecting views might still be bound to prioritise existential risk though (for the reason you give, but with some numbers for easier comparison).
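A minimal sketch of what such an estimate looks like, counting only currently living people (the population a person-affecting view cares about). All numbers below are illustrative assumptions, not figures from the linked estimate:

```python
# Back-of-the-envelope: expected present lives saved by a small
# absolute reduction in extinction risk, counting only people alive now.
# Every number here is an assumption for illustration.

present_people = 8e9      # people currently alive
risk_reduction = 0.001    # hypothetical absolute cut in P(extinction this century)
cost = 1e9                # hypothetical programme cost, dollars

expected_lives_saved = present_people * risk_reduction
cost_per_life = cost / expected_lives_saved

print(f"{expected_lives_saved:,.0f} expected lives saved")  # 8,000,000
print(f"${cost_per_life:,.0f} per expected life")           # $125
```

Even with no weight on future generations, plausible inputs can make x-risk reduction competitive with top global-health charities, which is the comparison the back-of-the-envelope is after.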

Dominic Roser and I have also puzzled over Christian longtermism a bit.

What would a pre-mortem for the long-termist project look like?

Great comment. I count only 65 percentage points - is the remaining 35% "something else happened"?

Or were you not conditioning on long-termist failure? (That would be scary.)

(How) Could an AI become an independent economic agent?

IKEA is an interesting case: it was bequeathed entirely to a nonprofit foundation with a very loose mission and no owner(?)

Not a silly question IMO. I thought about Satoshi Nakamoto's bitcoin - but if they're dead, then it's owned by their heirs, or failing that by the government of whatever jurisdiction they were in. In places like Britain I think a combination of "bona vacantia" (unclaimed estates go to the government) and "treasure trove" (old treasure also) cover the edge cases. And if all else fails there's "finders keepers".

What posts do you want someone to write?

A nice example of the second part, value dependence, is Ozy Brennan's series reviewing GiveWell charities.

Why might you donate to GiveDirectly?

  • You need a lot of warmfuzzies in order to motivate yourself to donate.
  • You think encouraging cash benchmarking is really important, and giving GiveDirectly more money will help that.
  • You want to encourage charities to do more RCTs on their programs by rewarding the charity that does that most enthusiastically.
  • You care about increasing people's happiness and don't care about saving the lives of small children, and prefer a certainty of a somewhat good outcome to a small chance of a very good outcome.
  • You believe, in principle, that we should let people make their own decisions about their lives.
  • You want an intervention that definitely has at least a small positive effect.
  • You have just looked at GDLive and are no longer responsible for your actions.
What posts do you want someone to write?

Collating predictions made by particularly big pundits and getting calibration curves for them. Bill Gates is getting a lot of attention now for warning of a pandemic in 2015; what is his average, though? (This is a bad example, since I expect his advisors to be world-class and to totally suppress his variance.)

If this could be hosted somewhere with a lot of traffic, it could reinforce good epistemics.
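The core computation is simple once the predictions are collated. A minimal sketch, with made-up data standing in for a pundit's resolved predictions:

```python
# Calibration-curve sketch: bucket a pundit's resolved predictions by
# stated probability, then compare to the observed frequency of the
# outcomes. The data below is invented for illustration.
from collections import defaultdict

# (stated probability, did it happen?)
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False),
    (0.2, False), (0.2, False), (0.2, True),
]

buckets = defaultdict(list)
for stated, outcome in predictions:
    buckets[stated].append(outcome)

for stated in sorted(buckets):
    outcomes = buckets[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> observed {observed:.0%} (n={len(outcomes)})")
```

A well-calibrated pundit's observed frequencies track their stated probabilities; in practice you would bin nearby probabilities together and want many resolved predictions per bin before reading much into the curve.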

What posts do you want someone to write?

A case study of the Scientific Revolution in Britain as an intervention by a small group. This bears on one of the most surprising facts in history: the huge gap, roughly 1.5 centuries, between the scientific and industrial revolutions. It could also shed light on the old marginal vs systemic argument: one synthesis is "do politics - to promote nonpolitical processes!"

What are some 1:1 meetings you'd like to arrange, and how can people find you?

Who am I?

Gavin Leech, a PhD student in AI at Bristol. I used to work in international development, official statistics, web development, and data science.

Things people can talk to you about

Stats, forecasting, great books, development economics, pessimism about philosophy, aphorisms, why AI safety is eating people, fake frameworks like multi-agent mind. How to get technical after an Arts degree.

Things I'd like to talk to others about

The greatest technical books you've ever read. Research taste, and how it is transmitted. Non-opportunistic ways to do AI safety. How cluelessness and AIS interact; how hinginess and AIS interact.

Get in touch. I also like the sound of this open-letter site.

Open Thread #46

Suggested project for someone curious:

There are EA profiles of interesting influential (or influentially uninfluential) social movements - the Fabians, the neoliberals, the General Semanticists. But no one has written about the biggest: the scientific revolution in Britain as an intentional intervention by a neoliberal-style coterie.

A small number of the most powerful people in Britain - the Lord Chancellor, the king's physicians, the chaplain of the Elector Palatine / bishop of Chester, London's greatest architect, and so on - apparently pushed a massive philosophical change, founded some of the key institutions for the next 4 centuries, and thereby contributed to most of our subsequent achievements.


  • Elizabethan technology and institutions before Bacon. Scholasticism and mathematical magic
  • The protagonists: "The Invisible College"
  • The impact of Gresham College and the Royal Society (sceptical empiricism revived! Peer review! Data sharing! Efficient causation! Elevating random uncredentialed commoners like Hooke)
  • Pre-emptive conflict management (Bacon's and Boyle's manifestos and Utopias are all deeply Christian)
  • The long gestation: it took 100 years for it to bear any fruit (e.g. Boyle's law, the shocking triumph of Newton); it took 200 years before it really transformed society. This is not that surprising measured in person-years of work, but otherwise why did it take so long?
  • Counterfactual: was Bacon overdetermined by economic or intellectual trends? If it was inevitable, how much did they speed it up?
  • Somewhat tongue in cheek cost:benefit estimate.

This was a nice introduction to the age.

Launching An Introductory Online Textbook on Utilitarianism

To my knowledge, most of the big names (Bentham, Sidgwick, Mill, Hare, Parfit) were anti-speciesist to some degree; the unusual contribution of Singer is the insistence on equal consideration for nonhumans. Anti-speciesism was just not obvious to their audiences for 100+ years afterward.

My understanding of multi-level U is that it permits not using explicit utility estimation, rather than forbidding its use. (U as not the only decision procedure, since it is often too expensive.) It makes sense to read (naive, ideal) single-level consequentialism as the converse: forbidding or discouraging not using U estimation. Is this a straw man? Possibly; I'm not sure I've ever read anything by a strict estimate-everything single-level person.

What are the key ongoing debates in EA?

I read it as 'getting some people who aren't economists, philosophers, or computer scientists'. (:

(Speaking as a philosophy+economics grad and a sort-of computer scientist.)
