technicalities

https://www.gleech.org/

Background in philosophy, international development, statistics. Doing a technical AI PhD at Bristol.

Financial conflict of interest: technically the British government through the funding council.

technicalities's Comments

What posts do you want someone to write?

A nice example of the second part, value dependence, is Ozy Brennan's series reviewing GiveWell charities.

Why might you donate to GiveDirectly?

  • You need a lot of warmfuzzies in order to motivate yourself to donate.
  • You think encouraging cash benchmarking is really important, and giving GiveDirectly more money will help that.
  • You want to encourage charities to do more RCTs on their programs by rewarding the charity that does that most enthusiastically.
  • You care about increasing people’s happiness and don’t care about saving the lives of small children, and prefer a certainty of a somewhat good outcome to a small chance of a very good outcome.
  • You believe, in principle, that we should let people make their own decisions about their lives.
  • You want an intervention that definitely has at least a small positive effect.
  • You have just looked at GDLive and are no longer responsible for your actions.
What posts do you want someone to write?

Collating predictions made by particularly big pundits and getting calibration curves for them. Bill Gates is getting a lot of attention now for warning of a pandemic in 2015; but what is his average? (This is a bad example, though, since I expect his advisors to be world-class and to totally suppress his variance.)

If this could be hosted somewhere with a lot of traffic, it could reinforce good epistemics.
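
As a minimal sketch of what the calibration step could look like (with made-up forecasts standing in for the hand-collected data):

```python
# A minimal calibration-curve sketch. `predictions` is a hypothetical
# hand-collected list of (stated probability, outcome) pairs for one pundit.
from collections import defaultdict

predictions = [
    (0.9, True), (0.8, True), (0.8, False), (0.6, True),
    (0.4, False), (0.3, True), (0.2, False), (0.1, False),
]  # illustrative data, not real forecasts

buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[round(p, 1)].append(outcome)

# A well-calibrated pundit's observed frequency in each bucket
# should be close to the bucket's stated probability.
for p in sorted(buckets):
    outcomes = buckets[p]
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}: observed {freq:.0%} over {len(outcomes)} predictions")
```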

What posts do you want someone to write?

A case study of the Scientific Revolution in Britain as an intervention by a small group. This bears on one of the most surprising facts: the huge gap, 1.5 centuries, between the Scientific and Industrial Revolutions. It could also shed light on the old marginal vs systemic argument: a synthesis is "do politics - to promote nonpolitical processes!"

https://forum.effectivealtruism.org/posts/RfKPzmtAwzSw49X9S/open-thread-46?commentId=rWn7HTvZaNHCedXNi

What are some 1:1 meetings you'd like to arrange, and how can people find you?

Who am I?

Gavin Leech, a PhD student in AI at Bristol. I used to work in international development, official statistics, web development, and data science.

Things people can talk to you about

Stats, forecasting, great books, development economics, pessimism about philosophy, aphorisms, why AI safety is eating people, fake frameworks like multi-agent mind. How to get technical after an Arts degree.

Things I'd like to talk to others about

The greatest technical books you've ever read. Research taste, and how it is transmitted. Non-opportunistic ways to do AI safety. How cluelessness and AIS interact; how hinginess and AIS interact.

Get in touch

g@gleech.org. I also like the sound of this open-letter site.

Open Thread #46

Suggested project for someone curious:

There are EA profiles of interesting, influential (or influentially uninfluential) social movements - the Fabians, the neoliberals, the General Semanticists. But no one has written about the biggest: the Scientific Revolution in Britain as an intentional intervention by a neoliberal-style coterie.

A small number of the most powerful people in Britain - the Lord Chancellor, the king's physicians, the chaplain of the Elector Palatine / bishop of Chester, London's greatest architect, and so on - apparently pushed a massive philosophical change, founded some of the key institutions for the next 4 centuries, and thereby contributed to most of our subsequent achievements.

Outline:

  • Elizabethan technology and institutions before Bacon. Scholasticism and mathematical magic
  • The protagonists: "The Invisible College"
  • The impact of Gresham College and the Royal Society (sceptical empiricism revived! Peer review! Data sharing! Efficient causation! Elevating random uncredentialed commoners like Hooke)
  • Pre-emptive conflict management (Bacon's and Boyle's manifestos and Utopias are all deeply Christian)
  • The long gestation: it took 100 years to bear any fruit (e.g. Boyle's law, the shocking triumph of Newton), and 200 years before it really transformed society. This is not that surprising measured in person-years of work, but otherwise, why did it take so long?
  • Counterfactual: was Bacon overdetermined by economic or intellectual trends? If it was inevitable, how much did they speed it up?
  • A somewhat tongue-in-cheek cost-benefit estimate.

This was a nice introduction to the age.

Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism

To my knowledge, most of the big names (Bentham, Sidgwick, Mill, Hare, Parfit) were anti-speciesist to some degree; the unusual contribution of Singer is the insistence on equal consideration for nonhumans. Their anti-speciesism was just not obvious to their audiences for 100+ years afterward.

My understanding of multi-level utilitarianism is that it permits not using explicit utility estimation, rather than forbidding its use (utility estimation as not the only decision procedure, and often too expensive a one). It makes sense to read (naive, ideal) single-level consequentialism as the converse: forbidding or discouraging not using utility estimation. Is this a straw man? Possibly; I'm not sure I've ever read anything by a strict estimate-everything single-level person.

What are the key ongoing debates in EA?

I read it as 'getting some people who aren't economists, philosophers, or computer scientists'. (:

(Speaking as a philosophy+economics grad and a sort-of computer scientist.)

What are the key ongoing debates in EA?

Not sure. 2017 fits the beginning of the discussion though.

What are the key ongoing debates in EA?

I've had a few arguments about the 'worm wars': whether the bet on deworming kids, which was uncertain from the start, is undermined by the new evidence.

My interlocutor is very concerned about model error in cost-benefit analysis and about avoiding side effects ('double effect' in particular), and not just for the usual PR or future-credibility reasons.

What are the best arguments that AGI is on the horizon?

It can seem strange that people act decisively about speculative things. So the first piece to understand is expected value: if something would be extremely important if it happened, then you can place quite a low probability on it and still have warrant to act on it. (This is sometimes accused of being a decision-theory "mugging", but it isn't: we're talking about subjective probabilities in the range of 1-10%, not infinitesimals like those involved in Pascal's mugging.)

I think the most-defensible outside-view argument is: it could happen soon; it could be dangerous; aligning it could be very hard; and the product of these probabilities is not low enough to ignore.
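
To make the arithmetic concrete, here is a toy version of that product, with placeholder probabilities rather than anyone's actual estimates:

```python
# Placeholder numbers only; the argument's force depends on your own estimates.
p_soon = 0.2       # HLAI arrives relatively soon
p_dangerous = 0.5  # HLAI is dangerous by default, given that it arrives
p_unsolved = 0.3   # alignment stays unsolved in time, given the above

p_catastrophe = p_soon * p_dangerous * p_unsolved
print(f"{p_catastrophe:.0%}")  # 3%: low, but far from Pascal's-mugging territory
```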

1. When you survey general AI experts (not just safety or AGI people), they give a very wide distribution of predictions for when we will have human-level AI (HLAI), with a central tendency of "10% chance of human-level AI... in the 2020s or 2030s". (This is weak evidence, since technology forecasting is very hard and these surveys are not random samples; but it is some evidence.)


2. We don't know what the risk of HLAI being dangerous is, but we have some analogous precedents and arguments:

* the human precedent for world domination through intelligence / combinatorial generalisation / cunning

* the human precedent for 'inner optimisers': evolution was heavily optimising for genetic fitness, but produced a system, us, which optimises for a very different objective ("fun", or "status", or "gratification" or some bundle of nonfitness things).

* goal space is much larger than the human-friendly part of goal space (suggesting that a random objective will not be human-friendly; combined with assumptions about goal maximisation and instrumental drives, this implies that most goals could be dangerous).

* there's a common phenomenon of very stupid ML systems still developing "clever" unintended / hacky / dangerous behaviours


3. We don't know how hard alignment is, so we don't know how long it will take to solve. It may involve certain profound philosophical and mathematical questions, which have been worked on by some of the greatest thinkers for a long time. Here's a nice nontechnical statement of the potential difficulty. Some AI safety researchers are actually quite optimistic about our prospects for solving alignment, even without EA intervention, and work on it to cover things like the "value lock-in" case instead of the x-risk case.
