richard_ngo

AI safety research engineer at DeepMind (all opinions my own, not theirs). I'm from New Zealand and now based in London; I also did my undergrad and master's degrees in the UK (in Computer Science, Philosophy, and Machine Learning). Blog: thinkingcomplete.blogspot.com

richard_ngo's Comments

Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism

I'd be more excited about seeing some coverage of suffering-focused ethics in general, rather than negative utilitarianism (NU) specifically. I think NU is a fairly extreme position, but the idea that suffering is the dominant component of the expected utility of the future is both consistent with standard utilitarian positions and captures the key point that most EA NU thinkers are making.

What are some 1:1 meetings you'd like to arrange, and how can people find you?

Who are you?

I'm Richard. I'm a research engineer on the AI safety team at DeepMind.

What are some things people can talk to you about? (e.g. your areas of experience/expertise)

AI safety, particularly high-level questions about what the problems are and how we should address them. Also machine learning more generally, particularly deep reinforcement learning. Also careers in AI safety.

I've been thinking a lot about futurism in general lately. Longtermism assumes large-scale sci-fi futures, but I don't think there's been much serious investigation into what they might look like, so I'm keen to get better discussion going (this post was an early step in that direction).

What are things you'd like to talk to other people about? (e.g. things you want to learn)

I'm interested in learning about evolutionary biology, especially the evolution of morality. Also the neuroscience of motivation and goals.

I'd be interested in learning more about mainstream philosophical views on agency and desire. I'd also be very interested in collaborating with philosophers who want to do this type of work, directed at improving our understanding of AI safety.

How can people get in touch with you?

Here, or email: ngor [at] google.com

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

What would convince you that preventing s-risks is a bigger priority than preventing x-risks?

Suppose that humanity unified to pursue a common goal, and you faced a gamble where that goal would be the most morally valuable goal with probability p, and the most morally disvaluable goal with probability 1-p. Given your current beliefs about those goals, at what value of p would you prefer this gamble over extinction?
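To make the structure of the question explicit (a minimal formalisation, which is my own framing rather than anything from the book: assume extinction is assigned value 0, write V for the value of the best goal and W for the magnitude of disvalue of the worst goal):

$$ p \cdot V - (1 - p) \cdot W \ge 0 \quad\Longleftrightarrow\quad p \ge \frac{W}{V + W} $$

So the threshold value of p directly reflects how you weigh the upside of the best futures against the downside of the worst ones; a threshold well above 1/2 would indicate that you take the worst outcomes to be much worse than the best outcomes are good.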

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

We have a lot of philosophers and philosophically-minded people in EA, but only a tiny number of them are working on philosophical issues related to AI safety. Yet from my perspective as an AI safety researcher, it feels like there are some crucial questions which we need good philosophy to answer (many are listed here; I'm particularly thinking about philosophy of mind and agency as applied to AI, à la Dennett). How do you think this funnel from philosophy into AI safety could be improved?

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

If you could convince a dozen of the world's best philosophers (who aren't already doing EA-aligned research) to work on topics of your choice, which questions would you ask them to investigate?

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

If you could only convey one idea from your new book to people who are already heavily involved in longtermism, what would it be?

What are the key ongoing debates in EA?

Thanks for the list! As a follow-up, I'll try to list, for each entry, places online where such debates have occurred:

1. https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1

2. Toby Ord has estimates in The Precipice. I assume most discussion happens in the context of specific risks.

3. Lots of discussion on this; summary here: https://forum.effectivealtruism.org/posts/7uJcBNZhinomKtH9p/giving-now-vs-later-a-summary . Also more recently https://forum.effectivealtruism.org/posts/amdReARfSvgf5PpKK/phil-trammell-philanthropy-timing-and-the-hinge-of-history

4. Best discussion of this is probably here: https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like

5. Most stuff on https://longtermrisk.org/ addresses s-risks. In terms of pushback, Carl Shulman wrote http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html and Toby Ord wrote http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/ (although I don't find either compelling). Also a lot of Simon Knutsson's stuff, e.g. https://www.simonknutsson.com/thoughts-on-ords-why-im-not-a-negative-utilitarian

6a. https://forum.effectivealtruism.org/posts/LxmJJobC6DEneYSWB/effects-of-anti-aging-research-on-the-long-term-future , https://forum.effectivealtruism.org/posts/jYMdWskbrTWFXG6dH/a-general-framework-for-evaluating-aging-research-part-1

6b. https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals , https://forum.effectivealtruism.org/posts/ndvcrHfvay7sKjJGn/human-and-animal-interventions-the-long-term-view

6c. https://forum.effectivealtruism.org/posts/xh37hSqw287ufDbQ7/existential-risk-and-economic-growth-1

7. Nothing in particular comes to mind, although I assume there's stuff out there.

8. https://80000hours.org/2020/02/anonymous-answers-effective-altruism-community-and-growth/

9. E.g. here, which also links to more discussions: https://forum.effectivealtruism.org/posts/NLJpMEST6pJhyq99S/notes-could-climate-change-make-earth-uninhabitable-for

Harsanyi's simple “proof” of utilitarianism
Because we are indifferent between who has the 2 and who has the 0

Perhaps I'm missing something, but where does this claim come from? It doesn't seem to follow from the three starting assumptions.
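To spell out what I take the claim to be (a rough formalisation of my own, assuming two individuals and writing utility profiles as (u_1, u_2)): the claim is that society is indifferent between the profiles (2, 0) and (0, 2), i.e.

$$ (2, 0) \sim_{\text{social}} (0, 2) $$

That looks like an anonymity or impartiality premise in its own right, rather than something that is derivable from the three starting assumptions.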

Announcing the 2019-20 Donor Lottery
2018-19: a $100,000 lottery (no winners)

What happens to the money in this case?

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA
I think that they might have been better off if they'd instead spent their effort trying to become really good at ML, in the hope of being better skilled up to work on AI safety later.

I'm broadly sympathetic to this, but I also want to note that there are some research directions in mainstream ML which do seem significantly more valuable than average. For example, I'm pretty excited about people getting really good at interpretability, so that they have an intuitive understanding of what's actually going on inside our models (particularly RL agents), even if they have no specific plans about how to apply this to safety.
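To gesture at the kind of basic tooling this sort of work builds on (a minimal sketch of my own, assuming PyTorch and a toy policy network; this isn't DeepMind code or tied to any particular agent): registering forward hooks to capture intermediate activations, which is the usual starting point for activation-level interpretability.

```python
# Minimal sketch: capturing intermediate activations from a toy policy network
# via PyTorch forward hooks. Illustrative only; the network and dimensions are
# hypothetical, not taken from any real agent.
import torch
import torch.nn as nn


class ToyPolicy(nn.Module):
    """A small MLP policy: observation -> action logits."""

    def __init__(self, obs_dim: int = 8, hidden_dim: int = 32, n_actions: int = 4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.trunk(obs))


def capture_activations(model: nn.Module, obs: torch.Tensor) -> dict:
    """Run one forward pass and record the output of every submodule."""
    activations = {}
    hooks = []

    def make_hook(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    for name, module in model.named_modules():
        if name:  # skip the root module itself
            hooks.append(module.register_forward_hook(make_hook(name)))
    try:
        model(obs)
    finally:
        for h in hooks:
            h.remove()
    return activations


if __name__ == "__main__":
    policy = ToyPolicy()
    obs = torch.randn(1, 8)  # a single fake observation
    acts = capture_activations(policy, obs)
    for name, act in acts.items():
        print(f"{name:12s} shape={tuple(act.shape)} mean={act.mean().item():.3f}")
```

The interesting interpretability work is in what you do with the captured activations (probing, visualisation, intervention experiments), but this kind of instrumentation is the common first step.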
