I'm Buck Shlegeris, I do research and outreach at MIRI, AMA
I tried to figure out whether MIRI’s directions for AI alignment were good, by reading a lot of stuff that had been written online; I did a pretty bad job of thinking about all this.

I'm curious about why you think you did a bad job at this. Could you roughly explain what you did and what you should have done instead?

Why You Should Visit Vancouver

If you can manage it, head to the Seattle Secular Solstice on Dec 10, 2016. Many of us from Vancouver are going.

MIRI Update and Fundraising Case

we prioritize research we think would be useful in less optimistic scenarios as well.

I don't think I've seen anything from MIRI on this before. Can you describe or point me to some of this research?

Ethical offsetting is antithetical to EA

Notice that the narrowest possible offset is avoiding an action. This perfectly undoes the harm one would have done by taking the action. Every time I stop myself from doing harm I can think of myself as buying an offset of the harm I would have done for the price it cost me to avoid it.

I think your arguments against offsetting apply to all actions. The conclusion would be never to avoid doing harm unless avoiding it is the cheapest way to help.

Lawyering to Give

End of History Illusion sounds like what you're looking for.