Incidentally, I also appreciate comments like the first quote - not only have you given a summary, you've also given an indication of how much of the value of the post is contained in the summary 🙏
If you’ve read the summary, I’m not sure how much benefit you’ll get from the rest of the post. Consider not reading it.
Okay. Still upvoting though for this general thing:
...things I’ve changed my mind on since my last post.
I had to re-read too, but I read it as "Slavery was not primarily abolished for economic reasons."
I feel like the reductio ad absurdum of your argument is then: "Never encourage (maybe even discourage) anything that helps someone, unless that thing is moral reasoning."
"Why can't attitude change / moral progress still happen later?" E.g. when we're advocating for concern for wild animal suffering?
I know that authors sometimes forget to check comments on their posts, so in case you haven't received an answer and you're still looking for one, you might have more luck using Jessica's email which is listed here.
I dunno, I still think my summary works. (To be clear, I wasn't trying to say, "You must be exaggerating, tsk tsk" - I think you're being honest, and for me it's the most important part of your post, so I wanted to draw attention to it.)
Tl;dr: As far as you know, you're the only person in the world directly working on how to build AI that's capable of making moral progress, i.e. thinking critically about goals the way humans do.
(I find this pretty surprising and worrying so wanted to highlight.)
Alright Henry, don't get carried away. The Very Hungry Caterpillar was the best thing to happen to What We Owe The Future.
Currently at #52 on Amazon's Best Sellers list!
I imagine it's particularly good to get it to #50 so that it appears on the first page of results?