I'd argue it's entirely true, given the way the term is actually used and understood.
Precisely. And supporting subsidized contraception is a long way away from both the formal definition of eugenics and its common understanding.
I feel that saying "subsidized contraception is not eugenics" is rhetorically better and more accurate than this approach.
Ah, you made the same point I did, but better :-)
>Most people endorse some form of 'eugenics'
No, they don't. It is akin to saying "most people endorse some form of 'communism'." We can point to a lot of overlap between theoretical communism and values that most people endorse; this doesn't mean that people endorse communism. That's because communism covers a lot more stuff, including a lot of historical examples and some related atrocities. Eugenics similarly covers a lot of historical examples, including some atrocities (not only in fascist countries), and this is what the term means to most people - and hence, in practice, what the term means.
Many people endorse screening embryos for genetic abnormalities. Those same people would respond angrily if you said they endorsed eugenics, the same way that people who endorse minimum wages would respond angrily if you said they endorsed communism. Eugenics is evil because, as the term is actually used, it describes something evil; trying to force it into some other technical meaning is incorrect.
Thanks, that makes sense.
I've been aware of those kinds of issues; what I'm hoping is that we can get a framework that incorporates these subtleties automatically (eg by having the AI learn them from observations or from published human papers) without our having to put it all in by hand ourselves.
Hey there! It is a risk, but the reward is great :-)
An AI that is aware that value is fragile will behave much more cautiously. This gives a different dynamic to the extrapolation process.