Yes, I kind of did see this coming (although not in the US). I've been working on a forum post about it for about a year, and now I will finish it.
Yeah, I wrote it in Google Docs and then couldn't figure out how to transfer the del and suffixes to the forum.
I think this is correct, and that EA thinks about neglectedness wrong. I've been meaning to formalise this for a while and will do that now.
If preference utilitarianism is correct, there may be no utility function that accurately describes the true value of things. This will be the case if people's preferences aren't continuous or aren't complete, for instance if they're expressed as a vector rather than a single number. This generalises to other forms of consequentialism that don't have a utility function baked in.
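As a concrete illustration of the continuity point (a standard textbook example, not from the original comment): lexicographic preferences on pairs of real numbers are complete and transitive, yet no utility function can represent them, precisely because they fail continuity. A sketch of the classic argument:

```latex
% Lexicographic preferences on $\mathbb{R}^2$:
%   $(x_1, x_2) \succ (y_1, y_2)$ iff $x_1 > y_1$,
%   or $x_1 = y_1$ and $x_2 > y_2$.
%
% Suppose $u : \mathbb{R}^2 \to \mathbb{R}$ represents $\succ$.
% For each $x_1 \in \mathbb{R}$, $(x_1, 1) \succ (x_1, 0)$, so
% $u(x_1, 1) > u(x_1, 0)$, giving a non-degenerate interval
%   $I(x_1) = \bigl(u(x_1, 0),\, u(x_1, 1)\bigr)$.
% If $x_1 > y_1$ then $(x_1, 0) \succ (y_1, 1)$, so the intervals
% $I(x_1)$ are pairwise disjoint. That yields uncountably many
% disjoint non-degenerate intervals in $\mathbb{R}$, each containing
% a distinct rational: a contradiction. Hence no utility function
% represents lexicographic preferences.
```

So even well-behaved (complete, transitive) vector-valued preferences can have no scalar utility representation, which is the gap the comment is pointing at.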
A 6-line argument for AGI risk
(1) Sufficient intelligence has capabilities that are ultimately limited only by physics and computability
(2) An AGI could be sufficiently intelligent that it is limited only by physics and computability, but humans can't be
(3) An AGI will come into existence
(4) If the AGI's goals aren't the same as humans', human goals will only be met for instrumental reasons, and the AGI's goals will be met
(5) Meeting human goals won't be instrumentally useful in the long run for an unaligned AGI
(6) It is more morally valuable for human goals to be met than an AGI's goals
Thank you, those both look like exactly what I'm looking for
But thank you for replying; in hindsight my reply seems a bit dismissive :)
Not really, because that paper is essentially just making the consequentialist claim that axiological longtermism implies that the actions we should take are those which help the long-run future the most. The Good is still prior to the Right.
Hi Alex, the link isn't working