All posts


Friday, 24 June 2022



Quick takes

quinn · 2y
We need an in-depth post on moral circle expansion (MCE), minoritarianism, and winning. I expect EA's MCE projects to be less popular than the anti-abortion movement in the US (37% say abortion ought to be illegal in all or most cases, while veganism, for one example, sits at 6%). The specifics of how the anti-abortion movement operated may be too deep in the weeds of contingent and peculiar pseudodemocracy (winning elections with less than half of the votes, securing judges, and so on), but it seems like we don't want to miss out on studying it. There may be insights.

While many EAs would (I think rightly) consider the anti-abortion people colleagues as MCE activists, some EAs may also (I think debatably) admire Republicans for their ruthless, shrewd, occasionally thuggish commitment to winning. Regarding the latter, I would hope to hear a case for principles over policy preference: keeping our hands clean, refusing to compromise our integrity, and so on. I'm about 50:50 on where I'd expect to fall personally on the playing-fair-and-nice stuff. I guess it's a question of how much Republicans expect to suffer from the externalities of thuggishness, if we want to use them to reason about the price we're willing to put on our integrity.

Moreover, I think this "colleagues as MCE activists" framing is under-discussed. When you steelman the anti-abortion movement, you assume that they understand multiplication as well as we do, and that they are making a difficult and unhappy tradeoff about the QALYs lost to abortions needed when pregnancies go wrong, or to unsafe black-market abortions, or what have you. I may oppose the anti-abortion people on multiplicationist/consequentialist grounds (I also just don't think outlawing disvaluable things is a reasonable lever for reducing their incidence), but things get interesting when I model them as understanding the tradeoffs they're making.
(To be clear, this isn't a case of "EA writer, culturally coded as a Democrat for whatever college/LGBT/atheist reasons, uses a derogatory word like 'thuggish' to describe the outgroup." I'm alluding to empirical claims about how the structure of the government interacts with population density to create minority rule, and making a moral judgment about the norm-dissolving they fell back on when Obama appointed a judge.)
Marketing AI reform: You might be able to have a big impact on AI reform by changing the framing. Right now, framing it as "AI alignment" sells the idea that there will be computers with agency, or something like free will, or that they will choose acts like a human. It could instead be marketed as preventing "automated weapons" or "computational genocide." By emphasizing that a large part of the reason we work on this problem is that humans could use computers to systematically cleanse populations, we could win people to our side.

Proposal: change the framing from "Computers might choose to kill us" to "Humans will use computers to kill us," regardless of whether either potential outcome is more likely than the other. You could probably get more funding, more serious attention, and better reception just by marketing the idea in a better way. Who knows, maybe some previously unsympathetic billionaire or government would be willing to commit hundreds of millions to this area just because of a change in the way we talk about it.