Yeah, I think I agree that going really hard to increase fertility would likely require bad authoritarianism, even beyond the authoritarianism arguably inherent in trying to do this. (Or at least, I weakly guess that there is a >50% chance of this.) I was probably mostly being pedantic.
'Increased government power is closely linked with authoritarianism.' Is this really true? Authoritarianism seems concentrated in poor countries with low state capacity to me, with some exceptions like China (not that rich yet, but definitely high state capacity).
Pretty astonishing that Lewis answered "put that way, no" to "do you think he knowingly stole customer money". Feels to me like evidence of the corrupting effect of getting special insider access to a super-rich and powerful person.
Remember that these values are conditional on hedonism. If you think that humans can experience greater amounts of value per unit of time because of some non-hedonic goods that humans have access to and non-human animals don't, then you can accept everything the RP Moral Weight report says, but still think that human lives are far more valuable relative to non-human animal lives than just plugging in the numbers from the report would suggest. Personally I find the numbers low relative to my intuitions, but not necessarily counterintuitive conditional on hedonism. Most of the reasons I can think of to value humans more than animals don't have anything to do with the (to me, somewhat weird) idea that humans have more intense pleasures and pains than other conscious animals. Hedonism is very much a minority view amongst philosophers, and "objective list" theories that are at least arguably friendly to human specialness (or maybe to human and mammal specialness) are by far the most popular: https://survey2020.philpeople.org/survey/results/5206
In terms of digital sentience, I do think it is somewhat reassuring that we harm animals out of indifference, not malice. There's no particular general reason I can see to think that when we pursue our interests with indifference rather than hostility to the well-being of other sentients, the side-effects of this for the other sentients will systematically and non-coincidentally harm rather than help them. Like, it could be that what suits our interests just happens to be building a bunch of digital minds that have no feelings, or that mostly enjoy the tasks we give them, just as much as it could turn out that the best way to exploit them involves net suffering for the minds. Factory farming is basically a single case, so I don't think it should move us much towards "actually, exploiting other sentients for our own gain will have systematically negative, rather than basically random, effects on their welfare". After all, it's not even completely clear that pre-factory-farming animal agriculture was net negative for the animals involved.
I suspect the ideologies of Politico and most EAs are not that different (i.e. technocratic liberal centrism).
That surprises me. Maybe I was just flat-out wrong!
In terms of 2) and 3), a restraining order being granted is decent evidence that someone didn't just mistakenly feel harassed.
I don't think you have to agree on deep philosophical stuff to collaborate on specific projects. I do think it'll be hard to collaborate if one/both sides are frequently publicly claiming the other is malign and sinister, or idiotic and incompetent, or incredibly ideologically rigid and driven by emotion not reason (etc.)
People should say that things are right when they agree with them, even if there's no strategic purpose in doing so. I doubt being sympathetic to left economic stuff on AI will do much to help persuade people whose complaint is that EAs are racist/sexist/authoritarian/naive utilitarians. Though it would certainly help with people who are just (totally reasonably! I am worried about this!) concerned about EAs' ties to the industry.