My understanding is that he basically thinks norms associated with social conservatives, Mormons in particular -- he lists "savings, mutual assistance, family values and no drug and alcohol abuse" in this NYT piece -- just make people better off. He's especially big on the teetotaling thing; he thinks alcohol abuse is a major social problem we don't do enough to address. I don't exactly know whether he thinks it's more important for EAs to adopt conservative norms to improve their own welfare/productivity, or whether EAs need to see the value of conservative norms for other people generally and start promoting them.
I don't think he's thinking of it as giving EA more mass appeal.
Fair point that the concern is only about hypothetical people who would genuinely try to optimize for the weird consequences of hedonic utilitarianism -- I guess it's an open question how much any actual person is like them.
That distinction is helpful. It sounds like you might hold the view I mentioned was possible but seemed implausible to me, namely that utility has to be instantiated in something approximately like a life for it to make the world a better place. Maybe one advantage of an objective list view over a hedonist view here is that it seems more plausible for the objective list theorist to maintain that the stuff on the list has to be instantiated in a certain kind of entity before it actually matters (i.e., contributes to something's well-being) than it would be for the hedonist to maintain that. E.g., you can pretty comfortably say knowledge is valuable in humans but not in large language models, whereas it seems a little harder to say pleasure is valuable in humans but not in momentary instantiations of a human or whatever. Obviously the questions "what does a thing's well-being consist in?" and "what does a thing have to be like for it to have well-being at all?" are in principle distinct, but if you think well-being consists entirely in pleasure, it seems harder to add "and also the pleasure needs to be in a special kind of thing."
I'll have to check out the paper!
AP classes are sometimes graded on a 5.0 scale instead of a 4.0 scale. A lot of high schools do this, but not all of them. Basically it's a way of rewarding students for taking harder classes.
There's a small philosophical literature on the ethics of sweatshops. Some libertarian-type people defend basically the view you describe: if people voluntarily work at these places, the work must represent their best option, so sweatshops can't be morally wrong. This paper is representative of that view: https://philpapers.org/rec/ZWOSCA-2
Then there are people who try to argue that sweatshop labor really is wrong nonetheless. One idea is basically Kantian -- sweatshop owners treat their employees as a mere means to an end, which is wrong. Another idea is that sweatshop workers are wrongfully exploited. For instance, lots of people have the intuition that in a voluntary exchange, one party shouldn't use their bargaining power to reap all the gains from the trade. I.e. they think there is a "fair price" where buyer and seller both get an equitable share of the benefits.
Just for illustration, imagine I have an old Camaro that I'm willing to sell for $5,000, and you're willing to pay up to $100,000 for it. If I know you desperately need it and I negotiate you up to $99,999, then I'm capturing almost all of the benefits from the trade (you end up one dollar away from being indifferent about the exchange). Lots of people think that's wrong. This might be the intuition behind thinking price gouging is wrong, too.
So you can argue that sweatshops are basically similar (although that's an interesting empirical question). Maybe sweatshop laborers are just barely better off than their next best option, while the owners are profiting a huge amount from their labor. People might think that's exploitative, and that for things to be equitable both parties need to get a reasonable cut of the benefits from the exchange. This paper has some more discussion you might find interesting: https://philpapers.org/rec/MAYSEA
Really interesting post. Your responses to the stock arguments seem pretty compelling. Looking forward to seeing your positive proposal.
I wonder what you think of the idea that consistency is an externally imposed constraint on preferences, akin to every other moral requirement. So from a totally self-involved standpoint there might be no way to reason yourself into really striving for consistency, in the same way that someone who didn't already care about suffering might have no way to reason themselves into caring about it. Instead it's just a constraint we impose on people so they can function in a society. If you don't care about suffering, we do various things to punish you until you do -- and specifically we keep doing them until you actually care about suffering for its own sake, not just pretend to when that stops you from being punished. Similarly, we enforce consistency as a constraint on preferences/moral views because of the broader societal benefits of existing in a group of people who are like that (e.g., we can actually reason together).
Related to this, I feel like the antirealist might reply to the point about not having a strong enough brute preference for consistency to outweigh real stakes in pleasure and suffering in a basically deontological way. Well-socialized people don't just have preferences of some particular strength for outcomes of various moral value; they internalize rules for action that simply rule out certain behaviors even when those behaviors lead to better states (though maybe this just means that aspect of socialization is bad). In the same way, it's not that I have a preference of whatever strength for worlds where I'm consistent, which I then weigh against the value of whatever I'd have to trade it off against. I just internalize an epistemic rule governing my moral attitudes, and I abide by it.
None of this is actually an argument that people who don't currently care about consistency should start doing so. But I think you could spin a vindicatory story, from the point of view of someone who has internalized consistency as a norm, of the value of doing so. And that story would look a lot like the story you'd tell about the value of internalizing norms against lying, etc., even when doing so might maximize utility.