I thought this was an interesting case: the Dutch anti-trust regulator decided that an attempt by Dutch chicken companies to raise animal welfare standards was illegal, because it raised prices for consumers. The benefit to the chickens was apparently quite small, but as far as I can see they would have reached the same decision regardless of the size of the welfare improvement:
Sustainably produced poultry meat has enjoyed vastly increased sales in the Netherlands during the last five years due to schemes introduced by the Dutch Society for the Protection of Animals. The Chicken of Tomorrow is a recent initiative by a group of poultry producers and processors providing for higher animal welfare standards including additional space for the chickens and additional poultry litter as well as a longer lifetime of one to two days.
...
Market participants at varying levels of the poultry meat supply chain – including supermarkets, farmers, and meat processors – entered into agreements providing that only chicken meat that achieves certain animal-welfare requirements, such as those put in place as a part of the Chicken of Tomorrow initiative, should be available to consumers. These agreements led to Dutch supermarkets de-listing regular chicken meat and removing it from their shelves.
...
Following its review of the agreements and the requirements of Chicken of Tomorrow, the ACM found that the improvements offered by Chicken of Tomorrow were only limited: the birds only benefited from slightly more space and generally only lived a couple of days longer than conventional chickens. The ACM’s consumer research also found that these improvements came at a cost higher than consumers were generally willing to pay.
Therefore, the ACM concluded that agreements to remove regular chicken meat from supermarket shelves went too far and did not satisfy the efficiency arguments necessary for exemption.
These cases are also relevant to alignment agreements between AI labs, and it's instructive to see the same principles playing out in practice. Cullen wrote about this here much better than I will.
Roughly speaking, if individual consumers would prefer to use a riskier AI (because the costs are externalized), then it seems like an agreement to make AI safer-but-more-expensive would run afoul of the same principles as this chicken-welfare agreement.
On paper, there are some reasons the AI alignment case should be easier than the chicken-welfare case: (i) using unsafe AI hurts non-customer humans, and AI customers care more about other humans than they do about chickens; (ii) deploying unaligned AI likely hurts other AI customers in particular (since they will be the main ones competing with the unaligned but more sophisticated AI). So it's plausible that every individual AI customer would benefit from such an agreement.
Unfortunately, it seems like the same thing could be true in the chicken case---every individual customer could prefer the world with the welfare agreement---and it wouldn't change the regulator's decision.
For example, suppose that Dutch consumers eat 100 million chickens a year, 10 per year for each of 10 million customers. Customer surveys find that customers would only be willing to pay $0.01 for a chicken to have more space and a slightly longer life, but that these reforms increase chicken prices by $1. Since the $0.01 willingness to pay falls far short of the $1 cost, the regulator strikes down the reform.
But with welfare standards in place, each customer pays an extra $10/year for chicken and 100 million chickens have improved lives. From any individual customer's perspective, that works out to $0.0000001 per improved chicken, 100,000 times lower than their WTP. (This is the same dynamic described here.) So every chicken consumer prefers the world where the standards are in place, despite not being willing to pay money to improve the lives of the tiny number of chickens they eat personally. This seems to be a very common reaction to discussions of animal welfare ("what difference does my consumption make? I can't change the way most chickens are treated...")
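To make the arithmetic explicit, here's a minimal sketch of both comparisons in Python, using the made-up numbers above (nothing here comes from the actual ACM case):

```python
# A toy model of the two comparisons above, using the made-up numbers
# from this example (not figures from the actual ACM case).

customers = 10_000_000             # Dutch chicken consumers
chickens_per_customer = 10         # chickens eaten per customer per year
total_chickens = customers * chickens_per_customer  # 100 million per year

wtp_per_chicken = 0.01             # surveyed willingness to pay per improved chicken
price_increase_per_chicken = 1.00  # actual price increase from the reform

# The regulator's test: is per-chicken WTP at least the per-chicken cost?
print(wtp_per_chicken >= price_increase_per_chicken)  # False -> reform struck down

# Each customer's comparison of the two worlds: pay $10/year more,
# and all 100 million chickens get better lives.
extra_cost_per_customer = chickens_per_customer * price_increase_per_chicken  # $10/year
cost_per_chicken_improved = extra_cost_per_customer / total_chickens          # $0.0000001
print(cost_per_chicken_improved)                    # 1e-07
print(wtp_per_chicken / cost_per_chicken_improved)  # ~100000 -> WTP is ~100,000x the cost
```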
Because the number of chicken-eaters is so large, the relevant survey question is closer to "Would you prefer that someone else pay $X in order to improve chicken welfare?", a tradeoff between two strangers, since the welfare standards mostly affect other people.
Analogously, if you ask AI consumers "Would you prefer to have an aligned AI, or a slightly more sophisticated unaligned AI?" they could easily all say "I want the more sophisticated one," even if every single human would be better off under an agreement to make only aligned AI. If an anti-trust regulator used the same standard as in this case, it seems like they would throw out an alignment agreement on that basis, even knowing that doing so would make every single human worse off.
I still think that in practice AI alignment agreements would be fine, for a variety of reasons. For example, I think if you ran a customer survey, it's likely people would say they prefer to use aligned AI even if it would disadvantage them personally, because public sentiment towards AI is very different from sentiment towards chicken welfare, and the regulatory impulse is stronger. (Though I find it hard to believe that anything would end up hinging on such a survey, and even more strongly I think it would never come to this, because there would be much less political pressure to enforce anti-trust.)
Sorry if this is a lame question, but do you think that ESG regulations and standards that explicitly mention animal welfare - something more like soft law, or "comply or explain", e.g., "companies must disclose animal welfare policies", or "social and environmental risks include losses due to... animal cruelty" - could be enough to start a change in how US antitrust law is interpreted regarding the blacklisting of products out of animal welfare concerns?