
Zachary Brown🔸

269 karma · Joined

Comments (31)

I basically agree with this critique of the results in the post, but want to add that I nonetheless think this is a very cool piece of research and I am excited to see more exploration along these lines!

One idea that I had -- maybe someone has done something like this? -- is to ask people to watch a film or read a novel and rate the life satisfaction of the characters in the story. For instance, they might be asked to answer a question like "How much does Jane Eyre feel satisfied by her life, on a scale of 1-10?". (Note that we aren't asking how much the respondent empathizes with Jane or would enjoy being her, simply how much satisfaction they believe Jane gets from Jane's life.) This might allow us to get a shared baseline for comparison. If people's assessments of Jane's life go up or down over time (or differ between people), it seems unlikely that this is a result of a violation of "prediction invariance", since Jane Eyre is an unchanging novel with fixed facts about how Jane feels. Instead, it seems like this would indicate a change in measurement: i.e. how people assign numerical scores to particular welfare states.

I think rescaling could make it steeper or flatter, depending on the particular rescaling. Nothing requires the rescaling to be a linear transformation of the original scale (as in your example). A rescaling that compresses life satisfaction scores that were initially 0-5 into the range 0-3, while leaving scores of 8-10 unaffected, will have a different effect on the slope than one that disproportionately compresses the top end of the scale.
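Here's a toy numeric sketch of that point (the numbers are my own illustration, not anything from the post): the same baseline relationship, pushed through two different monotone rescalings of a 0-10 response scale, comes out steeper under one rescaling and flatter under the other.

```python
import numpy as np

# Hypothetical predictor (e.g. log income) and latent satisfaction that
# tracks it one-to-one, so the baseline slope is exactly 1.
x = np.arange(3, 10, dtype=float)
y = x.copy()

def compress_bottom(s):
    # Map old scores 0-5 onto 0-3, leave 8-10 untouched, interpolate in between.
    return np.interp(s, [0, 5, 8, 10], [0, 3, 8, 10])

def compress_top(s):
    # Mirror image: leave 0-2 untouched, squeeze old 5-10 into 7-10.
    return np.interp(s, [0, 2, 5, 10], [0, 2, 7, 10])

print("baseline slope:", np.polyfit(x, y, 1)[0])                      # ~1.00
print("bottom-compressed:", np.polyfit(x, compress_bottom(y), 1)[0])  # steeper for this data (~1.29)
print("top-compressed:", np.polyfit(x, compress_top(y), 1)[0])        # flatter for this data (~0.90)
```

Which way the slope moves depends both on the shape of the rescaling and on where the observed scores sit on the scale.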

Sorry if I expressed this poorly -- it's quite late :)

To synthesize a few of the comments on this post -- this comment sounds like a general instance of the issue that @geoffrey points out in another comment: what @Charlie Harrison is describing as a violation of "prediction invariance" may just be a violation of "measurement invariance", in particular because happiness (the real thing, not the measure) may have a different relationship with GMEOH events over time.

Thanks for the write-up! As a donor to the fund, I find it really valuable to see these reports. I occasionally wonder whether I could obtain more cost-effective results by donating independently, without the overhead of a managed fund. These reports reassure me that I almost certainly could not. Really grateful to the team for finding these incredible funding opportunities.

This is a really great post!! I really appreciated the point about industry consolidation. I also appreciate how you describe advocacy for PLF as a "framing" loss, since it implicitly concedes that we will keep factory farming. This framing loss is an issue with a lot of welfarist interventions, and I don't think it means we need to rule such interventions out, but I do think it makes these sorts of interventions less attractive for public-facing campaigns. I think people sometimes underestimate the badness of framing loss, and this post makes the point really sharply; thanks.

I wrote a similar post arguing that animal advocates should oppose PLF, available here. A few ideas from that piece that I think are complementary to this one:

  1. I think there are some narrative reasons why the worst instances of PLF might make attractive campaign targets: the industry is still underdeveloped, automated farming is disturbing to the public, small farmers might be willing to support these campaigns (because of PLF's concentrating effects on the industry), and there are existing ties between animal advocates and AI firms (through EA). Some of these arguments are stronger than others, of course.
  2. I think PLF is likely to disproportionately increase the efficiency of farming small animals, because it allows farmers to deploy individual-level monitoring where it was previously infeasible (the labor cost of monitoring individual animals on, say, a chicken farm with tens of thousands of birds is too high). This is another reason why the total number of animals farmed is likely to increase as PLF adoption grows.

Another article that people might be interested in is this one, which proposes specific ethical restrictions/guidelines for PLF.

Thanks for the comment. I was clearly too quick with that opening statement. Perhaps in part I let my epistemic guard down there out of general frustration at the neglectedness of the topic, and a desire to attract some attention with a bold opener. So much harm could accrue to nonhuman animals relative to humans, and I really want more discussion on this. PLF is -- I've argued, anyway -- a highly visible threat to the welfare of zillions, but rarely mentioned. I hope you'll forgive an immodest but emotional claim.

I've edited the opener and the footnote to be more defensible, in response to this comment.

I actually don't believe, in the median scenario, that AIs are likely to both outnumber sentient animals and have a high likelihood of suffering, but I don't really want that to be the focus of this piece. And either way, I don't believe that with high certainty: in that respect, the statement was not reflective of my views.

Some of this discussion reminds me of Mill's argument in his super underrated essay "Utility of Religion". He proposes there a kind of yangy humanistic religion, against a backdrop of atheism and concern about the evils of nature. Worth a read.

Thanks for the comment!

I agree that the case for political tractability is mixed. I'm curious why you don't find compelling the argument that the particular people who have influence on AI policy are more amenable to animal-related concerns. (To put it bluntly: EAs care about animals and are influential in AI, and animal ag industry lobbying hasn't really touched this issue yet.)

I like the analogy to cage-free campaigns, although I think I would draw different lessons from it. I don't really think that support for cage-free campaigns comes from support for restrictions that help individual animals rather than restrictions on the total number of farmed animals. Instead, I think it comes from support for traditional and "natural" ways of farming (where the chickens are imagined to roam free) over industrialised, modern, and intensive farming methods. On this view, cage-free campaigns succeed because they target only the farming methods that the public disapproves of. This theory can also explain why people express disapproval of factory farming but strong approval of farming and farmers.

I think PLF is a politically tractable target for regulation because, like cage-free campaigns, it targets only the type of farming people already dislike. When I say "End AI-run factory farms!", the slogan makes inherently salient the technological, non-natural, industrial nature of the farming method. Restrictions here might not be perceived as restrictions on farming; they'll be perceived only as restrictions on a certain sinister form of unnatural, industrialised farming. (The general public mostly doesn't realise that most farming is industrialised.) To put this another way: I think the most politically tractable pro-animal movements are the ones that explicitly restrict their focus to Big Evil Factory Farms and leave Friendly Farmer Joe alone. I think PLF restrictions share this character with cage-free campaigns.

And we know from cage-free campaigns that people are sometimes willing to tolerate restrictions of this sort even if they are personally costly.

I basically fail to imagine a scenario where publishing the Trust Agreement is very costly to Anthropic—especially just sharing certain details (like sharing percentages rather than saying "a supermajority")—except that the details are weak and would make Anthropic look bad.

Anthropic might be worried that the details are strong, and would make Anthropic look vulnerable to governance chaos similar to what happened at OpenAI during the board turnover saga. A large public conversation on this could be bad for Anthropic's reputation among its investors, team, or other stakeholders, who have concerns other than long-term safety, or who might think that Anthropic's non-profit-motivated governance is opaque or bad for whatever other reason. To put this another way: Anthropic is probably reputation-managing, but it might not be their safety reputation that they are trying to manage. It might be their reputation -- to potential investors, say -- as a reliable actor with predictable decision-making that won't be upturned at the whims of the trust.

I would expect, though, that Anthropic's major investors know the details of the governance structure and mechanics.

 
