DirectedEvolution

3316 karma · Joined May 2019

Comments (348)

Honey-baked ham is 5g fat and 3g sugar per 3oz serving. At ~28g per oz, a 3oz serving is ~85g, so that's about 6% fat and 4% sugar by weight. Typical ice cream runs roughly 21% sugar and 11% fat, so ice cream is about 5x sugarier and ~2x fattier than honey-baked ham. In other words, for sugar and fat content, honey-drenched fat > ice cream > honey-baked ham. Honey-baked ham is therefore not a modern American equivalent to honey-drenched gazelle fat, a sentence I never thought I'd write, but I'm glad I had the chance to once in my life.
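For anyone who wants to check my arithmetic, here's a quick sketch. The ham figures come from the label quoted above; the ice cream figures (~21% sugar, ~11% fat for typical vanilla) are my assumption, and with them the exact ratios land near 6x and 2x, the same ballpark as the rounded figures above.

```python
# Quick check of the ham-vs-ice-cream arithmetic. Ham figures are from the
# label quoted in the comment; the ice cream figures (~21 g sugar and ~11 g
# fat per 100 g of typical vanilla) are my assumption.
OZ_TO_G = 28.35

serving_g = 3 * OZ_TO_G               # 3 oz ≈ 85 g
ham_fat_pct = 5 / serving_g * 100     # ≈ 5.9%, rounds to ~6%
ham_sugar_pct = 3 / serving_g * 100   # ≈ 3.5%, rounds to ~4%

ice_cream_fat_pct, ice_cream_sugar_pct = 11.0, 21.0  # assumed typical values

print(f"ham: {ham_fat_pct:.1f}% fat, {ham_sugar_pct:.1f}% sugar")
print(f"ice cream vs ham: {ice_cream_sugar_pct / ham_sugar_pct:.1f}x the sugar, "
      f"{ice_cream_fat_pct / ham_fat_pct:.1f}x the fat")
```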

Thank you for contributing more information.

I understand and appreciate the thinking behind each step in Ren's argument. However, the ultimate result is this:

  • I experienced "disabling"-level pain for a couple of hours, by choice and with the freedom to stop whenever I wanted. This was a horrible experience that made everything else seem to not matter at all...
  • A single laying hen experiences hundreds of hours of this level of pain during its lifespan, which lasts perhaps a year and a half - and there are as many laying hens alive at any one time as there are humans. How would I feel if every single human were experiencing hundreds of hours of disabling pain?

My main takeaway is that the breadth and variety of experience that arguably falls under the umbrella of "disabling pain" is enormous, and that we can have only low-to-moderate confidence in animal welfare pain metrics. As a result, I am updating toward increased skepticism of high-level summaries of animal welfare research.

The impact of nest deprivation on laying hen welfare may still be among the most pressing animal welfare issues. But if tractability were held constant, I might prefer to focus on alleviating physical pain among a smaller number of birds.

Also, to disagree-voters, I'm genuinely curious about why you disagree! Were you already appropriately skeptical before? Do you think I am being too skeptical? Why or why not?

I had more trouble understanding how nest deprivation could be equivalent to "**** me, make it stop. Like someone slicing into my leg with a hot, sharp live wire." So I looked up the underpinnings of this metric in Ch. 6 of the book they build their analysis on (pp. 6-9 are the key material).

They base this on the fact that chickens pace, preen, and show aggressive competition for nests when availability is limited, and will work as hard to push open heavy doors to access nests as they will to access food after 4-28 hours of food deprivation. On this basis, the authors categorize nest deprivation as a disabling experience that each hen endures for an average of about 45 minutes per day.
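To connect this to the "hundreds of hours" figure I mentioned earlier, here's a back-of-the-envelope calculation (my own arithmetic, not from the book), using 45 minutes per day over the roughly 1.5-year lifespan:

```python
# Back-of-the-envelope total (my arithmetic, not from the book): 45 minutes
# of nest deprivation per day over a ~1.5-year laying lifespan.
minutes_per_day = 45
lifespan_days = 1.5 * 365                    # ≈ 548 days

total_hours = minutes_per_day * lifespan_days / 60
print(f"≈ {total_hours:.0f} hours per hen")  # ≈ 411 hours: "hundreds of hours"
```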

This is a technically accurate definition, but I still had trouble intuiting nest deprivation as a daily experience of disabling physical pain on par with having your leg sliced open with a hot, sharp live wire.

Researchers are limited to showing that chickens exhibit distress during nest deprivation, or, in more sophisticated studies, that they work as hard to access nest boxes as they do to access food after 4-28 hours of food deprivation.

I am suspicious of the claim that these methods are adequate to let us compare physical and emotional pain across species. This is especially true of the willingness-to-work metric they use to compare the severity of nest deprivation with that of starvation in chickens.

  • Willingness-to-work is probably mediated by energy. After starvation, chickens will be low-energy, and willingness-to-work probably underestimates their suffering. A starving person might be willing to do 100 pushups to access an all-you-can-eat buffet, but be physically unable to do so. If he's also willing to do 100 pushups to join the football team, does that mean that keeping him off the team is as bad as starving him?
  • People show distressed behaviors in the absence of suffering. I bite my fingernails pretty severely. Sometimes they even bleed. It's not motivated by severe anxiety in those moments; it's just force of habit. Chickens may be hardwired by evolution to work hard to access nests, without necessarily suffering while they do so.
  • Our perceptions of how distressed a behavior is are culturally specific, not to mention species-specific. I pace and walk around the neighborhood when I'm thinking hard. People get piercings and tattoos. People fight recreationally. We don't assume that people are experiencing high emotional distress in the moments they choose to do these things. Why do we assume that about chickens?

I've spent too long writing this comment, so I'm going to just stop here.

I've used ChatGPT for writing landing pages for my own websites, and as you say, it does a "good enough" job. It's the linguistic equivalent of a house decorated in knick-knacks from Target. For whatever reason, we have had a cultural expectation that websites have to have this material in order to look respectable, but it's not business-critical beyond that.

By contrast, software remains business-critical. One of the key points that's being made again and again is that many business applications require extremely high levels of reliability. Traditional software and hardware engineering can accomplish that. For now, at least, large language models cannot, unless they are imitating existing high-reliability software solutions. 

A large language model can provide me with reliable working code for an existing sorting algorithm, but when applications become large, dynamic, and integrated with the real world, it won't be possible to build a whole application off a short, simple prompt. Instead, the work is going to be about using both human- and AI-generated code to put these applications together more efficiently, debug them, improve the features, and so on.
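To make the sorting-algorithm point concrete, here's the kind of small, self-contained, well-specified routine an LLM can reliably reproduce from a one-line prompt (a textbook merge sort; my example, not any particular model's output). A large application, whose correctness depends on shifting requirements and many interacting systems, has no comparably short specification.

```python
# A textbook merge sort: small, self-contained, and endlessly documented
# online, which is exactly why an LLM can reproduce it reliably.
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```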

This is one reason why I think that LLMs are unlikely to replace software engineers, even though they are replacing copy editors, and even though they can write code: SWEs create business-critical high-reliability products, while copy editors create non-critical low-reliability products, which LLMs are eminently suitable for.

IMO, the main potential power of a boycott is symbolic, and I think you only achieve that by eschewing LLMs entirely. Instead, we can use them to communicate, plan, and produce examples. As I see it, this needs to be a story about engaged and thoughtful users advocating for real responsibility with potentially dangerous tech, not panicky Luddites mounting a weak-looking protest.

Seems to me that we’ll only see a change in course from relentless profit-seeking LLM development if intermediate AIs start misbehaving - smart enough to seek power and fight against control, but dumb enough to be caught and switched off.

I think instead of a boycott, this is a time to practice empathic communication with the public now that the tech is on everybody’s radar and AI x-risk arguments are getting a respectability boost from folks like Ezra Klein.

A poster on LessWrong recently harvested a comment from a NY Times reader that talked about x-risk in a way that clearly resonated with the readership. Figuring out how to scale that up seems like a good task for an LLM. In this theory of change, we need to double down on our communication skills to steer the conversation in appropriate ways. And we’ll need LLMs to help us do that. A boycott takes us out of the conversation, so I don’t think that’s the right play.
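As one rough illustration of what "scaling that up" could look like, here's a hypothetical sketch using the OpenAI Python client. The model choice, the prompts, and the seed_comment placeholder are all my assumptions, not anything from the original post:

```python
# Hypothetical sketch: asking an LLM to draft variations of a comment that
# already resonated with a general audience. Model and prompts are assumed.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

seed_comment = "<the NY Times reader comment that resonated goes here>"

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "You draft empathic, non-alarmist comments about AI "
                    "x-risk for a general newspaper audience."},
        {"role": "user",
         "content": f"Here is a comment that resonated with readers:\n\n"
                    f"{seed_comment}\n\n"
                    "Draft three variations in the same register."},
    ],
)
print(response.choices[0].message.content)
```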

This might be an especially good time to enter the field. Instead of having to compete with more experienced SWEs in writing code the old-fashioned way, you can be on a nearly level playing field when it comes to incorporating LLMs into your workflow. You’ll still need to learn a traditional language, at least for now, but you will be able to learn more quickly with the assistance of an LLM tutor. As the field increasingly adapts to a whole new way to write code, you can learn along with everybody else.

Have you already done some searching for articles on the subject? There’s a ton of content out there. What have you tried so far? What are you struggling with?

Interesting! Do you think that is a common view? And do you think that federal healthcare policy should be made by somehow tapping into commonsense moral intuitions? Or should a winning, even if unpopular, argument determine policy options?

Edit: perhaps we can value QALYs on the principle that we’re unlikely to be able to accurately track all contributors to total ETHU in practice, but having people maintain physical health is probably an important contributor to it. Physical health has positive externalities that go beyond subjective well-being, and therefore we should value it in setting healthcare policy.

This is getting into philosophical territory, so here’s a thought experiment. Let’s say you’d lost your legs. You had to choose between a $10 pill that instantly regrew your legs and restored your subjective well-being, and a $0 pill that only corrected any loss in subjective well-being from having lost your legs. Do you really choose the well-being-only pill in this case?
