huw

Co-Founder & CTO @ Kaya Guides
2258 karma · Working (6–15 years) · Sydney NSW, Australia
huw.cool

Bio

I live for a high disagree-to-upvote ratio

Comments (316)

(Yep, I’m not having a go at the mission here, more at the nuances of measurement)

Small drive-by question for you: In your opinion, if C. elegans is conscious and has some moral significance, and we could hypothetically train an artificial neural network to simulate a C. elegans, would the resulting simulation have moral significance?

If so, what other consequences flow from this—do image recognition networks running on my phone have moral significance? Do LLMs? Are we already torturing billions of digital minds?

If not, what special sauce does C. elegans have that an artificial neural network does not? (If you’re not sure, where do you think it might lie?)

(Asking out of genuine curiosity—haven’t had a lot of time to interface with this stuff)

I guess I don’t find your conclusion intuitive. I’m sure there are a range of preference questions you could ask these extreme sufferers. For example, whether they, at a 5/10 life satisfaction, would trade places with someone in a low-income country with a life satisfaction of 2/10 who does not have their condition.

  • If you believe that they would make this trade, then surely there is something that their life satisfaction score is simply failing to capture
  • If you believe that they wouldn’t make this trade, then either that preference game isn’t eliciting some true value of suffering, or otherwise, why should we allocate hypothetical marginal dollars to their suffering and not that of those with lower life satisfaction?

My hunch is that the former is true, that there is something you can elicit from these people that isn’t being captured in the Cantril Ladder. (In my work, we’ve found the Cantril Ladder to be unreliable in other ways). But on the other side of this, I do worry about rejecting people’s own accounts of their experiences—it may literally be true that these people are somewhat happy with their lives, and that we should focus our resources on those who report that they aren’t!

I take this as an indicator that we need to work harder to demonstrate that global mental health is a cause area worth investing in :)

Do you think it’s valuable to specifically measure negative affect instead of overall affect, in this case? Or would overall affect suffice?

Anthropic are now offering Claude for up to 75% off for Goodstack-eligible non-profits :)

Why do you think people who suffer so frequently and deeply rate their life satisfaction relatively highly?

(My best sense is some combination of:

  1. The Cantril Ladder is a remembered rather than an experiential measure; if we instead captured the area under the curve of their hedonic states, we would see a much lower value (I sketch this in code at the end of this comment)
  2. The Cantril Ladder asks users to anchor on ‘the best life for you’, and users may not see a life for themselves without their suffering
  3. Suicide removes from the sample those who are most likely to view their lives as not worth living, thereby selecting for optimists
  4. The Cantril Ladder is on a scale of 0–10, which users perceive as linear, but extreme suffering is exponentially worse than mild suffering
  5. It’s not clear where the ‘life not worth living’ point actually is, and it may genuinely be around 4 points, so these people actually are reporting living awful lives)

Like, I can’t see a reason why wellbeing measures shouldn’t, in theory, capture these extremely negative states.
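
To make points 1 and 4 concrete, here’s a toy Python sketch. Everything in it is an illustrative assumption rather than an empirical estimate: the affect scale, the 10% agony rate, the use of the median as a duration-neglecting ‘remembered’ proxy, and the exponential transform (exp_suffering) are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hedonic trajectory: hourly affect samples over one week,
# on a -5 (agony) to +5 (bliss) scale. Baseline mood is mildly positive,
# punctuated by frequent episodes of severe suffering.
hours = 24 * 7
affect = rng.normal(1.0, 0.5, hours)            # baseline mood
episodes = rng.random(hours) < 0.10             # ~10% of hours in agony
affect[episodes] = rng.uniform(-5.0, -4.0, episodes.sum())

# Point 1: remembered judgements tend to neglect duration, so take the
# median ("how do things usually feel?") as a crude proxy for them,
# versus the mean, which integrates the whole area under the curve.
remembered_proxy = np.median(affect)
experienced_auc = affect.mean()

# Point 4: if extreme suffering is exponentially worse than mild
# suffering, transform negative affect before averaging. The base is a
# free parameter chosen for illustration, not an empirical estimate.
def exp_suffering(x, base=2.0):
    return np.where(x < 0, -(base ** (-x) - 1.0), x)

experienced_exp = exp_suffering(affect).mean()

print(f"remembered proxy (median):        {remembered_proxy:+.2f}")
print(f"experienced (linear AUC):         {experienced_auc:+.2f}")
print(f"experienced (exponential weight): {experienced_exp:+.2f}")
```

On this toy trajectory, the median-based proxy sits near the mildly positive baseline, while the linear AUC is dragged down and the exponentially weighted AUC goes negative. That’s the shape of the gap I’m gesturing at between a reported Cantril score and an experiential measure.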

I was born in Sydney, but that’s, like, a minor part of the reason I’ve decided to stay here for the time being. However, there aren’t a lot of working EAs here, especially not in global health.

I make up for this by travelling long-term for big parts of the year. I spend a fair chunk of time in-country, or doing long stints in London during their summer (which is quite nice). You could pick a top EA hub to live in, spend just the summer there, and travel the rest of the time; or live somewhere nice and travel to an EA hub for a few months.

Alternatively, you could move to a nice city near several EA hubs with easy transport options. I’ve considered Barcelona for this purpose, since it’s a day’s train from London and two days from Berlin, but it has great weather year-round. I know a few European digital nomads who base themselves around the Côte d'Azur for this reason (independently of EA).
