
Tristan Katz

Ethicist @ University of Fribourg, Switzerland

Bio


I recently completed a PhD exploring the implications of wild animal suffering for environmental management. You can read my research here: https://scholar.google.ch/citations?user=9gSjtY4AAAAJ&hl=en&oi=ao

I am now considering options in AI ethics, governance, or the intersection of AI and animal welfare.

Comments

20% agree

Most animals are wild animals, so the answer to this question should focus on them. It seems to me that the answer largely depends on how we understand "goes well for humans", and what we expect the counterfactual to be.

So what are the possible scenarios?

  1. AGI empowers humans to make their own decisions, and to make better decisions. I expect this would greatly accelerate progress toward helping wild animals. This would be great.
  2. AGI replaces human decision-making. It then either:
    1. Reasons further from a starting point of human values, removing biases and inconsistencies - which I think would lead it to care more about animals.
    2. Or it could just lock in current human values. 

And what's the counterfactual?

  1. A continuation of the world as it is today: one where humanity gradually cares more and more about animal welfare, and in which there is at least a potential for caring about wild animals to be normalized. In this case, scenarios 1 and 2(a) seem good, but 2(b) seems very bad.
  2. A world in which the WAW movement fails. In this case even 2(b) doesn't look that bad, but 1 and 2(a) seem very good.

I'm not sure if this is complete. I'm also not sure how to assign probabilities - I don't think I know enough about AGI. But tentatively, I expect scenario 2 to be most likely, with (a) and (b) roughly equal, and counterfactual 1 to be most likely. For that reason I'm going with 20% likely to be good.

But I want to say that I would not take a bet with a 20% chance of winning everything versus losing everything, and this feels very close to that. I think this is a terrible gamble and we shouldn't take it. I hope that the debate results won't be understood as EAs saying that this is a bet worth taking.

[I realise I misremembered Horta & Teran's argument, so I edited that comment now]

I agree that people at WAI might have opinions about how one should do ecosystem restoration, but I doubt they would express them publicly because such opinions are highly speculative at this stage. Maybe @mal_graham🔸 can correct me if I'm mistaken!

I think present and future WAW advocates would fiercely disagree about what ecosystems might be net good/bad, and any intervention aimed at making greening more likely would be highly controversial.

I suppose this is true, given different intuitions about population ethics. But 1) at some point these disagreements need to be overcome - so maybe we just need to take some moral uncertainty approach - and 2) perhaps I'm being optimistic, but I expect progress will reduce the disagreements on these matters. I also think that a decision will be made on these matters one way or the other, so WAW really ought to make a call on these population ethics questions and then try to influence the decision in the way that seems best.

But I can also imagine that in other cases the decision might be simpler, e.g. promoting indigenous trees in a given area might not radically increase or decrease the number of sentient beings, but might greatly change the welfare profile of the ecosystem.

Whatever the incentive for restoration is, it seems far stronger than the incentive to please the few detractors who do not want the landscape restored.

Incentives will vary depending on the context! For example, the regeneration of forest is actively opposed in much of Central Europe, because people have cultural ideas about what the landscape should look like. So there's a tension there between environmentalists and traditionalists, and I wouldn't say that the environmentalists are winning.

The situation I'm thinking of is not necessarily ecosystem restoration. It's changing one ecosystem to another (although admittedly, most ecological restoration is exactly that). So the relevant question is whether one ecosystem-type has a higher level of welfare than another.

But yes, some such activities are happening anyway, such as desert greening - and we might be able to promote or oppose them, depending on whether they seem welfare-promoting or not. Since these activities are happening anyway, and usually aren't heavily politicised, I see no reason why some activism couldn't influence things one way or the other (e.g. by providing environmental reasons to encourage changes like desert greening, or leveraging conservative valuing of traditional landscapes to oppose it). Are there particular reasons why you're skeptical?

WAI to my knowledge doesn't discuss many interventions - they are positioning themselves as a science-promotion organization, not as an advocacy organization. My understanding is that they want this to be taken seriously as a field of scientific study, and so they are avoiding promoting interventions for which there isn't solid data. And this is definitely something for which we don't yet have good data.

Hi Jim, thanks for pushing back on this! To be honest, this was the intervention I'm least confident about. I got the idea from this article by Horta & Teran, where they argue that ecosystems involving large herbivores such as elephants are likely to have higher average welfare than ecosystems without them, since large herbivores break down a lot of biomass, leaving less for smaller, faster-reproducing animals. I think they are overconfident in their claim - as I point out in the full paper, it's not clear that elephants always have this effect. But still, I'm optimistic that within the next 50-100 years we might have enough information to make these kinds of calls. Admittedly, not as soon as some of the other interventions.

But is your point more about the social/political challenge? I'm not aware of collaborations between restoration scientists and WAW scientists, so I can't give you reasons for optimism, but I also don't have reasons for pessimism! Do you? An intervention doesn't even need to be framed around WAW either - you could just fund an organization to lobby for desert greening (for example) in a particular area, and they could leverage whatever arguments they've got.

Not sure how satisfying that is, I'm interested to hear your thoughts.

*I realize the elephant example is actually from a different paper. In the referenced paper, they give a more general argument:

We may be able to make some rough predictions about the different ecosystems that different decisions would produce in the targeted area. Accordingly, we may be able to guess what kind of animals will be there in each case. Such animals might be among those who have higher survival rates and longer average lifespans, and who reproduce by having small numbers of offspring. Or they may, instead, be among those who reproduce in very large numbers and tend to die in their very early youth. The latter, who unfortunately are the majority in nature, typically have much harder lives. Their lives may be so hard that they often contain more suffering than pleasure.

Dang, this sounds really cool. So do I understand correctly that you're all disbanding after July? I would be very interested in this kind of thing from August or September... 👀

Thanks, I'm looking forward to this! Some questions that seem worth considering to me are:

1. Is AGI likely to lock in values? (if so it's probably bad for animals)
2. Is the answer to this question even knowable? (a lot of what I've heard on the topic has been like "AI could mean X but also not X")
3. If AGI is good/bad, how steerable is it? (e.g. maybe making sure that AGI goes well for humans is actually much easier)

I think these are fair points, but the tone seems deconstructive and a bit condescending. I think it's possible to disagree and to caution loudly while still respecting that the post was made in good faith.

For what it's worth I'm also surprised by the reaction. Within government departments in NZ (where I worked before) this is not allowed. Of course it still happens but it seems good to me for the organization to discourage it. 

*Edit for spelling

Want to add this here: https://www.reddit.com/r/dataisbeautiful/comments/1rhv521/oc_dietary_v_nondietary_veganism_interest_over_16/

Reddit might not be the best source of data. But it confirms what I've heard elsewhere that 2018-2020 was the height of veganism as a health craze, and at least indicates that ethical vegans (if they are reflected by those who buy vegan clothes) are still rising. 

I think this is a super important question and want way more conversation about it - but could we re-frame your conclusion as being not that we shouldn't use AI, but should be mindful about how we're using AI?

The scenario you described appears to be a pretty bad use. But I think much of the harm you're seeing could be mitigated. Here are some ideas, just off the top of my head, addressing the issues you listed (in order):

Use of AI in research should:

  1. Consider the appropriateness of AI in that context (e.g. is this an area where we need the most up-to-date answers? Is this an area where we want to consider non-western perspectives?)
  2. Approach AI-generated answers critically, treating them as vibes-based answers rather than having any authority (and in group-work contexts, leaders should encourage this)
  3. Have AI write up its answers in bullet points rather than full text, so that a human is always contributing to the style
  4. Be a second or later-resort option (try to think creatively/critically first, rather than relying on AI - again, leaders can encourage this)
  5. In group settings: encourage new or unusual ideas (addresses the last two points). 

I know these are far from perfect solutions. Point 4 is admittedly quite hard to stick to (I feel myself struggling with this). But to me it feels similar to how a calculator makes people lazy (I'm sure I can't do mental arithmetic now as well as I could when I was 12), yet is still a net win. It seems likely that if we create good habits/culture around using AI, its benefits can significantly outweigh the downsides, even in research.[1] But I do think that requires a lot of conversations, and maybe some research, into how to use it well and avoid those pitfalls. So I would love to see more posts discussing this.

  1. ^

    I think these benefits are pretty significant. For instance (and as a counter-point to 5), I find AI can actually help to rein in crazy ideas by acting as a sanity-check tool; I also find it's helpful for quickly spotting holes in an argument when otherwise I would have only gotten feedback from a colleague some days later; and it can quickly structure disorganized ideas. But surely there are many more.
