PhD student in theoretical computer science (distributed computing) in France. Currently transitioning to AI Safety and fundamental ML work.
I'm curious about the article, but the link points to nothing. ^^
Thanks a lot for this presentation and corresponding transcript. I am quite new to thinking about animal welfare at all, and even newer to wild animal welfare, but I found this presentation easy to follow even so (my half-decent knowledge of evolution might have helped).
I like the clarification about evolution, and more specifically the point that natural selection selects away variants with bad absolute or relative fitness, rather than optimizing fitness to the maximum. That's a common issue when using theoretical computer science to model natural systems: instead of looking for the best algorithms under our classical measures (like time or space), we need to take into account the specifics of evolution (some forms of simplicity in the algorithms, for example) and not necessarily optimize all the way.
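To make that distinction concrete, here is a toy sketch (invented fitness values, my own framing, not from the talk): selection as a viability filter rather than an arg-max.

```python
# Toy model of "selecting away" vs "optimizing": a variant persists as long
# as its fitness clears a viability threshold; nothing pushes the population
# toward the single best variant. All numbers are invented for illustration.
THRESHOLD = 0.5
fitnesses = {"A": 1.0, "B": 0.9, "C": 0.85, "D": 0.2}

def select(variants):
    """Cull variants whose fitness falls below the viability threshold."""
    return {g for g in variants if fitnesses[g] >= THRESHOLD}

surviving = select(set(fitnesses))
print(sorted(surviving))  # D is selected away, but A, B and C all coexist.
```

The point of the sketch: the suboptimal-but-viable variants B and C are never eliminated, which is quite different from an optimizer that would converge on A alone.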
On the level of details and nitpicks, I have a few comments:
Finally, on the specific topic of interventions to improve welfare, I have one worry: what about cases where two species have mutually exclusive needs, such as a predator species and its prey? In these cases, I feel like evolution left us with some sort of zero-sum game, and there might be necessary welfare tradeoffs because of it.
Once upon a time, my desire to build useful mastery and a career made me neglect my family, and more precisely my little brothers. Not dramatically, but whenever we were together, I was too wrapped up in my own issues to act like a good big brother, or even to interact properly with them. At some point, I realized that giving time and attention to my family was also important to me, and thus that I could not simply allocate all my mental time and energy to "useful" things.
This happened before I discovered EA, and is not explicitly about the EA community, but that's what popped into my mind when reading this great post. In a sense, I refused to do illegible work (being a good brother, friend and son) because I considered it worthless compared with my legible aspirations.
Beyond becoming a utility monster, I think what you are pointing out is that optimizing what we can measure, even with caveats, leads to neglecting small things that matter a lot. And I agree that this issue is probably tractable, because the illegible but necessary tasks themselves tend to be small, not too much of a burden. If someone wants to dedicate their career to it, great. But everyone can contribute in the ways you point out. Just a bit every day. Like calling your mom from time to time.
That is to say, just as you don't need an EA job to be an effective altruist, you don't need to dedicate your whole life to illegible work to contribute some.
On a tangent, what are your issues with quantum computing? Is it the hype? That might indeed be excessive given what we can do now. But the theory is fascinating, there are concrete applications where we should see positive benefits for humanity, and the actual researchers in the field try really hard to clarify what we do and don't know about quantum computing.
Thanks a lot for this great post! I think the part I like the most, even more than the awesome deconstruction of arguments and their underlying hypotheses, is the sheer number of times you said "I don't know" or "I'm not sure" or "this might be false". I feel it places you at the same level as your audience (including me), in the sense that you have more experience and technical competence than the rest of us, but you still don't know THE TRUTH, or sometimes even good approximations to it. And the standard way to present ideas and research clearly is to structure them so that the points we don't know are not the focus. So that was refreshing.
On the more technical side, I had a couple of questions and remarks concerning your different positions.
Anecdotally, almost everyone from older generations that I know eats snails, so it might indeed be generational. Whereas I know roughly the same number of people from each generation who dislike oysters (mostly the texture).
Thanks for the effort in summarizing and synthesizing this tangle of notions! Notably, I learned about axiology, and I am very glad I did.
One potential addition to the discussion of decision theory might be the use of "normative", "descriptive" and "prescriptive" within decision theory itself, which is slightly different. To quote the Decision Theory FAQ on Less Wrong:
We can divide decision theory into three parts (Grant & Zandt 2009; Baron 2008). Normative decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose. Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose. Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
Because that is one of the ways I think about these words, I was initially confused by your use of "prescriptive", even though you used it correctly in this context.
Thanks for this thoughtful analysis! I must admit that I never really considered the welfare of snails as an issue, maybe because I am French and thus culturally used to eating them.
One thing I can confirm anecdotally is the consumption of snails in France. Even though snails with parsley butter are a classic French dish, they are eaten quite rarely (only for celebrations, or Christmas and New Year's Eve dinners). And I know many people who don't eat snails because they find them disgusting, even though they have seen people eating them all their lives (similar to oysters, in a sense).
As for what should be done, your case for the non-tractability and non-priority of snail welfare is pretty convincing. I still take from this post that undue pain (with or without sentience) is inflicted on snails, even from the position that it is okay to eat animals (my current position, which I am reassessing). I was quite horrified by the slime part.
The geometric intuition underlying this post already proves useful for me!
Yesterday, while discussing with a friend why I want to change my research topic from distributed computing to AI safety, my first intuition was that AI safety aims at shaping the future, while distributed computing is relatively agnostic about it. But a far better intuition comes from considering the vector along the world's current trajectory in state space, starting at its current position, whose direction captures where we are heading and whose length captures how fast we are heading there.
From this perspective, the difference between distributed computing/hardware/cloud computing research and AI safety research is obvious in terms of vector operations.
And since I am not sure we are heading in the right direction, I prefer to be able to change the trajectory (at least potentially).
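In my own made-up notation, with v the world's current velocity vector in state space, the two kinds of vector operations I have in mind look like:

```latex
% v : current velocity of the world in state space (my notation).
% Speeding up the current trajectory (distributed computing, hardware,
% cloud computing) scales the vector without changing its direction:
v' = \alpha v, \qquad \alpha > 1
% Shaping the future (AI safety) adds a component that changes the direction:
v' = v + \Delta, \qquad \Delta \not\parallel v
```

The first operation commits us harder to wherever we are already going; only the second can change where we end up.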
That's a great criterion! We might be able to find some weird counter-example, but it solves all of my issues. Because intellectual work/knowledge might be a part of all actions, but it isn't necessarily on the main causal path.
I think this might actually deserve its own post.