
Ren Springlea

Research Scientist @ Animal Ask
770 karma · Joined Apr 2022 · animalask.org

Bio

Please note that I do not actively monitor my EA Forum messages or profile. You are welcome to contact me via email: ren (dot) springlea (at) animalask (dot) org.

My name is Ren, and my pronouns are they/them. My work focuses on animal advocacy. I have experience in ecology, fisheries science, and statistics from my time in academia and government. I'm also personally interested in a wide range of other cause areas, particularly around politics and social justice. I mostly work from a suffering-focused and neartermist perspective, though I'm sympathetic to other views.

Comments (38)

Thanks, this is cool and I'll use it.

I think more broadly, my comment is roughly equally motivated by three main things: my own psychology; concerns about an author's karma influencing readers' subconscious evaluations of that author's posts and opinions; and, specifically for people who work full-time in the EA community, a vague sense that it feels a bit strange to have a numeric score attached to what is in many ways a professional, and often philosophical, body of work. (The third point of course has an analogue in academic research, but I think that's a problem in academia too.) But since you gave me a solution, I'm personally happy. Thanks again.

I would love an option to switch off the total karma count on one's profile. I've noticed that it can occasionally create perverse incentives.

Strongly agree. I came on this thread to suggest this.

I have posted on the forum before, but I have recently developed some health problems (fatigue etc.) that mean I can no longer afford the energy necessary to participate in comment discussions. This is the main reason why I am no longer posting. I would be far more incentivised to make future posts if I could turn off comments on posts where I judge they would not add much value (e.g. I would use this feature on lifestyle suggestions or resource recommendations, but not on philosophical hot takes).

Thank you, I appreciate you taking the time to construct this convincing and high-quality comment. I'll reflect on this in detail.

I did do some initial scoping work on longtermist animal stuff last year, of which AGI-enabled mass suffering was of course a major part, so it might be time to dust that off.

Thank you for this post. I work in animal advocacy rather than AI, but I've been thinking about some similar effects of transformative AI on animal advocacy.

I've been shocked by the progress of AI, so I've been thinking it might be necessary to update how we think about the world in animal advocacy. Specifically, I've been thinking roughly along the lines of "There's a decent chance that the world will be unrecognisable in ~15-20 years or whatever, so we should probably be less confident in our ability to reliably impact the future via policies, so interventions that require ~15-20 years to pay off (e.g. cage-free campaigns, many legislative campaigns) may end up having 0 impact." This is still a hypothesis, and I might make a separate forum post about it.

It struck me that this is very similar to some of the points you make in this post.

In your post, you've said you're planning to act as though there are 4 years of the "AI midgame" and 3 years of the "AI endgame". If I translated this into animal advocacy terms, this could be equivalent to something like "we have ~7 years to deliver (that is, realise) as much good as we can for animals". (The actual number of years isn't so important, this is just for illustration.)

Would you agree with this? Or would you have some different recommendation for animal advocacy people who share your views about AI having the potential to pop off pretty soon?

(Some context as to my background views: I think preventing suffering is more important than generating happiness; I think the moral value of animals is comparable to that of humans, e.g. within 0-2 orders of magnitude depending on species; I don't think creating lives is morally good; I think human extinction is bad because it could directly cause suffering and death, but not so much because of the loss of potential humans who do not yet exist; I think S-risks are very, very bad; I'm skeptical that humans will go extinct in the near future; I think society is very fragile and could be changed unrecognisably very easily; I'm concerned more about misuse of AI than about any deliberate actions/goals of an AI itself; I have a great deal of experience in animal advocacy and zero experience in anything AI-related. The person reading this certainly doesn't need to agree with any of these views, but I wanted to highlight my background views so that it's clear why I believe both "AI might pop off really soon" and "I still think helping animals is the best thing I can do", even if that latter belief isn't common among the AI community.)

Thanks everybody for the discussion on this post. I'm glad to see it has inspired some thought and debate, and that other people are sharing their experiences.

I've reached my limit for engaging with these comments, so now I need to return to my main tasks (doing my best to prevent suffering + self-care) and I won't reply to future comments (but happy to correct objective errors). Thanks again everyone.

Thanks for sharing this. It sounds like you found childbirth to be qualitatively more awful than your other experiences? I definitely agree with one of your takeaways - the fact that some experiences have been rated as even worse than this on the pain scale serves, for me, as a very strong motivation to reduce suffering in any way I can.

(I did ask around a fair bit before posting this article, and got the opinions of a number of people close to me who have gone through different painful experiences, both acute and chronic, many of which are mentioned on the pain scale graph. This is part of why I point out that the PRI scores I report aren't supposed to be taken as scientific or literal, and why I emphasise that it's n=1, I'm untrained, the pain was definitely only moderate level, etc. But it does reinforce my point, which is basically "wow, all I did was mess around with a tattoo gun for an afternoon and it was this bad, which is all the more reason to do as much as we can to prevent others from experiencing actual pain.")

I mostly agree with what you've said, and I think that your view and my view are pretty much consistent. My main message isn't really "physical pain is worse than other types of suffering", rather: "I found even moderate physical pain to be really, really awful, which suggests that it's probably really, really morally urgent to prevent both extreme physical pain and other types of extreme suffering".

The hedonistic focus probably arose from the fact that I can subject myself to physical pain quite easily, but less so other types of suffering. I mention this in the limitations section.

Sure, makes sense. Thanks for your reply.

Suppose I wanted to prove or support the claim:
"given the choice between preventing extreme suffering and giving people more [pleasure/happiness/tranquility/truth], we should pick the latter option"
How would you recommend I go about supporting that claim? I'd be keen to read or experience the strongest possible evidence for it. I've read a fair bit about pleasure and happiness, but for the other, less tangible values (tranquility and truth) I'm less familiar with the arguments.

It would be a major update for me if I found evidence strong enough to convince me that giving people more tranquility and truth (and pleasure and happiness in any practical setting, under which I include many forms of longtermism) could be good enough to forego preventing extreme suffering. This would have major implications for my current work and my future directions, so I would like to understand this view as well as I can in case I'm wrong and therefore missing out on something important.

I'm happy to consider this further if there are people who would find value in the outcome (particularly if there are people who would change decisions based on the outcome). I think it would be tractable to design something safe and legal, whether through psychedelics or some other tool.
