Constance Li

337 karma · Joined Sep 2022

Comments (29)
I'd like to see some scenarios for each point to paint a clearer picture. I agree with all three points, though I would rank them 3-2-1 by strength of argument.

You would be surprised at the kind of reach you can have! Your post was on the front page for a whole day and is now the second result when searching for Kurzgesagt on the forum. Plus, you can also just email Open Phil and Kurzgesagt with a link to the post to increase the likelihood that they will see it. Who knows? They might even comment and explain why they chose to create the video the way they did or, better yet, edit the video. I recently had a random experience interacting with Dustin Moskovitz on Dank EA Memes, so at this point I believe anything could happen.

Also, there's a Facebook group called "Effective Altruism Editing and Review" that provides editing help for EA Forum posts. People there will give you feedback on your post, and through that you can learn the forum's preferred writing style and the terms that are commonly used.

Is it just me, or does anyone else feel like this case is moving faster and more decisively than other white-collar crime cases?

I suspect it is escalating quickly due in large part to the amount of public attention it has received.

I took the “unless we can guarantee” part to mean something like, “we need to meet rigorous conditions before we can ethically seed wild animals onto other planets.”

The issue many people are taking with this post is semantic in nature. Measured, methodical language does help make conversations more productive. However, focusing on the specific words used detracts from the post's main point.

Kurzgesagt videos have an outsized influence. This video was released just 17 hours ago and already has 1 million views and is the #2 trending video on YouTube. Additionally, the studio was recommended for almost $3 million in grant money from Open Phil to “support the creation of videos on topics relevant to effective altruism and improving humanity’s long-run future.”

With great power (and grant money), comes great responsibility.

It would have taken only a couple of seconds to say something like the following:

“Given the large amount of suffering experienced by animals in the wild on Earth, we have the opportunity to design the ecosystem of this new planet with just flora and microbe species that are carefully selected to support human life.”

That's just one example of an alternative direction. My main point is that a moral opportunity was lost. This Kurzgesagt video casually spread an idea (seeding wild animals onto new planets) that could lead to s-risks, without even mentioning that the potential for s-risk exists. It also missed the opportunity to raise awareness of the neglected issue of wild animal suffering. It's a double loss.

Open Phil has also recommended a $3.5 million grant to Wild Animal Initiative, but the potential impact of that funding is now discounted because Kurzgesagt missed the opportunity to increase the tractability of wild animal welfare through this video.

I think raising this concern on the EA Forum could lead to wild animal suffering being considered more in future videos, whether directly by the creators at Kurzgesagt or indirectly through Open Phil suggesting it to them. So in the end, I'm glad the OP decided to make this post.

Another data point: I applied for a position with a city EA group back in October. A couple of days ago, I was informed that hiring for that position had been paused due to funding, although they were hoping it could resume in the new year.

Yes, adding an "Edit:" note right after point #4 would be the ideal place for this update so that it reaches the most readers.

Maya, I'm glad you talked to Scott and got more information. I hope the deeper context has offered some reassurance that parts of the EA community do care about the concerns of women and that there is a path available to change the culture.

Hi Maya! Thank you for posting about your experience. It is valuable to have this perspective, and I'm sure it wasn't easy to write and post publicly. I'm not sure if you reached out to Scott, but if you did and updated your beliefs regarding Kathy Forth's accusations, I think it would be very impactful to update your post to reflect that. It seems this one part of your post triggered a lot of old trauma in the community and likely overshadowed the other concerns in the post. I believe an update (in either direction) could really improve trust in the capacity for good-faith discussions around this difficult topic.

Thanks for putting this together! It was my first exposure to a hackathon, since this one was so well advertised and open to everyone.

I agree that this is an important issue, and it feels like time is ticking down on our window of opportunity to address it. I can imagine some scenarios in which this value lock-in could play out.

At some point, AGI programmers will have the opportunity to train AGI to recognize suffering versus happiness as a strategy for optimizing it to do the most good. Will those programmers think to include non-human species? I could see a scenario where programmers with human-centric worldviews only think to include datasets of pictures and videos of human happiness and suffering. But if the programmers value animal sentience as well, they could include datasets covering different types of animals too!

Ideally, the AGI could identify happiness/suffering markers that apply to most nonhuman and human animals (vocalizations, changes in movement patterns, or changes in body temperature), but if it can't, we may need to segment different classes of animals out for individual analysis. For example, how would an AGI reliably figure out when a fish is suffering?

And on top of all this, the AGI would need to be programmed to weigh animals by their moral weights, which remain woefully unclear right now.
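To make that moral-weights step concrete, here is a minimal sketch in Python. Every species name, suffering score, and weight value below is a hypothetical placeholder, not a real estimate; it just shows the shape of the computation, where a classifier's per-species suffering scores get combined using population sizes and moral weights.

```python
# Minimal sketch: aggregating per-species suffering estimates with moral weights.
# All species, scores, and weights are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class SpeciesObservation:
    species: str
    suffering_score: float  # 0.0 to 1.0, from a hypothetical suffering classifier
    population: int

# Placeholder moral weights; the actual values are an open research question.
MORAL_WEIGHTS = {
    "human": 1.0,
    "pig": 0.5,
    "chicken": 0.3,
    "fish": 0.1,
}

def aggregate_suffering(observations):
    """Weight each species' detected suffering by its population and moral weight."""
    total = 0.0
    for obs in observations:
        weight = MORAL_WEIGHTS.get(obs.species, 0.0)  # unlisted species default to 0
        total += weight * obs.suffering_score * obs.population
    return total

if __name__ == "__main__":
    sample = [
        SpeciesObservation("human", 0.2, 1_000),
        SpeciesObservation("chicken", 0.8, 50_000),
        SpeciesObservation("fish", 0.6, 200_000),
    ]
    print(f"Aggregate suffering estimate: {aggregate_suffering(sample):,.0f}")
```

Notice how much the output depends on the weights table: that single dictionary encodes the entire moral-weights question, which is exactly the part we are woefully unclear on.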

There is just so much we don't know about how to quantify animal suffering and happiness that would be relevant to programming AGI. It would be great to identify these factors so we can eventually get that research into the hands of the AGI programmers who end up responsible for AI take-off. Of course, all this research could have negligible impact if the key AGI programmers do not think animal welfare is an important enough issue to take on.

Are there any AI alignment researchers currently working on the issue of including animals in the development of AI safety and aligned goals?

Working on a forum post about animals and longtermism. I have an outline in a Google Doc and would love to have collaborators, or just people to give feedback on the content.
