Kurzgesagt, in their recent video "How To Terraform Mars - WITH LASERS" (released just a few hours before this post), promotes the idea of seeding wildlife on other planets without considering the immense suffering it would cause for the animals involved. Instead of putting thought into the ethical implications of these actions, the video (as is par for the course) focuses solely on the potential benefits for humans.

Sadly, this isn't an isolated incident either: the pattern of ignoring the real risk of immense wild animal suffering is common in almost all major plans and discussions involving the terraforming of planets or space colonisation.

Sure, a new green planet with lots of nature sounds cool in theory, but it would very likely mean subjecting countless animals to a lifetime of suffering. These animals would be forced to adapt to potentially hostile and unfamiliar environments, and face countless challenges without any choice in the matter. There's no way around it that I can see.

You might argue that in these proposed worlds, we'd create an environment for wild animals where there wouldn't be food scarcity, predators, disease, or even anthropogenic harms. Setting aside the immense improbability of such a world (imagine convincing a rhinoceros not to fight to the death for its territory against a wild boar or elephant), none of the terraforming videos or articles I've read have even hinted at wild animal suffering as a potential issue to be concerned about.

Also setting aside the conversation of whether or not we should extend human life onto other planets and into other galaxies (for those who don't particularly follow longtermism, or the staunch antinatalists who might be reading this), wouldn't we be far better off just seeding these terraformed planets with plant life instead?

If the key decision makers of the future decide they have to bring animals to other planets (and we can't convince them otherwise), then introducing only herbivores would be preferable, at the very least. I'd still be staunchly against this unless we could somehow guarantee that the lives of every individual animal would be net-positive, but sadly, we're not even close to getting people to include this kind of consideration in these conversations. At least, not that I know of.

Don't get me wrong, Kurzgesagt has always been one of my favorite educational channels. I'll continue to stay subscribed because I think they spread a lot of good, but their promotion of seeding wildlife on other planets, without any consideration of the consequences, is unethical and irresponsible.

Instead of blindly pursuing our own interests and trying to populate every inch of the galaxy with life, we should consider the impact of our actions on other future beings and strive to minimize suffering whenever possible, or in this case, prevent it from happening at all.

Thanks for reading.


Edit: I've just been told in a reply below that Open Phil recommended almost $3 million in grant money to “support the creation of videos on topics relevant to effective altruism and improving humanity’s long-run future.”

They (Constance LI) wrote:

This Kurzgesagt video casually spread an idea (seeding wild animals to new planets) that could lead to s-risk and didn’t even mention that the potential for s-risk exists. They also missed the opportunity to spread awareness of the neglected issue of wild animal suffering. It’s a double loss.

This is something I wanted to highlight, as it's very relevant to my initial criticism of the video, and given that the video was funded by OP/EA money, it seems to me a conflict worth pointing out. Again, I think Kurzgesagt is fantastic; I just think this particular video was irresponsible.

I also want to take this moment to thank everyone for their comments and constructive criticism; I'm new here but am definitely taking pointers and expanding my knowledge on this subject. Much appreciated!

Comments (33)


Larks

Even if someone thinks that wild animals matter a lot morally, this seems like an unreasonably demanding standard:

I'd still be staunchly against this unless we could somehow guarantee that the lives of every individual animal would be net-positive

The aspects of life where we can reasonably expect 'guarantees', or that every single member of a class will benefit from a policy, are few. We will never be able to guarantee that space colonization will help every single animal, but we'll never be able to guarantee that opposing it will help every single animal either. This requirement is a recipe for paralysis.

I took the “unless we can guarantee” part to mean something like, “we need to meet rigorous conditions before we can ethically seed wild animals onto other planets.”

The issue many people are taking with this post is semantic in nature. Having measured/methodical language does help with having more productive conversations. However, focusing on the specific words used detracts from the post's main point.

Kurzgesagt videos have an outsized influence. This video was released just 17 hours ago and already has 1 million views and is the #2 trending video on YouTube. Additionally, the studio was recommended for almost $3 million in grant money from Open Phil to “support the creation of videos on topics relevant to effective altruism and improving humanity’s long-run future.”

With great power (and grant money), comes great responsibility.

It would have only taken a couple of seconds to say something like the following:

“Given the large amount of suffering experienced by animals in the wild on Earth, we have the opportunity to design the ecosystem of this new planet with just flora and microbe species that are carefully selected to support human life.”

That’s just one example of an alternative direction. My main point is that there was a moral opportunity that was lost. This Kurzgesagt video casually spread an idea (seeding wild animals to new planets) that could lead to s-risk and didn’t even mention that the potential for s-risk exists. They also missed the opportunity to spread awareness of the neglected issue of wild animal suffering. It’s a double loss.

Open Phil has also recommended a $3.5 million grant to Wild Animal Initiative, but the potential impact of their funding is now discounted because they missed the opportunity to increase the tractability of wild animal welfare through this Kurzgesagt video.

I think pointing out this concern on the EA Forum could potentially lead to the issue of wild animal suffering being considered more in future videos, whether it be directly through the creators of Kurzgesagt or indirectly through Open Phil suggesting it to Kurzgesagt. So in the end, I'm glad OP decided to make this post.

The issue many people are taking with this post is semantic in nature. Having measured/methodical language does help with having more productive conversations. However, focusing on the specific words used detracts from the post's main point.

You knocked it out of the park; this is what I was attempting to convey. I'm new to this community, English isn't my native language (although, funnily enough, it's my best), and I'm still getting used to the jargon and writing style. I appreciate the comment.

Kurzgesagt videos have an outsized influence. This video was released just 17 hours ago and already has 1 million views and is the #2 trending video on YouTube. Additionally, the studio was recommended for almost $3 million in grant money from Open Phil to “support the creation of videos on topics relevant to effective altruism and improving humanity’s long-run future.”

I had no idea that they were funded by Open Phil. I completely agree that they failed to align their awareness efforts with one of EA's priorities here, and missed a large opportunity.

I think pointing out this concern on the EA Forum could potentially lead to the issue of wild animal suffering being considered more in future videos, whether it be directly through the creators of Kurzgesagt or indirectly through Open Phil suggesting it to Kurzgesagt. So in the end, I'm glad OP decided to make this post.

I'm very doubtful this would reach anyone at OP/Kurzgesagt, or that my post could make much impact, but of course I'd be happy if it did contribute to any conversations in that direction.

Thanks for the support, Constance, appreciate it.

You would be surprised at what kind of reach you can have! Your post was up on the front page for a whole day and is now the second result when searching Kurzgesagt on the forum. Plus, you can also just email OPP and Kurzgesagt with a link to the post to increase the likelihood that they will see it. Who knows, they might even comment and explain why they chose to create the video the way they did or, better yet, edit the video. I recently had a random experience interacting with Dustin Moskovitz on Dank EA Memes, so at this point I believe anything could happen.

Also, there’s a Facebook group called “Effective Altruism Editing and Review” that provides editing help for EA forum posts that you can check out. People will give you feedback on your post and through that you can learn the preferred style of writing and all the terms that are commonly used on the forum.

The issue many people are taking with this post is semantic in nature. Having measured/methodical language does help with having more productive conversations. However, focusing on the specific words used detracts from the post's main point.

It sounds like the author agrees with my interpretation and supports the extreme position, so I disagree that this is merely semantics.

Two quick points on this:

  1. I believe Constance might be referring to my entire post in the general sense, not just that tidbit, though perhaps it's included. In hindsight, I could have worded this better, as many folks are attempting to interpret my meaning behind that and other lines.
  2. I'm not sure I would agree that it's an extreme position to simply not bring wildlife to other planets and instead seed just plants. But perhaps you mean it's extreme in an all-or-nothing sense?

I think, if I understand correctly, you're saying there could be a net-positive for animals on these new terraformed planets just as much as a net-negative, and that we simply don't know. My opinion is that without a guarantee that we don't accidentally cause another s-risk by introducing wildlife on these planets, we simply shouldn't. I wouldn't call that paralysis, but rather a deliberate decision to take no action.

We can still terraform and seed planets with human and plant life; wildlife is not a requirement for us to live healthy and fulfilling lives. Considering we haven't solved wild animal suffering on this planet, I'm advocating that we not perpetuate the problem on the next one by promoting it as an essential step in terraforming planets.

I think in context, i.e. following OP's sentence "If the key decision makers of the future decide they have to bring animals to other planets ... introducing herbivores would be preferred ...", by 'every individual animal' OP means every individual animal brought to other planets, not every single animal in existence. OP also seems to be focusing on terraforming rather than space colonization.

So I'm not sure why you think that it's "an unreasonably demanding standard". There are certainly ways of assigning value that would say that creating additional lives with negative experiences makes it worse for those lives compared to refraining from creating them (e.g. minimalist axiologies). These may be rarer within the EA community, but they definitely exist outside of it (e.g. some forms of Buddhist ethics). If that's the case, and we're only talking about the lives created, then opposing bringing animals will indeed help every single animal involved. 

The implication of this is only that we find it preferable not to terraform, which isn't paralysis, just opposition to that particular policy.

OP means every individual animal brought to other planets - not every single animal in existence.

Every policy we can take has some knock-on effect, however small, on a vast array of different people (and animals). We can't just ignore some impacts by fiat. The justification for narrowing our scope and focusing only on some subset is typically because either some effects don't matter morally (e.g. wild animals don't matter, only humans) or because they are very small (most people will benefit and only a few will lose out). But OP rejects both these strategies and takes a hard deontological/Rawlsian line, so even if he is only intending to discuss the impacts on some animals, logically he should apply the same standard to everyone, with paralysis the result.

The implication of this is only that we find it preferable not to terraform, which isn't paralysis, just opposition to that particular policy.

No, because the world is so big and complicated that every single decision you make will make some animal worse off, and slightly change which animals will be born in the future, some of whom will suffer. OP is applying a standard here so demanding that, if applied generally, no action could pass muster.

Thanks for the comment Larks.

I could have worded that better; it wasn't meant to be a reasonable demand precisely because it's impossible to guarantee, so I'm in complete agreement with you on that position. Apologies.

This is why I wouldn't advocate for bringing in any wild animal life, and would only terraform / seed these new planets with flora and microbe species. That's the only realistic way to guarantee we wouldn't risk creating another s-risk in our future should we pursue such a path.

Timothy's reply is also on point to my thinking.

"Subjecting countless animals to a lifetime of suffering" probably describes the life of the average bird in the Amazon (struggling to find food and shelter, avoid predators, and protect its offspring) or the average fish/shrimp in the ocean.

If you argue that introducing animals to other planets will cause net suffering, then it seems to follow that we should eliminate natural ecosystems here on Earth.

If you argue that introducing animals to other planets will cause net suffering, then it seems to follow that we should eliminate natural ecosystems here on Earth.

Do you intend this as an endorsement, a reductio ad absurdum, or a neutral statement?

I personally strongly suspect that many (most?) wild animals alive on Earth today live lives of net suffering. Even so, there are a bunch of reasons not to try to "eliminate natural ecosystems" right now, including instrumental reliance on those ecosystems, avoidance of drastic & irreversible action before we understand the consequences & alternatives, and respect for non-utilitarian side constraints (most compellingly for me, respect for the personhood/rights of existing animals). None of these really apply to terraforming.

As an intuition pump, I personally strongly suspect that there are many people alive on Earth today living lives of net suffering, but it would obviously be awful to try to "eliminate" those people, at least for many common interpretations of that word.

I interpreted "eliminate natural ecosystems" as more like eliminating global poverty in the human analogy. Seems bad to do a mass killing of all animals, and better to just make their lives very good, and give them the ability to mentally develop past mental ages of 3-7.

Seems bad to do a mass killing of all animals, and better to just make their lives very good, and give them the ability to mentally develop past mental ages of 3-7.

Well, that sentence turned sharply midway through.

I'm not sure about the last part. If I wanted to create lots more intelligent beings, genetically engineering a bunch of different species to be sapient seems like a rather labour-intensive route.

I agree that a lot turns on your interpretation of the word "eliminate" in the original comment.

I wouldn’t advocate for engineering species to be sapient (in the sense of having valenced experiences), but for those that already are, it seems sad they don’t have higher ceilings for their mental capabilities. Like having many people condemned to never develop past toddlerhood.

edit: also, this is a long-term goal. Not something I think makes sense to make happen now.

I'm not sure "eliminate" is the right way to put it. Reducing net primary productivity (NPP) in legally acceptable ways (e.g. converting lawns into gravel) could end up being cost-effective, but "eliminate" seems too strong here.

Doing NPP reduction in less acceptable ways could make a lot of people angry, which seems bad for advocacy to reduce wild animal suffering. As Brian Tomasik pointed out somewhere, most expected future wild animal suffering wouldn't take place on Earth, so getting societal support to prevent terraforming seems more important.

If done immediately, this seems like it’d severely curtail humanity’s potential. But at some point in the future, this seems like a good idea.

Animals already exist on Earth independently of humans. The difference with introducing life on Mars is that humans would have to make the decision and expend the resources to do so.

That would have negative consequences for the people who already exist today and rely on Earth's biosphere; the same cannot be said for these frivolous space colonization ventures.

I've yet to see someone suggest concrete actions. You should consider reaching out to them, to Open Phil, or finding someone on the EA Forum who has contacts at the channel/company.

I wouldn't be surprised if their staff frequent the forum.

As far as I am aware, you can edit YouTube videos once published, so perhaps there is still the opportunity to update the video with a snippet noting the potential implications for animal welfare.

The topics they cover are very complex and speculative, so it's unsurprising that they don't hit the mark 100% of the time.

I think the main point this piece is making is broadly interesting, but I don't like the presentation. 

> Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible

This uses emotional language very similar to what I see coming from people trying to ban content or launching long Twitter attacks.

I'd push back against us getting emotionally charged about each instance like this.

If you have access to ChatGPT, I recommend using that to help write things in what it considers the style of the effective altruism forum. [Edit: I don't mean this to be demeaning. There must also be good conversational-norm documents on the EA Forum too; I'm sure some people would prefer those. Personally, I'd find ChatGPT more useful here.]

If you have access to ChatGPT, I recommend using that to help write things in what it considers the style of the effective altruism forum. 

FWIW I had a much stronger negative emotional reaction to this sentence than to the title & tone of the post.

Thanks for flagging!

I really didn't mean this in a demeaning way. I genuinely like ChatGPT a lot, think it can be really useful here.

I'd like to write posts about my preferred online etiquette and the usual etiquette of the community, but I think it's really tough to do so in a way that's general enough to cover most of the cases, but still somewhat interesting to read. 

This is one extreme example, but I've been writing about this a bit more on my Facebook feed.
https://www.facebook.com/ozzie.gooen/posts/pfbid02JiqiMnWoEme61EhUiZfDSTP9nFV5E1zBanThKfQMcDZXhW1zvRX9gutMeZnNnovsl

Yeah, I regularly see Ozzie talk about replacing language styles with ChatGPT; this is hardly a new thing for this comment.

Thanks for the comment, Ozzie. I appreciate the constructive criticism.

I did go back and forth on a few title ideas, but ended up with this one as I believed it to be succinct and in line with what I think, along with being engaging for readers to click. I can see how this may have rubbed you the wrong way, and I do have access to ChatGPT; I'll check it out.

I also agree that we should avoid emotionally charged language as much as possible, but I do think there's a balance to be struck in making the initial statement, for example a title, compelling enough for people to read while staying true to the content it's conveying.

In either case, thanks for the comment.

I wish people would stop optimizing their titles for what they think would be engaging to click on. I usually downvote such posts once I realize what was done.

I ended up upvoting this one because I think it makes an important point.

I really appreciate this post. I also watched this video and was horrified at how no attention at all was paid to wild animal suffering, and I'm really glad there are other people who had the same experience. I think this is a serious moral blindspot in our society.

Quick callout to anyone intrigued by this topic that there is an Animals and Longtermism Discord. David, I get the sense you'll feel right at home with us there if you're not on it already :)

As a (semi-humorous) devil's advocate: if we applied existential-risk/longtermist ideas to non-human animals, couldn't the animal lives on Mars still be net positive, as they help their respective species flourish for billions of years and reduce their risk of going extinct if they were only on Earth?

I'm not sure I take this seriously yet, but it's interesting to think about.

I agree with almost all of the logic behind this argument, but there's one critical factor that wasn't taken into account. 

The vast majority of the stars in the Milky Way are beyond 10,000 light years away, so if people were seeding the galaxy with life at the fastest rate possible, they would have to keep making the mistake of seeding the galaxy with standard Earth organisms for significantly longer than 10,000 years. There's no way the seeding itself could be locked in unless humanity persisted in the mistake for longer than 10,000 years; both lasers and von Neumann probes can easily be overwritten by waves projected after 12022 C.E. that quickly replace the original animals with ethical synthetic animals (which will subsequently exist for hundreds of millions of years). "Ethical synthetic life" might sound strange, but humanity only invented writing and civilization around 5,000 years ago, and there was no modern math or science until at most 200 years ago. It's actually quite a stretch to imagine a world without the ability to develop a preliminary model of ethical synthetic life by 3,000 C.E., let alone 12022 C.E. or 102022 C.E.

The problem with this post's analysis is that

setting aside the conversation of whether or not we should extend human life into other planets and galaxies (for those who don't particularly follow longtermism, or the staunch antinatalists that might be reading this), wouldn't we be far better off just seeding these terraformed planets with plant life instead?

Ignoring human actions isn't like trying to write the equations of general relativity with one arm behind your back or without ever drawing a square root symbol; it's like trying to write them without acknowledging the existence of the number "3". The problem is worth solving, so getting the right answer is worth observing all the factors in the equation. We don't even have any decent probability estimates for how likely it is that most anti-natalist problems will end up conclusively solved within the next 10,000 years. We do have good probability estimates that the question of animal suffering will end up conclusively resolved, and they are high, so it's worth considering all the variables in order to get the best results.

Seeding the terraformed planets with plant life could actually be a terrible mistake, since some of the plant species could mutate and ruin many of the planets on a timescale of 50 years, or mutate into extremely large amounts of strange, suffering life that humanity will not be able to detect, comprehend, and/or overwrite for a very long time because it is millions of light years away.
