This is a linkpost for https://youtu.be/st9EJg_t6yc


Merely listening to alien messages might pose an extinction risk, perhaps even more so than sending messages into outer space. Our new video explores the threat posed by passive SETI and potential mitigation strategies.

Below, you can find the script of the video. Matthew Barnett, the author of this related post, wrote the first draft. Most of the original draft survives, but I've significantly restructured it and made edits, deletions, and additions.


One day, a few Earthly astronomers discover something truly remarkable. They’ve pointed their radio telescopes at a previously unexplored patch of the night sky and recorded a binary message too structured to have come from any natural source. Curious about what the distant aliens have sent us, the scientists begin trying to decipher the message. After an arduous process of code-breaking, they find that the message encodes instructions on how to build a device. Unfortunately, the aliens left no description of what the device actually does.

Excited to share their discovery with the world, the astronomers agree to publish the alien instructions on the internet, and send a report to the United Nations. Immediately, the news captivates the entire world. For once, there is indisputable proof that we are not alone in the universe. And what’s more: the aliens have sent us a present, and no one knows what its purpose might be.

In a breathtaking frenzy that surpasses even the Space Race of the 1960s, engineers around the world rush to follow these instructions, to uncover the secrets behind the gift the aliens have left for us.

But soon after, a horrifying truth is revealed: the instructions do not describe a cure for all diseases, or a method of solving world hunger. Rather, the aliens have sent us explicit, easy-to-follow instructions on how to build a very powerful bomb: an antimatter explosive device with a yield of over one thousand hydrogen bombs. The most horrifying part is that the instructions require only common household materials, combined in just the right way.

The horror of this development begins to sink in around the world. Many propose censoring the information in an attempt to prevent a catastrophe. But the reality is that the information is already loose. Sooner or later, someone will build the bombs, whether out of raw curiosity or deliberate ill intent. And then, right after that, the world will end.

This story is unrealistic. In real life, there’s probably no way to combine common household materials in just the right way to produce an antimatter bomb. Rather, this story illustrates the risk we take by listening to messages in the night sky, and being careless about how these potential messages are disseminated.

With this video, we don’t want to argue that humanity will necessarily go extinct if we listen to alien messages, nor that this is necessarily among the biggest threats we’re facing. In fact, the probability that humanity will go extinct in this exact way is small, but the risk we take by listening to alien messages is still worth considering. As with all potential existential threats, the entire future of humanity is at stake.

We’ll model alien civilizations as being “grabby”, in the sense described by Robin Hanson’s paper on Grabby Aliens, which we covered in two previous videos. Grabby civilizations expand at a non-negligible fraction of the speed of light, and occupy all available star systems in their wake. By doing so, every grabby civilization creates a sphere of expanding influence. Together, all the grabby civilizations will one day enclose the universe with technology and intelligently designed structures.

However, since grabby aliens cannot expand at the speed of light, there is a second larger sphere centered around every grabby civilization’s origin, which is defined by the earliest radio signals sent by the alien civilization as it first gained the capacity for deep-space communication. This larger sphere expands at the speed of light, faster than the grabby civilization itself.

Let’s call the space between the first and second spheres the “outer shell” of the grabby alien civilization. If grabby alien civilizations leave highly distinct marks on galaxies and star systems they’ve occupied, then their civilization should be visible to any observers within this outer shell. As we noted in the grabby aliens videos, if we were in the outer shell of a grabby alien civilization, they would likely appear to be large in the night sky. On the other hand, if grabby civilizations left more subtle traces that we can’t currently spot with our technology, that would explain why we aren’t seeing them.

In this video, let’s assume that grabby aliens leave more subtle traces on the cosmos, making it plausible that Earth could be in the outer shell of a grabby alien civilization right now without our currently realizing it. This is a model variation, but it leaves the basics of the Grabby Aliens theory intact.

Here’s where things could turn dangerous for humanity. If, for example, a grabby alien civilization felt threatened by competition it might encounter in the future, it could try to wipe out potential competitors inside this outer shell before they ever got the chance to meet physically: it could send out a manipulative deep-space message, and any budding civilization in the outer shell gullible enough to listen could be tricked into self-destruction.

In our illustrative story, we used the example of instructions for building an antimatter bomb from household materials. A more realistic possibility could be instructions for building an advanced artificial intelligence, which then turns out to be malicious.

We could make a number of plausible hypotheses about the content of the message, but it’s difficult to foresee what it would actually contain, as the alien civilization would be far more advanced than us and potentially millions of years old. They would have much more advanced technology, and plenty of time to think carefully about what messages to send to budding civilizations. They could spend centuries crafting the perfect message to hijack or destroy any infant civilization unfortunate enough to tune in.

But maybe you’re still unconvinced. After all, first contact with aliens could be the best thing to ever happen to humanity. Aliens might be very friendly to us, and could send us information that would help our civilization and raise our well-being to unprecedented levels.

Perhaps this whole idea is rather silly. Our parochial, tribal brains are simply blind to the reality that very advanced aliens would have abandoned warfare, domination, and cold-hearted manipulation long ago, and would instead be devoted to the mission of uplifting all sentient life.

On the other hand, life on other planets probably arose by survival of the fittest, as our species did, which generally favors organisms that are expansionist and greedy for resources. Furthermore, we are more likely to get a message from an expansionist civilization than a non-expansionist civilization, since the latter civilizations will command far fewer resources and will presumably be more isolated from one another. This provides us even more reason to expect that any alien civilization that we detect might try to initiate a first strike against us.

It’s also important to keep in mind that the risk of a malicious alien message is still significant even if we think aliens are likely to be friendly. For instance, even if we believe that 90% of alien civilizations in the universe will be friendly to us in the future, the prospect of encountering the 10% that are unfriendly could be so horrifying that we are better off plugging our ears and tuning out for now, at least until we grow up as a species and figure out how to handle such information without triggering a catastrophe.

But even if SETI is dangerous, banning the search for extraterrestrial intelligence is an unrealistic goal at this moment in time. Even if it were the right thing to do to mitigate risk of premature human extinction, there is practically no chance that enough people will be convinced that this is the right course of action.

More realistically, we should instead think about what rules and norms humanity should adopt to robustly give our civilization a better chance at surviving a malicious SETI attack.

As a start, it seems wise to put in place a policy to review any confirmed alien messages for signs that they might be dangerous, before releasing any potentially devastating information to the public.

Consider two possible policies we could implement concerning how we review alien messages. 

In the first policy, we treat every alien message with an abundance of caution. After a signal from outer space is confirmed to be a genuine message from extraterrestrials, humanity forms a committee with the express purpose of debating whether this information should be released to the public, or whether it should be sealed away for at least another few decades, at which point another debate will take place.

In the second policy, after a signal is confirmed to be a genuine message from aliens, we immediately release all the data publicly, flooding the internet with whatever information aliens have sent us. In this second policy, there is no review process; everything we receive from aliens, no matter the content, is instantly declassified and handed over to the wider world without a moment’s hesitation.

If you are even mildly sympathetic to our thesis here — that SETI is risky for humanity — you probably agree that the second policy would be needlessly reckless, and might put our species in danger. Yet, the second policy is precisely what the influential SETI Institute recommends humanity do in the event of successful alien contact. You can find more information in their document titled Protocols for an ETI Signal Detection, which was adopted unanimously by the SETI Permanent Study Group of the International Academy of Astronautics in 2010.

The idea that SETI might be dangerous is not new. It was perhaps first showcased in the 1961 British drama serial A for Andromeda, in which aliens from Andromeda sent humanity instructions on how to build an artificial intelligence whose final goal was to subjugate humanity. In the show, humans ended up victorious over the alien artificial intelligence, but we would not be so lucky in the real world.

In intellectual communities and academia, the idea that SETI is dangerous has received very little attention, either positive or negative. Instead, the spotlight has gone to the risk from METI: sending messages into outer space rather than listening for them. This might explain why, as a species, we do not appear to currently be taking the risk from SETI very seriously.

Yet it’s imperative that humanity safeguards its own survival. If we survive the next few centuries, we have great potential as a species. In the long run, we could reach the stars and become a grabby civilization ourselves, potentially expanding into thousands or millions of galaxies and creating trillions of worthwhile lives. Without, of course, endangering life already present in other star systems! To ensure we have a promising future, let’s proceed carefully with SETI. It could end up being the most important decision we ever make.


 


Comments

Since we're already in existential danger due to AI risk, it's not obvious that we shouldn't read a message that has only a 10% chance of being unfriendly; a friendly message could pretty reliably save us from other risks. Additionally, I can make an argument for friendly messages potentially being quite common:

If we could pre-commit now to never doing a SETI attack ourselves, or if we could commit to only sending friendly messages, then we'd know that many other civs, having at some point stood in the same place as us, will have also made the same commitment, and our risk would decrease.
But I'm not sure; it's a nontrivial question whether that would be a good deal for us to make. Would the reduction in risk of being subjected to a SETI attack outweigh the expected losses of no longer being allowed to do SETI attacks ourselves?

Cross-posting with multiple authors is broken as a feature.

When Matthew had to approve co-authorship, the post appeared on the home page, but if clicked on, it only showed an error message.

Then I moved the post to drafts, and when I interacted with it using the three dots on the right side, there was another error message.

Now Matthew doesn't appear as a coauthor here.

Haven't read the post, but my answer to the title is "yes". SETI seems like a great example for researchers unilaterally rushing to do things that might be astronomically impactful and are very risky; driven by the fear that someone else will end up snatching the credit and glory for their brilliant idea.

[EDIT: changed "not net-positive" to "very risky".]

Great job once again! Loved it :)
