A NY Times science journalist asked me whether there are any good papers, blogs, talks, etc from EA people on possible X-risks from the Search for Extraterrestrial Intelligence (SETI) or active Messaging to Extraterrestrial Intelligence (METI). They're interested in doing a feature piece on this.

Any suggestions?

I wrote a bit about this in a recent paper (https://www.primalpoly.com/s/Todd-Millerevpsych-of-ETIBioTheory2017.pdf), but haven't kept up on EA writings about possible downsides of alien contact.

So far, the most relevant work seems to be Nick Bostrom's paper on 'Information Hazards' (https://nickbostrom.com/information-hazards.pdf)?

Comments (11)

When I hear about articles like this, I worry about journalists conflating "could be an X-risk" with "is an X-risk as substantial as any other"; journalism tends to wash out differences in scale between problems.

If you're still in communication with the author, I'd recommend emphasizing that this risk has undergone much less study than AI alignment or biorisk and that there is no strong EA consensus against projects like SETI. It may be that more people in EA would prefer SETI to cease broadcasting than to maintain the status quo, but I haven't heard about any particular person actively trying to make them stop/reconsider their methods. (That said, this isn't my area of expertise and there may be persuasion underway of which I'm unaware.)

I'm mostly concerned about future articles that say something like "EAs are afraid of germs, AIs, and aliens", with no distinction of the third item from the first two.

([This is not a serious recommendation and something I might well change my mind about if I thought about it for one more hour:] Yes, though my tentative view is that there are fairly strong, and probably decisive, irreversibility/option-value reasons for holding off on actions like SETI until their risks and benefits are better understood. NB the case is more subtle for SETI than METI, but I think the structure is the same: once we know there are aliens, there is no way back to our previous epistemic state, and it might be that knowing about aliens is an info hazard.)

If we knew that there were aliens sending some information, everybody would try to download their message. That makes it an information hazard.

I agree that information we received from aliens would likely spread widely, so in that sense it would clearly be a potential info hazard.

It seems unclear to me whether the effect of such information spreading would be net good or net bad. If you see reasons why it would probably be net bad, I'd be curious to learn about them.

If such a message were a description of a computer and a program to run on it, it would be net bad. Think of a malevolent AI which anyone could download from the stars.

Such a viral message would be aimed at self-replication, and would eventually convert Earth into its next node, using all our resources to send copies of the message farther.

Simple Darwinian logic implies that such viral messages should numerically dominate among all alien messages, if any exist. I wrote an article, linked below, which discusses the idea in detail.

I have an article on this topic from last year.

Highlights

•Active SETI assumes that alien languages can be translated without context or meaningful interaction.

•According to prominent theories in the philosophy of language, it is impossible to translate without context or interaction.

•The impossibility of communication between humanity and an ETI has important game-theoretical consequences.

•The failure of communication causes a “Hobbesian Trap”, where players are drawn to a risk-dominant equilibrium.

•In light of this, advertising our location to ETI is reckless.

https://www.sciencedirect.com/science/article/pii/S0016328718300405
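To unpack the phrase "risk-dominant equilibrium" for readers who don't know the game-theoretic term, here is a minimal stag-hunt-style sketch; the payoff numbers are my own illustrative choice and are not taken from the paper:

```python
# Two players each choose Trust or Preempt; symmetric 2x2 game, payoffs shown
# for the row player. Both (Trust, Trust) and (Preempt, Preempt) are Nash
# equilibria, but under enough uncertainty about the other side's move each
# player is pulled toward Preempt -- the "Hobbesian Trap".
payoff = {
    ("Trust", "Trust"): 4,     # mutual trust pays best...
    ("Trust", "Preempt"): 0,   # ...but trusting a preemptor is disastrous
    ("Preempt", "Trust"): 3,
    ("Preempt", "Preempt"): 3,
}

# Harsanyi-Selten risk dominance (symmetric case): compare the products of the
# losses a player suffers by unilaterally deviating from each equilibrium.
loss_at_trust = payoff[("Trust", "Trust")] - payoff[("Preempt", "Trust")]        # 4 - 3 = 1
loss_at_preempt = payoff[("Preempt", "Preempt")] - payoff[("Trust", "Preempt")]  # 3 - 0 = 3

print("deviation-loss product at (Trust, Trust):    ", loss_at_trust ** 2)    # 1
print("deviation-loss product at (Preempt, Preempt):", loss_at_preempt ** 2)  # 9
# The equilibrium with the larger product is risk-dominant: (Preempt, Preempt).
```

As I read the highlights, the connection is this: if no meaningful communication with an ETI is possible, there is no way to signal trustworthiness and shift play toward the cooperative equilibrium, so both sides are drawn to the preemptive one.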

The obvious related paper is Bostrom's Where Are They? Why I Hope The Search For Extraterrestrial Life Finds Nothing. This argues not that the search itself would be an x-risk, but that finding advanced life in the universe would (via anthropics and the Fermi paradox) cause us to heavily update toward some x-risk being in our near future. Very interesting.

(Relatedly, Nick discussed this paper during roughly the last third of his interview on the Sam Harris podcast.)
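To make the direction of that update concrete, here is a toy Bayesian calculation over where the "Great Filter" might sit; all the probabilities are illustrative assumptions of mine, not numbers from Bostrom's paper:

```python
# Toy Great Filter update: does finding independently evolved life make a
# future (late) filter more likely? All numbers below are made up.
prior_filter_early = 0.5   # the hard, rare step is behind us (e.g. abiogenesis)
prior_filter_late = 0.5    # the hard step is still ahead of us (a future x-risk)

# If the filter were early, independently evolved life nearby would be unlikely;
# if the filter is late, such life should be comparatively common.
p_find_given_early = 0.01
p_find_given_late = 0.50

p_find = prior_filter_early * p_find_given_early + prior_filter_late * p_find_given_late
posterior_filter_late = prior_filter_late * p_find_given_late / p_find

print(f"P(filter still ahead | we find alien life) = {posterior_filter_late:.2f}")  # ~0.98
```

On these made-up numbers, a single discovery would push the probability of a filter still ahead of us from 0.5 to roughly 0.98, which is the sense in which good news about life elsewhere would be bad news about our own prospects.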

Alexey Turchin has said something about downloading an invasive ASI: http://www.sentientdevelopments.com/2010/09/turchin-seti-at-risk-of-downloading.html

Seems pretty implausible, but not totally out of the question.

The latest version was published as a proper article in 2018:

The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI

Alexey Turchin. Journal of the British Interplanetary Society 71 (2): 71–79 (2018)

Thanks to everybody for your helpful links! I've shared your suggestions with the journalist, who is grateful. :)

I also have an article comparing different ETI-related risks, currently under review at JBIS.

Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence
