A number of people have raised concerns about intentionally trying to make contact with extraterrestrials. Most famously, Stephen Hawking warned that, based on the history of first contacts on Earth, we should fear enslavement, exploitation, or annihilation by more advanced aliens. The METI proposal to beam high-powered signals into space has drawn controversy as well, including criticism from David Brin over METI's failure to consult a broad range of experts. However, I've noticed a distinct lack of consideration of the potential benefits to alien life from such contact.
For instance, while the proposal to transmit the contents of the Google servers might limit our ability to trade in the future, it also potentially provides the aliens with whatever benefits they might derive from our scientific insights or historical experiences. Conversely, if we were to receive a detailed account of an alien society's struggle with climate change on their planet, that data could be invaluable in choosing our own course, not to mention the benefit their scientific advancements could offer.
Indeed, if, as many people seem to think, some extinction-level disaster awaits civilizations once they reach (or slightly surpass) our current level of technology, then such preemptive broadcasts might be the only serious hope of getting at least one sapient species through this Great Filter. While it may be quite unlikely that our transmission would start the chain of records from doomed civilizations that eventually pushes one species past the filter, the returns in utility from such an outcome are so massive that this consideration might well outweigh any effect on humanity in the utility calculus.
Anyway, given the huge potential upside of an intervention that might improve life across the entire galaxy (even at very low probability), I was wondering whether anyone has done even back-of-the-envelope calculations to estimate how funding projects that transmit useful data to extraterrestrials compares in cost-effectiveness to more earthly projects.
Yes, I know any calculation will have to make lots of assumptions, but it would be very informative to learn whether the math works out to be cost-effective only if we assume our information is incredibly valuable, or whether even a very, very small chance of helping an alien species avoid a Great Filter and spread across the galaxy makes it cost-effective.
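To make the shape of such a calculation concrete, here is a minimal sketch of the kind of back-of-the-envelope expected-value comparison I have in mind. Every number below is an invented placeholder, not an estimate anyone has defended; the point is only to show which parameters drive the answer.

```python
# Back-of-the-envelope expected-value comparison.
# All numeric inputs are illustrative assumptions, chosen only to
# demonstrate the structure of the calculation.

def expected_value_per_dollar(p_received, p_helps, value_if_helps, cost):
    """Expected benefit (in arbitrary 'life-year equivalents') per dollar
    spent on a transmission project."""
    return (p_received * p_helps * value_if_helps) / cost

# Hypothetical inputs:
p_received = 1e-6       # chance any civilization ever receives the signal
p_helps = 1e-3          # chance the data helps it past a Great Filter
value_if_helps = 1e20   # life-years gained if a species fills the galaxy
cost = 1e8              # dollars for a sustained high-power transmitter

meti_ev = expected_value_per_dollar(p_received, p_helps, value_if_helps, cost)

# Benchmark: a strong global-health charity at (say) $100 per life-year.
earthly_ev = 1 / 100

print(f"Transmission EV per dollar: {meti_ev:.2e}")
print(f"Earthly EV per dollar:      {earthly_ev:.2e}")
```

With these particular made-up numbers the transmission comes out ahead, but shifting any one exponent by a few orders of magnitude flips the conclusion, which is exactly why the sensitivity of the result to the assumptions is the interesting question.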
Cross-posted at my blog: Rejecting Rationality (doesn't mean what you think it does)
There seems to be something of a consensus among effective altruists that the Rare Earth explanation is the most likely resolution to the Fermi Paradox. I tend to agree, but like you, I think that effective altruists generally underestimate the risk from aliens.
However, I would caution against a few assumptions that you made in the article. The first is the assumption that aliens would be anything like they are shown in the movies -- rogue civilizations restricted to quadrants of the galaxy. As many have pointed out, a civilization with artificial superintelligence would likely be able to colonize the entire galaxy within just a few million years, which means that if aliens with advanced artificial intelligence existed, we probably would have seen evidence of them already. Of course, maybe they're hiding, but now you're running up against Occam's razor.
The second assumption is that we can affect the state of affairs of civilizations at our stage of development. Even granting the generous assumption that we have useful knowledge to share with aliens at our stage of development, it is unlikely that we would ever find aliens at exactly that stage. A civilization just decades behind us would be impossible to contact, lacking radio, and a civilization just centuries more advanced would probably already have artificial intelligence.
You need to weigh the possibility of helping further life that is very alien (with radically different morality -- see http://lesswrong.com/lw/y4/three_worlds_collide_08/ for great examples) against the chance of drawing unwanted attention to ourselves. My intuition is that the scales would tip heavily in favour of staying quiet. Unless there is some reason to believe that morality would somehow be convergent across the universe?
Perhaps you could re-evaluate this question in light of Bostrom's findings in Astronomical Waste? The overriding impacts relate to the risk of extinction of all life (which alien contact could bring about, or perhaps could avert) rather than the opportunity costs of delayed technological development.
Any investment related to ETs is probably not cost-effective, because there are probably no ETs in our Universe (or at least not in our neighborhood).
Here's my take on why: https://www.quora.com/Is-the-Fermi-Paradox-and-the-concept-of-parallel-universes-related-in-any-way/answer/Florent-Berthet.
Also, watch this excellent talk by Stuart Armstrong on the Fermi Paradox: https://www.youtube.com/watch?v=zQTfuI-9jIo&index=663&list=FLxEpt5QlyYGAge0ot24tuug
Many cost-effective interventions probably don't work. You have to look at the probabilities.