METI: What Percentage “Bark”, What Percentage “Bite”?

Ah, METI.

Short for “messaging to extraterrestrial intelligence”, it’s nothing if not extremely enticing.

“Enough of this waiting around! What if everyone is just listening? We’re too afraid of taking risks. If we were trying to find someone here on Earth, we would obviously send signals to them while we look. If we’re really serious about learning the frequency of intelligent life in the galaxy, we’d do the same thing!”

In class last week, we did an Earth-centric exercise about finding a group you know nothing about. Both of our in-class groups immediately jumped to METI as their main strategy. After that exercise, I definitely wished that we could just do METI and see what happens. And, under one condition, I would.

The character Coil from the legendarily long* superhero web-fiction Worm has the power to split the world into two timelines, see how they play out, and collapse the one he doesn’t like as much (he uses this power for evil, of course).

If I were Coil, I would 100% try METI, just to see what happens.

[Image: Coil, from the Worm wiki (http://worm.wikia.com/wiki/Coil)]
If I were this guy, METI wouldn’t scare me

Because I am not Coil, I can’t just collapse a timeline in which humanity accidentally attracts the attention of something far up the feeding chain (which is where we expect almost any concurrent intelligence to sit, since a species able to hear us at all is likely far older and more advanced than we are). I would prefer not to be personally responsible for the extinction of our species, or even (less dramatically) for the complete alteration of our future. Perhaps a global collective of governments (an idea which seems almost as silly as contacting ETI) could claim that authority if the decision were reached together. But even then, it is a decision with the potential to impact all of humanity, echoing through generations. Can we ethically take that step, even with a global consensus?

Let me take a step back here: I am talking about a “perfect” METI, an isotropic signal guaranteed to be detected by any intelligent species it reaches. It’s a philosophical question about an action that could change humanity. As Gertz (2016) argues very sharply, the METI that has been done so far is decidedly not that. Modern METI is more of a publicity stunt with minimal methodology. The focus is often more on the taboo of messaging and on the content of the message than on any reasonable way of tackling the problem (failing to repeat the signal later for the sake of scientific reproducibility, and having no plan to listen for a return message, are the smoking guns for me). One of the more recent communications was a series of songs sent by METI International. The songs weren’t even that good.

I am afraid of “perfect” METI. I am not afraid of current METI. But I think that over time, as technology advances and older technology gets cheaper, “perfect” METI will get closer and closer to being a reality. Some preemptive thought toward the issue is probably justified.
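To see why a guaranteed-detectable isotropic signal is so far from present reality, it helps to remember that an omnidirectional broadcast spreads its power over a sphere, so the flux arriving at distance d is S = P / (4πd²). Here is a quick back-of-the-envelope sketch; the megawatt transmitter power is an assumption picked purely for illustration, not a real link budget.

```python
import math

# Flux from an isotropic transmitter: S = P / (4 * pi * d^2).
# The transmitter power is an illustrative assumption, not a real link budget.
P_watts = 1e6            # a generous megawatt, radiated equally in all directions
ly_in_m = 9.461e15       # one light-year in meters

for d_ly in (10, 100, 1000):
    d = d_ly * ly_in_m
    flux = P_watts / (4 * math.pi * d**2)   # W/m^2 arriving at the receiver
    print(f"{d_ly:>4} ly: {flux:.2e} W/m^2")
```

The flux thins by a factor of 100 for every factor of 10 in distance, which is why real transmissions use narrow beams instead. And a beam is the opposite of “perfect” METI: it only reaches whoever happens to be sitting inside it when it sweeps past.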

I wanted to get the opinion of one of my friends, who asked to remain anonymous. They are a research analyst at the Open Philanthropy Project and spend a lot of time thinking about how to quantify ethical quandaries and how to maximize the amount of “good” that a given pool of resources and actions can create. They shared their thoughts on METI with me over text, so their responses have been edited for clarity.

 

“My instincts lean towards [METI] sounding risky if we don’t see any evidence of other civilizations intentionally doing the same. […] I would want to make SETI much better first and wait until we’ve explored a decent chunk of the sky. If, after we do that, it seems like we’re hearing nothing that seems like intentional loud messaging from other places I would think that’s strong evidence that either a) no one’s around, so METI wouldn’t be useful or b) somehow everyone [simultaneously] decided not to, so maybe it’s dangerous.”

I also asked them whether they thought it was possible to construct a cost-benefit calculation to decide if METI is a good idea.

“I think there probably would be a way to do that but I’m not sure I have enough context to do the analysis. [Naively], the main benefit is how much it speeds up our search for intelligence and the main cost is risk to Earth.”

This was interesting to me, because it suggests the analysis might be attempted without falling back on infinite goods and bads. That’s an idea, at least! Any treatment of it would probably produce results on a huge sliding scale, à la the Drake equation, because of the uncertainties involved, but it would be worthwhile to at least try to isolate the factors; a toy version of what that might look like is sketched below.
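Here is a minimal sketch of that kind of calculation, following my friend’s framing (the benefit is a faster search, the cost is risk to Earth). To be clear, every number in it is an assumption I invented for illustration: the probability ranges, the arbitrary “value” units, all of it. The machinery is trivial; the spread of the answers is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

def log_uniform(low, high, size):
    """Draw values whose order of magnitude is uniformly uncertain."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

# Every range below is a made-up placeholder, not a researched estimate.
p_detect  = log_uniform(1e-8, 1e-1, N)   # P(someone detects our signal)
p_hostile = log_uniform(1e-4, 1.0,  N)   # P(detection ends badly for us)
benefit   = log_uniform(1e0,  1e4,  N)   # value of confirmed contact (arbitrary units)
harm      = log_uniform(1e2,  1e8,  N)   # cost of a hostile outcome (same units)

# Expected net value of transmitting, for each sampled world-state
net = p_detect * ((1 - p_hostile) * benefit - p_hostile * harm)

print(f"median net value:      {np.median(net):+.3g}")
print(f"5th-95th percentile:   {np.percentile(net, 5):+.3g} to {np.percentile(net, 95):+.3g}")
print(f"fraction net-positive: {(net > 0).mean():.1%}")
```

The specific outputs mean nothing, but the exercise makes the structure of the problem explicit, and the log-uniform ranges guarantee exactly the kind of orders-of-magnitude sliding scale that the Drake equation is famous for.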

*It’s the length of A Song of Ice and Fire