
BLC1: A candidate signal around Proxima

So, the media is abuzz about BLC1, a candidate signal around Proxima. I’ve been all over Twitter about this, so I’m collecting my thoughts here.

But first, a disclaimer: as a member of the Breakthrough Listen Advisory Board and the current PhD adviser of a Breakthrough Listen team member, I have a little more information than the public, but I am not a BL team member and have not seen the data. My comments here are purely general and, while they can provide context for what’s going on, they do not actually add anything to what’s known about the actual candidate signal beyond what is already in the press.

First, how does radio SETI work?

The Breakthrough Listen team uses radio telescopes to look for signs of radio technology in the form of (among other things) narrowband radio signals of the sort that can only be caused by technology. This is the sort of thing they’re looking for:

This is not the data from Proxima, it is an example.

This plot, from Howard Isaacson’s paper on the topic, shows the actual signal of extraterrestrial technology beaming a radio signal to the Earth. In this case, it’s not aliens: it’s Voyager 2.

The vertical axis is time, going up.  Each bin is 10 seconds or so.  The horizontal axis is frequency, and each bin is a few Hz.  Note a few things about this signal:

  1. At any given moment, almost all of the power is concentrated into a single frequency bin. This is how we know the signal must be artificial. Radio signals from space come from electrons or atoms or molecules, which always have some temperature. They also tend to come from large clouds of gas, which have lots of internal motions. Both thermal and bulk motions generate Doppler shifts that blur out the frequencies they radiate at. Even the narrowest masers, like water or cyclotron masers, must have widths 4 orders of magnitude broader than the signal above.
  2. The signal is not perfectly narrow band.  There are two faint “sidelobes” visible on either side (there are bigger ones, too, outside the plot). This is due to signal modulation, illustrating that the signal contains information—it is not a pure “dialtone” or “doorbell”.
  3. The signal’s frequency is shifting towards lower frequencies as time increases (upwards). This is how we know the signal is not from Earth: the telescope sits on the rotating Earth, so it is first moving towards the source (as the source rises), then moving away (as the source sets). This creates an ever-increasing redshift, making the signal “drift” to lower and lower frequencies during the observation. A source on the surface of the Earth would not be moving with respect to the telescope, and so would show no such Doppler shift.
    Note that this shift is the change in the Doppler shift—we can’t calculate the total Doppler shift without knowing the transmission frequency.
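The size of this rotational drift is easy to estimate: the line-of-sight acceleration of a telescope carried by the Earth’s rotation is at most ω²R cos(latitude), and the drift rate is that acceleration times f/c. A minimal sketch (the 1 GHz signal frequency and the latitude are just illustrative choices):

```python
import math

# Upper bound on the Doppler drift of a fixed-frequency signal from space,
# as seen by a telescope carried around by the Earth's rotation.
OMEGA = 7.2921e-5       # Earth's sidereal rotation rate, rad/s
R_EARTH = 6.371e6       # Earth's mean radius, m
C = 2.99792458e8        # speed of light, m/s

def max_rotational_drift(f_hz, latitude_deg):
    """Maximum drift rate (Hz/s): the telescope's centripetal
    acceleration, omega^2 * R * cos(latitude), projected fully
    onto the line of sight, times f/c."""
    a_max = OMEGA**2 * R_EARTH * math.cos(math.radians(latitude_deg))
    return f_hz * a_max / C

# Illustrative numbers: a 1 GHz signal seen from latitude -33 degrees
# (roughly Parkes); the drift comes out to order a tenth of a Hz per second.
print(f"{max_rotational_drift(1e9, -33.0):.3f} Hz/s")
```

So for GHz-class signals, an Earth-bound receiver expects drifts of order 0.1 Hz/s, which is why few-Hz channels easily resolve the drift over minutes of integration.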

The problem is that the spectrum is filled with these sorts of signals. Every now and then one is from an interplanetary probe, but most are from Earth-orbiting satellites and terrestrial sources. Breakthrough Listen employs sophisticated software that sorts through the millions of signals they can detect and finds the ones from space. The way the team rules out signals from anything other than their celestial target is by nodding the telescope. If the signal is from something on Earth, then they’ll see it no matter which way the telescope is pointing. If it’s from space, it will only appear when they are pointing at the target. For instance, here’s a pernicious false positive from Emilio Enriquez’s paper on the topic:

This is not the data from Proxima, it is an example.

This signal is apparently modulated in a nasty way: it was strongly detected only when they were pointed at the star HIP 65352 (the first, third, and fifth rows) but not when they pointed away (in the second and fourth rows). It also has a slight drift: the signal has shifted by a few Hz by the end of the series of observations.

But, you’ll notice, the signal is also present in the off pointings. It’s very weak then, below their threshold I suspect, which is why the algorithm flagged this as interesting. But if it were really coming from HIP 65352 there’s no way it could be present in those off pointings. This is probably something terrestrial with a poorly stabilized oscillator putting power into the sidelobes of the telescope or something. The exact nature of this signal is not important—all that matters for SETI is that it is not from HIP 65352.

And the radio spectrum is filled with these sorts of false positives! Sifting through them all is hard, and takes a lot of time, and has trained the team to get very good at identifying radio frequency interference. Indeed, no signal has ever survived the four tests I described above after human inspection: narrow band, drifting signal, only present in the on pointings, never present in the off pointings.
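Of those four tests, the on/off cadence check is the easiest to write down: a candidate must be detected in every “on” pointing and in no “off” pointing. A toy sketch of that accept/reject rule (the example detection patterns are invented for illustration):

```python
def passes_on_off_test(detections):
    """detections: list of (pointing, detected) pairs, where pointing is
    'ON' (telescope at the target) or 'OFF' (nodded away).  A genuine
    celestial candidate must appear in every ON pointing and never in
    an OFF pointing."""
    on = [seen for pointing, seen in detections if pointing == 'ON']
    off = [seen for pointing, seen in detections if pointing == 'OFF']
    return all(on) and not any(off)

# Invented examples: RFI leaking into the OFF pointings is rejected,
# while a signal present only in the ON pointings survives.
rfi = [('ON', True), ('OFF', True), ('ON', True), ('OFF', True), ('ON', True)]
candidate = [('ON', True), ('OFF', False), ('ON', True), ('OFF', False), ('ON', True)]
print(passes_on_off_test(rfi))        # False
print(passes_on_off_test(candidate))  # True
```

In practice the hard part is the detection step itself (thresholds, drift searches, sidelobe leakage); this rule is just the final filter.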

Until now!

What did the team find?

The Breakthrough Listen project uses the Parkes radio telescope in Australia as one of its tools to search for technosignatures. In this case, they were “piggybacking” on observations of Proxima, the nearest star to Earth, which were looking for radio emissions from stellar flares. These were long stares for many hours per day, for many days. The signals they were looking for are broadband, with complex frequency and temporal structure—basically, if you tuned into it with a radio receiver like the ones we use for FM or AM transmission, it would be present at every frequency, and sound like very complicated static.

But the equipment on the telescope can also be used for SETI, and so the BL team was using the telescope “commensally” to do a SETI experiment simultaneously to the flare study.

And in these data, a signal has apparently survived all of their tests!

Now, this does not mean it’s aliens, as the team has pointed out. It means they have, for the first time, a signal that can’t be easily ruled out as RFI. It’s probably RFI of some pernicious nature, but we don’t know what. Pete Worden of the Breakthrough Listen team says it is “99.9% likely” to be RFI.

We know the signal was present for around three hours, present in five 30-minute “on” pointings and not at all in the interspersed “off” pointings. We also know it has a positive drift rate, it appears at 982.002 MHz, and it appears to be unmodulated.

Other than that, we don’t know much!  But there are some things we can conclude based on this little bit of information.

Why isn’t the team releasing more information?

I cannot speak for the team but I know they’re committed to transparency and scientific rigor. They also think hard about how to convey results to the media, and are careful about things like press releases and peer review of results.

Unfortunately, this news leaked out before the team had finished their analysis, so we’re left to read tea leaves and parse vague newspaper statements instead of reading their paper on the topic (which does not exist because they’re not done with their analysis!)

Someone in the “astronomical community” (we don’t know if they are even a member of the team) leaked the story to the Guardian. Their hand having been forced, the team then gave interviews to Scientific American and NatGeo with some more details, emphasizing that the signal is probably RFI.

Now, I’m pretty grumpy about this. SETI has extensive post-detection protocols that were not followed by the leaker, exactly to avoid this sort of situation. Especially since the team was definitely going to announce this, there’s no need for the leak.

But really what I’m grumpy about is that the team did not get to announce this on their own terms in a way that made clear what was going on. Instead we have lots of speculation and questions that not even the team can answer (because they haven’t finished their analysis yet!)

So what are the odds it’s aliens?

As Pete Worden said in the SciAm article:

“The most likely thing is that it’s some human cause,” says Pete Worden, executive director of the Breakthrough Initiatives. “And when I say most likely, it’s like 99.9 [percent].”

What should we make of the fact that the drift rate is positive? Isn’t that the opposite of what we expect?

It’s unclear how to interpret this.

The fact that it drifts at all is consistent with a non-terrestrial origin. The fact that it drifts more than you’d expect from the motion of Parkes by itself means that the source is either “chirping” its signal to go up in frequency, or that it is not correcting for its own acceleration, and it is accelerating towards the Earth (not directly towards the Earth, like it’s coming for us or something, but just that we’re in the same hemisphere of its sky as the direction it’s accelerating).

Some SETI practitioners expect that a signal would be non-drifting in the frame of the Solar System barycenter, meaning that after we correct for our motion, the signal would have just one frequency. This defies that expectation.

It also can’t be from the rotation of a planet that hosts the transmitter—those shifts would also be negative.  But it could be from the orbital motion of a planet, or from a free-floating transmitter, or from a transmitter on a moon.

The most likely explanation is probably that it is a source on the surface of the Earth whose frequency is, for whatever reason, very slowly changing.

Until we know more about the drift, though, there’s not much we can say.

Are we sure it’s coming from the direction of Proxima?

Not completely. If it’s ground-based interference, it’s definitely not coming from that direction. If it’s really from space, it could actually be coming from any place in a 16 arcminute circle around Proxima—about half the width of the full moon.
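That 16 arcminute figure is roughly the diffraction-limited beam of the dish at this frequency: a telescope of diameter D observing at wavelength λ cannot localize a source much better than λ/D. A quick sketch (the 64 m Parkes dish diameter is the only assumption beyond the reported frequency):

```python
import math

C = 2.99792458e8        # speed of light, m/s
FREQ = 982.002e6        # reported signal frequency, Hz
D_PARKES = 64.0         # Parkes dish diameter, m (assumed here)

wavelength = C / FREQ                        # ~0.31 m
beam_rad = wavelength / D_PARKES             # diffraction-limited beam scale
beam_arcmin = math.degrees(beam_rad) * 60.0
print(f"beam scale ~ {beam_arcmin:.0f} arcmin")
```

That lambda-over-D scale comes out near 16 arcminutes, consistent with the localization circle quoted above.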

What do we make of the frequency?

I’m no expert, but apparently 982.002 MHz is in a relatively unused part of the radio spectrum, where there is not a lot of radio frequency interference. It is in or near what radio astronomers call L band (I guess it’s technically UHF because it’s below 1 GHz), which has long been favored as a place to do SETI because it is in the broad minimum between noise and opacity from the electrons throughout the Galaxy and Earth’s ionosphere on one side, and that from water and other molecules in Earth’s atmosphere, and the cosmic microwave background on the other.

From seti.net. The x-axis is in GHz, so 982 MHz is just to the left of the 1.

It’s also not far from the “water hole” favored for a long time as the place to look in radio SETI.

Some have pointed out that the signal is suspiciously close to an integer value of MHz, which would argue for a terrestrial origin, since aliens presumably would not use Hz as a standard unit. (The small deviation from an exact number of MHz is also consistent with the imperfect oscillators in typical radio equipment.)

The articles mention Proxima b. Could the signal be coming from that planet?

Maybe? Until we know more we really can’t say. The team itself does not appear to have favored this idea (it seems to me to have come from the authors of the newspaper articles) and indeed they have privately communicated to me that they have not analyzed this possibility because they’re focused on the RFI origin right now.

We also don’t know the orbital inclination or rotational properties of Proxima b, so we don’t know what acceleration signal it would provide. Without a good model for that planet and without knowing what the team has seen, we can only speculate.

That said, if the signal repeats and turns out to be from Proxima, and if the signal is not being inherently modulated, then we could use the drifts to infer the accelerations of the transmitter, and possibly determine whether it’s on the surface of a planet, and determine the rotational and orbital period of that planet.

But seriously, isn’t it horribly unlikely that of all places the first signal we’d find is from the nearest star, Proxima?

The original Guardian article had a misguided take on this one:

“The chances against this being an artificial signal from Proxima Centauri seem staggering,” said Lewis Dartnell, an astrobiologist and professor of science communication at the University of Westminster. “We’ve been looking for alien life for so long now and the idea that it could turn out to be on our front doorstep, in the very next star system, is piling improbabilities upon improbabilities.

“If there is intelligent life there, it would almost certainly have spread much more widely across the galaxy. The chances of the only two civilisations in the entire galaxy happening to be neighbours, among 400bn stars, absolutely stretches the bounds of rationality.”

This is wrong, because it’s based on a lot of unexamined priors and assumptions.

First, it assumes that signals of this sort must be very rare coming from only a handful of stars in the Galaxy. While that is certainly very plausible, the idea that nearly every star might have some sort of technology around it is older than SETI itself! Indeed, it is at the heart of the Fermi Paradox, which asks: since interstellar spaceflight is possible with ordinary rockets, and since the Galaxy can be populated by such rockets in less time than it’s been around, why aren’t aliens here in the Solar System right now?

One answer is “they don’t exist.”  Another is “they don’t spread around very much”. Another is “they are most places, but avoid the Solar System for some reason, perhaps because life is present here.” Another is “they have been here in the Solar System but aren’t here now.”  Another is “there is alien technology in the Solar System but we haven’t noticed it”.

Dartnell’s “improbabilities upon improbabilities” presumes that the second answer above is correct, but there is plenty of heritage in the SETI literature that explores the other answers, as well.

But even if it’s true that interstellar travel of creatures is rare and Dartnell is right that it’s therefore unlikely that Proxima is inhabited, there is still a good argument to be made that Proxima is the most likely star to send us signals—perhaps even the only such star!

If there exists a Galactic community, either a diaspora or a lot of stars with technological life, or even just a single planet with life that has sent its technology everywhere, then it might set up a communication network. This is, after all, what SETI hopes to find.

But when you want to communicate with many places over very large distances, point-to-point communication is a poor way to go about it. When you call your friend on your mobile phone, your phones aren’t sending radio signals to each other. That would require way too much power and complexity. Instead, your phone sends its signal to the nearest cell tower. This makes the power requirements of your phone (and the tower) much more reasonable. This tower then sends the signal, via many means, on a complex route through many central nodes until it arrives at your friend’s nearest cell tower, and they get the signal that way.

By this logic, Proxima is the most likely place for the “last mile” portion of any message to the Solar System. Indeed, it may be the only star transmitting to us!

And note that this scheme does not assume that the message is meant for us—the Solar System may just be one stop in a network.

But if they were trying to get our attention, then they need to do something we would find obvious to look for, which means they’d have to guess which stars we’d guess to search for their signal. There are a lot of stars to choose from—which is the most obvious place for us to look?  It’s hard to argue for a better target than Proxima.

Now, this could all be wrong, but the point is we don’t know what sort of luminosity function or spatial distribution transmitters might have, and it’s easy to construct plausible scenarios where Proxima or some other very nearby star is the first one we’d detect.

So what could it be if not aliens?

I don’t really know. I’m not an expert in RFI, and even if I were, I haven’t seen the data.

Jonathan McDowell and I have had some fun on Twitter exploring an interesting possibility:

There’s a special kind of orbit that takes satellites way out to ±63 degrees declination, where they sort of hang at apogee for a while in a long elliptical orbit. Such satellites would also have a positive drift rate, since they’re accelerating towards the Earth. Jonathan, who keeps careful track of everything artificial in space (literally), has been trying to see if any actual satellite might do this in the direction of Proxima, but he didn’t find any in his database.

So what’s next?

Mainly, we wait for the team to finish their work and present their results.

Things that I imagine the team are and will be doing include:

  • Pointing Parkes at Proxima a lot to see if the signal repeats! Unfortunately, there are not a lot of facilities in the Southern Hemisphere that can do this work. MeerKAT may be up to the task soon, but is hard to get time on. Depending on the strength of the signal it may be possible to point smaller telescopes at Proxima to search for it as well.
  • Scouring all of their data for other examples of this signal. If it’s RFI, there’s a good chance they’ve seen it before when not pointing at Proxima.
  • Searching carefully for other signals from Proxima. If there is one signal, there may be many more.
  • Considering lots of sources of RFI—what devices transmit at 982 MHz? Could any satellite or train of satellites stay in the Parkes beam for 3 hours? Could it be a hoax?

If it never repeats and if the team can’t find a good RFI explanation, then I’m afraid it will be another Wow! Signal: an intriguing “Maybe?” that we’ll just have to wonder about forever. We can’t study it if it’s so ephemeral that we never get a good look at it again!

But mostly, we talk about how cool SETI is and we wait!

Where should we look for life?

Where should NASA look for biosignatures?

When I give talks to the public (and to technical audiences sometimes) I often get asked whether we might be focusing our search for life too narrowly. Mightn’t life be silicon-based, or in other string theory dimensions, or under the ice sheets of distant moons?

Indeed, David Stevenson had a nice commentary in Physics Today on the topic. He called it “The habitability mantra: Hunting the Snark” and he complained that the focus on the Habitable Zone around nearby stars was “perhaps the most distressing example of limited imagination” because it excluded searches for life that might be found elsewhere, noting that NASA planetary science missions spend most of their efforts outside of the Sun’s Habitable Zone!

I rebutted him, or tried to, by pointing out that we focus on the Habitable Zone not because it is the only place we imagine life to be, but because we need to define the parameters of the search and that’s a good place to start—following the only lead we have in the hunt.

This issue is becoming more salient as NASA plans for the next coronagraphic missions like HabEx and LUVOIR, and especially as JWST prioritizes which exoplanets to attempt to search for life via transmission spectroscopy. Which stars and planets should we look at?

Now, there is an approach that says we should not prioritize targets based on potential biosignatures, since we don’t really know where to find those; life elsewhere might not be Earth-like. Instead, we should look at a wide range of planets and let the data tell us where the life is, or at least whether our approach to the Habitable Zone has evidence to support its applicability to biosignatures. In particular, there is the feeling that there won’t be all that many good targets to observe, so we won’t really have the luxury of choosing the “best” candidates to find life.

I do like this approach, but note that one still needs to prioritize the planets one thinks will have life. For instance, advocates of this approach typically do not suggest we put potential planets orbiting pulsars, red giants, or white dwarfs on the list. Why? Because any planets we would be able to image around those stars are so far from our expectations of places to look for life that it doesn’t really feel like astrobiology. They might still be good targets, but are generally not “in bounds” for comparative planetology with life in mind.

In other words, even those advocating casting a wide net come with some priors regarding where good places to look are. We should of course be prepared to be surprised, but if the goal is to find life then our resource allocation should at least roughly track our priors on where we will find it.

Plus, if we’re going to be spending billions or tens of billions of dollars on a space mission, we had better have a quantitative idea of what we’re doing.  It’s not enough to have a squishy sort of feeling that G and K stars are good targets but giants and B stars are bad—we should be able to quantify where that sense is coming from and formalize it.  Then we can answer important questions like: should mid-F stars be on the target lists?  What about subgiants?

To that end, Noah Tuchow here at Penn State has been working on how to do that quantification. Noah starts by pointing out that the reason we think giants are bad targets is that their planets have been rapidly heating recently. Planets that used to be temperate are now very hot, and those that were once frozen are now temperate, and these changes have happened on what, for Earth, are timescales faster than big evolutionary timescales. We don’t expect biosignatures to have had time to arise in their new environments on any planets orbiting a giant star, because that phase doesn’t last very long and is dynamic.

Similarly, we think planets orbiting very young stars aren’t great targets because life has not had time to develop. Indeed, biosignatures did not arise on Earth until it was a couple of billion years old, and oxygen was not noticeable until a few hundred million years ago. Surely, then, older planets will have a higher chance of having detectable biosignatures than younger planets, right?

So we can bound the problem: we should favor planets that have been in the Habitable Zone longer.  Sure, planets outside of the Habitable Zone may have life, but unless it’s surface life that has had time to change its atmosphere, we’ll never find it with LUVOIR or HabEx, anyway.

Noah defines the habitable duration of a planet to be how long it has been in the Habitable Zone.  One tricky bit is that as stars age they get slightly brighter, so planets that used to be in the Habitable Zone (like, for instance, Venus) eventually get too hot and leave it. Planets like Mars that start outside the Habitable Zone can enter it, but there is considerable skepticism in some quarters that they could ever thaw out because their ice would reflect so much sunlight away.  Those are called “Cold Start” planets (sorry planet formation people!) and we need to decide if we’ll prioritize them in our searches or not.

Then, you can apply your favorite planet abundance numbers (we don’t really know how many terrestrial planets to expect around these stars) and your favorite model of biosignature genesis. Do you think that life has a fixed probability per unit time of arising and creating biosignatures? We have a probability distribution function for that.  Do you think all planets older than 2 Gyr are basically equally likely to host life? We’ve got one for that, too.  Pick your model for planets and abiogenesis and biosignatures, and Noah’s approach allows you to compute which stars are most likely to host life.

Then, given such a model, you can turn a habitable duration into a fraction of stars with detectable biosignatures. Now, this number is certainly wrong—we don’t know how often life will arise! But if we compare two stars’ numbers, then this uncertainty cancels, and the relative likelihood of hosting life we compute is robust. Very young planets have less chance of hosting life than very old ones, regardless of what the overall rate of abiogenesis is.
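As a toy illustration of why the unknown rate cancels (hypothetical numbers, not Noah’s actual model): suppose biosignatures arise at some fixed but unknown rate per Gyr of habitable duration, so the probability of seeing them after a habitable duration t is 1 − exp(−rate × t). The ratio of that probability between two stars barely depends on the rate:

```python
import math

def p_biosignature(t_hab_gyr, rate_per_gyr):
    """Toy model (hypothetical, not the paper's actual one): biosignatures
    arise at a fixed but unknown rate per Gyr of habitable duration."""
    return 1.0 - math.exp(-rate_per_gyr * t_hab_gyr)

# Two hypothetical targets: one in the Habitable Zone for 6 Gyr, one for 1 Gyr.
for rate in (0.01, 0.001, 0.0001):   # we have no idea what this rate really is
    ratio = p_biosignature(6.0, rate) / p_biosignature(1.0, rate)
    print(f"rate = {rate}: relative yield = {ratio:.2f}")
# The absolute probabilities span orders of magnitude across these rates,
# but the relative yield stays close to 6: the unknown rate cancels.
```

This is exactly the sense in which relative comparisons between targets are robust even though the absolute abiogenesis rate is unconstrained.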

Noah calls that number the relative biosignature yield and he applies some reasonable guesses for planet and life occurrence to compare stars. It turns out things are pretty sensitive to your assumptions!

The plots above show the relative yield for different assumptions about life and planet occurrence. Red means a higher yield than blue (because these are relative yields, you can compare colors within a plot but not between plots).

The bottom row shows what you get if you just target any old Habitable Zone of any old star without worrying about how long a given planet has been in there (that is, how fast the star is evolving).  The all red plot on the right shows the situation if planets are logarithmically spaced around their stars (Bode’s Law, basically).  In that case, all stars are roughly equally good targets.  But look what happens if planets are more evenly spaced  (like we see in tightly packed systems): in that case you should favor F stars over K stars, and by a lot!

Now look at the top row, which says the longer planets have been in the Habitable Zone the better. Suddenly, old, low mass stars are much better targets than all of those younger F stars (which makes sense), but whether you should favor F or G stars depends on the underlying planet distribution.

Finally, the middle row shows what happens if you throw out the Cold Start planets—surprisingly, it’s not a huge difference, but it will matter at the margins.

Now you might worry that this is false precision since we don’t actually know how life arises, so our model of abiogenesis that made these plots is wrong, so it’s all just GIGO (Garbage In, Garbage Out). But that’s throwing out the baby with the bathwater.

This approach is really a way of translating the assumptions we have already been making tacitly (“don’t look at giant stars!”) into quantitative decisions about mission priorities. It will let us quantitatively determine whether, for instance, mid-F stars or subgiants are within our uncertainties about these things, or if our priors on them hosting biosignatures are really small. It will also tell us how well we have to know a star’s properties to say something about our expectations that it hosts life.

Models like this will also be important for interpreting detections and null results. If we do/don’t find something, what does that mean?  Without a model, we can’t interpret those results in terms of our understanding of life in the universe.

In other words, without a model, all of our astrobiologically-based target selection and results interpretations for these missions is just hand-waving, and Noah has the model.

You can find the paper here. Take a look! I think it’s very nicely written, and really lays things out well.


Planck Frequencies as Schelling Points in SETI

Early when I was learning about SETI I was reading about “magic frequencies” and the “Water Hole.”

Back in the early days of radio SETI, instrumental bandwidths were pretty narrow, so Frank Drake and others had to guess what frequencies to observe at to find deliberate signals. One wants a high frequency to avoid interference from the Earth’s ionosphere and background noise from the Galaxy. But one also wants a low frequency to avoid lots of emission from Earth’s atmosphere. There is a “sweet spot” between these problems, in the range of 1–10 GHz:

Figure showing background levels as a function of frequency, with a minimum between 1-10 GHz

From seti.net

In this broad minimum are two famous and strong astrophysical emission lines: the spin-flip hyperfine transition of hydrogen (the “21 cm line”), and the emission from the hydroxyl radical (OH). Since these two species combine to form water, and since water is essential to life-as-we-know-it, the region between these two lines is known as the “water hole”. The name is also a nice pun on the watering holes where savannah animals (or barflies) gather. As Barney Oliver put it: “Where should we meet? The watering hole!”

Trying to determine exactly which frequency in the Water Hole to search became a game of guessing “magic frequencies” (I think the term is due to Jill Tarter, though I could be wrong) to tune one’s telescope to.

When I was learning about all of this, I was reading the Wikipedia article on the Water Hole and I saw this intriguing link:

Screenshot of the Wikipedia page on the Water Hole showing a link to "Schelling Points"

Clicking on that last link sent me down a nifty rabbit hole and eventually got Schelling points introduced into the SETI literature.

I wrote a while back on Schelling points and their relevance to SETI. Go there for the full story, but briefly: Thomas Schelling was an economist and game theorist who considered games where players must cooperate (everyone wins or everyone loses) but cannot communicate. His example was finding someone who is also looking for you in New York City. The prospects for winning such a game seem hopeless, but Schelling’s insight was that it is actually pretty easy if you can correctly guess the other person’s strategy, since some strategies are clearly bad (a random search) and others are plausibly good (go to a major landmark).

These optimal strategies are characterized by what we now call Schelling points: in New York City, the Empire State Building at noon is a good one.

Amazingly, ABC News Primetime got people to actually play that game and they won! In hours!

When introducing this idea to the world in his book The Strategy of Conflict, Schelling wrote:

[A good example] is meeting on the same radio frequency with whoever may be signaling us from outer space. “At what frequency shall we look? A long spectrum search for a weak signal of unknown frequency is difficult.  But, just in the most favored radio region there lies a unique, objective standard of frequency, which must be known to every observer in the universe: the outstanding radio emission line at 1420 megacycles of neutral hydrogen” (Giuseppe Cocconi and Philip Morrison, Nature, Sep. 19, 1959, pp. 844-846). The reasoning is amplified by John Lear: “Any astronomer on earth would say ‘Why, 1420 megacycles of course! That’s the characteristic radio emission line of neutral hydrogen.  Hydrogen being the most plentiful element beyond the earth, our neighbors would expect it to be looked for even by tyros in astronomy’” (“The Search for Intelligent Life on Other Planets,” Saturday Review, Jan. 2, 1960, pp. 39-43). What signal to look for? Cocconi and Morrison suggest a sequence of small prime numbers of pulses, or simple arithmetic sums.

So the water hole is a Schelling point!  Or it could be—we need to guess the mind of ET and ask: what frequencies would they guess we would guess, and perhaps water and ionospheres and radio just aren’t their thing?  Schelling’s players can win the game only because they have a common cultural heritage and know about the Empire State Building being famous. If we played that game with aliens, we’d probably lose.

So what do we know we have in common with aliens?  Max Planck had an idea.

Image of Max Planck

Max Planck

Max Planck is one of the most important figures in modern physics, famous for many key insights but among them his constant, h, and his eponymous “natural units.”  Planck realized that there is a fundamental length scale of the universe, set by the nature of space and time, gravity, and quantum mechanics. We call it the “Planck length” and it is given by:

\[ \ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m} \]

Very roughly and heuristically, it is the wavelength of a photon so energetic that its wavelength equals its Schwarzschild radius (that is, a photon so dense with energy that it would be a black hole). Today, we recognize this as the scale on which quantum mechanics and General Relativity give different answers or become mutually incompatible. Dividing by the speed of light, one defines a fundamental timescale of the universe, whose inverse could be interpreted as an observing frequency.

In his famous paper on the topic written in 1900, he wrote:

It is interesting to note that with the help of the [above constants] it is possible to introduce units…which…remain meaningful for all times and also for extraterrestrial and non-human cultures, and therefore can be understood as ’natural units’

and that

…these units keep their values as long as the laws of gravitation, the speed of light in vacuum, and the two laws of thermodynamics hold; therefore they must, when measured by other intelligences with different methods, always yield the same.

So he imagined that these units would be known to extraterrestrial physicists, unlike, say, kilograms and seconds which are completely arbitrary and anthropocentric. Since what we’re looking for is a frequency that we know they would know that we know, this seems like a good Schelling point!

The problem is that the Planck time is way way too short—photons at those frequencies are little black holes (or something—we don’t have physics for it) so don’t exist (or can’t be produced, anyway.)  So how could we use them?

Well, there is another fundamental physical constant, another fundamental unit in nature: the fundamental unit of charge. Aliens would have to know that!  Combining the charge of the electron with the speed of light and Planck’s constant h, one gets the fine structure constant:

\alpha = \frac{2\pi e^2}{hc}

which has a value near 1/137 and is dimensionless.  This is a constant of nature that we do not have a way to calculate purely mathematically or from first principles—it measures the degree to which electrons “couple” to photons, and so governs how all of electromagnetism and light works.

So, can we get another frequency by multiplying the inverse of the Planck time by the fine structure constant? That frequency turns out to still be too high to observe, but there’s no reason we couldn’t keep multiplying by the fine structure constant until we get a useful frequency. This process actually generates a large number of frequencies—a frequency “comb” with many “teeth.”  Some of the teeth land in interesting places: 610.3 nm in the optical and 26.16 GHz in the microwave, for instance, are both easily observed from Earth and are in fact the sorts of bands we might use for communication.  These are Planck’s Schelling points!
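To make this concrete, here is a quick sketch in Python (my own back-of-the-envelope numbers using CODATA constants, not anything from the paper). It assumes one particular convention: converting the inverse of the ħ-based Planck time to an ordinary, cycles-per-second frequency by dividing by 2π.

```python
import math

# Physical constants (SI, CODATA values)
hbar  = 1.054571817e-34    # reduced Planck constant, J s
G     = 6.67430e-11        # Newton's constant, m^3 kg^-1 s^-2
c     = 2.99792458e8       # speed of light, m/s
alpha = 7.2973525693e-3    # fine structure constant, ~1/137

# Planck time, ~5.39e-44 s
t_planck = math.sqrt(hbar * G / c**5)

# One choice of convention: an ordinary (cycles-per-second) frequency,
# dividing out the 2*pi -- exactly the kind of arbitrary choice
# discussed in the text.
f_planck = 1.0 / (2.0 * math.pi * t_planck)

def tooth(n):
    """The n-th tooth of the Planck frequency comb: f_P * alpha^n."""
    return f_planck * alpha**n

# Two teeth land in convenient bands:
optical_nm    = 1e9 * c / tooth(13)   # wavelength of the n=13 tooth, ~610 nm
microwave_GHz = tooth(15) / 1e9       # frequency of the n=15 tooth, ~26 GHz
```

With this convention the n = 13 tooth lands near 610 nm and the n = 15 tooth near 26 GHz; pick another convention and the teeth move, which is exactly the ambiguity of the 2π.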

But there are some caveats here.  Are Planck’s units really that universal?  Look at those equations above: they both have a factor of 2π in them.  Where did that come from?

Well, Planck liked to define things in terms of angular frequency, meaning the time it takes the phase of an oscillator to change by 1 radian. The 2π goes away if you choose to define frequency in terms of cycles per unit time (as astronomers do for light or engineers do for AC electricity).  It’s arbitrary!  So, we could also define both constants without the 2π. Maybe aliens like it better that way?  So we can build a frequency comb that way, and that’s another potential Schelling point.

Also, maybe we’re overcomplicating things.  If we’re going to choose a base to raise to a power, then maybe the fine structure constant isn’t the natural one to use—any mathematician will tell you that the natural base to use is the base of the natural logarithm, e! (A different e from the charge above, though.) Then you can do it all with only 3 physical constants instead of 4, so maybe it’s more obvious? So that’s another potential Schelling point.

Or maybe you want to make sure the frequencies have physical meaning akin to the 21 cm line Schelling mentioned, and as long as you’re thinking about light you might like to use the mass of the electron instead of the gravitational constant G.  In that case you could define your base unit of energy as half of the rest energy of the electron and use the fine structure constant to make your comb.  It seems a bit arbitrary at first, but the energies defined by that comb are

\frac{m_e c^2}{2}\,\alpha^n

and when n=2 we have an important unit in physics, the Rydberg, related to the energy it takes to ionize hydrogen (in reality there’s a small correction factor because protons are not infinitely massive, but this is the fundamental unit).  This unit was known even to classical physics and so is a very “natural” way to define a universal energy or frequency.  So there’s yet another frequency comb.
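As a quick sanity check (my own arithmetic, using the CODATA electron rest energy), the n = 2 tooth of this electron-mass comb really does reproduce the Rydberg:

```python
# Electron rest energy, m_e c^2, in eV (CODATA)
electron_rest_energy_eV = 510998.95
alpha = 7.2973525693e-3   # fine structure constant

def comb_energy_eV(n):
    """Energy of the n-th tooth of the comb: (m_e c^2 / 2) * alpha^n."""
    return (electron_rest_energy_eV / 2.0) * alpha**n

# n = 2 gives the Rydberg, ~13.6 eV: the hydrogen ionization scale
rydberg_eV = comb_energy_eV(2)
```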

We could surely define more. The truth is, Planck’s insight isn’t all that helpful for guessing exactly which frequencies we should look at because we still need to make lots of choices and we don’t have any guide beyond what seems natural.

But still, it’s a useful illustration of both the power and limitations of Schelling’s idea. Also, we can add the frequencies that appear in these frequency combs to the lists of “magic frequencies” we check—more ideas about places to look can’t hurt, because modern radio observatories can search billions of frequencies at once, so it costs nothing to check a few more.

But there may be another insight here: these frequency combs generate multiple frequencies, and perhaps we should look for signals at all of them.  After all, unlike looking for someone in New York, there is little preventing us from looking in more than one channel at once, or from their signals being at more than one frequency at once.  Perhaps we should be looking for combs of signals, or at multiple wavelengths simultaneously!

Anyway, this idea of Schelling points has gained a lot of traction since I made passing reference to it in a review article a while back, but it has no proper, refereed citation in a SETI context (beyond Schelling’s offhand remark in his book).  So I’ve written up the idea formally, including the Planck Frequency Comb as a case study, in a new paper for the International Journal of Astrobiology. You can read it on the arXiv here.

 

Thanks to Sabine Hossenfelder and Michael Hippke for these translations of Planck’s 1900 paper.

Is SETI dangerous?

Interdisciplinarity in science can be wonderful: combining expertise across disciplines leads to new insights and progress, precisely because communication between disciplines about a particular problem happens much more rarely than communication among members of a single discipline.

It’s important, though, when working across disciplines to actually engage experts in those other fields. There’s a particular kind of arrogance, common among physicists, that a very good scientist can wander into another discipline, learn about it by reading some papers, and start making important contributions right away. xkcd nailed it:

xkcd comic. A physicist is lecturing an annoyed person who has been working at a blackboard and laptop with notes strewn about: “You’re trying to predict the behavior of <complicated system>? Just model it as a <simple object>, and then add some secondary terms to account for <complications I just thought of>. Easy, right? So, why does <your field> need a whole journal, anyway?” Caption: “Liberal arts majors may be annoying sometimes, but there’s nothing more obnoxious than a physicist first encountering a new subject.”

And my favorite takedown of the type is from SMBC (go read it!)

There’s a new paper about the dangers of SETI out by Kenneth W. Wisian and John W. Traphagan in Space Policy, described here on Centauri Dreams. In it, they describe the worldwide “arms” race, similar to the one in the film Arrival, to communicate with ETIs once contact is established. They say this is an unappreciated aspect of SETI and that SETI facilities should take precautions similar to those at nuclear power plants.  Specifically, they write:

In the vigorous academic debate over the risks of the Search for ExtraTerrestrial Intelligence (SETI) and active Messaging ExtraTerrestrial Intelligence (ETI) (METI), a significant factor has been largely overlooked. Specifically, the risk of merely detecting an alien signal from passive SETI activity is usually considered to be negligible. The history of international relations viewed through the lens of the realpolitik tradition of realist political thought suggests, however, that there is a measurable risk of conflict over the perceived benefit of monopoly access to ETI communication channels. This possibility needs to be considered when analyzing the potential risks and benefits of contact with ETI.

I have major issues with their “realpolitik” analysis, but I’m not an expert in global politics, international affairs, or risk aversion so I’m not going to critique that part here. Instead, I’ll stick to my expertise and point out that the article would be much stronger if the authors had consulted some SETI experts, because it is based on some very dubious assumptions about the nature of contact.

The authors seem to think it is clear that once a signal is identified:

  1. Only around “a dozen” facilities in the world will be able to receive the signal, and that states will be able to somehow restrict this capability from other states. The authors think this covers both laser and radio.
  2. That it will be possible to send a signal to the ETI transmitter, and that this capability will have perceived advantages to states.

While there are some contact scenarios where these assumptions are valid, they are rather narrow.

First, modern radio telescopes are large and expensive because they are general purpose instruments. They can often point in any direction, and have a suite of specialized instrumentation designed to operate over a huge range of frequencies.

But once a signal is discovered, the requirements to pick it up shrink dramatically. Only a single receiver is required, and its bandwidth need be no wider than the signal itself. The telescope need only point at the part of the sky the signal comes from, so it need only have a single drive motor. And the size of the dish need not be enormous, unless the signal just happens to be of a strength that large telescopes can decode but small ones cannot, which is possible but a priori unlikely.

Indeed, there are an enormous number of radio dishes designed to communicate with Earth satellites that could easily be repurposed for such an effort, and can even be combined to achieve sensitivities similar to a single very large telescope, if signal strength is an issue. And there is no shortage of radio engineers and communications experts around the world that can solve the problem quickly and easily. The scale of such a project is probably of order tens of thousands to millions of dollars, depending on the strength and kind of signal involved. The number of actors that could do this worldwide is huge. Also, such efforts would be indistinguishable from normal radio astronomy or satellite communications, so very hard to curtail without ending those industries.

The situation is similar for a laser signal: if it is a laser “flash” then the difficulty is primarily in very fast detectors that can pick it up. Here, the technology is not as mature, and if the flashes are *extremely* fast it is possible that the necessary technology could be controlled but, again, this assumes a very particular kind of laser signal. And, again, there are an enormous number of optical telescopes which will have similar sensitivity to optical flashes as existing optical SETI experiments (which, again, are only expensive because they search a huge fraction of the sky for signals of unknown duration).

Finally, there is the issue of two-way communication: unless the signal is coming from within the solar system or the very closest stars, the “ping time” back and forth is at least a decade, and likely much longer. There is no “conversation” in this case: the first response to our communications would be ten years down the line! So the real dangers are transmitters within the solar system or signals that contain useful information without the need for us to send signals.
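The arithmetic behind those “ping times” is trivial, but here it is for a few nearby stars (approximate literature distances, my own choice of examples):

```python
# Round-trip light travel ("ping") times to a few nearby stars.
# Distances in light-years, approximate published values.
distance_ly = {
    "Proxima Centauri": 4.246,
    "Barnard's Star":   5.963,
    "Tau Ceti":        11.9,
}

# A reply to any message can arrive no sooner than twice the distance
# in light-years, measured in years.
round_trip_yr = {star: 2.0 * d for star, d in distance_ly.items()}

for star, t in round_trip_yr.items():
    print(f"{star}: ping time >= {t:.1f} years")
```

Even the very nearest star gives a round trip of about eight and a half years; almost everything else is decades or more.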

In summary, the concerns expressed in this article apply to a narrow range of contact scenarios in which the signal is, somehow, only accessible to those with highly specialized equipment or comes from a transmitter within the solar system. The first seems highly unlikely; I do not know how to evaluate the second, but note that such signals are not routinely searched for in the radio, anyway.

I’d be happy to engage with experts in space law on a paper on the topic, if only I knew any!

Science is not logical

OK, time for some armchair philosophy of science!

You often hear about how logic and deductive reasoning are at the heart of science, or expressions that science is a formal, logical system for uncovering truth. Many scientists have heard definitions of science that include statements like “science never proves anything, it only disproves things” or “only testable hypotheses are scientific.” But these are not actually reflective of how science is done. They are not even ideals we aspire to!

You might think that logic is the foundation of scientific reasoning, and indeed it plays an essential role. But logic often leads to conclusions at odds with the scientific method.  Take, for instance, the “Raven Paradox”, expertly explained here by Sabine Hossenfelder:

Sabine offers the “Bayesian” solution to the paradox, but also nods to the fact that philosophers of science have managed to punch a bunch of holes in it. Even if you accept that solution, the paradox is still there, insisting that in principle the scientific method allows you to study the color of ravens by examining the color of everything in the universe except ravens.
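Here is a toy version of that Bayesian reading, with numbers invented purely for illustration: give even odds to “all ravens are black” against a rival hypothesis on which only half of ravens are black, then observe a single black raven.

```python
# Toy Bayesian update: does seeing a black raven "support" the hypothesis?
# H:    all ravens are black      -> P(black raven | H)    = 1.0
# notH: only half of ravens black -> P(black raven | notH) = 0.5
# (Illustrative numbers only.)
prior_H = 0.5
p_black_given_H    = 1.0
p_black_given_notH = 0.5

# Bayes' theorem after observing one black raven
evidence    = prior_H * p_black_given_H + (1 - prior_H) * p_black_given_notH
posterior_H = prior_H * p_black_given_H / evidence   # 2/3 > 1/2: "support"
```

The observation raises the credence from 1/2 to 2/3, which is all that “support” means in this picture; logic proper has no such notion.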

I think part of the problem is that the statement “All ravens are black” sounds like a scientific statement or hypothesis, but when we actually make a scientific statement like “all ravens are black” we mean it in something closer to the vernacular sense than the logical one. For instance:

  • “Ravens” is not really well defined. Which subspecies? Where is the boundary between past (and future!) species in its evolutionary descent?
  • “Black” is not well defined.  How black? Does very dark blue count?
  • “Are” is not well defined. Ravens’ eyes are not black. Their blood is not black.

Also, logically, “all ravens are black” is strictly true even if no ravens exist! (Because “all non-black things are not ravens” is an equivalent statement and trivially true in that case). Weirdly, “all ravens are red” is strictly true in that case, as well! This is not really consistent with what scientists mean when we say something like “all ravens are black”, which presumes the existence of ravens. We would argue that a statement like that in a universe that contains no ravens is basically meaningless (having no truth value) and actually misleading, not trivially true, as logic insists.
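Programming languages side with the logicians here; Python’s built-in all() is vacuously true over an empty collection:

```python
# A universe containing no ravens
ravens = []

# Universal statements over an empty domain are vacuously True
all_black = all(r == "black" for r in ravens)
all_red   = all(r == "red"   for r in ravens)

# Both "all ravens are black" and "all ravens are red" hold at once:
# exactly the oddity described in the text.
```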

So the logical statement “all ravens are black” is supposed to be very precise, but that is very different from our mental conception of its implications when we hear the sentence, which are squishier. We understand we’re not to take it strictly literally, but that is exactly what logic demands we do!  And if we don’t take it in exactly the strict logical sense, then we cannot apply the rules of formal logic to it. This means that the logical conclusion that observing a blue sock is support for “all ravens are black” does not reflect the actual scientific method.

You might argue that “black” and “raven” are just examples, and that in science we can be more precise about what we mean and recover a logical statement, but really almost everything we do in science is ultimately subject to the same squishiness at some level.

Also, and more damningly:

If we were to see a non-black raven—one that has been painted white, an albino, or one with a fungal infection of its wings—we would not necessarily consider it evidence against “all ravens are black”!  We understand that “all ravens are black” is a general rule with all kinds of technical exceptions. Indeed, a cardinal rule in science is that all laws admit exceptions! Logically, this is very close to the “no true Scotsman” fallacy, but it is actually a great strength of science that we do not reach for universal laws from evidence limited in scope, only trends and general understandings. After all, even GR must fail at the Planck length.

So even the word “all” does not have the same meaning in science as it does in logic!

More generally, in science we follow inductive reasoning. This means that seeing a black raven supports our hypothesis that all ravens are black. But in logic there is no “support” or “probability,” there is only truth and falsity. On the other hand, in science there are broad, essential classes of statements for which we never have truth, only hypotheses, credence, guesses, and suppositions. Philosophers have struggled for years to put inductive reasoning on firm logical footing, but the Raven Paradox shows how hard it is, and how it leads to counter-intuitive results.

I would go further and argue that strictly logical conclusions like those of the Raven Paradox are inconsistent with the scientific method. I would simply give up and admit: the scientific method is not actually logical!

After all, science is a human endeavor, and humans are not Vulcans. Logic is a tool we use, a model of how we reason about things, and that’s OK: “All models are wrong, but some are useful.”  Modeling the Earth as a sphere (or an oblate spheroid, or higher levels of approximation) is how we do any science that requires knowledge of its shape but it’s not true. Newton’s laws are an incredibly useful model for how all things move in the universe, but they are not true (if nothing else, they fail in the relativistic limit).

Similarly, logic is a very useful and essential model for scientific reasoning, and the philosophy of science is a good way to interrogate how useful it is. But we should not pretend that scientists follow strict adherence to logic or that the scientific method is well defined as a logical enterprise—I’m not even sure that’s possible in principle!

The astrophysical sources of RV jitter

A big day for our understanding of RV jitter!
 
Penn State graduate student Jacob Luhn has just posted two important papers to the arXiv. You can read his excellent writeup of the first of them here:
 
It took Jacob a HUGE amount of work to determine the *empirical* RV jitter of hundreds of stars from decades of observations from Keck/HIRES. These are “hand crafted” jitter values, free of planets, containing only the HIRES instrumental jitter plus astrophysical jitter.
(Along the way, we wondered how to put error bars on jitter, which is itself a deviation. What’s the standard deviation of a standard deviation? Jacob found the formula—it’s in the paper if you’d like to see how it’s done (you have to use the kurtosis). )
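I don’t have Jacob’s exact formula in front of me, but the textbook delta-method result has the same flavor, with the fourth central moment (the kurtosis) appearing just as described. A sketch, not necessarily the formula in the paper:

```python
import math
import random
import statistics

def stdev_stderr(xs):
    """Approximate standard error of the sample standard deviation via
    the delta method: Var(s^2) = (mu4 - var^2 * (n-3)/(n-1)) / n, and
    SE(s) ~ sqrt(Var(s^2)) / (2 * sigma).  Uses the fourth central
    moment (the kurtosis); this is the standard textbook result, not
    necessarily the exact formula in the paper."""
    n = len(xs)
    mean = sum(xs) / n
    var = statistics.pvariance(xs, mu=mean)      # population variance
    mu4 = sum((x - mean) ** 4 for x in xs) / n   # fourth central moment
    var_of_var = (mu4 - var**2 * (n - 3) / (n - 1)) / n
    return math.sqrt(max(var_of_var, 0.0)) / (2.0 * math.sqrt(var))

# For Gaussian noise this should approach sigma / sqrt(2 n)
random.seed(42)
xs = [random.gauss(0.0, 1.0) for _ in range(20000)]
se = stdev_stderr(xs)
```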
You may have seen Jacob’s work at various meetings: young stars and evolved stars have high jitter, so there is a “jitter minimum” in between where stars are quietest.
But this paper has more! It turns out the location of the jitter minimum depends in a predictable way on a star’s mass.
Figure from Jacob's paper illustrating the dependence of jitter on log(g) and mass.
 
The second paper describes the properties of F stars with low jitter.
 
But don’t F stars all have high jitter?
 
Nope. Jacob has found many stars in the “jitter minimum” are F stars with < 5 m/s of RV jitter. This has important implications for following up transiting planets.
 
My favorite consequence of this work is that we will now be able to *predict* the RV jitter of a star from its mass, R’HK, and log(g) *empirically*, incorporating *all* sources of RV noise. Right now, such predictions are only good to ~factor of 2. Jacob can predict it to <25%!
 
But predicting RV jitter is a story for another paper, coming soon. For now, enjoy these papers at AJ and on @jacobkluhn’s blog:
https://iopscience.iop.org/article/10.3847/1538-3881/ab855a
https://iopscience.iop.org/article/10.3847/1538-3881/ab775c

On Meeting Your Heroes

Freeman Dyson died on Friday. He was a giant in science, possibly the most accomplished and foundational physicist without a Nobel Prize. He was 96.

Tania/Contrasto, via Redux

He had a big influence on my turn to SETI. I’ve written about him several times on this blog, including about his “First Law of SETI Investigations”, his role in the development of adaptive optics, how that intersected with Project Orion and General Atomic, and of course his eponymous spheres that I’ve spent some time looking for.

I got to meet him twice. Once was when Franck Marchis invited him, Jill Tarter, Matt Povich, and me to talk about Dyson spheres on a Google Hangout for the SETI Institute:

The second time was at UC San Diego. I was there to give a talk, and walking down the hallway of the astronomy part of the physics department I saw “F. Dyson” on one of the doors. I asked, and was surprised to learn that he spent his winters in San Diego, where his grandchildren lived, and that he had an office in the department.

And he was there that day.

And he’d be at my talk.

About Dyson spheres.

Indeed, his face was on the second slide.

The talk went well, and afterwards he invited me to lunch to discuss it. He asked if I was free. I looked at my schedule: of course I had a lunch appointment. “Yes, it looks like I’m free!” I said, then briefly excused myself to explain the change to my host.

Freeman Dyson and me after my talk at UCSD


I asked where we should go and he said “I like Burger King.” So he walked me to the student union where he got a hotdog, and we sat at a table for four, next to a slightly annoyed undergraduate looking at his phone.  We talked about Dyson spheres and SETI, I’m sure. I also could not resist and asked embarrassingly naïve questions about experimental tests of the vacuum energy and the like. “I don’t think that’s a promising line of research” he politely deflected.


I have a list of bands I’ll see if they come to town, and a shorter list of bands I’ll see if they come within driving distance. It’s not a list of my favorite bands; it’s a list of bands that might be on their last tour that I want to have seen at least once. I’ve seen Dylan (twice!), Springsteen (twice!), The Who (Quadrophenia in Philadelphia), Bob Seger, Rod Stewart, Elton John, Metallica (twice!), the Rolling Stones, Paul McCartney, and more. Cher caught a cold and so they canceled the State College show (I really bought the tickets for Pat Benatar’s opening act, though).

I missed Prince. You never know.

I’ve twice missed talking to my heroes because they were old and I dallied. I invited Nikolai Kardashev to this summer’s SETI Symposium, but I got a decline from someone managing his email account, and then we learned last August that he had passed away at 87.

When I was organizing the letter writing campaign for a prize from the AAS for Frank Kameny, I got his contact information (the phone number at his house). I wanted to call him to tell him what we were doing, but I decided to wait until the prize was official so I could tell him the good news. On August 1, 2011 I learned the AAS was officially going to consider the prize. On October 12, Kameny passed away at 86. On October 15, the AAS announced the prize, which had to be posthumous.

I never called. I’m not sure Frank knew about the effort at all, that his old professional society was finally honoring him.


I’m glad I met Freeman. I’m sad I won’t get his feedback on the big review article on Dyson Spheres that I’ve written that will be published this summer. I probably should have sent it to him earlier.

Technosignatures White Papers

Here, in one place, are the white papers submitted last year to the Astronomy & Astrophysics decadal survey panels:

  1. “Searching for Technosignatures: Implications of Detection and Non-Detection” Haqq-Misra et al. (pdf, ADS)
  2. “The Promise of Data Science for the Technosignatures Field” Berea et al. (pdf, ADS)
  3. “A Technosignature Carrying a Message Will Likely Inform us of Crucial Biological Details of Life Outside our Solar System” Lesyna (pdf, ADS)
  4. “The radio search for technosignatures in the decade 2020—2030” Margot et al. (pdf, ADS)
  5. “Technosignatures in Transit” Wright et al. (pdf, ADS)
  6. “Technosignatures in the Thermal Infrared” Wright et al. (pdf, ADS)
  7. “Searches for Technosignatures in Astronomy and Astrophysics” Wright (pdf, ADS)
  8. “Observing the Earth as a Communicating Exoplanet” DeMarines et al. (pdf, ADS)
  9. “Searches for Technosignatures: The State of the Profession” Wright et al. (pdf, ADS)

And, because it’s relevant and salient: the Houston Workshop report to NASA by the technosignatures community:

“NASA and the Search for Technosignatures: A Report from the NASA Technosignatures Workshop” (Gelino & Wright, eds.)  (pdf, arXiv)

Background to the 2019 Nobel Prize in Physics

Fifty percent of the 2019 Nobel Prize in Physics goes to Michel Mayor and Didier Queloz for the discovery of 51 Pegasi b!  I had a tweet thread on the topic go viral, so I thought I’d formalize it here (and correct some of the goofs I made in the original).

A hearty congratulations to Michel Mayor & Didier Queloz, for kickstarting the field that I’ve built my career in! Their discovery of 51 Peg b happened in my senior year of high school, and I started working in exoplanets in 2000, when ~20 were known.

A thread:

The Nobels serve a funny place in science: they are wonderful public outreach tools, and a chance for us all to reflect on the discoveries that shape science. The discussions they engender are, IMO, priceless.

They also have their flaws: because they can only be awarded to 3 people at a time, they inevitably celebrate the people instead of the discovery.

(This is technically a requirement from Alfred Nobel’s will, but there are other requirements, like that the discovery be from the past year, that the committee ignores. Also, the Peace Prize is regularly awarded to teams, but the science prizes have never followed suit.)

Anyway, many of the discoveries awarded Nobels are from those who saw farther because they “stood on the shoulders of giants.” The “pre-history” of exoplanets is a hobby of mine, so below is a thread explaining the caveats to 51 Peg b being the “first” exoplanet discovered.

The first exoplanet discovered was HD 114762b by David Latham et al. (where “al.” includes Mayor!) in 1989. It is a super-Jupiter orbiting a late F dwarf (so, a “sun like star” for my money), published in Nature:

https://www.nature.com/articles/339038a0

Dave is a conservative and careful scientist. At the time there were no known exoplanets *or* brown dwarfs, and they only knew the *minimum* mass of the object, so there was a *tiny* chance it could have been a star. He hedged in the title, calling it “a probable brown dwarf”.

I wonder: if Dave had been more cavalier and declared it a planet, would *that* have kickstarted the exoplanet revolution? Would he be going to Stockholm in a few months?

Meanwhile, Gordon Walker, Bruce Campbell, and Stephenson Yang were using a hydrogen fluoride cell to calibrate their spectrograph. In 1988 they published the detection of gamma Cephei Ab, a giant planet around a red giant star:

https://ui.adsabs.harvard.edu/abs/1988ApJ...331..902C/abstract

They were also very careful. At least four of the other signals reported there turned out to be spurious. They did not claim they had discovered any planets, just noted the intriguing signals. In follow up papers they decided the gamma Cep signal was spurious. Turns out it was actually correct!

Again, what if they had trumpeted these weak signals as planets and parlayed that into more funding to continue their work? Would they have confirmed them and moved on to stars with stronger signals? Would they be headed to Stockholm?

Moving on: in 1993 Artie Hatzes and Bill Cochran announced a signal indicative of a giant planet around the giant star beta Gem (aka Pollux, one of the twin stars in Gemini).

Like the gamma Cep signal, this one was weak. Like Campbell, Walker & Yang, they hedged about its reality. But again, it turns out it’s real!

https://ui.adsabs.harvard.edu/abs/1993ApJ...413..339H/abstract

Backing up a bit: in 1991 Matthew Bailes and Andrew Lyne announced they had discovered a 10 Earth-mass planet around a *pulsar*. This was big news! Totally unexpected! What was going on!? They planned to discuss it in more detail in a talk at the AAS that January.

But when the big moment came, Bailes retracted: they had made a mistake in their calculation of the Earth’s motion. There was no planet, after all. That made more sense. He got a standing ovation for his candor.

But in the VERY NEXT TALK Alex Wolszczan got up and announced that he and Dale Frail had discovered *two* Earth-mass planets around a different pulsar! They would later announce a third, which remains the lowest-mass planet known.

Some wondered: Was this one really right? Had they done their barycentric correction properly? It held up. The first rocky exoplanets ever discovered, and the last to be discovered for *20 years*.

And there would be more. In 1993 Stein Sigurdsson and Don Backer interpreted the anomalous second derivative of the period of the binary millisecond pulsar PSR 1620-26 as being due to a giant planet. This, too, held up.

https://ui.adsabs.harvard.edu/abs/1993ApJ...415L..43S/abstract
https://ui.adsabs.harvard.edu/abs/1993Natur.365..817B/abstract

Meanwhile, in a famous “near miss”, Marcy & Butler were slogging through their iodine work. They actually had the signals of multiple exoplanets in data on disk when Mayor & Queloz announced 51 Peg b, but not the computing power to analyze them.

If you’re interested in more detail, you can read this “pre-history” in section 4 of my review article with Scott Gaudi here:

https://arxiv.org/abs/1210.2471

None of this, BTW, is meant to detract from Michel & Didier’s big day. 51 Peg b was the first exoplanet with the right combination of minimum mass, strength of detection, and host star characteristics to electrify the entire astronomy community and mark the exoplanet epoch. As I wrote above, they kickstarted the exoplanet revolution. It makes sense that Mayor & Queloz got the prize!

Rather, it’s to make sure that the Nobel serves its best purpose: educating, promoting, and celebrating scientific discovery.

Observers and Theorists Being Wrong

Once upon a time, probably in graduate school, someone told me an aphorism that went something like this:

A theorist only has to be right once to garner a reputation as a good scientist, but an observer only has to be wrong once to ruin theirs.

Alex Filippenko—spreader of the aphorism

I asked about it on Twitter and Facebook, and multiple people pointed to Alex Filippenko as the originator (which may also be where I heard it, when I TA’d for him at Berkeley). I asked Alex, and he wrote that he heard it from other grad students at Caltech, perhaps Richard Wade. I asked Richard, and he wrote “I think it was a fairly common expression around Caltech when I was a grad student, so Alex could have heard it from me. I probably heard it from other grads.”

So, I’m not sure where it comes from, but it’s a great quote!

Some on Facebook and Twitter objected to the sentiment it expresses. “I’d say it’s best forgotten. No good comes from playing it safe the whole damn time… 😉” quipped David Kipping. Brian Metzger writes “I think one has to make an important distinction: theory that in principle is well-motivated and has a sound physical basis but just turns out to be the wrong explanation (but might still lead to progress by posing new questions), versus theory that e.g. employs bad physics or already disproven assumptions and couldn’t in principle have been correct.”

But I think it’s got a kernel of truth worth discussing.

As I’ve drifted into theory from observation, I’ve been struck by how much more comfortable theorists are with being wrong than observers are (sometimes I call this Steinn’s bad influence on me ;).

But it makes sense. Theorists are expected to work on hypotheses that might turn out to be wrong, and there is no discredit in one’s theory turning out to be wrong if it was interesting and spurred work that eventually turned up the right answer. There’s no real equivalent “you’re doing a good job even if you’re wrong” standard for observers.

I think David Kipping and Alex Teachey’s laudable and cautious approach to their exomoon candidate illustrates the divide. As an observation project, especially a high profile one, they must be extra careful not to overstate the evidence, careful to call things “candidates” and not “discoveries”, and careful to emphasize the uncertainty inherent in the problem. Their peers, journalists, and the public will scrutinize their verbiage and they will get blowback if it turns out to not be an exomoon and their presentation of the evidence, in retrospect, was overstated.

But a theoretical analysis of the abundance of exomoons (or exoplanets!) that turns out to be off by orders of magnitude can still get cited favorably a decade later if it included novel and important components. After all, everyone understands that theory is hard and that we build theories up piece by piece, and so we’ll get it wrong many times before we get it right. And so such work rarely includes the careful hedging that Kipping and Teachey used in their work.

Or, to give a more dramatic example: if inflation turns out to be completely wrong, the theorists who dedicated their careers to it will still be considered good theorists, but the BICEP2 team, which got a subtle issue with dust wrong, has a whole book written about the very public and embarrassing debacle that followed their (incorrect) detection of signs of inflation in the CMB.

I’m not saying this dichotomy is unfair or inappropriate (on the contrary, I think it’s appropriate!); I’m just pointing out that the aphorism resonates because it identifies something real and tacit about the way we judge science.

Freeman Dyson’s First Law of SETI Investigations

It’s come up a few times, so let me state here for the record the origin of Freeman Dyson’s First Law of SETI Investigations:

It’s from an email he sent me. We were discussing a paper of mine in anticipation of an outreach event we were planning:

and he remarked of the Ĝ strategy:

I am happy to see that your plan is consistent with the First Law of SETI investigations: every search for alien civilizations should be planned to give interesting results even when no aliens are discovered.

I asked permission to repeat this, and he agreed. It’s consistent with his general approach to SETI: searching for the physical limits of technology in a way that also generates ancillary science and makes minimal assumptions about agency.

I think Freeman himself meant this as a counterpoint to radio and laser SETI, which have the benefit of working against a low natural background but the apparent disadvantage that they are unlikely to discover new natural phenomena in the course of their searches. I think this perspective is often overstated: radio SETI is closely aligned with pulsar and FRB astrophysics and generates great science along the way, and there are natural sources of very brief optical flashes, too.

[Edit: Frank Drake expressed the same sentiment in his overlooked 1965 paper here (the first appearance of the Drake Equation, BTW):

Our experience with Project Ozma showed that the constant acquisition of nothing but negative results can be discouraging. A scientist must have some flow of positive results, or his interest flags. Thus, any project aimed at the detection of intelligent extraterrestrial life should simultaneously conduct more conventional research. Perhaps time should be divided about equally between conventional research and the intelligent signal search. From our experience, this is the arrangement most likely to produce the quickest success.

]

Justifying Science Funding in an Unjust World

I was recently asked a stock question by interviewers about how to justify spending on SETI when “some people would say” that we have so many problems that need solving, so many better places to spend the money.

There are a few answers to this.

One is that it’s a false choice: certainly if I had to choose between NSF funding for basic research, including SETI, and feeding starving people, I choose feeding the starving people every time. But that is not the choice we face: humanity produces more than enough food to feed the planet and cutting the NSF budget won’t feed any starving people.

Similarly, if I had to choose, I’d rather our government guarantee all people the basics of modern life—shelter, health care, safety, clean water and nutritious food, life, liberty, and the pursuit of happiness, and all that—than search the skies for technosignatures. But that’s not the choice Congress makes every year when it makes up its budget—we can easily afford all of those things.

Another answer is beautifully illustrated by classic Congressional testimony by R. R. Wilson, director of Fermilab, justifying building the lab’s first accelerator in a time when the national defense dominated the budget conversation:

R. R. Wilson

SENATOR PASTORE. Is there anything connected in the hopes of this accelerator that in any way involves the security of the country?

DR. WILSON. No, sir; I do not believe so.

SENATOR PASTORE. Nothing at all?

DR. WILSON. Nothing at all.

SENATOR PASTORE. It has no value in that respect?

DR. WILSON. It only has to do with the respect with which we regard one another, the dignity of men, our love of culture. It has to do with those things.

It has nothing to do with the military. I am sorry.

SENATOR PASTORE. Don’t be sorry for it.

DR. WILSON. I am not, but I cannot in honesty say it has any such application.

Senator John Pastore, dunkee

SENATOR PASTORE. Is there anything here that projects us in a position of being competitive with the Russians, with regard to this race?

DR. WILSON. Only from a long-range point of view, of a developing technology. Otherwise, it has to do with: Are we good painters, good sculptors, great poets? I mean all the things that we really venerate and honor in our country and are patriotic about.

In that sense, this new knowledge has all to do with honor and country but it has nothing to do directly with defending our country except to help make it worth defending.

“To help make [America] worth defending”—quite the rhetorical dunk there.

But more broadly, basic science is an essential part of culture, civilization, and humanity. We have more than enough labor and wealth to do SETI and be safe. Indeed, we spend hundreds of billions per year on national security (a posture President Eisenhower warned us against), and against this, basic science expenditures are a rounding error.

The third answer is: because we can easily afford it.

I once heard a figure that America spends more on doggie treats than on publicly funded science.  I threw this figure out in that interview, but worried I had it wrong.  So I looked it up.

The US pet food and treat market is almost $30 billion. Of this, dog and cat “treat” sales reached $4.39 billion in 2017. The FY17 appropriation for NASA’s Science Mission Directorate was $5.76 billion.

So, I was wrong: NASA spent about 30% more on science in 2017 than America did on dog and cat treats. But we spend far more on dog and cat food than on NASA science.

But looking at some other points of comparison:

Since America spends over $2.5 billion on dog treats each year, it’s probably safe to say that Americans spend about twice as much on doggie treats as their federal taxes do on astronomy.

The point, obviously, is not that we spend too much on doggie treats or that there is some obviously more correct ratio between these two expenditures—it’s that America is a very, very rich country (even ignoring the “1%” that doesn’t spend much on pet treats) and the amount we spend on things as important to culture as basic science is actually quite small, comparable to niche consumer markets like pet treats.

I’m not arguing we should spend less on basic human needs, pet treats, or any of these other things that define modern life. Regardless of whether we fund science with new taxes or by cutting other expenditures like the military budget, we can easily afford to spend a lot more on a lot of those things, including SETI.

 

The hats astronomers wear

Once upon a time, science departments in universities often had draftsmen on staff who would produce figures for scientific publications. Today, those positions are much rarer (and called “graphic designers”) except at very large institutions, because scientists themselves are expected to do much of that work. Other work routinely done by administrative staff in the past, like travel reimbursements, is now done by faculty themselves.

Part of this is that computerization and other technologies have made these tasks easier, so it’s not unreasonable to expect a typical scientist to do them quickly and competently. But it also means that the modern astronomer (cue Gilbert & Sullivan’s “I am the very model of a…”) has to do a wide variety of tasks outside their training.

Today in group meeting we tried to make a list. This whole exercise was inspired by this Jonathan Fortney tweet thread about “scientists” vs. “engineers”:

and about giving yourself permission to not be interested in certain parts of the job of scientist that other people find endlessly fascinating

My punchline is that it’s fine to love and get really good at a few of the aspects of being a scientist, and that it’s OK to not be good at or enjoy other aspects.

Indeed, I sometimes chafe when people say things like “every scientist has an obligation to communicate their science to the public” or “all astronomers should learn Python.” I think it’s fine and good that astronomers specialize in different parts of the job, and that collaborations consist of groups of people who, together, have all the pieces needed to do great science.

Also, part of my definition of a good job is one where you spend most of your time doing things you both like and are good at, and a minimum of time doing things you dislike and are bad at. Enumerating the parts of the job can help you find the job you like (or turn your current job into that one). So it’s OK not to be good at everything.

Here’s the list of “hats” astronomers wear that we came up with.  What did we miss?

  • Teacher / instructor
  • Science popularizer
  • Public ambassador of science
  • Salesperson
  • Writer
    •  proposals
    • research articles
    • emails
    • popular materials
    • journalism (e.g. press releases)
  • Copyeditor
  • Graphic(s) designer
  • Examiner (tests, defenses)
  • Peer reviewer
  • Mentor
  • Research adviser
  • Manager
  • Administrator
    • Travel
    • Grants administration
    • Budgeting
  • Computer programmer
    • Team coding
    • Public code
    • “Private” code
  • Computer systems administrator
  • Web developer
  • Marketer
  • Engineer
  • Physicist
  • Theorist
  • Observer
  • Data analyst
  • Statistician
  • Philosopher of Science
  • Ethicist

[Update: good suggestions from Twitter:

Counselor

Advocate

Legislator

Atmospheric scientist and meteorologist

]

A Second Pleiades in the Sky

Astronomers have discovered a second Pleiades in the sky.

Wait, what?  The Pleiades are an obvious feature of the night sky, known to ancient peoples around the world and a common test of visual acuity.  In a telescope they look like this:

The Pleiades

Four or five bright stars make them obvious in even a moderately dark sky, and the nebulosity adds a nice touch that makes them interesting. Astronomers like them because they were all formed from the same birth cloud, so they share composition, age, and distance. This makes for a great stellar laboratory, since it lets us isolate and focus on the few differences among the stars, like their mass and whether they have binary companions.

So if these are so obvious, how could there be another one? Answer: if the stars are so spread out across the sky that no one noticed they were related! Back in February, Meingast et al. announced that they had spotted a group of hundreds of stars all moving in the same direction, but spread out all across the sky. If you plot most of the sky out flat, here’s what they look like (in red):

The Pisces-Eridanus Stream. The Milky Way, which spans 360 degrees of the sky, appears as a circle in this projection. The stream is over 120 degrees across.

Co-moving stars like this are like clusters but more spread out. It took the precise measurements recently announced by the Gaia team to notice that these stars were all moving in the same direction.

Many of these stars, like the Pleiades, are bright enough to see with the naked eye:

When Meingast et al. published, I got excited because they guessed that the stream was 1 billion years old; if correct, that would make it one of the closest older clusters in the sky. Jason Curtis had spent a whole PhD dissertation and more proving that Ruprecht 147 was a true cluster, 3 billion years old and only 300 parsecs away. Eunkyu Han worked hard to check out another claimed nearby old cluster, Lodén 1. Here was one 3 times younger and 3 times closer: another important discovery! I got excited and tweeted at Jason about it.

We started wondering what to call it, and also getting suspicious of the reported age. So we pulled in the expert on “moving groups” and stellar ages, Eric Mamajek:

Jason, Eric, and I then took the conversation offline. First, it needed a name.  Eric came up with the “right” answer:

Next, the stream seemed like it had to be younger than 1 billion years to us, but how old was it?  Then Jason Curtis went to town with TESS, the all-sky planet hunting telescope. It had already hunted for planets around many of the stream’s stars, and Jason was able to quickly measure the stars’ rotation periods.  He tells the story here:

I encourage you to read the whole tweet thread!

Basically, Jason was able to show that the stars in the stream are spinning way too fast to be 1 billion years old. In fact they were spinning just as fast as the Pleiades—so they are probably almost exactly the same age.  They’re also the same distance, and there are just as many of them!

I was actually kind of disappointed:

Chris Lintott was grumpy at my framing of the cluster as a “second Pleiades” because the public would understand that to mean “another thing I can see with my eyes in the sky that wasn’t there before”, which is misleading:

His point is well taken, but Heidi Hammel would say that good science communication means linking to things people know about.  Eric explained well what I meant and why the discovery is a big deal:

This discovery came very fast, and was only possible because of the hard work of the teams that made Gaia, Kepler, and TESS possible.  Because those are all-sky surveys committed to making data public as fast as possible, unexpected gems like this can go from tweets to papers in a matter of weeks.  Jason Curtis pointed out that most of the actual science only took him hours:

It’s a new era of stellar astronomy. So exciting!

In Defense of Magnitudes

Astronomical magnitudes get a bad rap.

Hipparchus, supposedly

The Greek astronomer Hipparchus famously mapped the sky and assigned each star a “magnitude” (or size) based on its apparent brightness. The human eye is a surprisingly precise photometer (you can, with just a little effort, estimate brightnesses differentially to about 0.1 magnitude; I’m sure dedicated amateurs can do better). So Hipparchus could have been thorough about this, but he was actually quite general: he just lumped the stars into six categories: “stars of the first magnitude” (the brightest), “stars of the second magnitude,” and so on.

But while the human eye is precise it’s not linear: it’s actually closer to being a logarithmic detector. This gives it a great dynamic range but it means that what seems to be “twice as bright” is actually much, much brighter than that.

In 1856, Norman Pogson formalized this in modern scientific terms by proposing that one magnitude equal a brightness ratio of the fifth root of 100, with a zero point roughly aligned with Hipparchus’s rankings so that “first magnitude stars” would have values around 1. This captured the logarithmic scale and spirit of the original system, and has frustrated astronomers ever since.

Astronomers regularly complain about this archaic system. A lot of this comes from trying to explain it in Astronomy 101 or even Astronomy 201, where our students expect a number attached to brightness to increase for brighter objects, and where we have to teach them a system literally no other discipline uses. Especially at the Astronomy 101 level, where we are loath to use logarithms, we often skip the topic altogether.

But I think astronomers don’t realize how good we have it.

First of all, the scale increases in the direction of discovery: there are very few objects with negative magnitudes (the Sun and Moon, sometimes Venus, a few stars in certain bands) but lots of objects up in the 20’s where the biggest telescopes are discovering new things. Big numbers = bigger achievements is much better than “we’re down to -10 now!”, in my opinion.

Secondly, the numbers have a nice span. The difference between 6 and 7 is just enough to be worth another number. This is because the fifth root of 100 (about 2.512) is only about 8% smaller than the natural logarithmic base e, which is the closest thing we have to a mathematically rigorous answer to the question “how much is a lot?”.

But most importantly, the system is a beautiful compromise between simplicity and precision that allows for very fast mental math and approximations for any magnitude gap.

This is because we long ago settled on base-10 for our mathematics, and the magnitude system is naturally in base 10. 15 magnitudes is a factor of 1,000,000, because every 5 magnitudes is exactly 100.  2.5 magnitudes is a factor of exactly 10.

It doesn’t take much practice to get very fast at this. If we used, say, e as the base instead, that 8% difference would compound with each magnitude: exp(15) is about 3.3 times larger than the 1,000,000 that 15 magnitudes actually corresponds to.

Finally, and most importantly IMO, because this interval is very close to a factor of e, we get the lovely fact that very small magnitude differences translate pretty well to fractional differences.  So, a change of 0.01 magnitudes is almost exactly 1% (only 8% off, actually). That’s so useful when trying to do quick mental estimates.  For instance: a transiting planet with a 10 mmag depth covers 1% of the star, so it has 10% of the star’s radius (since sqrt(0.01) = 0.1). A 1 mmag transit therefore corresponds to 10x less surface covered, so it has 3% of the star’s radius. Easy!
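These mental shortcuts are easy to verify numerically. Here is a minimal sketch in Python (the function name is mine):

```python
def mag_to_flux_ratio(dm):
    """Brightness ratio corresponding to a magnitude difference dm.

    One magnitude is the fifth root of 100, so 5 magnitudes is
    exactly a factor of 100 and 15 magnitudes is exactly 10**6.
    """
    return 100 ** (dm / 5)

print(mag_to_flux_ratio(15))    # 1,000,000: every 5 mag is exactly 100x
print(mag_to_flux_ratio(0.01))  # ~1.0092: 0.01 mag is roughly 1% (off by ~8%)

# Transit mental math: a 10 mmag dip blocks ~1% of the starlight,
# so the planet's radius is ~10% of the star's (the sqrt of the depth).
depth = 1 - 1 / mag_to_flux_ratio(0.010)
print(depth ** 0.5)             # ~0.096, close to the quick estimate of 0.1
```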

I think of it as akin to the twelfth-root-of-two intervals on an equal-tempered instrument. No interval on such an instrument produces the mathematically perfect 3:2, 4:3, or 5:4 harmonic, but they’re all close enough and in exchange you can transpose music and shift keys with ease and without loss of musical fidelity. The pedants may complain, but it’s worked great for centuries.

Do NASA and the NSF support SETI?

Does the federal government support SETI?  We usually say it does not, but in the 2019 audit of the SETI Institute, there is a letter from NASA protesting this characterization.  It contains this language:

The OIG’s statement on the absence of NASA’s funding for SETI research is misleading and the finding incorrect.1 NASA has funded the development of several instruments that enable such searches

Michael New made a similar point at the Houston NASA Technosignatures workshop: NASA has funded some SETI work since 1993 (including that workshop itself).

The footnote in the text above mentions 3 grants explicitly, but I think they missed a few. Working with Jill Tarter and others, I’ve tried to count every NASA and NSF grant for SETI work since 1993. I don’t know of any from the ’90s, but, as the report states, there are some in the past 15 years.

Here’s my list:

NASA:

  • “A 2 Billion Channel Multibeam Spectrometer for SETI” 2 years 2005–2007, $398,040 (PI: Marcy, NNG05GK06G, in response to Origins of Solar System NRA-01-OSS-01-ASTID)
  • “Arecibo Multibeam Sky Survey for Direct Detection of Inhabited Planets,” 4 years 2009–2013, $485,642 (PI: Korpela, NNX09AN69G, in response to Exobiology 2008 NNH08ZDA001N-EXOB). Money funded running the SERENDIP IV survey and SETI@Home.
  • “New Strategies in the Search for Extraterrestrial Intelligence” 2008 $15,000 (PI: Paul Davies, NNX08AG65G) to fund the “Sound of Silence” SETI Workshop
  • “A Wideband SETI Galactic Plane Survey” 4 years 2009-2012 $302,000 (PI: Steven Levin, in response to Origins of Solar System NNH08ZDA001N-SSO).
  • “Detection of Complex Electromagnetic Markers of Technology,” 3 years (+ 1 year no-cost extension) 2005–2009, $660,079 (PI: Jill Tarter NNG05GM93G in response to NRA-OSS-01-ASTID). Money funded studies by Cullers, Stauduhar, Harp, Messerschmitt, and Morrison on using autocorrelation and other methods for detecting broadband SETI signals.
  • “Instrumentation for the Search for Extraterrestrial Intelligence,” 3 years, $590,589 (PI: Werthimer, NNX12AR58G, in response to ASTID 2011 NNH11ZDA001N-ASTID). Money funded building an instrument at Arecibo/GBT.

NSF:

  • AST-0808175: Radio Transient and SETI Sky Surveys Using the Arecibo L-Band Feed Array, $362,624 (PI: Werthimer, NSF-AST 2008)
  • AST-0838262: Collaborative Research: The Allen Telescope Array: Science Operations, $310,000 (PI: Tarter, NSF-AST 2008)
  • AST-0540599: Collaborative Proposal: Science with the Allen Telescope Array, $300,000 (PI: Tarter, NSF-AST 2005)
  • AST-0243040: Multipurpose Spectrometer Instrumentation for SETI and Radio Astronomy, $704,080 (PI: Marcy, NSF-AST 2002)
  • OAC-0221529: Research and Infrastructure Development for Public-Resource Scientific Computing, $911,264 (PI: Anderson, NSF-OAC 2001)

Total since 1993: $2,451,350 (NASA) + $2,587,968 (NSF) = $5,039,318

Wow! $5 million!  That’s a lot, right?

Well, not really. It means that, since 1993, federal grant spending on the topic has averaged less than $200,000/yr, which, after indirect costs, supports 1-ish FTE (i.e., one scientist or engineer). So: one person at a time.

Now maybe that’s not fair, and we should count from 2001, when the first of these grants began.  Then it’s $278,000/yr, so we’re up to maybe 1.5 FTEs.
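For the record, here is that arithmetic, with the award amounts copied from the grant lists above (a quick sketch in Python):

```python
# Award amounts (dollars) from the NASA and NSF grant lists above.
nasa = [398_040, 485_642, 15_000, 302_000, 660_079, 590_589]
nsf = [362_624, 310_000, 300_000, 704_080, 911_264]

total = sum(nasa) + sum(nsf)
print(f"Total: ${total:,}")  # $5,039,318

# Average annual rate over 1993-2019, and over 2001-2019
# (2001 being roughly when the first of these grants began):
for start in (1993, 2001):
    print(f"Per year since {start}: ${total / (2019 - start):,.0f}")
```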

So, while it’s technically true that NASA has supported SETI for decades, the amount we’re talking about is so small that it’s not really a rebuttal to the reality of the situation, which is that the government doesn’t adequately fund SETI.  Why not?  The letter in the audit gives a reason:

NASA sets its priorities by following the recommendations of the National Academies of Science, Engineering, and Medicine while simultaneously implementing national priorities established by the President and Congress. SMD will continue to evaluate technosignatures research in the context of the Directorate’s overall portfolio through its standard scientific prioritization process.

This is mealy-mouthed, but the bottom line is that SETI funding is not a high priority in the NASA authorization bills or in the 2000 or 2010 Decadal reviews, so NASA doesn’t feel that it needs to fund it.

Now, this isn’t really a great excuse—the Decadal reviews do say that SETI is good and worth pursuing (even if they don’t recommend funding), and there’s nothing preventing NASA from including SETI under the astrobiology umbrella (which is a field it’s required to pursue).

Indeed, NASA is very inconsistent about whether SETI is allowed to be funded via grants—contrast its protest above that yes it does too fund SETI with Jill Tarter’s exploration of how SETI is/isn’t allowed in various NASA calls through the years here.

The bottom line is that what SETI needs is an explicit recommendation for funding in the upcoming Decadal process and/or explicit mention of technosignatures as an authorized expenditure for NASA and the NSF by Congress. Here’s hoping that the winds really are changing and that we’ll get both in the next couple of years!

[Note to self: Grants since the list here are listed here.]

Galactic Settlement and the Fermi Paradox

The Fermi Paradox is the supposed inconsistency between the ease with which a spacefaring species could settle the entire Milky Way given billions of years and the fact that they are not obviously in the Solar System right now.

This original form of the paradox was formulated most trenchantly by Michael Hart (more on him in Section 2.2 here), who called the lack of extraterrestrial beings or artifacts on Earth today “Fact A”. He showed that most objections to his conclusion stem from a lack of appreciation for the timescales involved (it takes only a small extrapolation from present human technology to get interstellar ships, and even slow ships can star-hop across the Galaxy in less than its age) or from what I’ve called the monocultural fallacy (positing a common behavior for all members of all extraterrestrial species, forever).

William Newman and Carl Sagan wrote a major rebuttal to Hart’s work, in which they argued that the timescales to populate the entire Galaxy could be quite long. In particular, they noted that the colonization fronts Hart describes through the Galaxy would move much more slowly than the speed of the colonization ships. They also argue that long-lived civilizations are anti-correlated with rapidly-expanding ones, and so they conclude that civilizations with very slow population growth rates are necessarily very slowly expanding. They conclude the Galaxy could be filled with both short-lived rapidly expanding civilizations that don’t get very far and long-lived slowly expanding civilizations that haven’t gotten very far—either way, it’s not surprising that we have not been visited.

We rebutted many of these claims in our paper on the topic. In particular, we argued that one should not conflate the population growth of a single settlement with that of all settlements: there is no reason to suppose that colonization is driven by population growth, resource depletion, or overcrowding, or that a small, sustainable settlement would never launch a new settlement ship. One can easily imagine a rapidly expanding network of small, sustainable settlements (indeed, the first human migrations across the globe likely looked a lot like this).

Jonathan Carroll-Nellenback

Once this constraint is lifted, a second consideration makes Newman & Sagan’s numbers smaller. Most of the prior work on this topic exploits percolation models, in which ships move about on a static substrate of stars, but real stars move. Many of these papers also assume that the entire network of settlements has a similar behavior, and some posit that the settlements all might suffer a simultaneous culture shift away from settlement.

Jonathan Carroll-Nellenback at the University of Rochester with Adam Frank, and in collaboration with Caleb Scharf and me, has just finished work on analytic and numerical models for how a realistic settlement front would behave in a real gas of stars characteristic of the Galactic disk in the Solar Neighborhood.

The big advances here are a few:

  1. Jonathan has worked out an analytic formalism for settlement expansion fronts and validated it with numerical models for a realistic gas of stars
  2. Jonathan has accounted for finite settlement lifetimes, the idea that only a small fraction of stars will be settle-able, and explored the limits of very slow and infrequent settlement ships
  3. Jonathan has not assumed that settlement lifetimes or settlement behaviors are correlated. Rather, he assumed a simple, conservative set of parameterized rules for settlement and explored settlement behavior as a function of those fixed parameters.

In particular, the idea that not all stars are settle-able is important to keep in mind. Adam calls this the Aurora effect after the Kim Stanley Robinson novel in which a system is “habitable, but not settle-able.”

The results are pretty neat. When we let the settlements behave independently, Hart’s argument looks pretty good, even when the settlement fronts are pretty slow. In particular, one can have very limited ships (no faster than our own interstellar probes but lasting a million years, or faster ships that can only travel about 1 pc) and still settle the entire Galaxy in less than its lifetime, because the front speed becomes limited by the speed of the stars, which carry settlements into range of new stars regularly and naturally diffuse throughout the Galaxy.

Jonathan explores a few regimes where Earth would not have been settled yet. He finds that it doesn’t take much—just a single settlement front with modest ship ranges and launch rates—to populate the entire Galaxy in much less than a Hubble time.

Also neat is that Jonathan explores regimes where they have been here, but we just don’t notice because it was so long ago. Adam and Gavin Schmidt explored this possibility in their Silurian Hypothesis paper, and I did something similar in my PITS paper. The idea is that “Fact A” only applies to technology that has visited very recently or that visited and then stayed permanently. Any technology on Earth or in the Solar System that is not actively maintained will eventually be destroyed and/or buried, so we can really only probe even Earth’s history back of order millions of years, and not very well at that.

So really, the question isn’t “has the Solar System ever hosted a settlement?” but “has it been settled recently?” Jonathan shows that there is actually a pretty big region of parameter space in which the Solar System sits amid many settled systems but just hasn’t been visited in the last 10 million years.

Of course, there are still lots of other reasons why we might not have been permanently settled by a Galactic network of settlements—as we note in the paper:

Hart’s conclusions are also subject to the assumption that the Solar System would be considered settleable by any of the exo-civilizations it has come within range of. The most extravagant contradiction of this assumption is the Zoo Hypothesis (Ball 1973), but we need not invoke such “solipsist” positions (Sagan & Newman 1983) to point out the flaw in Hart’s reasoning here. One can imagine many reasons why the Solar System might not be settleable (i.e. not part of the fraction f in our analysis), including the Aurora effect mentioned in Section 1 or the possibility that they avoid settling the environment near the Earth exactly because it is inhabited with life.

In particular, the assumption that the Earth’s life-sustaining resources make it a particularly good target for extraterrestrial settlement projects could be a naive projection onto exo-civilizations of a particular set of human attitudes that conflate expansion and exploration with conquest of (or at least indifference towards) native populations (Wright & Oman-Reagan 2018). One might just as plausibly posit that any extremely long-lived civilization would appreciate the importance of leaving native life and its near-space environment undisturbed.

So our results are a mixed bag for SETI optimists: Hart’s argument that settlement fronts should cross the whole Galaxy, which is at the heart of the Fermi Paradox, is robust, especially because the movements of the stars themselves should “mix” the Galaxy pretty well, preventing simply connected “empires” of settlements from forming. If Hart is correct that this means we are alone in the Galaxy, that is actually very optimistic for extragalactic SETI, because it means other galaxies with even a single spacefaring species should rapidly become endemic with them. Indeed, our analysis did not even include effects like halo stars or Galactic shear, which would make settlement timescales even shorter.

On the other hand, there are a lot of assumptions in Hart’s arguments that might not hold, in particular that if the Sun has ever been in range of a settled system, “they” would still be here and we would know it. Perhaps Earth life for some reason keeps the settlements at bay, either because “they” want to keep it pristine or because it’s just too resilient and pernicious for an alien settlement to survive here. Is Earth Aurora?
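The “mixing” argument above can be made concrete with a toy simulation (my own sketch for intuition only, not code or parameters from the paper; the function name, the ring geometry, and all numbers here are made up): stars drift at random speeds around a one-dimensional ring, and a settlement spreads to any star that passes within a small reach of a settled one. Even when the reach is much smaller than the typical spacing between stars, so that a front among static stars would stall, the drift eventually carries settlements past every star:

```python
import random

def steps_to_full_settlement(n_stars=50, reach=0.01, dt=0.01, seed=2):
    """Toy 1-D 'galaxy': stars drift around a unit ring at random speeds.
    A settlement spreads to any unsettled star that comes within `reach`
    of a settled one.  Returns the number of time steps until every star
    is settled."""
    rng = random.Random(seed)
    pos = [rng.random() for _ in range(n_stars)]            # positions on the ring
    vel = [rng.uniform(-1.0, 1.0) for _ in range(n_stars)]  # random drift speeds
    settled = [False] * n_stars
    settled[0] = True                                       # one seed civilization
    steps = 0
    while not all(settled):
        steps += 1
        for i in range(n_stars):                            # stars drift ("mixing")
            pos[i] = (pos[i] + vel[i] * dt) % 1.0
        for i in range(n_stars):                            # settlements spread locally
            if settled[i]:
                for j in range(n_stars):
                    if not settled[j]:
                        d = abs(pos[i] - pos[j])
                        if min(d, 1.0 - d) < reach:         # ring distance
                            settled[j] = True
    return steps
```

With the defaults above, the reach (0.01) is smaller than the mean star spacing (1/50 = 0.02), so if the stars were frozen in place the front would never cross the ring; it is the relative drift that guarantees every star eventually passes close to a settled one. That is the intuition behind the claim that stellar motions “mix” the Galaxy and keep settlement fronts moving.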

The paper is here.

SETI is a very young field (academically) Part II

In a previous post, I discussed the five PhD dissertations focused on SETI (ever!) and mentioned that I could not track what had become of one of their authors, Darren Leigh.  Well, it turns out I should have just asked!

Darren was kind enough to email me with the details of his degree and his thoughts on the merits of a degree in SETI, Paul Horowitz as an adviser, and his career path since then.

I’ve updated my previous post to reflect his input. Below is his email to me, which he kindly allowed me to reproduce here.


Darren Leigh, the first person to write a doctoral thesis focused on their search for extraterrestrial intelligence.

Hi Jason,

A friend stumbled onto this post of yours and sent me the link.

I didn’t think I would be that hard to find. :-)

At the time I did my dissertation, I was told that it would be the world’s first on the subject of SETI. A couple of previous astronomy dissertations had contained a chapter on SETI, but did not have it as the main topic. The fact that I had done a bachelor’s and master’s in EE at MIT (with some physics background) probably made this easier than it would have been for a real physics major looking for a career track in academic astronomy. (Note that my PhD says “Applied Physics”, and is from the Division of Engineering and Applied Sciences, and not the Physics Department).

The real pull of doing SETI was working for Paul Horowitz at Harvard. I was actually in the early stages of a PhD program at MIT when I met Paul and decided to move up the street to work with him. Paul always prided himself on being a generalist, rather than a narrowly-focused academic. Note the wide range of things that he works on, including the amazing “Art of Electronics”. Those of us in the Horowitz lab were amused when Ernst Mayr complained about what a waste SETI was, both in terms of resources as well as in terms of the professional lives of Paul’s students. I think Paul’s students have all done pretty well, taking a more generalist approach than many doctoral recipients.

I’ve been doing corporate-type R&D since I defended, and my SETI background has served me well in areas from electronics to signal processing to satellite communications to marketing and public relations. [I spent a lot of time with camera crews and the press around 1995 due to the SETI work and the (then recent) discovery of 51 Pegasi b.]

Jonathan Weintroub, another of Paul’s PhD students who defended the same year that I did and also an EE, was doing actual astronomy, looking for highly red-shifted hydrogen. A lot of the work we were doing overlapped. He now works for the Harvard-Smithsonian Center for Astrophysics on the Submillimeter Array.

Ian Avruch was a doctoral student of Bernie Burke, but hung around the Horowitz lab a lot because he was also looking for highly-redshifted hydrogen and could actually get stuff built there. He’s a real physicist and has done a lot of professional astronomy since. I believe that he is at the European Space Agency now.

Chip Coldwell (on your list) was a physics major, but has spent most of his professional life doing software/computer stuff, and is now apparently moving into RF hardware. You can check with him yourself, but I don’t think he was doing astronomy research after his PhD, even though he has worked for such astronomers. He spent a lot of time at Red Hat and is now at MIT Lincoln Lab.

Of the other Horowitz students on your list, Andrew Howard had been a physics major and got a physics PhD and is now a professor of astronomy at CalTech. Curtis Meade was (I believe) an EE, who got his PhD in “Applied Physics” at the School of Engineering and Applied Sciences, like I did. I don’t know what he’s up to now.

I can’t think of any of Paul Horowitz’s doctoral students who has had professional problems. I guess Mayr was used to narrowly-focused grad students who could be ruined if they weren’t trained exactly right for academia. Paul took in both EEs and physicists and made us all better at both of those things, as well as turning us into skilled and pragmatic researchers.

As far as wasted money and resources go, SETI is cheap. I think people believe that it is expensive because they associate it with “space”, and that with NASA and its enormous budgets. There’s a good chance that the press spent more money covering our SETI work than we spent actually doing it.

Me? I’m currently a VP at (and one of the founders of) Tactual Labs. We do advanced human-machine interaction, especially high-performance capacitive sensing systems. I’ve been working in R&D shops for my entire professional career. After finishing my doctorate, I spent ten years at Mitsubishi Electric Research Labs, coming up with new IP and product ideas. That lab was magical and very influential, and many alumni went off to professorships at MIT, Harvard and other prestigious universities, as well as to corporate R&D labs at Microsoft and Google.

SETI is a very young field (academically)

[Note: This is a “living” post which I update periodically as I learn about people who have done graduate work in the field. If I’m missing a name please email me.]

SETI is not a field that has a large presence in academia, especially in terms of graduate education. Indeed, there are only two regularly numbered graduate courses in the world on the topic that I’m aware of (at Penn State and UCLA).

Because of this, it’s hard to get a PhD while having the primary focus of your dissertation be searching for technological extraterrestrial life. In fact, so far as I can tell (speaking with many of the people in the field), it’s only been done thirteen times:

  1. Darren Leigh (May 1998, Horowitz, thesis)
  2. Alexey Arkhipov (December 1998, Litvinenko)
  3. Stephen Brown (2000, Dixon & Kraus, thesis)
  4. Charles Coldwell (2002, Horowitz, thesis)
  5. Andrew Howard (2006, Horowitz, thesis)
  6. Andrew Siemion (2012, Bower & Werthimer, thesis)
  7. Laura Spitler (2013, Cordes, thesis)
  8. Curtis Mead (2013, Horowitz, thesis)
  9. Ian Morrison (2017, Tinney, thesis)
  10. Emilio Enriquez (2019, Falcke)
  11. Sofia Sheikh (Spring 2021, Wright)
  12. Paul Pinchuk (Summer 2021, Margot)
  13. Macy Huston (Summer 2023, Wright)

Paul Horowitz, SETI PhD adviser extraordinaire.

Except for a brief spell from 1998 to 2002, Paul Horowitz was responsible for supervising at least half of all doctoral SETI dissertations until 2018! Thanks, Paul! Of these thirteen, six are professional astronomers today; Mead is at Apple, Coldwell works in an astronomy-related industry, Brown is apparently a scientist at Harris Corporation, and Darren Leigh describes his career here. We’re not sure what became of Arkhipov.

I’m also aware of some terminal master’s degrees on the topic (many are EE degrees related to the Argus SETI array):

  1. Dennis Cole (1976, Dixon & Kraus, thesis)
  2. Jim Bolinger (1988, Dixon & Kraus)
  3. Hyung Joon Kim (1999, Ellingson & Burnside)
  4. Tom Alfernik (2000, Ellingson & Burnside)
  5. Emarit Ranu (2000, Ellingson & Burnside)
  6. Amy Reines (2002, Marcy & Cool)
  7. Mikael Flodin (2019, Mattsson, thesis)
  8. Andreea Dogaru (2019, Kerins & Breton, thesis)

This is not to say that no other graduate students have done work on the topic. Here are a few of the (presumably many) theses that had a significant SETI component:

  1. Maggie Turnbull
  2. Jayanth Chennamangalam
  3. Hayden Rampadarath
  4. Kimberley M. S. Cartier
  5. Branislav Vukotic

And there has also been a lot of doctoral work in the humanities and social sciences studying SETI itself, for instance in theses by Daniel Romesberg, Claire Webb, and Rebecca Charbonneau.

I’m also aware of some current graduate students who have, or plan to have, major (50-100%) components of their dissertation work be searches for intelligent life in the universe:

  1. Bryan Brzycki (Siemion/dePater)
  2. Megan Li (Margot)

And four more with at least a portion of their thesis about SETI:

  1. Gerry Zhang (Siemion/dePater)
  2. Maren Cosens (S. Wright)
  3. Neda Stojkovic
  4. Daniel Giles (Walkowicz)

So the number of theses is going up by a lot in the span of just a few years! This is (weak) evidence of what certainly feels like a resurgence in the field. Still, these numbers are tiny compared to the perception of the amount of SETI work being done, and they illustrate how young the field really is, despite the nearly 60 years that have elapsed since its inception.

‘Oumuamua, SETI, and the media

Avi Loeb

[Note: This post was written in 2019, two years before Avi’s book Extraterrestrial and the ensuing book tour, in which his claims were significantly stronger and his voice significantly louder than at the time of this writing.]

Avi Loeb is the chair of the astronomy department at Harvard, a distinguished and well-cited astronomer (he has an h-index of 87), and the chair of the Breakthrough Starshot initiative. He’s a strong proponent of making sure that science doesn’t succumb to groupthink, and a champion of outré ideas.

He also has been making headlines recently for articles he has co-authored, interviews he has given, and popular media columns he has written about the possibility that fast radio bursts, and now ‘Oumuamua, are artificial in origin. This has created a great deal of buzz in popular culture and a lot of hand-wringing and criticism on social media by scientists who find his actions irresponsible. Many have asked my opinion, so I’m collecting my many thoughts on the topic in this post.

I am happy to defend Avi on these grounds:

  • He is driving us to have an important conversation about what “acceptable” SETI research looks like, and in this conversation I’m mostly on his side. He’s essentially moving the scientific equivalent of the “Overton Window” towards SETI, and that’s a good thing. These are exciting and interesting questions and we should not let the face-on-Mars/Ancient-Aliens/UFOlogy types prevent us from discussing them.
  • He is using tenure and his stature the way we all imagine it’s supposed to be used: as a shield so that he can explore potentially unpopular research avenues without fear of retribution or ostracism. We all imagine that’s what we would do in his position (I hope!) but too often it ends up just being a club to get junior scientists to conform to one’s vision for what “proper” science looks like and what “good” problems are.
  • The papers he and his postdocs are writing are important first steps in making Solar System and other forms of SETI a serious academic discipline.
  • He is being a role model for how scientists can explore outré ideas and spend an appropriate amount of their time on potential breakthroughs.
  • He is putting SETI in the public eye and doing a lot of outreach.

Avi wouldn’t be pushing the envelope hard enough if he weren’t getting some pushback, and indeed there is plenty of fair and good-faith criticism that can be made about his approach (not all of which I agree with):

  • The degree of certainty he expresses in ‘Oumuamua being artificial does seem unwarranted to me (though, to be fair, I’ve always been an ‘Oumuamua-might-be-artificial skeptic).
  • Given the way we know the press (especially the yellow press) will handle any story about “aliens”, one can argue that the “extraordinary claims require extraordinary evidence” maxim is especially applicable to SETI (I’ve made this argument strongly when discussing my own research in the press). Avi could hew more closely to this maxim.
  • The tone of his papers and that of his public comments are quite divergent. The body of the paper on ‘Oumuamua-as-lightsail, for instance, has only a brief mention of the potential artificiality of ‘Oumuamua at the end; most of it is about the perfectly general problem of thin objects in interstellar space. Snopes highlights this divergence well, pointing out that the paper is quite sober and restrained compared to some of the media coverage. (It’s true that the title and abstract of the paper are about ‘Oumuamua specifically, and that it serves as the case study for the whole analysis.) Avi’s public statements are much less conservative and equivocal.
  • He is not just quietly following the evidence; he is using his platform to have a very public and high-visibility discussion about his research. I will concede that Avi is an exception to my earlier (somewhat petulant) protest that SETI scientists are not in it for the attention. That said, I will object to anyone who would claim Avi is only in it for the attention, or that such attention is inherently a bad thing.
  • Many of his papers are de novo explorations of topics like the fate of comets in interstellar space, with little connection to the substantial amount of work that has already been done on the topic. His papers would be better and less naive if they had a closer connection to this prior work rather than starting from scratch.

More broadly, let’s look at two threads on Twitter criticizing Avi. I’ll start with this one by Bryan Gaensler:

Bryan makes the rather Popperian argument that if your model is too flexible then it can’t be falsified, so you’re not doing science.  The implication is that since we don’t have a good model for aliens, we can always play the “aliens of the gaps” game and so SETI isn’t good science unless it’s looking for unambiguously artificial signals like narrow-band radio waves.

This argument isn’t as tight as it seems. Most interesting new theories start without concrete predictions—General Relativity was so hard to use that even Einstein wasn’t sure what it predicted (he got the deflection of starlight wrong the first time he calculated it; he wrote a paper saying gravitational waves don’t exist). Theories don’t spring fully-formed from theorists’ heads; many important breakthroughs start with something less than quantitative or precise (“maybe we need to modify gravity”; “maybe there is a new subatomic particle involved”) and let the data guide the theories’ details.

This is the normal progression of science. SETI is no different, and so no less scientific.

Then there is this one, by Eric Mamajek, which I mostly agree with:

It’s mostly fine through tweet #9, but then he conflates things in the last tweet using an unwarranted leap of logic.

Up until then he had been criticizing the Holmesian logic of how ‘Oumuamua must be alien because we had ruled out natural explanations. I quite agree with him.

But in the last tweet he jumps to criticizing even bringing up the hypothesis of ETIs in general, implying that scientists who do are pulling a Giorgio Tsoukalos. (There’s also the assertion at the end that such anomalies will “inevitably” turn out to be not just natural but mundane, which is obviously not strictly true.)

But Tabby and I weren’t pulling a Tsoukalos when we submitted our proposal with Andrew Siemion to NRAO to study Tabby’s Star. We really weren’t. I have clarified the actual events with Eric, so I’m pretty sure that’s not what he meant to imply here, but that is how this tweet reads.

Bryan makes a similar (but softer) implication in his final tweets:

We all would! Indeed, Avi Loeb suggested that Breakthrough Listen point Green Bank at ‘Oumuamua1 because he understands very well that the proof of alien technology is something like the bullets on Bryan’s list.

But the implications of these tweets aren’t just wrong, they’re harmful to the field of SETI. A very plausible path to SETI success is that we will see something strange (not “Eureka!” but “That’s funny…”, as the old fortune quip goes) and eventually, after lots of follow-up, we might find the smoking gun, or perhaps it will just end up being a proof by exclusion. As I wrote in 2014:

Artifact SETI can thus proceed by seeking phenomena that appear outside the range that one would expect natural mechanisms to produce. Such phenomena are inherently scientifically interesting, and worthy of further study by virtue of their extreme nature. The path from the detection of a strange object to the certain discovery of alien life is then one of exclusion of all possible naturalistic origins. While such a path might be quite long, and potentially never-ending, it may be the best we can do.

Communication SETI, on the other hand, shortcuts this path to discovery by seeking signals of such obviously engineered and intelligent origin that no naturalistic explanation could be valid. Together, artifact and communication SETI thus provide us with complementary tools: the most suspicious targets revealed by artifact SETI provide the likeliest targets for communication SETI programs that otherwise must cast an impossibly wide net, and communication SETI might provide conclusive evidence that an extreme but still potentially naturalistic source is in fact the product of extraterrestrial intelligence (Bradbury et al. 2011).

Bryan’s thread and Eric’s final tweet could easily be read to foreclose this sort of research, essentially saying “it’s not worth thinking about the aliens hypothesis until it’s so unavoidable that you’ll get no flak for it” (radio signals à la Contact, the proverbial saucer on the White House lawn, etc.). They certainly make it clear that they won’t hesitate to chastise you on Twitter for going down this road.

But if we want to get to the end of that road, we’ve got to start walking down it at some point, and when the media very reasonably asks what we’re doing so they can report on it to a very understandably curious public, we should be allowed to answer their questions without having our motives (or scientific credibility) questioned by our peers.

In short: your mileage may vary on Avi’s particular style of public communication and conclusions on ‘Oumuamua, but when making your critique please be mindful that you are not slamming the whole endeavor. SETI as a serious science will make hypotheses, explore anomalies, and discuss the possibility of alien technology as the cause, and we need to be able to do so without obloquy from our peers, and without them policing which kinds of SETI we’re “allowed” to work on or talk about in public.

If I seem touchy about this, it’s actually not because I’m smarting from these Twitter threads or anything like that (which I don’t actually disagree with much—in particular I’m friends with Eric and I know I have his respect). As I wrote at the top, I’m glad we’re having this conversation and I hope it continues!

But another purpose of this post is that Avi and I (and other SETI researchers) have advisees that work on SETI and these sorts of messages are not lost on them: these tweets imply that senior people in your field will disapprove of you because of the topic of your research, and they will police what you’re allowed to say to the press, regardless of how good a scientist you are. Keep in mind, “Avi’s” paper on ‘Oumuamua that is being criticized has a postdoc as first author.

So in closing: I pledge to keep SETI real and well grounded in science, to be responsible in my interactions with the media about it, and to train my students to do the same.

And, I hope my peers will pledge to create a welcoming environment for my advisees as SETI (hopefully!) comes back into the astronomy fold (even when—especially when—they are complaining about Avi).

[Updates: Bryan responds in this thread (click to expand):

also:

1= Privately, Bryan clarified to me that his tweet was referring to his team’s MWA search for signals, not the search by Green Bank, as I suggested in my post. I should have read Bryan’s tweet more carefully and followed the link before critiquing his tweet.

Also, I’ve changed the language about who suggested that GBT observe ‘Oumuamua; Joe Lazio informs me that the observations were made with WVU time following discussions with Breakthrough Listen that preceded Avi’s recommendation. In spite of both errors on my part in the original post, my point that Avi appreciates the importance of dispositive evidence stands.

Also, Avi touches on his motives in this interview:

But the search for intelligent life remains outside the mainstream. I am trying to change that in two ways. First, by speaking out in the way that I did on ‘Oumuamua.

]