
Reductionism and Emergence II: Two Philosophers’ Takes on My Post

A short while back I posted my amateur position on the issues of reduction and emergence in physics.

I shared it with Chelsea Haramia, a philosopher who works on problems of ethics in astrobiology, and she and Thomas Metcalf kindly responded with a lengthy discussion. I’m really appreciative of the time they have taken to “translate” my post into the proper terminology of philosophy and give professional feedback.

So, if you’re interested in a professional take on the ideas I raised in my prior post, please read on!

Reduction, emergence, and the limits of physics

Thomas Metcalf and Chelsea Haramia

Jason Wright presents a thoughtful and interesting discussion of his views on emergence and reduction. In academic philosophy, these terms and concepts are both widely used and the subjects of vigorous debate. In this short note, we want to outline how philosophers think of reduction and emergence and in which ways these concepts can illuminate some topics Jason mentions.

Reduction vs. emergence, weak and strong

Put simply, those who believe in emergence (rather than reduction) maintain that the big (holistic) stuff is as real as the small (basic) stuff. Minds and waves of bathwater, then, are as real as electrons and molecules. It makes sense to have scientific theories that talk about tornadoes, mammals, colors, planets, phobias, and so on, instead of merely having theories that talk about the atoms and fields that compose those things.

Emergent phenomena (e.g. planets) are distinct from their basic components or substrates (e.g. atoms of silicon and iron), but there are different ways of describing this distinctness. We’ll look at two: strong and weak.

One way is to maintain that, while complex combinations of the small stuff cause and fully determine the nature of the big stuff, the big stuff is not “realized in” the small stuff: it comes from the small stuff but it’s not ultimately the same stuff as the small stuff. Some of this emergent stuff, in fact, might not even be physical at all, or might not interact with the physical world at all. This “strong emergence,” then, is normally taken to be incompatible with what philosophers call “physicalism,” i.e., the thesis that everything is ultimately physical. Strong emergence occurs when the emergent phenomenon is of a fundamentally different type of stuff than is the stuff it emerges from.

A different way to posit emergence is to take the view Jason favors. Emergent properties are still fully physical, but they’re realized at different scales and often require different academic disciplines, approaches, and analyses for their study. These disciplines study entities and phenomena that are just as real as what the quantum physicist or neuroscientist studies, but we might still need to understand the small stuff to properly identify and fully understand the big stuff. Despite this emergence’s compatibility with a fully physical world, we may call it “weak emergence.” What separates this weak emergence from reductionism is that the emergent (big) stuff is still real in itself (and useful to talk about and include in scientific theories), and, crucially, can often be realized in very different sets of small stuff. For example, perhaps there could be silicon-based, rather than carbon-based, animals. We would still properly call them “animals,” but “animal” wouldn’t be reducible to “carbon-based (among other things)” because there could be animals that aren’t carbon-based at all. Being an animal would emerge from (among other things) being carbon-based, but it could also emerge from (among other things) being silicon-based. In contrast, perhaps only H2O would ever really be “water.” Something that looked and acted just like water at the macroscale, but wasn’t made of H2O, wouldn’t really be water. If so, then water wouldn’t just emerge from H2O; it would reduce to H2O. (The example of “water” vs. “H2O” is ubiquitous in philosophy; you can read more here and here.)

One of the virtues of Jason’s view is that it provides a coherent avenue of response for anyone who finds that, often, those who attempt to make appeals to emergence have not actually posited anything beyond the purely physical realm. Some emergent accounts are congenial to reductive accounts, and these accounts may all manifest in a fully deterministic, measurable, physical world. The compatibility of Jason’s “weak physical emergence” and reductionism is a useful way of responding to certain claims of emergence—a way of demonstrating that many purported appeals to emergence are actually perfectly compatible with strong physicalism.

Again, these concepts and terms are commonly debated in philosophy, so for much more discussion on the topic, we invite you to visit this link.

The case for strong physical emergence

As you can see from the linked entry just above, there may be reason to quibble a bit with Jason’s (and our) definitions of both “weak emergence” and “strong emergence.” Nonetheless, when addressing issues of strong emergence, we’re happy to help ourselves to Jason’s terminology: that “strong physical emergence” refers to a phenomenon that is truly, fundamentally real, and emerges from some set of physical causes, but is not itself realized in any set of physical objects. For example, consciousness might arise from neurons, but not be identical to any set of neurons, and it might have fundamental properties that neurons don’t have.

What might those properties be like? Well, the four most commonly discussed are consciousness, intentionality, perspective, and unity. Consider these four pairs of premises (you can imagine how the rest of each argument would go):

The Argument from Consciousness

C1. At-least-some minds have conscious experiences.

C2. No atoms have conscious experiences.

The Argument from Intentionality

I1. At-least-some beliefs are about things.

I2. No atoms are about things.

The Argument from Perspective

P1. At-least-some experiences necessarily have first-person, subjective perspectives inherently attached to them.

P2. No sets of atoms necessarily have first-person, subjective perspectives inherently attached to them.

The Argument from Unity

U1. At-least-some minds are unified: they are not made of individual parts.

U2. All sets of atoms are disunified: they are made of individual parts.

All these arguments would then conclude that minds, or beliefs, or experiences aren’t ultimately just sets of atoms.

We don’t pretend that these arguments are all decisive; many philosophers would reject them, and most philosophers believe that the mind is ultimately physical. And there are important arguments against dualism about the mind, i.e., the thesis that the mind and the brain are two different objects, which might imply that the mind is non-physical. If we think there are two fundamental categories of stuff—say, physical and non-physical—then we have to explain how these fundamentally different things could possibly interact with each other. At least as far back as the philosopher Elisabeth of Bohemia (1618–1680), skeptics about dualistic views of reality have offered this challenge. (You can read more here.) But you can see how someone might argue, based on the alleged intrinsic properties of mental states, for the strong emergence of minds.

One more thing for now: There are lots of other arguments that the mind isn’t a physical object. We’re not going to get into such arguments much here, but you can read about them if you want.

Against strong emergence

Now we can consider Jason’s argument against strong emergence. It’s based on a good point. We have reason to believe that consciousness and intentionality at-least-weakly-emerge from neurons, since as far as we know, destruction of neurons harms or destroys consciousness and intentionality. If you cut off the current to the broadcast antenna, you lose most of the photons. But that’s all compatible with weak emergence and even with reduction. The interesting question for us is whether causation goes in the other direction. Is there any reason to believe that some extra thing—beyond our neurons and the corresponding current and neurotransmitters—has any causal influence on anything physical? Can my beliefs cause me to do things without my neurons’ causing me to do things? If so, then this would begin to look like what Jason calls “strong physical emergence.”

Well, let’s take a minute to identify an alternative view: epiphenomenalism. Strictly speaking, strong emergence of minds can occur without those minds’ having any causal influence on the physical world. Maybe minds are just extra things, floating out there, passengers along for the ride that never put their hands on the steering wheel. This mind could still be an example of strong emergence; it could still have intrinsic properties (such as, arguably, consciousness) that atoms don’t have. In correspondence, Jason gave the apt analogy of a child’s holding a disconnected video-game controller and watching the video-game feed on a screen, falsely believing that they’re the one controlling the video game. (You can read more about epiphenomenalism here.)

Maybe we think epiphenomenalism is implausible. Maybe we think, for example, that consciousness would have no reason to evolve if it didn’t have some influence on our bodies. Let’s set epiphenomenalism aside for now and go back to Jason’s argument: in essence, that we haven’t found any good candidates for strong emergence yet. We haven’t, for example, found neurons that just kind of fire for no reason at all. If we did, then maybe some strongly emergent beliefs or desires would be the cause of that firing. Similarly, we haven’t found any good evidence of a “life force” or anima that determines whether an animal is alive or dead. So that’s a good point. Maybe if we haven’t found any evidence of something, then by Occam’s Razor, we should dismiss it until we acquire such evidence. (One of us has criticized a version of Occam’s Razor in print, however.)

Potential examples of strong emergence?

Of course, Jason grants that we may have already found something that seems indeterministic in that way, i.e., that seems to result without a sufficient antecedent cause. If I measure an electron’s spin about some axis and then measure its spin about an orthogonal axis, then perhaps the second measurement’s result can’t be explained by anything intrinsic to the electron. This might even make room for something like indeterministic free will, if somehow there were some event or force that could influence the probabilities of our making certain decisions, while still leaving room for other possible decisions. This is a very interesting case and possibly one of the best routes for arguing for indeterministic free will. (Of course, this only really works if we believe in an indeterministic physical story.) If the actual free-will decisions are non-physical events, or if the tie between microscopic particles and free-will decisions is merely a law of nature (such that God, say, could have changed that relationship, by rewriting the laws of nature), then this looks like strong emergence. The free-will decisions are of a fundamentally different type of entity than are the neurological events.

Let’s go back to the question of neurons, then. Does the fact that we haven’t found any firing-for-no-reason neurons suggest that there are no “extra,” strongly emergent beliefs and experiences out there (beyond our neurons) causing our neurons to fire? Let’s grant the empirical premise: maybe we really haven’t found any such neurons. But as far as we know, no one has fully traced the entire process of stimulus-response in a way that rules out any extra influences. At the present moment, we have some fancy devices that allow us to scan brains in gross terms: we can see where blood is flowing, or where there’s lots of chemical activity, for example. But that’s a far cry from (say) getting everything reduced down to something like an observable chain of falling dominoes from external stimulus to neuronal firings to external response. But suppose we did reach that point. Even then, as noted, that wouldn’t rule out strong emergence. For one thing, as noted, the strongly emerging events might be epiphenomenal: they are caused by the physical realm, but don’t cause anything in the physical realm.

In response, one may reasonably be suspicious of a view that arguably inherently rules out the possibility that we could empirically verify its truth. But of course it’s possible for an empirically unverifiable theory to be true, and the position “We should only believe in empirically verifiable claims” is infamously potentially self-defeating. In any case, there’s substantial current debate about whether there is a detectable role for (say) quantum-mechanical decoherence in brain events. (See here for more information.)

We also want to discuss a very interesting example Jason gives: a hypothetical behavior of gravity, dark energy, or some other force. The idea, in brief, would be of a force or field (call it “Force X”) that seems to manifest, say at large scales, and in proportion or otherwise in relation to familiar, “light” matter, but can’t be explained by any of the microphysical-scale events or objects. Force X might influence the light matter around us, but we can’t find any individual particle that constitutes or mediates this force.

This might be evidence that Force X was strongly emergent. After all, Force X might seem to be related to the presence of light matter, but not composed of light matter nor of anything else we can specifically detect. If it were composed of some fundamentally different type of stuff, and not realized in the familiar particles of the Standard Model, then this would look like strong emergence. And this would, in turn, push us toward having to discuss the very deep question of what it means for something to be physical or a part of physics. (You can read more here.) If we never discover any candidate particle to be the matter or mediator of Force X, do we have to conclude that physics itself is a fundamentally incomplete description of reality? Or, perhaps by induction, are we entitled to conclude that Force X is realized in, and mediated by, physical particles that are simply undetectable to us? What if they’re apparently forever undetectable—may we really say that those particles are still part of physics, or still part of physical reality?

These are obviously difficult issues that we can’t solve here. But we mention them to give you an idea of how philosophers think about these issues and to potentially generate further discussion. And we’d like to thank Jason for a stimulating post and for the opportunity to present our thoughts here.

Semi-technical appendix: Varieties of reduction and emergence

Okay, for those of you who have followed so far and want to know, in more explicit terms, how to tell the differences between reduction, weak emergence, strong emergence, and complete independence, here we go.

First, it helps to understand the difference between “physical” and “metaphysical” possibility. Something is physically possible when it’s compatible with the laws of physics, whatever they are. For example, to accelerate to half the speed of light, or to undergo an exothermic reaction. Something is metaphysically possible when it could happen, whatever the laws of physics happen to be. For example, to accelerate to twice the speed of light, or to know the exact position and velocity of a particle. (If an omnipotent God exists, then she can create whatever is metaphysically possible, even if it’s not physically possible—after all, she can change the laws of physics.) Of course, some things aren’t even metaphysically possible. Arguably, it’s metaphysically impossible for the number eight to be prime, and metaphysically impossible for something to exist and not exist at the same time. (By the way, there are far more than two varieties of possibility; see here for a much longer discussion.)

Now that we’ve got an idea of those two varieties of possibility, we can think about a procedure for distinguishing emergence, reduction, and so on. (We don’t intend this to be 100% correct and foolproof, but instead, to give a generally useful procedure.)

Take two events, phenomena, or objects. Let’s call them “Micro” and “Macro,” since typically, emergent phenomena are on a larger scale than the phenomena they’re alleged to emerge from. Now suppose we want to know whether Macro is reducible to Micro, or emerges from Micro in some way, or is independent of Micro. We can begin by asking some sets of questions in order.

  1. Is it physically and metaphysically possible for Macro to exist alone in the universe? If “yes,” then Macro is independent of (i.e. non-emergent-from and non-reducible-to) Micro. If “no,” then proceed.
  2. (a) Does Macro have inherent properties or powers that Micro doesn’t have? (b) Is Macro non-physical while Micro is physical, or is Macro otherwise a fundamentally different type of entity than Micro is? (c) Does Micro produce Macro by physical or psycho-physical law (or law of nature) but not by metaphysical necessity? If the answer to all of these is “yes,” then Macro strongly emerges from Micro. If not, then proceed.
  3. (a) Is Macro equally real as Micro? (b) Is it possible for true theories to mention Macro explicitly? (c) Could Macro be realized in many different sets of objects besides Micro? (d) Does Micro produce Macro by metaphysical necessity? If the answer to all of these is “yes,” then Macro weakly emerges from Micro. If not, then proceed.
  4. Does Macro exist? If “no,” then Macro is just a myth. If “yes,” then Macro is reducible to Micro, or we’re at some borderline case.

What about those borderline cases? There are a few possibilities left unaddressed by this procedure, in which, in steps 2–3, some but not all of the (a)–(c) or (a)–(d) criteria are satisfied. In those cases, we’re probably dealing with some borderline case between strong and weak emergence or between weak emergence and reduction. For more, check out this article.
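
For readers who like to see a procedure spelled out mechanically, here is a toy encoding of the four steps in Python. This is purely illustrative: the dictionary keys are invented labels for the questions above, and of course the real questions call for philosophical judgment, not booleans.

```python
# A toy encoding of the four-step procedure above -- purely illustrative.
def classify(a):
    """a: dict of booleans answering the questions in steps 1-4."""
    if a["macro_possible_alone"]:                 # step 1
        return "independent of Micro"
    if all([a["novel_powers"],                    # 2(a)
            a["different_kind_of_entity"],        # 2(b)
            a["by_law_not_necessity"]]):          # 2(c)
        return "strongly emergent from Micro"
    if all([a["equally_real"],                    # 3(a)
            a["mentionable_in_true_theories"],    # 3(b)
            a["multiply_realizable"],             # 3(c)
            a["by_metaphysical_necessity"]]):     # 3(d)
        return "weakly emergent from Micro"
    if not a["macro_exists"]:                     # step 4
        return "a myth"
    return "reducible to Micro (or a borderline case)"
```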

 

UFOs and SETI

I posted a Twitter thread and it blew up, so I thought I’d record it for posterity here.  Here’s the thread:

Unrolled:

I know a lot of people now want to identify parallels between SETI and UFOlogy. There are a few big differences, though:

1) SETI is based on the premise that alien tech follows the laws of physics as we know them. UFOlogy identifies alien tech from violations of those laws.

Asking me to consider UFOs as alien is asking me to believe two very unlikely things: that they are visiting and imperfectly hiding, and that it’s possible to violate conservation of momentum! This is not a parsimonious explanation for these things.

2) SETI is all about the hunt for good candidates, ones that can definitively survive intense scrutiny. Right now, we have virtually none (I’d say the Wow! signal is the best).

UFOlogy is awash in candidates. It’s starting from the opposite side of the problem.

3) SETI is based in astronomy and related fields. We astronomers have very few skills that translate into the fields needed to study UFO sightings.

It’s fine to scientifically study UFO sightings and understand our airspace, but why drag astronomers into it?

4) SETI works in a domain we don’t have a very good handle on: outer space. It could be *filled* with alien civilizations and signals, but it’s such a big haystack, it’s not hard to understand why we haven’t seen anything yet.

UFOlogy’s domain is the atmosphere, which we know *very well* because we’ve studied it for millennia. There’s not a lot of space for alien spacecraft to mostly hide from meteorologists, air traffic controllers, etc. and still be sort of barely detected the way they are.

Finally, lots of people get excited about UFOs as aliens because they infer from news stories that the government is interested in them, or is hiding what they know about them, or that military pilots or senators are very sure aliens are visiting.

This kind of tea-leaves-reading is not very persuasive to me. I already know a lot of people think UFOs are alien, and it makes sense the military would study weird aircraft and be secretive about that. Yet another article confirming that isn’t new evidence aliens exist.

Finally finally, I appreciate that studying UFOs as non-alien craft is a thing. That’s fine! I’m sure plenty of these things are real aircraft. The above is just about connecting them to aliens, and distinguishing UFOlogy from SETI.

To learn more about all of this, I recommend Sarah Scoles’ books and this article by Katie Mack.

 

Reductionism and Emergence

OK, time for more armchair philosophy!

Inspired by some Twitter posts by Adam Frank, I’ve been thinking about reductionism and emergence.  Here’s the thread that started me off:

In studying this, I’ve found that there are lots of different meanings of the terms “reductionism” and “emergence”, and a lot of the discussion seems to come from people talking past each other because they’re using different definitions. My thinking on this, I should note, is heavily influenced by Sabine Hossenfelder’s essay here.

In one sense, the terms are polar opposites. If by “reductionism” we mean the general approach to problem solving or studying something of reducing a problem to its component parts and working up from there, then its opposite is “holism” which presumes that a system’s behavior is best considered from the top down.

For instance, if I want to study how water sloshes in a bathtub, then starting from atomic physics or quantum field theory is a foolish approach. The water waves in the bathtub are described by equations of fluid flow that are insensitive to the underlying physics. For simple, low-amplitude waves, one is much better served by linearizing the equations for gravity waves, plugging in the measured properties of water, determining the modes in the bathtub, and working from there. For more complex situations you could numerically simulate the water in the tub, maybe with the full set of Navier-Stokes equations plus some corrections for surface tension and stuff. But there’s no need to go working out the van der Waals forces between water molecules or the quark interactions in their nuclei.
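
To make that concrete, the standard textbook relation one linearizes to (quoted here for reference) is the dispersion relation for surface gravity waves in water of depth h, along with the standing-wave modes of a tub of length L:

```latex
% Linear dispersion relation for surface gravity waves in water of depth h,
% plus the standing-wave modes of a tub of length L:
\omega^2 = g\,k\,\tanh(kh), \qquad k_n = \frac{n\pi}{L}, \quad n = 1, 2, 3, \ldots
```

Nothing about the molecular structure of water appears in it: only bulk quantities like g and the depth h (plus, if you add surface tension, the ratio of surface tension to density) survive.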

We call this an “emergent” property: the combined interactions of all of the water molecules obeying the laws of electromagnetism and quantum mechanics appear, on a sufficiently large scale, to be well described by equations that describe the bulk properties of the matter. One quality of an emergent property is that it is insensitive to the underlying physics: you can’t deduce the molecular structure of water from watching waves because there are lots of potential kinds of microphysics that could (and do!) give rise to the same macroscopic phenomena.

This kind of emergence has many levels: at the bottom we have quantum field theory, special relativity, and the Standard Model which describe how all particles interact. At the next level up we have atomic theory and quantum mechanics, which give us the basis for studying molecules. At this level things like the vacuum states of matter and the strong and weak forces don’t matter: they happen “underneath” at scales too small to matter, and we can summarize their contribution in quantities like an atom’s magnetic moment and rest mass (for instance).

From there, we get physical chemistry, but things quickly get too complicated to calculate, so we begin talking about sigma bonds and valences and electronegativity and now we’re into ordinary chemistry. At larger scales we can talk about the bulk properties of the material like its temperature, which conceals but successfully summarizes even more properties of the aggregate. Again, you can determine some things about atoms from chemistry, like the periodic table, but this can only take you so far. Ultimately if you really want to know the structure of the atom you have to study it directly; you can’t distinguish the plum pudding model of the atom from the Bohr model in a chemistry wet lab. Chemistry is thus an emergent property of atomic physics.

And so on to biology, psychology, sociology, and so on, as xkcd put it:

[xkcd: “Purity”]

In one sense, using chemistry instead of quantum field theory is a “holistic” approach because it uses emergent properties instead of a reductionist approach, but it also reveals a second definition of “reductionism” which I’ll call “physical reductionism” to distinguish it: the scientific approach (axiom?) that all physical behavior arises from more fundamental laws at a smaller scale (or, if you like, at a higher energy).

Now, precisely defining reductionism in this way is the job of philosophers of science and I’m sure one can find holes in the way I’ve put it above, but I think my description above defines things more or less well: emergent behavior at each layer (except, I suppose, the bottom layer, wherever that is) is ultimately the sum of all of the underlying microphysics, and not some new physics.

We often write that reductionism means we “could” calculate an emergent phenomenon in principle from a more fundamental theory, but I think that clouds the essence of physical reductionism because it unnecessarily introduces issues like predictability and computability. I’d say reductionism is better described simply as the view that there’s nothing else going on beyond lots of small-scale interactions. Also, emergence is sometimes defined in terms of “surprising” physics that shows up at large scales, but that’s way too squishy for me.

So from this perspective, there is no contradiction or tension at all between emergence and physical reductionism; indeed, as I’ve defined them the terms don’t even really make sense except with respect to each other, as Sara Walker wisely pointed out:

Now, some philosophers distinguish two kinds of emergence: weak and strong emergence. The precise definitions here seem slippery and I’m not sure I totally grasp them, so to distinguish how I’m going to (improperly?) use the terms I’ll refer to weak physical emergence and strong physical emergence.

The most useful definition of weak (physical) emergence to me as a physicist is basically the emergence that follows from reductionism. If it’s a behavior that arises from the sum of lots of smaller interactions, then that’s weak physical emergence. There is then no tension with reductionism at all because it’s consistent with reductionism by definition.

What, then, could strong physical emergence be?

Strong emergence is often invoked to describe the kind of behavior that arises from complex systems that is thought to be more than “just atoms” as Adam put it at the top, and is fundamentally in opposition to physical reductionism.

The usual things people point to when asked for examples of strong emergence are life and consciousness.  To illustrate my point, I’ll use an old example.1

Many cultures have historically taught that animals are distinguished from inanimate objects by their anima, some sort of supernatural quality that imbues their physical bodies with motion. The details vary from culture to culture (for instance, the degree to which these overlap with life, the soul, consciousness, and free will) but the essence is that there is something else in the body beyond its corporeal form that makes it move. When an animal dies, that ineffable something leaves the body, and it stops moving. In this view, the body is just a vessel or puppet for the stuff of animate life.

This is decidedly not physically reductionist. We now know how it is that living things generate their motion and maintain their metabolic processes biochemically. We haven’t “solved” life by any means, but we do understand the biomechanical mechanisms for how living things move.

Now, it didn’t have to be this way. We could have discovered as we got better at studying living things, for instance, that living animals and dead animals were exactly the same inside physically and biochemically, except living things could move. We might have had to conclude that some things had an extra something that we couldn’t find just by looking inside of them. In fact, some might argue we still might prove that someday, but I’m sure most biologists would say this is not going to happen.

One reason is that we can analyze the biochemistry of life in great detail, and we know that when something dies it’s for a particular reason (like, for instance, lack of blood to the brain, which makes the neurons stop firing), not because its animating force departed. Another reason we can be so confident it’s not going to happen is that it would imply “downward causation”: the electrons in the animal would have to be moving due to some force other than electromagnetism, caused by that supernatural anima. The animal’s leg moves because of its muscles, which are triggered by neurons, which are part of an enormously complex central nervous system. But at some point, if the anima2 were responsible and not just lots of individual electrons and ions doing their thing, then somewhere in that chain some electron or ion in some neuron had to be pushed by the anima, and not just by its neighbors. If not, then there would be no difference between the inanimate and animate versions, and clearly there is!

So that would have been an example of strong physical emergence. Another candidate, often brought up, is consciousness and free will. Lots of ink has been spilled over the Hard Problem of consciousness and qualia and so on, and I’m not going to dive into it here. Ultimately, it has the same problem of downward causation: if my consciousness and free will are due to a strongly emergent phenomenon (whether a supernatural soul or something less metaphysical), then at some point the neurons in my brain are responding to that new phenomenon and not just to each other (“just atoms”) when they tell me to type these words.

But this is testable! That something firing that neuron could, in principle, be studied scientifically (for instance, by finding a neuron firing for no physically reductionist reason). If there’s more than “just atoms,” then at some point atoms need to respond to something other than “just atoms.”

There are other examples I can think of too (and ones less laden with religious implications). For instance, what if gravity has a smallest scale?  By this I mean, what if gravity only works above some threshold, when enough mass gets together in one place? The exact equation near this threshold might not be expressible in terms of the sum of the gravitational forces of individual masses—that is, Newton’s formula could be correct when m and r are above some level, but incorrect below that.

Now this would be very surprising because Newton showed that his formula held even if you summed up the individual actions of all of the underlying atoms—in other words, that it is consistent with being a weakly physically emergent phenomenon.  Also, there are theories of gravity that are strictly physically reductionist that do predict gravity will behave differently or even go away on small scales, so it’s more complex than I’ve described.

Or, more straightforwardly, perhaps the dark energy of the universe or the dark matter works as a physical force that only manifests on large scales and simply can’t be described as an underlying field or sum of interactions of smaller pieces. I think that would be another example of strong physical emergence as I’ve defined it, though I admit this might be inconsistent with how the term is used by others.

One way that Adam has teased that he’s going to look at the problem of strong emergence is in terms of life as a processor of information. I’m looking forward to it, but ultimately information is a statistic or other description we assign to the arrangement of matter and energy in time and space. We have rules for how matter and energy react to each other in time and space, so ultimately any information-based description of life is, once again, physically reductionist. In order for information processing to generate strong physical emergence, there would have to be something else, some definition of information that went beyond a Shannon entropy or something, and I can’t imagine what that would be. If it’s not based on matter and energy’s distribution in time and space, then what is it based on?

One way I’ve seen people try to get around the downward causation problem is with various aspects of quantum mechanics. One avenue is by working with the concept of an “observer” (which is an unfortunate jargon term in physics whose conventional meaning invites us to give consciousness a privileged role in physical phenomena, leading to all sorts of popular misconceptions about quantum physics.)  The other is the apparently random and irreversible phenomenon of wavefunction collapse, which is the source of lots of debate about the meaning and nature of quantum mechanics.  These issues are tricky and still unsettled in quantum mechanics (indeed, they are at the heart of the Measurement Problem, which has capital letters, so you know it’s important) so this could be a way in for a strongly physical emergent phenomenon to push electrons around. Maybe! That seems at least plausible to me, though others have thought about it a lot more than I have. Indeed, Sabine Hossenfelder has gone so far as to define exactly what it would mean for there to be free will without metaphysics and shown that it’s at least possible in principle.

Anyway, that’s where I am on the topic! Again, I’m not a philosopher, so I’m sure I’ve gotten a lot wrong. My purpose was to be clear about my definitions, and hopefully clarify my “physicist’s perspective”.


And now two philosophers have written up their perspective on these ideas! You can read their take here.

Because I don’t want to change things out from under them, I’m making annotations to the above instead of edits and corrections:

1 I did not have the word ready at the time I wrote the post, but this long-discredited view is called vitalism.

2 I should have written vital spark, referring to whatever the extra thing is that vitalism is about. “Anima” is a term from Jungian psychology and refers to a property of the mind.

Bilogarithmic functions

When plotting over a big range, especially when plotting power laws or things with exponential dependence, the logarithm function is your friend. The log-log plot and semi-log plot are standard tools for visualization in science.

But while the logarithm function works on the domain (0,∞), many applications in science operate on other domains, like (-∞,∞) or (0,1).  In these cases, logarithmic plotting is not always appropriate, but in some cases it can be, and it can be frustrating to find a good visualization tool that captures what you want.

For instance, I’m teaching stellar structure and evolution again, and it’s tricky to make a plot that captures the details of the stellar atmosphere, which is only 1 part in 10 billion of the mass, and the details of the core, where you want to resolve only, say, the last 0.1% of the mass.

Let’s say you wanted to compare, for instance, the radiative temperature gradient ∇rad in the sun to the actual temperature gradient (this difference is related to the efficiency of convection). This quantity spans a huge range in the very thin outer layer of the star, so trying to plot its dependence as a function of mass inside the sun gives a pretty useless result:

A rather useless plot. The atmosphere is on the right, the core is on the left.

Maybe we just need to plot logarithmically?  Let’s try it:

A log version of above

That definitely helps!  We see that there is stuff going on down in the core, and apparently the gradient returns to around 1 near the surface, but we still can’t really tell what’s going on.

There are lots of solutions—you can plot in radius instead of mass, or use log(Pressure), for instance, or perhaps we could take the log of 1-m/M?  That would expand the region of interest logarithmically.  Let’s try that:

A log-log version. Note the minus sign on the x-axis which keeps the atmosphere on the right.

Now we’re talking! Suddenly we can see all of the action in the convective envelope.  But…we’ve lost resolution on the core.  Is there some way to expand both ends logarithmically?

There certainly is! We can use the logit function, which I’ve written about before as a way to expand things on the domain (0,1). This is just the sum of log(m/M) and -log(1-m/M). [nb: logit is usually defined base-e, so you’ll have to divide it by ln(10) to get logit10]

This has the lovely property that it is logarithmic in both directions, near 0 and near 1.  So a value of -5 means I’m at 10^-5, and a value of 5 means I’m at 1-10^-5, or 0.99999.  In other words, negative values “count zeroes”, and positive values “count nines”.  0 corresponds to a half, right in the middle.  So it’s intuitive to read and understand and gives you the dynamic range you want at both ends.  It’s a great way to plot things like transmission or conductivity when you have values that hug 0 and 1, and you care about how close they get.
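
If you want to try this at home, here’s a minimal sketch of a base-10 logit (my own example; the “profile” being plotted is just a made-up stand-in for real stellar-model output):

```python
import numpy as np
import matplotlib.pyplot as plt

def logit10(p):
    """Base-10 logit: log10(p) - log10(1-p).
    Negative values count the zeroes of p; positive values count its nines."""
    p = np.asarray(p, dtype=float)
    return np.log10(p) - np.log10(1.0 - p)

# Made-up mass coordinates that hug both 0 and 1, as in a stellar model
m_over_M = np.concatenate([np.logspace(-10, -0.31, 200),
                           1.0 - np.logspace(-0.31, -10, 200)])
profile = np.sin(np.pi * m_over_M)  # placeholder for a real profile quantity

plt.plot(logit10(m_over_M), profile)
plt.xlabel("logit10(m/M): negative counts zeroes, positive counts nines")
plt.show()
```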

Let’s try it:

A log-logit plot

Perfect!  Now the core is expanded way way out so we can see what’s going on (for the experts: it’s slightly convective here because this is the ZAMS sun and we have extra CNO fusion going on until all the carbon burns out).

There are other situations where you have functions that have power-law dependence in both the negative and positive directions, as well. This has come up occasionally for me, and I’m always tempted to do something like sign(x)log10(|x|), but it doesn’t work because it blows up near zero.

What you want is a function that is linear near zero, and asymptotes to sign(x)log10(|x|).

It turns out matplotlib has a ‘symlog’ scaling that addresses this!  I’m not totally sure how it works, but here it is. This is a linear-symlog plot of a function against itself (so, the 1:1 line):

A ‘symlog’ scaling of the 1:1 line in Python.
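
For reference, here’s a minimal sketch of how to produce a plot like this (assuming a recent matplotlib, where the keyword is linthresh):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1e3, 1e3, 2001)

fig, ax = plt.subplots()
ax.plot(x, x)                         # the 1:1 line
ax.set_yscale('symlog', linthresh=1)  # linear for |y| < 1, logarithmic outside
plt.show()
```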

This is nice, but there’s a clear kink in the data where it switches from logarithmic to linear in between.  It’s pretty kludgy.

But it turns out that there is an already-defined, not-at-all-kludgy function that does this smoothly!  It’s the inverse hyperbolic sine function!  Specifically, in base 10 you’d call arcsinh(x/2)/ln(10), which is also log10(x/2 + √((x/2)² + 1)). And it behaves just like you want:

The bilogarithmic scaling

BTW, it’s not just that arcsinh has a sigmoid-y kind of shape and so gets the job done; it’s actually the optimal function here.  That’s because it is the inverse of sinh, which is exactly the function that should generate a straight line in this scaling: sinh(x) is the average of exp(x) and -exp(-x), which are the functions you want on either side of zero; far from zero the other one decays away, and at zero the function is nicely linear (the second-order term is zero).
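
Matplotlib’s generic “function” scale makes this easy to try yourself. Here’s a minimal sketch (my own, assuming matplotlib ≥ 3.1, which lets you pass a (forward, inverse) pair):

```python
import numpy as np
import matplotlib.pyplot as plt

ln10 = np.log(10)

def asinh10(x):
    """Forward: ~log10(x) for x >> 1, ~-log10(-x) for x << -1, linear near 0."""
    return np.arcsinh(x / 2) / ln10

def asinh10_inv(y):
    """Inverse transform, which matplotlib needs to place ticks."""
    return 2 * np.sinh(y * ln10)

x = np.linspace(-1e3, 1e3, 2001)
fig, ax = plt.subplots()
ax.plot(x, x)  # the 1:1 line again, now with no kink at zero
ax.set_yscale('function', functions=(asinh10, asinh10_inv))
plt.show()
```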

[Update: Prior art from Warrick Ball, who implies this is an old trick and super useful for color scales!:

]

I don’t know what to call this scaling (inverse hyperbolic sine is too long and technical and obscure for what it does) but I’ll suggest bilogarithmic, which seems to be a term that people sometimes use to mean “logarithmic in both directions” but has no well-established meaning.

Anyway that’s how to get functions that are logarithmic in both directions, for domains that are bounded on both, one, or no sides!

The log scaling is already in matplotlib, of course. There is also a ‘logit’ scaling but it does not work very well for me in the above examples, so it could use some tweaking—in particular it would be good to have a base-10 version.  And ‘symlog’ could be replaced with arcsinh(x)/ln(10). This would reduce some of its functionality (right now you choose the range over which to make it linear) but I think it’s worth it to have it better behaved at the kink.

I don’t know enough Python to implement them myself, but perhaps it can be a feature request if enough people start using these!

Going to Mars is Hard

I love the idea that humanity will become an interplanetary species, and that our descendants will be interstellar. The far future of life and the universe is a fascinating topic.

And I love how close it all feels now that we are in the Space Age, with rockets routinely headed to Mars and beyond, and humans once again primed to leave low Earth orbit for the first time in almost 50 years.

Elon Musk is riding this movement with audacious plans for things like a million people on Mars by 2050. His vision of human expansion is very different from Sagan’s—he’s explicitly a colonizer of space, selling a very throwback vision of domination of the cosmos by (some) humans. He even openly mocked Sagan’s Pale Blue Dot, where Sagan pointed out that there is no place—not even Mars—that humans can migrate to yet. Musk says yes we can too migrate to Mars now—as if putting a person on Mars is the same thing as humanity migrating there!

You see, “migrate” doesn’t mean “send a few people, maybe even permanently”. It means you pack up a huge fraction of people living somewhere and go somewhere else. Very different things. DNLee rightly asked whose “version of humanity is being targeted for saving?” when Musk speaks of those sorry souls that will remain “stuck here on Earth.”

Musk is getting a lot of justified criticism for his vision, and I especially liked Shannon Stirone’s take on it (which included a quote by Carl Sagan that I tweeted):

Shannon takes on the trope that Mars is “Plan(et) B”, a way to escape Earth, pointing out that Mars is a terrible place to live. And it’s true! The worst winter night in Antarctica is a thousand times more habitable (and a million times easier to get to) than the best day on Mars. And going there is not going to “save” anything on Earth, except incidentally via the technology we would need to live there.

Of course, the idea that humanity needs to become interstellar as a hedge against disaster makes sense, and if life on Earth has a future beyond about 500 million years, it will be in space, not on Earth. But acknowledging that is not the same thing as buying Musk’s vision for a million people on Mars in our lifetime.

After all, it’s hard to imagine any cataclysm on Earth that could possibly leave it less habitable than Mars short of something that destroyed everything on the surface—even the worst scenarios for climate change and nuclear war leave it with water, oxygen, one gee, and a biosphere.

Don’t get me wrong: there is a real, powerful vision for human migration into space, and I am excited to be alive to see it begin, but Musk is not doing the hardest parts of what needs to be done to make it happen.

“But SpaceX!” the Musk-ovites protest (and check out Shannon’s Twitter responses if you want to see how ugly the protest can get, especially when it’s a woman criticizing Musk). And it’s true that SpaceX is amazing—I’m thrilled to see a new age of rocketry dawn with reusable rockets and a new philosophy that breaks the slow and expensive rut NASA and the rest of the aerospace industry have been stuck in. Tesla is amazing too—I look forward to owning one!

But there’s actually no contradiction here between criticizing Musk’s marketing and being impressed by and even in awe of the possibilities unlocked by the work of the engineers of the companies he owns.

To illustrate my point, think about weightlifting.

Getting into shape, especially building visible muscles, is a whole industry, an even mixture of science, engineering, medicine, biology, and psychology. If you want to be more Charles Atlas than 97-pound runt, there is a whole world of knowledge and a whole economy of trainers and equipment to help you get there.

If going to Mars is getting into shape in my analogy, who is Musk? To the Musk-ovites, he is some mix between Arnold Schwarzenegger and a cast member of The Biggest Loser: he’s the expert who will get us there.

But he’s not. He’s a BowFlex salesman.

Musk is the public face of SpaceX, which gets billions of our dollars (via taxes) to launch things into space. He has brilliantly built multiple companies using (among other assets) the force (I’d say cult) of his personality.

A million people on Mars is an amazing vision—especially if Mars has no extant life, it will be amazing when one day it happens.

But think about getting into shape: when someone needs to be in shape, like a professional athlete or an actor for a muscle-y role, what is the most important thing they do? Is it to buy a bunch of weights?

No! Not that they won’t need weights, of course, but in a pinch a sack of potatoes will do. What they mostly need is discipline, hard work, and a coach or trainer. The BowFlex equipment might make their training easier, but it’s neither necessary nor sufficient.

But you wouldn’t know that from the ads for exercise equipment and get-big-quick schemes, which promise if you just buy this thing you too can look like the models in the ads. This tactic goes back to Charles Atlas himself:

[Image: an old Charles Atlas ad, including a cartoon of a 97-pound “runt” getting sand kicked in his face.] Atlas promises you the body of “The World’s Most Perfectly Developed Man…without weights, springs or pulleys. Only 15 minutes a day of pleasant practice—in the privacy of your own room.”

He’s basically selling a book of exercises you can do. Of course, knowing what exercises to do is important, but that’s not the hard part—the hard part is the training itself. But you don’t sell lots of books by promising hard work, you sell them with pictures of Charles Atlas.

To Musk, we’re the 97-pound runts “stuck on Earth,” and we want to be Charles Atlas up on Mars. Musk wants us to think he can get a million of us there by 2050 because he’s selling rockets.

But rockets aren’t the hard part!

We’ve been sending rockets to Mars for decades. Yes, his rockets are really cool and cheap, but also a BowFlex setup is much fancier and better than the weights Charles Atlas used to maintain his physique. And no matter how good BowFlex is, just buying it won’t make you Charles Atlas.

The hard parts of going to Mars are understanding how humans can live so long in space, and how to build (nearly) self-sufficient habitats with limited materials. We really don’t know how to do those things. Heck, we can’t even build a self-sufficient habitat in Arizona!

You don’t build a big physique in a day, and we can’t jump to Mars all at once. Musk wants you to think the stepping stones to Mars are bigger and bigger rockets, and maybe even a Moon base. But he says that because those things need rockets, and he’s selling rockets.  And it’s not right—we’ve already built Skylab and the International Space Station. We’ve already put things on the Martian surface.  Building a bigger rocket will certainly help, but it’s not the hard part.

The actual stepping stones are fully self-sufficient artificial biospheres on Earth and a better understanding of human physiology in low-gravity environments. When we can live in a fully-enclosed habitat in Antarctica for years with limited or no resupply, I’ll believe we might be able to translate that technology to Mars.  When we understand enough about plant ecology to maintain a whole, closed ecosystem that can recycle enough oxygen for human use, I’ll believe Mars habitats have a chance. When people can spend more than a year or two in space and not suffer horrible physical degradation, I’ll believe humans might last on Mars.

And here’s the tell with Musk: he’s not solving those problems. He hasn’t bought Biosphere 2 to make it work, and he’s not investing in the sorts of technologies you need to maintain human life in space. He’s taking a 1950s sci-fi approach to the problem: send up more oxygen as needed, build bigger and better machines that will protect people, build bigger rockets to send it all to Mars. Because he’s selling rockets.

Like any good salesman Musk knows: don’t sell the steak, sell the sizzle. So when he talks, he doesn’t sell the rockets, he sells the things you could do with them, the same way a kitchen gadget salesperson sells delicious food or a perfume salesperson sells a beautiful lover.


But of course if those salespeople were really interested in you having those things, they’d be helping you with a lot more than food processors and aromatic oils. That’s how you can tell the difference. And Musk isn’t selling the hard part of space (the way Kennedy did for Apollo), he’s just selling the rockets.

Now, I know that by offering any criticism of Musk I’m inviting the Musk-ovites and trolls to come flame me (although I won’t get it as bad as Shannon does). So in case any of them have gotten this far, let me offer this:

It’s fine to be excited about SpaceX (I am!) and Tesla (I am!). It’s great to be excited about humanity’s future in space, and to help us get there. It’s reasonable to respect Musk for his entrepreneurship and success in business, and for his vision… just like it’s fine to admire Charles Atlas for his muscles and his marketing prowess.

But even the biggest Atlas fan can admit that his ads were making bodybuilding look easy when it’s actually hard. And even Musk fans can acknowledge that Musk’s salesmanship is just that: salesmanship, and that his vision of Mars will take a lot more than he’s offering, and will probably need to change to accommodate the realities of Mars, humans, and ethics.

But there is a big contingent of super-fans that feels it’s important to publicly defend Musk against every criticism, to insist that his sales pitch is actually a complete and perfect vision of our best future, and that anyone who disagrees is somehow anti-space, or doesn’t understand the importance of space travel, or doesn’t appreciate how revolutionary SpaceX is. Musk carefully cultivates this kind of hero worship as part of his brand, and it creates a toxic atmosphere around the whole thing, hurting the whole cause.

I’ve had arguments with some of them on Twitter, some of them making, believing, and insisting on ridiculous claims like “every prediction he has ever made has come true.” When I point out obvious counter-examples, they are brushed aside on technicalities. To them, Musk cannot fail, he can only be failed, and his critics are the enemies of the future of humanity.

Anyone serious about getting into shape knows about the whole world of advice and gadgetry, gyms and personal trainers, supplements and cuisines that surrounds the enterprise. Manufacturers like BowFlex are part of that and have helped a lot of people get into shape, and, yes, their sales force is an important part of that.

But think about how ridiculous you’d sound if you made one BowFlex salesman the first and last word on the entire subject.

If getting to Mars is really important to you, take a lesson from bodybuilders and go learn about all the pieces of that endeavor, and help humanity realize it. And don’t believe everything a salesman tells you, no matter how much you like his gadgets.

[Edit: It’s amazing how many Musk defenders will concede that we will not have a million humans on Mars by 2050 as Musk claimed, but still want to have arbitrarily long Twitter arguments to defend him against my accusation that his claims are unrealistic salesmanship.  Like, I said SpaceX is amazing and all, but Musk is overselling it. We agree!]

NSF and NASA Funding for SETI

I wrote way back when about the amount of funding that NASA and the NSF have provided for SETI since the program at NASA was canceled in 1993. It turns out that there have been a few grants scattered over the years, but they don’t amount to enough to fund even two people over that time.

Since then, there have been some successful grants! In addition to the NASA Technosignatures Workshop in 2018, there have now been a few grants to external PIs.

Below is a (continuously updated) list of the ones I know about:

NSF:

  • AST-2003582: “Participant Support for the first Penn State SETI Symposium” $49,400. PI: Jason Wright, 1/1/2020–12/31/2021, extended until symposium meets
  • NSF 1950897: “REU Site: Berkeley SETI Research Center” $323,947. PI: Stephen Croft, 3/1/2020–2/28/2022
  • NSF 2244242: “REU Site: Berkeley SETI Research Center” $437,654. PI Stephen Croft, 3/1/2023–2/28/2026

NASA:

  • NASA 80NSSC20K0622 (Exobiology): “Characterizing Atmospheric Technosignatures” $286,926. PI: Adam Frank, 12/15/2019–12/14/2022
  • NASA 80NSSC20K1109 (Exobiology): “TechnoClimes: A Workshop to Develop a Research Agenda for Non-Radio Technosignatures” $11,679. PI: Jacob Haqq-Misra
  • NASA 80NSSC21K0398 (XRP): “From Exocomets to Technosignatures: Hidden Occulters in Planetary Systems” $593,536. PI: Ann Marie Cody, 1/1/2021–12/31/2023
  • NASA 80NSSC21K0575 (XRP): “A Search for Technosignatures Around Newly Discovered Exoplanets” $362,939. PI: Jean-Luc Margot, 2/8/2021–2/7/2024

Hyphens, en-dashes, and em-dashes

One reason I stick to using LaTeX is that it’s pedantic about typography and I’m a recreational language pedant.

For instance, LaTeX marks up hyphens, en-dashes, and em-dashes as ‘-’, ‘--’, and ‘---’, which makes it very easy to type in the one you want. (They’re called “en” and “em” because they’re supposed to be the width of the n and m in any given typeface.)
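
Here’s what that looks like in practice (a tiny fragment of my own):

```latex
% The three dashes, as typed in LaTeX source:
a well-known result        % hyphen: -
pages 5--10                % en-dash: --
and then---wham!---it hit  % em-dash: ---
```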

You can do this easily in most word processing programs, too: on a Mac, Option+‘-’ is an en-dash, Option+Shift+‘-’ is an em-dash. On a PC, use Alt and the numeric-keypad minus sign the same way. On a PC laptop there’s no good way, but apparently Word recognizes Space+‘-’ as an en-dash and Space+‘--’ (that’s two hyphens) as an em-dash.

But which one do you want?

Hyphens join two words into a single unit, as in an adjectival phrase (“better-than-average player”). They also, of course, are used when words are broken across two lines.

En-dashes are used to express a range (“5–10 year sentence”).

Em-dashes mark a pause, setting one part of a sentence off from the other, as in a parenthetical or appositive (“The weather—which was unseasonably hot—oppressed them.”, “I can think of only one exception—the platypus”). In general, em-dashes could usually be replaced by commas, parentheses, or colons.

Here’s a handy guide to help you remember:

Hyphen:

He asked her to give him a surprise, so she gave him one-two punches.

She gave him a series of boxing combinations.

En-dash:

He asked her to give him a surprise, so she gave him one–two punches.

She gave him either one or two punches (read: “one to two punches”).

Em-dash:

He asked her to give him a surprise, so she gave him one—two punches.

Two punches were the surprise.

You’re welcome!

BLC1: A candidate signal around Proxima

So, the media is abuzz about BLC1, a candidate signal around Proxima. I’ve been all over Twitter about this, so I’m collecting my thoughts here.

But first, a disclaimer: as a member of the Breakthrough Listen Advisory Board and the current PhD adviser of a Breakthrough Listen team member, I have a little more information than the public, but I am not a BL team member and have not seen the data. My comments here are purely general and, while they can provide context for what’s going on, they do not actually add anything to what’s known about the actual candidate signal beyond what is already in the press.

First, how does radio SETI work?

The Breakthrough Listen team uses radio telescopes to look for signs of radio technology in the form of (among other things) narrowband radio signals of the sort that can only be caused by technology. This is the sort of thing they’re looking for:

This is not the data from Proxima, it is an example.

This plot, from Howard Isaacson’s paper on the topic, shows the actual signal of extraterrestrial technology beaming a radio signal to the Earth. In this case, it’s not aliens: it’s Voyager 2.

The vertical axis is time, going up.  Each bin is 10 seconds or so.  The horizontal axis is frequency, and each bin is a few Hz.  Note a few things about this signal:

  1. At any given moment, almost all of the power is concentrated into a single frequency bin. This is how we know the signal must be artificial. Radio signals from space come from electrons or atoms or molecules, which always have some temperature.  They also tend to come from large clouds of gas, which have lots of internal motions. Both thermal and bulk motions generate Doppler shifts that blur out the frequencies they radiate at. Even the narrowest masers, like water or cyclotron masers, must have widths 4 orders of magnitude broader than the signal above.
  2. The signal is not perfectly narrowband. There are two faint “sidebands” visible on either side (there are bigger ones, too, outside the plot). This is due to signal modulation, illustrating that the signal contains information—it is not a pure “dialtone” or “doorbell”.
  3. The signal’s frequency shifts towards lower frequencies as time increases (upwards). This is how we know the signal is not from Earth: the telescope sits on the rotating Earth, so it is first moving towards the source (as the source rises), then moving away from it (as the source sets). This creates an ever-increasing redshift, making the signal “drift” to lower and lower frequencies during the observation (the sketch after this list puts rough numbers on the size of this effect). A source on the surface of the Earth would not be moving with respect to the telescope, and so would show no such Doppler drift.
    Note that this shift is the change in the Doppler shift—we can’t calculate the total Doppler shift without knowing the transmission frequency.
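
To put rough numbers on that drift, here is a back-of-the-envelope sketch (my own, not the Breakthrough Listen pipeline; the frequency is the 982.002 MHz reported for BLC1, used here just for concreteness):

    # Maximum Doppler drift that Earth's rotation alone imposes on a
    # narrowband signal from a fixed direction on the sky.
    import math

    c = 2.998e8                    # speed of light, m/s
    R_earth = 6.371e6              # Earth's radius, m
    omega = 2 * math.pi / 86164.1  # sidereal rotation rate, rad/s

    # Worst case: a telescope on the equator, whose daily circle gives
    # a line-of-sight (centripetal) acceleration of up to omega^2 * R.
    a_max = omega ** 2 * R_earth   # ~0.034 m/s^2

    f = 982.002e6                  # observing frequency, Hz
    drift = f * a_max / c          # Hz/s
    print(f"max rotational drift: {drift:.3f} Hz/s")  # ~0.11 Hz/s

Any drift much larger than this needs another source of acceleration: the transmitter’s own motion, or a deliberately chirped signal.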

The problem is that the spectrum is filled with these sorts of signals. Every now and then one is from an interplanetary probe, but most are from Earth-orbiting satellites and terrestrial sources. Breakthrough Listen employs sophisticated software that sorts through the millions of signals they detect and finds the ones from space. The way the team rules out signals from anything other than their celestial object is by nodding the telescope. If the signal is from something on Earth, then they’ll see it no matter which way the telescope is pointing. If it’s from space, it will only appear when they are pointing at the target. For instance, here’s a pernicious false positive from Emilio Enriquez’s paper on the topic:

This is not the data from Proxima; it is an example.

This signal is apparently modulated in a nasty way: it was strongly detected only when they were pointed at the star HIP 65352 (the first, third, and fifth rows) but not when they pointed away (in the second and fourth rows). It also has a slight drift: the signal has shifted by a few Hz by the end of the series of observations.

But, you’ll notice, the signal is also present in the off pointings. It’s very weak there, below their detection threshold I suspect, which is why the algorithm still flagged this as interesting. But if it were really coming from HIP 65352, there’s no way it could be present in those off pointings. This is probably something terrestrial with a poorly stabilized oscillator putting power into the sidelobes of the telescope. The exact nature of this signal is not important—all that matters for SETI is that it is not from HIP 65352.

And the radio spectrum is filled with these sorts of false positives! Sifting through them all is hard and takes a lot of time, and it has trained the team to get very good at identifying radio frequency interference. Indeed, no signal has ever survived the four tests I described above after human inspection: narrowband, drifting in frequency, present in the on pointings, never present in the off pointings.

Until now!

What did the team find?

The Breakthrough Listen project uses the Parkes radio telescope in Australia as one of its tools to search for technosignatures. In this case, they were “piggybacking” on observations of Proxima, the nearest star to the Sun, which were looking for radio emission from stellar flares. These were long stares, for many hours per day over many days. The signals the flare study was looking for are broadband, with complex frequency and temporal structure—basically, if you tuned in with a radio receiver like the ones we use for FM or AM transmission, such a signal would be present at every frequency and would sound like very complicated static.

But the equipment on the telescope can also be used for SETI, and so the BL team was using the telescope “commensally,” running a SETI experiment simultaneously with the flare study.

And in these data, a signal has apparently survived all of their tests!

Now, this does not mean it’s aliens, as the team has pointed out. It means they have, for the first time, a signal that can’t be easily ruled out as RFI. It’s probably RFI of some pernicious nature, but we don’t know what. Pete Worden of the Breakthrough Listen team says it is “99.9% likely” to be RFI.

We know the signal was present for around three hours, appearing in five 30-minute “on” pointings and not at all in the interspersed “off” pointings. We also know that it has a positive drift rate, that it appears at 982.002 MHz, and that it appears to be unmodulated.

Other than that, we don’t know much!  But there are some things we can conclude based on this little bit of information.

Why isn’t the team releasing more information?

I cannot speak for the team but I know they’re committed to transparency and scientific rigor. They also think hard about how to convey results to the media, and are careful about things like press releases and peer review of results.

Unfortunately, this news leaked out before the team had finished their analysis, so we’re left to read tea leaves and parse vague newspaper statements instead of reading their paper on the topic (which does not exist because they’re not done with their analysis!)

Someone in the “astronomical community” (we don’t know if they are even a member of the team) leaked the story to the Guardian. Their hand having been forced, the team then gave interviews to Scientific American and NatGeo with some more details, emphasizing that the signal is probably RFI.

Now, I’m pretty grumpy about this. SETI has extensive post-detection protocols, designed exactly to avoid this sort of situation, and the leaker did not follow them. Especially since the team was definitely going to announce this anyway, there was no need for the leak.

But really what I’m grumpy about is that the team did not get to announce this on their own terms in a way that made clear what was going on. Instead we have lots of speculation and questions that not even the team can answer (because they haven’t finished their analysis yet!)

So what are the odds it’s aliens?

As Pete Worden tweeted, and as he put it in the SciAm article:

“The most likely thing is that it’s some human cause,” says Pete Worden, executive director of the Breakthrough Initiatives. “And when I say most likely, it’s like 99.9 [percent].”

What should we make of the fact that the drift rate is positive? Isn’t that the opposite of what we expect?

It’s unclear how to interpret this.

The fact that it drifts at all is consistent with a non-terrestrial origin. The fact that it drifts more than you’d expect from the motion of Parkes alone means either that the source is “chirping” its signal up in frequency, or that it is not correcting for its own acceleration and is accelerating towards the Earth (not directly towards the Earth, like it’s coming for us or something; just that we’re in the same hemisphere of its sky as the direction in which it’s accelerating).

Some SETI practitioners expect that a signal would be non-drifting in the frame of the Solar System barycenter, meaning that after we correct for our motion, the signal would have just one frequency. This defies that expectation.

It also can’t be from the rotation of a planet that hosts the transmitter—those shifts would also be negative.  But it could be from the orbital motion of a planet, or from a free-floating transmitter, or from a transmitter on a moon.

The most likely explanation is probably that it is a source on the surface of the Earth whose frequency is, for whatever reason, very slowly changing.

Until we know more about the drift, though, there’s not much we can say.

Are we sure it’s coming from the direction of Proxima?

Not completely. If it’s ground-based interference, it’s definitely not coming from that direction. If it’s really from space, it could actually be coming from anywhere in a roughly 16-arcminute circle around Proxima—about half the width of the full Moon.
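
That circle is roughly the diffraction-limited beam of the dish; as a sanity check (my arithmetic, taking Parkes’s 64 m aperture and the 982 MHz frequency reported for the signal):

\theta \approx \lambda / D = (0.305\,\mathrm{m}) / (64\,\mathrm{m}) \approx 4.8\times10^{-3}\,\mathrm{rad} \approx 16\,\mathrm{arcmin}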

What do we make of the frequency?

I’m no expert, but apparently 982.002 MHz is in a relatively unused part of the radio spectrum, with little radio frequency interference. It is in or near what radio astronomers call L band (technically it’s UHF, I suppose, since it’s below 1 GHz), which has long been favored as a place to do SETI because it sits in the broad minimum between two sets of nuisances: noise and opacity from the electrons throughout the Galaxy and Earth’s ionosphere on one side, and emission from water and other molecules in Earth’s atmosphere, plus the cosmic microwave background, on the other.

From seti.net. The x-axis is in GHz, so 982 MHz is just to the left of the 1.

It’s also not far from the “water hole” favored for a long time as the place to look in radio SETI.

Some have pointed out that the signal is suspiciously close to an integer number of MHz, which would argue for a terrestrial origin (aliens presumably would not use the Hz as a standard, and the small deviation from an exact integer is consistent with the imperfect oscillators in typical radio equipment).

The articles mention Proxima b. Could the signal be coming from that planet?

Maybe? Until we know more we really can’t say. The team itself does not appear to have favored this idea (it seems to me to have come from the authors of the newspaper articles) and indeed they have privately communicated to me that they have not analyzed this possibility because they’re focused on the RFI origin right now.

We also don’t know the orbital inclination or rotational properties of Proxima b, so we don’t know what acceleration signal it would provide. Without a good model for that planet and without knowing what the team has seen, we can only speculate.

That said, if the signal repeats and turns out to be from Proxima, and if the signal is not being inherently modulated, then we could use the drifts to infer the acceleration of the transmitter, possibly determine whether it’s on the surface of a planet, and perhaps measure the rotational and orbital periods of that planet.

But seriously, isn’t it horribly unlikely that, of all places, the first signal we’d find would come from the nearest star, Proxima?

The original Guardian article had a misguided take on this one:

“The chances against this being an artificial signal from Proxima Centauri seem staggering,” said Lewis Dartnell, an astrobiologist and professor of science communication at the University of Westminster. “We’ve been looking for alien life for so long now and the idea that it could turn out to be on our front doorstep, in the very next star system, is piling improbabilities upon improbabilities.

“If there is intelligent life there, it would almost certainly have spread much more widely across the galaxy. The chances of the only two civilisations in the entire galaxy happening to be neighbours, among 400bn stars, absolutely stretches the bounds of rationality.”

This is wrong, because it’s based on a lot of unexamined priors and assumptions.

First, it assumes that signals of this sort must be very rare, coming from only a handful of stars in the Galaxy. While that is certainly very plausible, the idea that nearly every star might have some sort of technology around it is older than SETI itself! Indeed, it is at the heart of the Fermi Paradox, which asks: since interstellar spaceflight is possible with ordinary rockets, and since the Galaxy can be populated by such rockets in less time than it has been around, why aren’t aliens here in the Solar System right now?

One answer is “they don’t exist.”  Another is “they don’t spread around very much”. Another is “they are most places, but avoid the Solar System for some reason, perhaps because life is present here.” Another is “they have been here in the Solar System but aren’t here now.”  Another is “there is alien technology in the Solar System but we haven’t noticed it”.

Dartnell’s “improbabilities upon improbabilities” presumes that the second answer above is correct, but there is plenty of heritage in the SETI literature that explores the other answers, as well.

But even if it’s true that interstellar travel of creatures is rare, and Dartnell is right that it’s therefore unlikely that Proxima is inhabited, there is still a good argument to be made that Proxima is the most likely star to send us signals—perhaps even the only such star!

If there exists a Galactic community, either a diaspora or a lot of stars with technological life, or even just a single planet with life that has sent its technology everywhere, then it might set up a communication network. This is, after all, what SETI hopes to find.

But when you want to communicate with many places over very large distances, point-to-point communication is a poor way to go about it. When you call your friend on your mobile phone, your phones aren’t sending radio signals to each other. That would require way too much power and complexity. Instead, your phone sends its signal to the nearest cell tower. This makes the power requirements of your phone (and the tower) much more reasonable. This tower then sends the signal, via many means, on a complex route through many central nodes until it arrives at your friend’s nearest cell tower, and they get the signal that way.

By this logic, Proxima is the most likely place for the “last mile” portion of any message to the Solar System. Indeed, it may be the only star transmitting to us!

And note that this scheme does not assume that the message is meant for us—the Solar System may just be one stop in a network.

But if they were trying to get our attention, then they would need to do something we would find obvious to look for, which means they’d have to guess which stars we would guess to search for their signal. There are a lot of stars to choose from—which is the most obvious place for us to look? It’s hard to argue for a better target than Proxima.

Now, this could all be wrong, but the point is we don’t know what sort of luminosity function or spatial distribution transmitters might have, and it’s easy to construct plausible scenarios where Proxima or some other very nearby star is the first one we’d detect.

So what could it be if not aliens?

I don’t really know. I’m not an expert in RFI, and even if I were, I haven’t seen the data.

Jonathan McDowell and I have had some fun on Twitter exploring an interesting possibility:

There’s a special kind of long, elliptical orbit that takes satellites way out to ±63 degrees declination, where they sort of hang at apogee for a while. Such satellites would also have a positive drift rate, since at apogee they’re accelerating towards the Earth. Jonathan, who (literally) keeps careful track of everything artificial in space, has been trying to see if any actual satellite might do this in the direction of Proxima, but he didn’t find any in his database.

So what’s next?

Mainly, we wait for the team to finish their work and present their results.

Things that I imagine the team are and will be doing include:

  • Pointing Parkes at Proxima a lot to see if the signal repeats! Unfortunately, there are not a lot of facilities in the Southern Hemisphere that can do this work. MeerKAT may be up to the task soon, but is hard to get time on. Depending on the strength of the signal it may be possible to point smaller telescopes at Proxima to search for it as well.
  • Scouring all of their data for other examples of this signal. If it’s RFI, there’s a good chance they’ve seen it before when not pointing at Proxima.
  • Searching carefully for other signals from Proxima. If there is one signal, there may be many more.
  • Considering lots of sources of RFI—what devices transmit at 982 MHz? Could any satellite or train of satellites stay in the Parkes beam for 3 hours? Could it be a hoax?

If it never repeats and the team can’t find a good RFI explanation, then I’m afraid it will be another Wow! Signal: an intriguing “Maybe?” that we’ll just have to wonder about forever. We can’t study it if it’s so ephemeral that we never get a good look at it again!

But mostly, we talk about how cool SETI is and we wait!

Where should we look for life?

Where should NASA look for biosignatures?

When I give talks to the public (and to technical audiences sometimes) I often get asked whether we might be focusing our search for life too narrowly. Mightn’t life be silicon-based, or in other string theory dimensions, or under the ice sheets of distant moons?

Indeed, David Stevenson had a nice commentary in Physics Today on the topic. He called it “The habitability mantra: Hunting the Snark,” and he complained that the focus on the Habitable Zone around nearby stars was “perhaps the most distressing example of limited imagination” because it excluded searches for life that might be found elsewhere, noting that NASA planetary science missions spend most of their efforts outside of the Sun’s Habitable Zone!

I rebutted him, or tried to, by pointing out that we focus on the Habitable Zone not because it is the only place we imagine life to be, but because we need to define the parameters of the search and that’s a good place to start—following the only lead we have in the hunt.

This issue is becoming more salient as NASA plans for the next coronagraphic missions like HabEx and LUVOIR, and especially as JWST prioritizes which exoplanets to attempt to search for life via transmission spectroscopy. Which stars and planets should we look at?

Now, there is an approach that says we should not prioritize targets based on potential biosignatures, since we don’t really know what to look for: the signatures might not come from Earth-like life. Instead, we should look at a wide range of planets and let the data tell us where the life is, or at least whether our Habitable Zone approach has evidence to support its applicability to biosignatures. In particular, there is a feeling that there won’t be all that many good targets to observe, so we won’t really have the luxury of choosing the “best” candidates to find life.

I do like this approach, but note that one still needs to prioritize the planets one thinks will have life. For instance, advocates of this approach typically do not suggest we put potential planets orbiting pulsars, red giants, or white dwarfs on the list. Why? Because any planets we would be able to image around those stars are so far from our expectations of places to look for life that it doesn’t really feel like astrobiology. They might still be good targets, but they are generally not “in bounds” for comparative planetology with life in mind.

In other words, even those advocating casting a wide net come with some priors regarding where good places to look are. We should of course be prepared to be surprised, but if the goal is to find life then our resource allocation should at least roughly track our priors on where we will find it.

Plus, if we’re going to be spending billions or tens of billions of dollars on a space mission, we had better have a quantitative idea of what we’re doing.  It’s not enough to have a squishy sort of feeling that G and K stars are good targets but giants and B stars are bad—we should be able to quantify where that sense is coming from and formalize it.  Then we can answer important questions like: should mid-F stars be on the target lists?  What about subgiants?

To that end, Noah Tuchow here at Penn State has been working on how to do that quantification. Noah starts by pointing out that the reason we think giants are bad targets is that their planets have been rapidly heating recently. Planets that used to be temperate are now very hot, and those that were once frozen are now temperate, and these changes have happened on timescales that, by Earth’s standards, are much faster than major evolutionary timescales. We don’t expect biosignatures to have had time to arise in the new environments of any planets orbiting a giant star, because that phase of stellar evolution is brief and dynamic.

Similarly, we think planets orbiting very young stars aren’t great targets because life has not had time to develop. Indeed, biosignatures did not arise on Earth until it was a couple of billion years old, and oxygen was not noticeable until a few hundred million years ago. Surely, then, older planets will have a higher chance of having detectable biosignatures than younger planets, right?

So we can bound the problem: we should favor planets that have been in the Habitable Zone longer.  Sure, planets outside of the Habitable Zone may have life, but unless it’s surface life that has had time to change its atmosphere, we’ll never find it with LUVOIR or HabEx, anyway.

Noah defines the habitable duration of a planet to be how long it has been in the Habitable Zone.  One tricky bit is that as stars age they get slightly brighter, so planets that used to be in the Habitable Zone (like, for instance, Venus) eventually get too hot and leave it. Planets like Mars that start outside the Habitable Zone can enter it, but there is considerable skepticism in some quarters that they could ever thaw out because their ice would reflect so much sunlight away.  Those are called “Cold Start” planets (sorry planet formation people!) and we need to decide if we’ll prioritize them in our searches or not.

Then, you can apply your favorite planet abundance numbers (we don’t really know how many terrestrial planets to expect around these stars) and your favorite model of biosignature genesis. Do you think that life has a fixed probability per unit time of arising and creating biosignatures? We have a probability distribution function for that.  Do you think all planets older than 2 Gyr are basically equally likely to host life? We’ve got one for that, too.  Pick your model for planets and abiogenesis and biosignatures, and Noah’s approach allows you to compute which stars are most likely to host life.
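
As a cartoon of how such a calculation goes (a toy sketch of my own, not Noah’s code; the exponential model and the 0.1 per Gyr rate are made-up illustrative numbers):

    # Toy model: turn a habitable duration into a *relative* probability
    # of detectable biosignatures, assuming life arises with a fixed
    # (unknown) probability per unit time.
    import math

    def p_biosignatures(t_hab_gyr, rate_per_gyr=0.1):
        """Chance that life has arisen and produced biosignatures,
        given a fixed (unknown) probability per unit time."""
        return 1.0 - math.exp(-rate_per_gyr * t_hab_gyr)

    # Two hypothetical targets: habitable for 6 Gyr vs. 0.5 Gyr.
    old = p_biosignatures(6.0)
    young = p_biosignatures(0.5)
    print(f"relative yield (old/young): {old / young:.1f}")  # ~9

    # As the (unknown) rate goes to zero, this ratio tends to
    # 6 / 0.5 = 12 no matter what the rate is; ratios are robust
    # even when the absolute numbers are not.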

Then, given such a model, you can turn a habitable duration into a fraction of stars with detectable biosignatures. Now, this number is certainly wrong—we don’t know how often life will arise! But if we compare two stars’ numbers, this uncertainty largely cancels, and their relative likelihood of hosting life is robust. Very young planets have less chance of hosting life than very old ones, regardless of what the overall rate of abiogenesis is.

Noah calls that number the relative biosignature yield and he applies some reasonable guesses for planet and life occurrence to compare stars. It turns out things are pretty sensitive to your assumptions!

The plots above show the relative yield for different assumptions about life and planet occurrence. Red means a higher relative yield than blue (because these are relative abundances, you can compare colors within a plot but not between plots).

The bottom row shows what you get if you just target any old Habitable Zone of any old star, without worrying about how long a given planet has been in there (that is, how fast the star is evolving). The all-red plot on the right shows the situation if planets are logarithmically spaced around their stars (Bode’s Law, basically). In that case, all stars are roughly equally good targets. But look what happens if planets are more evenly spaced (as we see in tightly packed systems): in that case you should favor F stars over K stars, and by a lot!

Now look at the top row, which says the longer planets have been in the Habitable Zone the better. Suddenly, old, low mass stars are much better targets than all of those younger F stars (which makes sense), but whether you should favor F or G stars depends on the underlying planet distribution.

Finally, the middle row shows what happens if you throw out the Cold Start planets—surprisingly, it’s not a huge difference, but it will matter at the margins.

Now you might worry that this is false precision: we don’t actually know how life arises, so the model of abiogenesis that made these plots is wrong, so it’s all just GIGO (Garbage In, Garbage Out). But that’s throwing out the baby with the bathwater.

This approach is really a way of translating the assumptions we have already been making tacitly (“don’t look at giant stars!”) into quantitative decisions about mission priorities. It will let us quantitatively determine whether, for instance, mid-F stars or subgiants are within our uncertainties about these things, or whether our priors on them hosting biosignatures are really small. It will also tell us how well we have to know a star’s properties to say something about our expectation that it hosts life.

Models like this will also be important for interpreting detections and null results. If we do/don’t find something, what does that mean?  Without a model, we can’t interpret those results in terms of our understanding of life in the universe.

In other words, without a model, all of our astrobiologically based target selection and interpretation of results for these missions is just hand-waving, and Noah has the model.

You can find the paper here. Take a look! I think it’s very nicely written, and really lays things out well.

 

David Alan Amato (1954-2020)

Dave Amato was a biostatistician who led the design and analysis of clinical trials for several important therapies, including AZT to treat AIDS, Lunesta to treat insomnia, and Trikafta to treat cystic fibrosis. He was also a son, a husband, a father, and a beloved family member to many. Dave died of brain cancer, peacefully and painlessly, on Wednesday, September 23 at 12:30pm, surrounded by his family. He was 66.

He was my stepfather.

Dave and Victoria with their infant grandchildren, E and S.

Dave was born on August 14, 1954 and lived in Hamden, CT. He grew up on a farm on the lower floor of a two-story house. Dave’s mother was of full Irish descent and his father full Italian, and he grew up in a tight-knit, extended family with his siblings Don and Linda, his parents Barbara and Lou, and his paternal grandparents upstairs. Dave had 26 cousins. Lou would die young, at almost the same age as Dave, but wonderful Barbara is still a regular and story-filled presence at family gatherings. 

Lou was very handy with a hammer and saw, a trait he passed on to Dave. When the farm was claimed by eminent domain when Dave was 13, the family moved into a house his father built nearby. In Dave’s senior year of high school the family finished a vacation cottage in Moodus, CT, where they spent summer weekends. The cottage remained in the family until recently, and I have many of the same youthful memories of that house as Dave did: spending summer weekends on the lake, swinging in the hammock, and playing “all-terrain” bocce in the yard.

Dave attended Colgate University, where he majored in mathematics, was a member of Sigma Chi, and where he met many lifelong friends. He graduated in 1976 with Phi Beta Kappa honors. There Dave met Beth Collea (class of 1978) and they were married in Hamilton, NY in 1978.

Dave at Colgate, holding a cigar.

Together, Dave and Beth had three wonderful children, Dan, Karen, and Debbie. Dan is a computer programmer in Iowa, and I have been the beneficiary of many of the video games he has helped program (my children and I are particularly grateful for the Rock Band ports to the Wii). Karen is an artist who lives in Maine; regulars to the blog and my office will recognize Karen’s artwork. Debbie lives in Cambridge and works in development for the Boston Children’s Museum.

In 1982 Dave earned his PhD in operations research from Cornell University, where he developed new methods for conducting clinical trials for cancers. Clinical trials for fatal diseases are tricky because the subjects often (and, for some diseases, nearly always) die during the trial, so you have to measure survival time, not just whether the therapy made them better. In clinical trials you also often have patients whose outcomes you can’t learn because they leave the trial or for some other reason, which results in “censored” data (in the physical sciences we usually just call these “upper” or “lower limits”). The branch of statistics that deals with these issues is called “survival analysis” for this reason, and its techniques are now common throughout the sciences, including in astronomy.

Shortly out of graduate school, Dave worked as a study statistician on clinical trials for treatments of carcinomas, melanomas, mesotheliomas, and sarcomas. In his first job at the Dana-Farber Cancer Institute, he worked on chemotherapy and radiation therapies for bladder cancer and untreatable lung cancer. In a strange twist of fate, his work there included studies of gliomas in the neurooncology department, where I would join him decades later for an appointment to hear the biopsy results on his own glioma.

Dave worked for five years as an assistant professor at the Harvard School of Public Health, and another two as an associate research scientist at the University of Michigan.

In 1989 Dave rejoined the Harvard School of Public Health as the Head of Biostatistics at the Statistical and Data Analysis Center (SDAC). This was a particularly formative time in his life, where he met many lifelong friends and, eventually, his second wife, Victoria Hattersley, my mother. I was around 14 at the time.

We had just moved to Boston several weeks earlier, and moved in with my uncles Michael and David.  Around the time we arrived, David was diagnosed with HIV, and mom wanted to help. So she took a job at SDAC despite being badly overqualified for it, because she wanted to contribute to the important work being done there developing therapies for HIV.

Dave was particularly proud of the work he did at SDAC, where he was lead statistician on multiple HIV therapies, including AZT. At the time, HIV was a death sentence, and there were no effective therapies. AZT was ultimately approved on the basis of another trial, but SDAC was an important part of the worldwide effort to find a cure. Today, the disease is mostly manageable with medications thanks to those efforts, although they were too late for David.

In 1994 Dave left academia for industry, working for a time as executive director of biostatistics at Sepracor, where he led the statistical analysis for the sleep drug Lunesta. He told me that the Lunesta trial was the best he ever analyzed: they hit every endpoint easily and early, and Lunesta became the first (and still only) sleep drug approved by the FDA for long-term treatment of insomnia. He and my mother, who both suffered from insomnia, used it loyally ever since. Dave told me he encouraged leadership at Sepracor to run a head-to-head trial against Ambien because he was sure Lunesta would prove superior and knock it out of the market, but they seemed satisfied having the “long-term” advantage and never risked such a trial.

Dave climbed the corporate ladder, working for several other companies throughout his career. He was senior director of biometrics at Shire HGT, where he worked on FDA approval for Firazyr, which treats hereditary angioedema.  As I write this, I’m looking at the trophy on his desk he got when it was approved.

Dave demonstrating his impeccable bocce technique at my wedding.

Dave finished his career at Vertex Pharmaceuticals, where he served as Vice President and Head of Innovation and Methodology and worked on FDA approval of Trikafta. Vertex has long been the main company working on cystic fibrosis treatments. The disease makes it hard, and eventually impossible, to breathe, and it has been effectively fatal: few people with it live past their 30s.

This is a hard disease to develop treatments for because it is so rare; to get a big enough N in your clinical trial you have to enroll most of the people who suffer from it. Since a new drug can cost billions of dollars to develop, most pharmaceutical companies won’t even try to treat diseases without millions of potential customers, but fortunately, the US government has financial incentives for pharmaceutical companies to pursue therapies for rare diseases, and Vertex built its business on this funding for “orphan” drug development.

In the US, about 90% of people with cystic fibrosis share a common genetic mutation, identified in 1989, and based on that discovery Vertex had a few promising therapies they were pursuing. Until recently, none of them was very effective. Trikafta is a cocktail of three of those therapies, and Dave led the analysis of the clinical trial data for this approach.

It worked. Cystic fibrosis is now a manageable condition. I dare you to read this article about it with dry eyes.

When Dave came back from the FDA advisory committee hearing during the approval process, he tearfully described the testimony he witnessed from trial participants begging the FDA to approve the drug so they could continue taking it, so they could see their children grow up. He considered getting Trikafta approved a high point of his career.

We in his family, though, remember him for what he gave us personally.

At Colgate, Dave learned to “work hard, play hard.” He adhered to this philosophy for the rest of his life, and passed it on to us.

Around the time I graduated high school, when they moved to better-paying industry jobs, Dave and Victoria moved into a spacious condo a few blocks away from the tiny apartment we had lived in together in Brookline. As they both got better and better jobs, the houses I went to for summer visits and Christmases grew larger and larger. In 2001, they finally “made it official” and got married.

By the time our first child was born, Dave and Victoria had a vacation home in Wareham, MA, near the beach, where our “Brady Bunch” family (my mother’s three natural children, and Dave’s three) would meet for a week in the summer and at Yuletide. It had lots of bedrooms, a ping-pong table in the basement, and a well-stocked refrigerator, and it was always a great time.

Dave and G just after installing one of Karen’s works of art at our house.

Eventually, as the grandchildren became more numerous (they now have 13), they relocated to Pembroke, MA, in a big house on the North River (where I am writing this now), and moved their vacation residence to Mount Vernon, WA, on Big Lake, not far from where I was born and my twin brothers live with their families.

Dave was well known for his love of cinema (both classic and recent) and catnaps (and, more than occasionally, combining the two). He was particularly fond of, and expert at, trivia, poker, and fantasy sports, and for 20 years he was commissioner of the family-and-friends league I am a member of. Dave also made sure to pass lots of Amato family pastimes on to the next generation, including all-terrain bocce and dice and card games like Onze and Mexican.

Dave loved music, from classic rock to modern stuff. My contribution to the family canon came during grad school, when I introduced them to the Old 97s and Jackie Greene. “I Don’t Want To Miss A Thing” by Aerosmith was “their song,” which Victoria and Dave played at their wedding. Ever since, putting it on the Sonos was a guaranteed way to make them stop whatever they were doing and dance together.

Dave loved good drink, good food, the occasional cigar, and he especially liked providing these for others. His mainstay cocktail was the Perfect Manhattan. On special occasions, like Christmas morning, he would prepare a huge batch of his Bloody Mary recipe, which remains unmatched in the world. My personal favorite is his recipe for marinated steak tips, which I’ve never been able to reproduce (it requires a cut of steak that seems to only be available in the Boston area, but other cuts still make for a yummy meal).

(Some of) Dave’s family, at Christmas shortly after his diagnosis.

Dave’s legacy is his family and the values he instilled in us, but also…

After being diagnosed with a glioblastoma in 2018, Dave began compiling much of his lifelong wisdom including his favorite films, card and dice games, and food and drink recipes at www.TheBearKnowsBest.com. (Dave was fondly known since his college days and to his family as “The Bear”).

Please go take a look, leave a comment, add some ingredients to your grocery list for a weekend dinner, try out a drink recipe, get some family and friends together and play one of the dice or card games, have a laugh at the funny lists, or get one of his favorite books or movies.

I know he’d appreciate it.

Planck Frequencies as Schelling Points in SETI

Early when I was learning about SETI I was reading about “magic frequencies” and the “Water Hole.”

Back in the early days of radio SETI, instrumental bandwidths were pretty narrow, so Frank Drake and others had to guess what frequencies to observe at to find deliberate signals. One wants a high frequency to avoid interference from the Earth’s ionosphere and background noise from the Galaxy. But one also wants a low frequency to avoid lots of emission from Earth’s atmosphere. There is a “sweet spot” between these problems, in the range of 1–10 GHz:

Figure showing background levels as a function of frequency, with a minimum between 1-10 GHz

From seti.net

In this broad minimum are two famous and strong astrophysical emission lines: the spin-flip hyperfine transition of hydrogen (the “21 cm line”), and the emission from the hydroxyl radical (OH). Since these two species combine to form water, and since water is essential to life-as-we-know-it, the region between these two lines is known as the “water hole”. The name has a nice pun to it, too, as a reference to a place where savannah animals (or barflies) gather. As Barney Oliver put it: “Where should we meet? The watering hole!”

Trying to determine exactly which frequency in the Water Hole to search became a game of guessing “magic frequencies” (I think the term is due to Jill Tarter, though I could be wrong) to tune one’s telescope to.

When I was learning about all of this, I was reading the Wikipedia article on the Water Hole and I saw this intriguing link:

Screenshot of the Wikipedia page on the Water Hole showing a link to "Schelling Points"

Clicking on that last link sent me down a nifty rabbit hole and eventually got Schelling points introduced into the SETI literature.

I wrote a while back on Schelling points and their relevance to SETI. Go there for the full story, but briefly: Thomas Schelling was an economist and game theorist who considered games where players must cooperate (everyone wins or everyone loses) but cannot communicate. His example was finding someone who is also looking for you in New York City. The prospects for winning such a game seem hopeless, but Schelling’s insight was that it is actually pretty easy if you can correctly guess the other person’s strategy, since some strategies are clearly bad (a random search) and others are plausibly good (go to a major landmark).

These optimal strategies are characterized by what we now call Schelling points: in New York City, the Empire State Building is a good one, and noon is a good time to be there.

Amazingly, ABC News Primetime got people to actually play that game and they won! In hours!

When introducing this idea to the world in his book The Strategy of Conflict, Schelling wrote:

[A good example] is meeting on the same radio frequency with whoever may be signaling us from outer space. “At what frequency shall we look? A long spectrum search for a weak signal of unknown frequency is difficult.  But, just in the most favored radio region there lies a unique, objective standard of frequency, which must be known to every observer in the universe: the outstanding radio emission line at 1420 megacycles of neutral hydrogen” (Giuseppe Cocconi and Philip Morrison, Nature, Sep. 19, 1959, pp. 844-846). The reasoning is amplified by John Lear: “Any astronomer on earth would say ‘Why, 1420 megacycles of course! That’s the characteristic radio emission line of neutral hydrogen.  Hydrogen being the most plentiful element beyond the earth, our neighbors would expect it to be looked for even by tyros in astronomy’” (“The Search for Intelligent Life on Other Planets,” Saturday Review, Jan. 2, 1960, pp. 39-43). What signal to look for? Cocconi and Morrison suggest a sequence of small prime numbers of pulses, or simple arithmetic sums.

So the water hole is a Schelling point!  Or it could be—we need to guess the mind of ET and ask: what frequencies would they guess we would guess, and perhaps water and ionospheres and radio just aren’t their thing?  Schelling’s players can win the game only because they have a common cultural heritage and know about the Empire State Building being famous. If we played that game with aliens, we’d probably lose.

So what do we know we have in common with aliens?  Max Planck had an idea.

Max Planck

Max Planck is one of the most important figures in modern physics, famous for many key insights, among them his constant, h, and his eponymous “natural units.” Planck realized that there is a fundamental length scale of the universe, set by the nature of space and time, gravity, and quantum mechanics. We call it the “Planck length,” and it is given by:

\ell_P = \sqrt{h G / (2 \pi c^3)}

Very roughly and heuristically, it is the wavelength of a photon so energetic that its wavelength is equal to its Schwarzschild radius (that is, a photon so dense with energy that it would be a black hole). Today, we recognize this as the scale on which quantum mechanics and General Relativity give different answers or become mutually incompatible. Dividing by the speed of light, one defines a fundamental timescale of the universe, which could be interpreted as an observing frequency.
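
To unpack that heuristic a bit (a rough order-of-magnitude argument of my own, ignoring factors of order unity): a photon of wavelength \lambda carries energy E = hc/\lambda, with an equivalent mass m = E/c^2 = h/(\lambda c). Setting \lambda equal to the Schwarzschild radius 2Gm/c^2 of that mass gives

\lambda = \sqrt{2 h G / c^3},

which is the Planck length up to a numerical factor.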

In his famous paper on the topic written in 1900, he wrote:

It is interesting to note that with the help of the [above constants] it is possible to introduce units…which…remain meaningful for all times and also for extraterrestrial and non-human cultures, and therefore can be understood as ‘natural units’

and that

…these units keep their values as long as the laws of gravitation, the speed of light in vacuum, and the two laws of thermodynamics hold; therefore they must, when measured by other intelligences with different methods, always yield the same.

So he imagined that these units would be known to extraterrestrial physicists, unlike, say, kilograms and seconds which are completely arbitrary and anthropocentric. Since what we’re looking for is a frequency that we know they would know that we know, this seems like a good Schelling point!

The problem is that the Planck time is way, way too short—photons at those frequencies are little black holes (or something—we don’t have physics for it), so they don’t exist (or can’t be produced, anyway). So how could we use them?

Well, there is another fundamental constant, another fundamental unit in nature: the fundamental unit of charge. Aliens would have to know that! Combining the charge of the electron with the speed of light and Planck’s constant h, one gets the fine structure constant:

\alpha = 2 \pi e^2 / (h c)

which has a value near 1/137 and is dimensionless.  This is a constant of nature that we do not have a way to calculate purely mathematically or from first principles—it measures the degree to which electrons “couple” to photons, and so governs how all of electromagnetism and light works.

So, we can get another frequency by multiplying the inverse of the Planck time by the fine structure constant. That frequency turns out to still be too high to observe, but there’s no reason we couldn’t keep multiplying by the fine structure constant until we get a useful frequency. This process actually generates a large number of frequencies—a frequency “comb” with many “teeth.” Some of the frequencies generated this way are quite interesting: 610.3 nm in the optical and 26.16 GHz in the microwave, for instance, are both easily observed from Earth, and are in fact the sorts of frequencies we might use for communication. These are Planck’s Schelling points!
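
Here is a quick numerical sketch of that comb (my own check with standard constants, not code from any paper; note that it bakes in the cycles-versus-radians choice discussed next):

    # The "Planck frequency comb": the inverse Planck time, stepped
    # down by successive powers of the fine structure constant.
    import math

    hbar = 1.054571817e-34    # J s
    G = 6.67430e-11           # m^3 kg^-1 s^-2
    c = 2.99792458e8          # m/s
    alpha = 7.2973525693e-3   # fine structure constant

    t_planck = math.sqrt(hbar * G / c ** 5)   # ~5.39e-44 s
    # Count cycles per second rather than radians per second,
    # hence the 2*pi (one of the arbitrary choices discussed below).
    f0 = 1.0 / (2 * math.pi * t_planck)

    for n in range(12, 17):
        f = f0 * alpha ** n
        print(f"n={n}: f = {f:.3e} Hz, lambda = {c / f:.3e} m")
    # n=13 lands near 610 nm in the optical; n=15 near 26.2 GHz.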

But there are some caveats here.  Are Planck’s units really that universal?  Look at those equations above: they both have a factor of 2π in them.  Where did that come from?

Well, Planck liked to define things in terms of angular frequency, which measures how fast the phase of an oscillator changes, in radians per unit time. The 2π goes away if you choose to define frequency in terms of cycles per unit time instead (as astronomers do for light and engineers do for AC electricity). It’s arbitrary! So we could also define both constants without the 2π. Maybe aliens like it better that way? We can build a frequency comb that way too, and that’s another potential Schelling point.

Also, maybe we’re overcomplicating things.  If we’re going to choose a base to raise to a power then maybe the fine structure constant isn’t the natural one to use—any mathematician will tell you that the natural base to use is the base of the natural logarithm, e! (Different e from the one above, though). Then you can do it all with only 3 physical constants instead of 4, so maybe more obvious? So that’s another potential Schelling point.

Or maybe you want to make sure the frequencies have physical meaning akin to the 21 cm line Schelling mentioned, and as long as you’re thinking about light you might like to use the mass of the electron instead of the gravitational constant G. In that case you could define your base unit of energy as half of the rest energy of the electron and use the fine structure constant to make your comb. It seems a bit arbitrary at first, but the energies defined by that comb are

E_n = \frac{m_e c^2}{2} \alpha^n

and when n=2 we have an important unit in physics, the Rydberg, related to the energy it takes to ionize hydrogen (in reality there’s a small correction factor because protons are not infinitely massive, but this is the fundamental unit). This unit was known even to classical physics, and so it is a very “natural” way to define a universal energy or frequency. So there’s yet another frequency comb.
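
As a quick numerical check of that n=2 claim (standard values, my arithmetic):

E_2 = \frac{m_e c^2}{2}\alpha^2 = \frac{511\,\mathrm{keV}}{2}\left(\frac{1}{137.04}\right)^2 \approx 13.6\,\mathrm{eV},

which is indeed the Rydberg energy.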

We could surely define more. The truth is, Planck’s insight isn’t all that helpful for guessing exactly which frequencies we should look at because we still need to make lots of choices and we don’t have any guide beyond what seems natural.

But still, it’s a useful illustration of both the power and the limitations of Schelling’s idea. Also, we can add the frequencies that appear in these combs to the lists of “magic frequencies” we check—more ideas about places to look can’t hurt, because modern radio observatories can search billions of frequencies at once, so it costs nothing to check a few more.

But there may be another insight here: these combs generate multiple frequencies, and perhaps we should look for signals at all of them. After all, unlike looking for someone in New York, there is little preventing us from looking in more than one channel at once, or preventing their signals from being at more than one frequency at once. Perhaps we should be looking for combs of signals, or at multiple wavelengths simultaneously!

Anyway, this idea of Schelling points has gained a lot of traction since I made passing reference to it in a review article a while back, but it has no proper, refereed citation in a SETI context (beyond Schelling’s offhand remark in his book).  So I’ve written up the idea formally, including the Planck Frequency Comb as a case study, in a new paper for the International Journal of Astrobiology. You can read it on the arXiv here.

 

Thanks to Sabine Hossenfelder and Michael Hippke for these translations of Planck’s 1900 paper.

Is SETI dangerous?

Interdisciplinarity in science can be wonderful: combining expertise across disciplines leads to new insights and progress, because some problems only yield when people from different disciplines communicate about them, and that communication happens much more rarely than communication among members of a single discipline.

It’s important, though, when working across disciplines to actually engage experts in those other fields. There’s a particular kind of arrogance, common among physicists, that a very good scientist can wander into another discipline, learn about it by reading some papers, and start making important contributions right away. xkcd nailed it:

xkcd comic: a physicist lectures an annoyed researcher who has been working at a blackboard and laptop, notes strewn about: “You’re trying to predict the behavior of <complicated system>? Just model it as a <simple object>, and then add some secondary terms to account for <complications I just thought of>. Easy, right? So, why does <your field> need a whole journal, anyway?” Caption: “Liberal arts majors may be annoying sometimes, but there’s nothing more obnoxious than a physicist first encountering a new subject.”

And my favorite takedown of the type is from SMBC (go read it!)

There’s a new paper about the dangers of SETI out by Kenneth W. Wisian and John W. Traphagan in Space Policy, described here on Centauri Dreams. In it, they describe the worldwide “arms” race, similar to the one in the film Arrival, to communicate with ETIs once contact is established. They say this is an unappreciated aspect of SETI and that SETI facilities should take precautions similar to those at nuclear power plants.  Specifically, they write:

In the vigorous academic debate over the risks of the Search for ExtraTerrestrial Intelligence (SETI) and active Messaging ExtraTerrestrial Intelligence (ETI) (METI), a significant factor has been largely overlooked. Specifically, the risk of merely detecting an alien signal from passive SETI activity is usually considered to be negligible. The history of international relations viewed through the lens of the realpolitik tradition of realist political thought suggests, however, that there is a measurable risk of conflict over the perceived benefit of monopoly access to ETI communication channels. This possibility needs to be considered when analyzing the potential risks and benefits of contact with ETI.

I have major issues with their “realpolitik” analysis, but I’m not an expert in global politics, international affairs, or risk aversion so I’m not going to critique that part here. Instead, I’ll stick to my expertise and point out that the article would be much stronger if the authors had consulted some SETI experts, because it is based on some very dubious assumptions about the nature of contact.

The authors seem to think it is clear that, once a signal is identified:

  1. Only around “a dozen” facilities in the world will be able to receive the signal, and states will somehow be able to restrict this capability from other states. The authors think this covers both laser and radio.
  2. That it will be possible to send a signal to the ETI transmitter, and that this capability will have perceived advantages to states.

While there are some contact scenarios where these assumptions are valid, they are rather narrow.

First, modern radio telescopes are large and expensive because they are general purpose instruments. They can often point in any direction, and have a suite of specialized instrumentation designed to operate over a huge range of frequencies.

But once a signal is discovered, the requirements to pick it up shrink dramatically. Only a single receiver is required, and its bandwidth need be no wider than the signal itself. The telescope need only point at the parts of the sky where the signal comes from, so it need only have a single drive motor. And the size of the dish need not be enormous, unless the signal just happens to be of a strength that large telescopes can decode it but small ones cannot, which is possible but a priori unlikely.

Indeed, there are an enormous number of radio dishes designed to communicate with Earth satellites that could easily be repurposed for such an effort, and can even be combined to achieve sensitivities similar to a single very large telescope, if signal strength is an issue. And there is no shortage of radio engineers and communications experts around the world that can solve the problem quickly and easily. The scale of such a project is probably of order tens of thousands to millions of dollars, depending on the strength and kind of signal involved. The number of actors that could do this worldwide is huge. Also, such efforts would be indistinguishable from normal radio astronomy or satellite communications, so very hard to curtail without ending those industries.
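
The scaling here is simple (a back-of-the-envelope estimate of mine, not a design study): sensitivity tracks collecting area, so N small dishes of diameter d match one large dish of diameter D when N \approx (D/d)^2. Matching the 64 m Parkes dish with hypothetical 4 m satellite dishes, for instance, would take about (64/4)^2 = 256 of them.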

The situation is similar for a laser signal: if it is a laser “flash” then the difficulty is primarily in very fast detectors that can pick it up. Here, the technology is not as mature, and if the flashes are *extremely* fast it is possible that the necessary technology could be controlled but, again, this assumes a very particular kind of laser signal. And, again, there are an enormous number of optical telescopes which will have similar sensitivity to optical flashes as existing optical SETI experiments (which, again, are only expensive because they search a huge fraction of the sky for signals of unknown duration).

Finally, there is the issue of two-way communication: unless the signal is coming from within the solar system or the very closest stars, the “ping time” back and forth is at least a decade, and likely much longer. There is no “conversation” in this case: the first response to our communications would be ten years down the line! So the real dangers are transmitters within the solar system or signals that contain useful information without the need for us to send signals.

In summary, the concerns expressed in this article apply to a narrow range of contact scenarios in which the signal is, somehow, only accessible to those with highly specialized equipment, or comes from a transmitter within the solar system. The first seems highly unlikely; I do not know how to evaluate the second, but note that such signals are not routinely searched for in the radio, anyway.

I’d be happy to engage with experts in space law on a paper on the topic, if only I knew any!

Science is not logical

OK, time for some armchair philosophy of science!

You often hear about how logic and deductive reasoning are at the heart of science, or expressions that science is a formal, logical system for uncovering truth. Many scientists have heard definitions of science that include statements like “science never proves anything, it only disproves things” or “only testable hypotheses are scientific.” But these are not actually reflective of how science is done. They are not even ideals we aspire to!

You might think that logic is the foundation of scientific reasoning, and indeed it plays an essential role. But logic often leads to conclusions at odds with the scientific method.  Take, for instance, the “Raven Paradox”, expertly explained here by Sabine Hossenfelder:

Sabine offers the “Bayesian” solution to the paradox, but also nods to the fact that philosophers of science have managed to punch a bunch of holes in it. Even if you accept that solution, the paradox is still there, insisting that in principle the scientific method allows you to study the color of ravens by examining the color of everything in the universe except ravens.

I think part of the problem is that the statement “All ravens are black” sounds like a scientific statement or hypothesis, but when we actually make a scientific statement like “all ravens are black” we mean it in something closer to the vernacular sense than the logical one. For instance:

  • “Ravens” is not really well defined. Which subspecies? Where is the boundary between past (and future!) species in its evolutionary descent?
  • “Black” is not well defined.  How black? Does very dark blue count?
  • “Are” is not well defined. Ravens’ eyes are not black. Their blood is not black.

Also, logically, “all ravens are black” is strictly true even if no ravens exist! (Because “all non-black things are not ravens” is an equivalent statement, and it is trivially satisfied in that case.) Weirdly, “all ravens are red” is strictly true in that case as well! This is not really consistent with what scientists mean when we say something like “all ravens are black,” which presumes the existence of ravens. We would argue that such a statement in a universe containing no ravens is basically meaningless (having no truth value) and actively misleading, not trivially true, as logic insists.
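
Programming languages encode this logical convention, too. Here’s a tiny Python illustration (mine, just for fun): a universally quantified claim over an empty collection evaluates to True, whatever the predicate.

    ravens = []  # a universe with no ravens

    # A universal claim over an empty set is vacuously true...
    print(all(raven == "black" for raven in ravens))  # True

    # ...and so is its contrary!
    print(all(raven == "red" for raven in ravens))    # also True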

So the logical statement “all ravens are black” is supposed to be very precise, but that is very different from our mental conception of its implications when we hear the sentence, which is squishier. We understand we’re not to take it strictly literally, but that is exactly what logic demands we do! And if we don’t take it in exactly the strict logical sense, then we cannot apply the rules of formal logic to it. This means that the logical conclusion that observing a blue sock is support for “all ravens are black” does not reflect the actual scientific method.

You might argue that “black” and “raven” are just examples, and that in science we can be more precise about what we mean and recover a logical statement, but really almost everything we do in science is ultimately subject to the same squishiness at some level.

Also, and more damningly:

If we were to see a non-black raven (one that has been painted white, an albino, or one with a fungal infection of its wings), we would not necessarily consider it evidence against “all ravens are black”! We understand that “all ravens are black” is a general rule with all kinds of technical exceptions. Indeed, a cardinal rule in science is that all laws admit exceptions! Logically, this is very close to the “no true Scotsman” fallacy, but it is actually a great strength of science that we do not reach for universal laws from evidence limited in scope, only trends and general understandings. After all, even GR must fail at the Planck length.

So even the word “all” does not have the same meaning in science as it does in logic!

More generally, in science we follow inductive reasoning. This means that seeing a black raven supports our hypothesis that all ravens are black. But in logic there is no “support” or “probability,” there is only truth and falsity. On the other hand, in science there are broad, essential classes of statements for which we never have truth, only hypotheses, credence, guesses, and suppositions. Philosophers have struggled for years to put inductive reasoning on firm logical footing, but the Raven Paradox shows how hard it is, and how it leads to counter-intuitive results.

I would go further and argue that strictly logical conclusions like those of the Raven Paradox are inconsistent with the scientific method. I would simply give up and admit: the scientific method is not actually logical!

After all, science is a human endeavor, and humans are not Vulcans. Logic is a tool we use, a model of how we reason about things, and that’s OK: “All models are wrong, but some are useful.” Modeling the Earth as a sphere (or an oblate spheroid, or higher levels of approximation) is how we do any science that requires knowledge of its shape, but it’s not true. Newton’s laws are an incredibly useful model for how all things move in the universe, but they are not true (if nothing else, they fail in the relativistic limit).

Similarly, logic is a very useful and essential model for scientific reasoning, and the philosophy of science is a good way to interrogate how useful it is. But we should not pretend that scientists follow strict adherence to logic or that the scientific method is well defined as a logical enterprise—I’m not even sure that’s possible in principle!

The astrophysical sources of RV jitter

A big day for our understanding of RV jitter!
 
Penn State graduate student Jacob Luhn has just posted two important papers to the arXiv. You can read his excellent writeup of the first of them here:
 
It took Jacob a HUGE amount of work to determine the *empirical* RV jitter of hundreds of stars from decades of observations from Keck/HIRES. These are “hand crafted” jitter values, free of planets, containing only the HIRES instrumental jitter plus astrophysical jitter.
(Along the way, we wondered how to put error bars on jitter, which is itself a deviation. What’s the standard deviation of a standard deviation? Jacob found the formula; it’s in the paper if you’d like to see how it’s done. You have to use the kurtosis.)
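If you want to play with this, here is a minimal sketch of one textbook version of that formula (my illustration, via the delta method; see the paper for the exact estimator Jacob uses):

```python
import numpy as np

def std_dev_uncertainty(x):
    """Approximate 1-sigma error on the sample standard deviation of x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s2 = np.var(x, ddof=1)            # unbiased sample variance
    m4 = np.mean((x - x.mean())**4)   # fourth central moment (this is where the kurtosis enters)
    var_s2 = (m4 - s2**2 * (n - 3) / (n - 1)) / n   # variance of the sample variance
    # Delta method: sd(s) ~ sd(s^2) / (2 s)
    return np.sqrt(max(var_s2, 0.0)) / (2 * np.sqrt(s2))

# Fake time series with 5 m/s of jitter, as a sanity check:
rv = np.random.default_rng(42).normal(0.0, 5.0, size=60)
print(f"s = {rv.std(ddof=1):.2f} +/- {std_dev_uncertainty(rv):.2f} m/s")
```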
You may have seen Jacob’s work at various meetings: young stars and evolved stars have high jitter, so there is a “jitter minimum” in between where stars are quietest.
But this paper has more! It turns out the location of the jitter minimum depends in a predictable way on a star’s mass.
[Figure from Jacob’s paper illustrating the dependence of jitter on log(g) and mass.]
 
The second paper describes the properties of F stars with low jitter.
 
But don’t F stars all have high jitter?
 
Nope. Jacob has found that many stars in the “jitter minimum” are F stars with < 5 m/s of RV jitter. This has important implications for following up transiting planets.
 
My favorite consequence of this work is that we will now be able to *predict* the RV jitter of a star from its mass, R’HK, and log(g) *empirically*, incorporating *all* sources of RV noise. Right now, such predictions are only good to ~a factor of 2. Jacob can predict it to <25%!
 
But predicting RV jitter is a story for another paper, coming soon. For now, enjoy these papers at AJ and on @jacobkluhn’s blog:
https://iopscience.iop.org/article/10.3847/1538-3881/ab855a
https://iopscience.iop.org/article/10.3847/1538-3881/ab775c

On Meeting Your Heroes

Freeman Dyson died on Friday. He was a giant in science, possibly the most accomplished and foundational physicist never to win a Nobel Prize. He was 96.


He had a big influence on my turn to SETI. I’ve written about him several times on this blog, including about his “First Law of SETI Investigations”, his role in the development of adaptive optics, how that intersected with Project Orion and General Atomic, and of course his eponymous spheres that I’ve spent some time looking for.

I got to meet him twice. Once was when Franck Marchis invited him, Jill Tarter, Matt Povich, and me to talk about Dyson spheres on a Google Hangout for the SETI Institute:

The second time was at UC San Diego. I was there to give a talk, and walking down the hallway of the astronomy part of the physics department I saw “F. Dyson” on one of the doors. I asked, and was surprised to learn that he spent his winters in San Diego, where his grandchildren lived, and that he had an office in the department.

And he was there that day.

And he’d be at my talk.

About Dyson spheres.

Indeed, his face was on the second slide.

The talk went well, and afterwards he invited me to lunch to discuss it. He asked if I was free. I looked at my schedule: of course I had a lunch appointment. “Yes, it looks like I’m free!” I said, then briefly excused myself to explain the change to my host.

Freeman Dyson and me after my talk at UCSD

I asked where we should go and he said, “I like Burger King.” So he walked me to the student union where he got a hotdog, and we sat at a table for four, next to a slightly annoyed undergraduate looking at his phone. We talked about Dyson spheres and SETI, I’m sure. I also could not resist asking embarrassingly naïve questions about experimental tests of the vacuum energy and the like. “I don’t think that’s a promising line of research,” he politely deflected.


I have a list of bands I’ll see if they come to town, and a shorter list of bands I’ll see if they come within driving distance. It’s not a list of my favorite bands; it’s a list of bands that might be on their last tour that I want to have seen at least once. I’ve seen Dylan (twice!), Springsteen (twice!), The Who (Quadrophenia in Philadelphia), Bob Seger, Rod Stewart, Elton John, Metallica (twice!), the Rolling Stones, Paul McCartney, and more. Cher caught a cold and so they canceled the State College show (I really bought the tickets for Pat Benatar’s opening act, though).

I missed Prince. You never know.

I’ve twice missed talking to my heroes because they were old and I dallied. I invited Nikolai Kardashev to this summer’s SETI Symposium, but I got a decline from someone managing his email account, and then we learned last August that he had passed away at 87.

When I was organizing the letter writing campaign for a prize from the AAS for Frank Kameny, I got his contact information (the phone number at his house). I wanted to call him to tell him what we were doing, but I decided to wait until the prize was official so I could tell him the good news. On August 1, 2011 I learned the AAS was officially going to consider the prize. On October 12, Kameny passed away at 86. On October 15, the AAS announced the prize, which had to be posthumous.

I never called. I’m not sure Frank knew about the effort at all, that his old professional society was finally honoring him.


I’m glad I met Freeman. I’m sad I won’t get his feedback on the big review article on Dyson spheres that I’ve written, which will be published this summer. I probably should have sent it to him earlier.

Battling the Email Monster

Sometimes when people ask what I do for a living, I tell them I write and answer email.  It certainly is a big part of my day!

That said, I have a pretty good relationship with email. I have a well-managed inbox and occasionally even hit inbox zero, despite getting a lot of emails every day and juggling a lot of responsibilities.

There are a lot of guides out there about how to do this, including this nice Harvard Business Review article on how to have an efficient email session, the “touch it once” philosophy that apparently got its start in the pre-email days, and the original “inbox zero” philosophy that leverages a lot of Gmail features.  My own philosophy borrows a lot from all of these, especially the idea that when you encounter an email you should dispose of it quickly in a way that either gets it off of your desk or puts it where it needs to be for you to act on it.

I know many people with tens of thousands of unread emails, and it’s probably not practical for them to go through and dispose of them all.  For them, I might recommend email bankruptcy: file everything away, start from an empty inbox, and this time don’t let it build up.

I got to this state by building up a lot of good email habits including, counterintuitively, sending myself lots of emails.  Here they are, in case you’d like to try it:

  1. Get GMail. It has good filtering, enough storage space for all of your email, a snooze feature, and (this is key) search capabilities so good that you don’t have to file anything. It has good support for mobile devices, you can configure it for offline use, and the cloud storage means you don’t have to worry too much about backups.
  2. Use hotkeys. They save a second per email, which really adds up. Have one for archiving, one for spam, one for responding, and one for responding to all.
  3. Think of your inbox as your to-do list.  If it’s in your inbox, it’s a short- to medium-term action item. Every email is an item. If you’re not going to do something with it soon, it should not be in your inbox. Keep your list of big and/or long-term projects you’re working on somewhere else.
  4. Archive emails immediately after dealing with them. This is how you cross the item off of your to-do list. GMail is also spooky good at giving “nudges” about emails you sent that never got answered, helping you to not lose track of important threads when they leave your inbox.
  5. Use snooze a lot. If you don’t need to work on it soon, snooze it until you do need to work on it. That final report due in December? Snooze until late November.  That speaker you need to arrange visits for? Snooze the “yes I can host” email you sent until the week before they arrive. That thing you’re going to buy this weekend? Snooze until Friday afternoon. Don’t have things in your inbox that aren’t potential action items today or very soon.
  6. Battle the email monster often and efficiently. I “weed” my inbox many times per day. It’s a constant triage, with every email getting one of three dispositions every time I see it:

    1) deal with it and archive it forever,
    2) snooze it for later, or
    3) decide you’ll deal with it very soon.

    It’s a good way to spend those odd bits of time between meetings or on a bus where you don’t have time to dig into a big project.

  7. Send yourself emails. If you have an ongoing thing that you need to have on your to-do list (i.e. in your inbox) and there is no associated email for the next short-term task, make one by sending yourself an email with the task as the subject.
  8. Get used to saying “send me an email”. If I’m in a conversation and we generate an action item for me, I make sure there is an email to go with it. You might send it to yourself, you might summarize the conversation at the end in an email to them and you, or (if appropriate) just ask them to send you an email asking for that thing. Now it’s on your to-do list.
  9. Expect that you will archive everything. Plan that every single email will eventually get stored away, the sooner the better. It’s not gone; you’ll find it again because you have Google search. If it’s not on your to-do list, archive it.
    [If you really can’t give up filing things because you need that level of organization in your mail: use a label+archive hotkey. Choose a small number of labels (they’re like folders) and file the emails you’re worried you’ll lose appropriately as you archive them. But: you don’t have to label everything.]
  10. Learn to use the GMail search bar. You can find emails very quickly if you know how to search on sender, dates, and other nifty keywords (a few concrete examples follow this list). This is key: you need to always be able to find any email without spending a lot of time filing them.
  11. Unsubscribe aggressively.  Spam is not to be immediately deleted! Each one is an action item: unsubscribe (if it’s true spam and not just normal marketing you can hit GMail’s “spam” button and better train the AI to keep these out of your inbox in the first place).  Between unsubscribing aggressively and GMail’s spam filter I get very little unwanted mail, which is essential for a well-managed inbox.
  12. Filter out the noise. There are emails you need to have and maybe want to read in batches but don’t really need to read every time they arrive. You can filter them to archive and get a label before they ever hit your inbox. When you want to catch up, go to the label and read them at your leisure, but don’t waste a tiny part of each day acting on them. If you’re worried you’ll never get around to reading them if they’re out of sight, send yourself an email to read them! Now they’re just a single line in your inbox, not many.
  13. Keep it on the first page. If your inbox exceeds your first page (50 or so emails) you need to sit down and deal with it. This will make you more productive and help with the feeling of doom that comes from having too many emails. Find what is not important to do this week and snooze it. Archive the stuff you just aren’t ever going to get to (maybe send a “sorry, I won’t get to this” email first). Be realistic about what you’re going to do. Don’t hoard emails in your inbox out of guilt!

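To make item 10 concrete, here are a few of the documented GMail search operators I lean on (the example addresses and labels are made up, and the #-comments are my annotations, not part of the search syntax):

```
from:alice@example.com has:attachment        # attachments from one sender
subject:(quarterly report) after:2019/01/01 before:2019/07/01
label:travel older_than:1y                   # old mail under one label
filename:pdf "observing proposal"            # PDFs containing an exact phrase
is:unread in:anywhere                        # unread mail, even if archived
```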
That’s how it works for me. I know it’s not for everybody, but hopefully there are some nuggets in there you can use.

Technosignatures White Papers

Here, in one place, are the white papers submitted last year to the Astronomy & Astrophysics decadal survey panels:

  1. “Searching for Technosignatures: Implications of Detection and Non-Detection” Haqq-Misra et al. (pdf, ADS)
  2. “The Promise of Data Science for the Technosignatures Field” Berea et al. (pdf, ADS)
  3. “A Technosignature Carrying a Message Will Likely Inform us of Crucial Biological Details of Life Outside our Solar System” Lesyna (pdf, ADS)
  4. “The radio search for technosignatures in the decade 2020—2030” Margot et al. (pdf, ADS)
  5. “Technosignatures in Transit” Wright et al. (pdf, ADS)
  6. “Technosignatures in the Thermal Infrared” Wright et al. (pdf, ADS)
  7. “Searches for Technosignatures in Astronomy and Astrophysics” Wright (pdf, ADS)
  8. “Observing the Earth as a Communicating Exoplanet” DeMarines et al. (pdf, ADS)
  9. “Searches for Technosignatures: The State of the Profession” Wright et al. (pdf, ADS)

And, because it’s relevant and salient: the Houston Workshop report to NASA by the technosignatures community:

“NASA and the Search for Technosignatures: A Report from the NASA Technosignatures Workshop” (Gelino & Wright, eds.)  (pdf, arXiv)

On Watching the Sound of Music as an Adult

As a child, I watched the first two hours of The Sound of Music countless times. We had recorded it off of TV on a VHS tape in short-play mode, and so we only caught the first two hours. For me, the movie ends with the von Trapps pushing their car away from the house to begin their escape from the Nazis. I’ve only seen the rest of the movie a few times, as an adult.

As kids, we loved most of the music, we loved the scenes with the kids, we loved “Uncle Max,” and of course we loved Maria.  We generally skipped past the “boring parts” where the adults were talking and “Climb Every Mountain.”  We wanted to see “Do Re Mi” and “Lonely Goatherd”.

My kids have seen it a few times now (2-day rental at the public library) and they love it, too, for the same reasons I did.  But now I love it for different reasons—it’s a rich and brilliant film with lots to offer, much of it contradictory to the reasons I loved it as a child.

Some observations from an adult perspective:

    1. The movie downplays the evil of Nazism.
      To a kid, the Nazis are bad because Georg doesn’t like them and they want him to go to Berlin. In real life, the Nazis were evil because they were genocidal. It’s great to teach kids that “Nazis are bad,” but watching as an adult you can’t help but think that the von Trapps’ troubles are trivial compared to what was actually going on.
    2. “Climb Every Mountain” is great.
      The abbess has some pipes.
    3. Christopher Plummer was a dish.
      We understand Maria’s attraction to him because of the way the camera treats him as a sex object (for instance using soft focus) in a way modern movies usually reserve for women.
    4. Uncle Max is not a good man.
      He makes it clear he’s perfectly happy to collaborate with the Nazis, especially if he’ll make money doing it. The children (in the movie and those watching) love him because he’s so gregarious, but it is only his love for the von Trapps, their money, and Georg’s shaming that makes him help them escape. He doesn’t really deserve the hero status the movie gives him.
    5. Baroness Schraeder is not a villain.
      As kids we only see her as an antagonist because she stands between Georg and Maria’s love, and we dislike her because we see her scheming with Max about money, because she doesn’t like to play ball, and because she dreams of putting the kids in boarding school. But as an adult I find her to be a sympathetic character, remarkable for her strength and maturity. A widow, she finds love in Georg, a good and handsome man who loves her for who she is, not for her money. She is desperate for him to marry her, but this is hardly a character flaw for a single, rich, middle-aged European woman in the 1930s. Georg promises the safety, stability, and love we all seek in life. She schemes to get Maria out of the house, yes, but wouldn’t we all in her position? And her schemes are all honest: at the end of Act I she truthfully tells Maria that Georg is falling in love with her, and Maria follows her calling and leaves the house to pursue her vows. It’s what Maria thinks she wants! And when it all falls apart and Georg is clearly conflicted, she doesn’t fight to the end. She knows when she’s been beaten, and she saves face by ending the relationship before he can say anything, telling him to follow his heart. Georg’s smile as she breaks it off is one of admiration, respect, appreciation, and love. It’s a brilliantly done scene, and as an adult who has loved and lost I find it remarkably moving.
    6. Julie Andrews is brilliant.
      Especially thinking about the range revealed by her later roles as mature, stern characters, her innocent, effervescent Maria is just a delight to behold.

    7. Julie Andrews and Christopher Plummer’s onscreen chemistry is fantastic.

I mean come on:

    8. Except for Liesl, the kids aren’t actually in this movie much.
      They hardly get any actual lines, and are mostly just caricatures. The exception is Liesl’s hard lesson in love with Rolf, which is really well done.

    9. “I am Sixteen Going on Seventeen” doesn’t hold up.
      It’s a great song, and a beautifully choreographed sequence that wonderfully captures the unstable mix of love and lust that saturates teenagers, but the sexual politics are so retrograde it’s painful to watch.

    10. Mrs. von Trapp looms over the movie.
      The children’s mother is rarely mentioned and we learn almost nothing about her except that she loved music and sang to the children with Georg (“I remember, father,” says Liesl, achingly innocent of the pain those words cause Georg). As adults and parents we are fascinated: Georg’s retreat into a stern taskmaster is clearly a defense against the pain of his loss; Maria’s music and exuberance clearly reminds him of her. We would understand Georg so much better if we could meet her; instead we barely know of her.
    11. Georg is a remarkable man, perfectly portrayed by Plummer.
      His fierce morality, unshakable patriotism, strength, and sensitivity shine through the screen. I first saw the “Edelweiss” scene as an adult, and Plummer nails it, with Georg unable to finish the song until Maria, his children, and the people of Salzburg give him the strength. For me, it’s a highlight of the movie.

The Little Principle

Is it ethical to be good to your family?

Since the Renaissance, ethics has been a core subject in the humanities, but it has fallen out of the usual core curriculum at liberal arts schools. Penn State is reviving the tradition: many faculty here have taken ethics training via Penn State’s Rock Ethics Institute, which provides a week-long crash course in the basics and helps us integrate ethics into the curriculum. The aspiration is that all faculty will be trained and all courses will have an ethics component. I think it’s a great project.

This means that I have just enough training in formal ethics to think about the topic as a sort of educated layperson or hobbyist. I find ethics to be a great way to think through problems and interrogate our motives, but not necessarily a way to arrive at the “right” answer to a dilemma: different ethical frameworks can yield different conclusions, as can bringing different values to the problem (but when all frameworks point to the same conclusion, you know you’ve got a robust answer). Some ethical reasoning is also about formally justifying our guts’ impulses, codifying, explaining, and refining people’s collective moral compass.

One paradox I struggle with is between our deep, instinctual tendency to treat our friends and loved ones better than others, and the bedrock principles of fairness that underlie most ethical frameworks. Put simply: there are things I would do for my family I would not do for a stranger, and things I’d do for a stranger I would not do for an enemy. How does that fit in with ethical analysis?

When thinking about this, I call it “The Little Principle,” and I consider it axiomatic. Here it is:

It is appropriate to treat some people better than others. Specifically, one should prioritize those to whom one has an emotional bond over others.

or, more simply: “Je suis responsable de ma rose” (“I am responsible for my rose”).

The name comes from The Little Prince, the classic children’s(?) book by Antoine de Saint-Exupéry. It is a primary theme of the book, and is captured best in the lesson the fox teaches the little prince:

So the little prince tamed the fox. And when the hour of his departure drew near—

“Ah,” said the fox, “I shall cry.”

“It is your own fault,” said the little prince. “I never wished you any sort of harm; but you wanted me to tame you…”

“Yes, that is so,” said the fox.

“But now you are going to cry!” said the little prince.

“Yes, that is so,” said the fox.

“Then it has done you no good at all!”

“It has done me good,” said the fox, “because of the color of the wheat fields.” And then he added:

“Go and look again at the roses. You will understand now that yours is unique in all the world. Then come back to say goodbye to me, and I will make you a present of a secret.”


The little prince went away, to look again at the roses.

“You are not at all like my rose,” he said. “As yet you are nothing. No one has tamed you, and you have tamed no one. You are like my fox when I first knew him. He was only a fox like a hundred thousand other foxes. But I have made him my friend, and now he is unique in all the world.”

And the roses were very much embarrassed.

“You are beautiful, but you are empty,” he went on. “One could not die for you. To be sure, an ordinary passerby would think that my rose looked just like you—the rose that belongs to me. But in herself alone she is more important than all the hundreds of you other roses: because it is she that I have watered; because it is she that I have put under the glass globe; because it is she that I have sheltered behind the screen; because it is for her that I have killed the caterpillars (except the two or three that we saved to become butterflies); because it is she that I have listened to, when she grumbled, or boasted, or even sometimes when she said nothing. Because she is my rose.


And he went back to meet the fox.

“Goodbye,” he said.

“Goodbye,” said the fox. “And now here is my secret, a very simple secret: It is only with the heart that one can see rightly; what is essential is invisible to the eye.”

“What is essential is invisible to the eye,” the little prince repeated, so that he would be sure to remember.

“It is the time you have wasted for your rose that makes your rose so important.”

“It is the time I have wasted for my rose—” said the little prince, so that he would be sure to remember.

“Men have forgotten this truth,” said the fox. “But you must not forget it. You become responsible, forever, for what you have tamed. You are responsible for your rose…”

“I am responsible for my rose,” the little prince repeated, so that he would be sure to remember.

The book is elliptical and has some odd moral dimensions, but with this theme it hits the nail on the head, capturing something both profound and trivial: you treat those you care about better than those you have never met.

It’s easy to get carried away with individual ethical principles, to take your morality to extremes. I’ve written before about the great evil that comes from following your ideas to their logical conclusions (and also about the importance of radicalism to positive political change). Clearly, taking The Little Principle to its extreme leads to a sort of puerile selfishness, where all of our actions are centered around helping ourselves and those in our in-group, at whatever expense to others. Depending on whom we identify with, this can lead to great evils like genocide, and is contrary to the egalitarian principles of law and democracy. The Little Principle needs to sit next to another principle: that all humans (and, I’d argue, much other life) are entitled to a minimum level of moral standing. The Little Principle is not license to treat others badly.

But the other extreme—that we owe nothing special to our friends and loved ones—is fundamentally contrary to who we are as humans. Indeed, we don’t even restrict this instinct to other people, but it extends to our pets, our environment, and even whole classes of beings we’ve never met (“Save the Whales!”). The Little Principle states that any useful moral framework acknowledges that individuals can prioritize which others they help and care for.

I think one way to thread the needle is to acknowledge that this prioritization is personal, not universal. I treat my children better than yours, but I also expect you to treat yours better than mine. We do have universal moral responsibilities, but we also have relative ones that depend on who we care about. We can build universal legal and moral structures that themselves eschew the Little Principle, while enshrining it at the individual level. We are “a nation of laws, not of men.” A court will impartially defend the rights of any parent to care for their own children.

It’s an interesting and nuanced issue!

Background to the 2019 Nobel Prize in Physics

Fifty percent of the 2019 Nobel Prize in Physics goes to Michel Mayor and Didier Queloz for the discovery of 51 Pegasi b!  I had a tweet thread on the topic go viral, so I thought I’d formalize it here (and correct some of the goofs I made in the original).

A hearty congratulations to Michel Mayor & Didier Queloz, for kickstarting the field that I’ve built my career in! Their discovery of 51 Peg b happened in my senior year of high school, and I started working in exoplanets in 2000, when ~20 were known.

A thread:

The Nobels occupy a funny place in science: they are wonderful public outreach tools, and a chance for us all to reflect on the discoveries that shape science. The discussions they engender are, IMO, priceless.

They also have their flaws: because they can only be awarded to three people at a time, they inevitably celebrate the people instead of the discovery.

(This is technically a requirement of Alfred Nobel’s will, but there are other requirements, like that the discovery be from the past year, that the committee ignores. Also, the Peace Prize is regularly awarded to teams, but the science prizes have never followed suit.)

Anyway, many of the discoveries awarded Nobels are from those who saw farther because they “stood on the shoulders of giants.” The “pre-history” of exoplanets is a hobby of mine, so below is a thread explaining the caveats to 51 Peg b being the “first” exoplanet discovered.

The first exoplanet discovered was HD 114762b, found by David Latham et al. (where “al.” includes Mayor!) in 1989. It is a super-Jupiter orbiting a late F dwarf (so, a “sun-like star” for my money), published in Nature:

https://www.nature.com/articles/339038a0

Dave is a conservative and careful scientist. At the time there were no known exoplanets *or* brown dwarfs, and they only knew the *minimum* mass of the object, so there was a *tiny* chance it could have been a star. He hedged in the title, calling it “a probable brown dwarf”.
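A quick aside on why RV surveys measure only a *minimum* mass: the velocity semi-amplitude K you fit to the data contains the companion mass only in the combination m_p sin i, where i is the unknown inclination of the orbit. The standard textbook formula is:

```latex
K = \left(\frac{2\pi G}{P}\right)^{1/3}
    \frac{m_p \sin i}{\left(M_\ast + m_p\right)^{2/3}}
    \frac{1}{\sqrt{1 - e^2}}
```

A nearly face-on orbit (sin i close to 0) hides even a stellar-mass companion behind a small K, which is exactly the loophole Dave was hedging against.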

I wonder: if Dave had been more cavalier and declared it a planet, would *that* have kickstarted the exoplanet revolution? Would he be going to Stockholm in a few months?

Meanwhile, Gordon Walker, Bruce Campbell, and Stephenson Yang were using a hydrogen fluoride cell to calibrate their spectrograph. In 1988 they published the detection of gamma Cephei Ab, a giant planet around a red giant star:

https://ui.adsabs.harvard.edu/abs/1988ApJ...331..902C/abstract

They were also very careful. At least four of the other signals reported there turned out to be spurious. They did not claim they had discovered any planets, just noted the intriguing signals. In follow-up papers they decided the gamma Cep signal was spurious. It turns out it was actually correct!

Again, what if they had trumpeted these weak signals as planets and parlayed that into more funding to continue their work? Would they have confirmed them and moved on to stars with stronger signals? Would they be headed to Stockholm?

Moving on: in 1993 Artie Hatzes and Bill Cochran announced a signal indicative of a giant planet around the giant star beta Gem (aka Pollux, one of the twin stars in Gemini).

Like the gamma Cep signal, this one was weak. Like Campbell, Walker & Yang, they hedged about its reality. But again, it turns out it’s real!

https://ui.adsabs.harvard.edu/abs/1993ApJ...413..339H/abstract

Meanwhile, back in 1991, Matthew Bailes and Andrew Lyne announced they had discovered a 10 Earth-mass planet around a *pulsar*. This was big news! Totally unexpected! What was going on!? They planned to discuss it in more detail in a talk at the AAS meeting that January.

But when the big moment came, Bailes retracted: they had made a mistake in their calculation of the Earth’s motion. There was no planet, after all. That made more sense. He got a standing ovation for his candor.

But in the VERY NEXT TALK Alex Wolszczan got up and announced that he and Dale Frail had discovered *two* Earth-mass planets around a different pulsar! They would later announce a third, which remains the lowest-mass planet known.

Some wondered: Was this one really right? Had they done their barycentric correction properly? It held up. The first rocky exoplanets ever discovered, and the last to be discovered for *20 years*.
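Incidentally, the “Earth’s motion” bookkeeping that tripped up Bailes and Lyne is routine today. Here’s a minimal sketch of a modern barycentric time correction with astropy (my illustration, using approximate coordinates; certainly not the teams’ actual pipelines):

```python
import astropy.units as u
from astropy.coordinates import SkyCoord, EarthLocation
from astropy.time import Time

# Approximate coordinates of PSR B1257+12, Wolszczan & Frail's pulsar:
target = SkyCoord("13h00m03s", "+12d40m57s", frame="icrs")
# Approximate location of the Arecibo telescope:
arecibo = EarthLocation.from_geodetic(lon=-66.75 * u.deg, lat=18.34 * u.deg,
                                      height=497 * u.m)

t = Time("1991-07-15 08:00:00", scale="utc", location=arecibo)
# Light travel time from the observatory to the solar system barycenter;
# it swings over roughly +/- 500 seconds through the year, and getting it
# even slightly wrong can masquerade as a planet in pulse arrival times.
ltt = t.light_travel_time(target, kind="barycentric")
print((t.tdb + ltt).isot)
```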

And there would be more. In 1993 Stein Sigurdsson and Don Backer interpreted the anomalous second derivative of the period of the binary millisecond pulsar PSR 1620-26 as being due to a giant planet. This, too, held up.

https://ui.adsabs.harvard.edu/abs/1993ApJ...415L..43S/abstract
https://ui.adsabs.harvard.edu/abs/1993Natur.365..817B/abstract

Meanwhile, in a famous “near miss,” Marcy & Butler were slogging through their iodine work. They actually had data containing multiple exoplanets on disk when Mayor & Queloz announced 51 Peg b, but not the computing power to analyze it.

If you’re interested in more detail, you can read this “pre-history” in section 4 of my review article with Scott Gaudi here:

https://arxiv.org/abs/1210.2471

None of this, BTW, is meant to detract from Michel & Didier’s big day. 51 Peg b was the first exoplanet with the right combination of minimum mass, strength of detection, and host star characteristics to electrify the entire astronomy community and mark the exoplanet epoch. As I wrote above, they kickstarted the exoplanet revolution. It makes sense that Mayor & Queloz got the prize!

This is all to make sure that the Nobel serves its best purpose: educating the public, and promoting and celebrating scientific discovery.