Category Archives: science

How thick is “blood”? Am I really related to my 5th cousin?

Here’s a picture of my great grandparents John Henry Hattersley and Bertha Herrmann at Niagara Falls in 1910:

[Photo: two well-dressed people posing on rocks in front of Niagara Falls]

I’m presuming this is a real photograph and not staged with a backdrop or something, but I really don’t know. I think this was taken on Luna Island on the American Side.

I don’t know much of anything about them except that Bertha is the only non-English-origin great-grandparent of mine I’m aware of; I don’t remember my grandfather talking about them.  At one point I really got into tracking my family tree: I even discovered that my lineage can be traced back to Boston Colony (via John Viall through my paternal grandmother’s father Clifford Viall Thomas). But I long ago stopped imagining that I was somehow learning about myself once I got to people beyond living memory.

It’s always bothered me when people say siblings share half of their genes. It similarly bothers me when people who can trace their descent to someone famous (Charlemagne, Jefferson) seem to think this reflects well on their genes or something.  There are a few reasons this can’t be right; here are the ones I think about in particular:

  1. We share 98.8% of our DNA with chimpanzees—we must share much more than that with our siblings!  In fact it seems that the “average pairwise diversity” of randomly selected strangers is around 0.1%.
  2. There is some level of discretization with DNA inheritance.  Obviously it can’t be at the base-pair or codon level, or else we wouldn’t be able to reliably inherit entire genes.  If the “chunks” we inherit from each parent are large enough, small number statistics will push the number significantly away from 50%.
  3. Mutations slowly change genes down lineages
  4. Combinations of genes and epigenetic factors have strong effects on traits

Point 2 is not something I really understand yet, except that from talking to biologists I gather the number of “chunks” that get passed down on each side is ~hundreds, but is also random, making the problem quite tricky.  Still, ~hundreds means that the split between the two grandparents on each side is probably close enough to 50/50 (+/- 10% or less) that we can get a rough idea of how related we really are to people on our tree in terms of shared DNA.
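To get a feel for the size of that wobble, here’s a minimal Monte Carlo sketch. The chunk count below is an assumption of mine (the “~hundreds” above), and real recombination produces correlated segments of varying size, so treat this as illustration, not genetics:

```python
# A toy Monte Carlo for point 2. N_CHUNKS = 300 is an assumption;
# real inheritance involves recombination and correlated segments.
import random

N_CHUNKS = 300     # assumed chunks inherited from each parent
N_TRIALS = 10_000

fractions = []
for _ in range(N_TRIALS):
    # Each chunk your parent passes you came from one of *their* parents
    # with probability 1/2; your genome is 2 * N_CHUNKS chunks total.
    from_one_grandparent = sum(random.random() < 0.5 for _ in range(N_CHUNKS))
    fractions.append(from_one_grandparent / (2 * N_CHUNKS))

mean = sum(fractions) / N_TRIALS
sd = (sum((f - mean) ** 2 for f in fractions) / N_TRIALS) ** 0.5
print(f"fraction from one grandparent: {mean:.3f} +/- {sd:.3f}")
# Prints roughly 0.250 +/- 0.014: comfortably within the +/- 10% ballpark.
```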

So let’s take a closer look at point 1 above:

Let’s assume the amount of identical DNA we get from each ancestor is given by 2^-n, where n is the number of generations back they are (grandparents: n=2, so 25% inherited DNA, ignoring discrete “chunks” and mutations). This makes sense: except for (many) details like the X/Y chromosomes, mitochondria, and probably a bunch of other things, each ancestor a given number of generations back has an equal chance of having contributed a bit of DNA to you.

Finding the amount of shared inheritance is thus a matter of going back to the first shared ancestor and counting all of the shared ancestors at that level (which will be 1 in the case of half siblings and 2 for full siblings, except for details coming later).

So first cousins share 2/4 grandparents, each of whom had a 2^-2 chance of contributing a given bit of DNA to each cousin, so they share 2 × 2^-2 × 2^-2 = 1/8 of their DNA, or around 12.5%.

Second cousins (the children of first cousins) share 2/8 great-grandparents, so the number is 3.125%. Each generation gap gives us a factor of 1/4: a factor of 1/2 from the extra opportunity to “lose” that bit of DNA on each of the two lines.

Now we get into the fun of “removed cousins”, which just counts the generation gap between cousins. You don’t usually get big numbers of “removals” among living people because it requires generations to happen much faster along one line than another—big numbers like “1st cousins 10 times removed” are usually only seen when relating people to their distant ancestors.

So my kids are my first cousins’ “first cousins once removed”, and all of their kids would be “second cousins once removed”. The rule is that if you have “c cousins r removed” (so c=2 r=1 means “second cousins once removed”) then you have to go back n=c+r+1 generations from the one and n=c+1 from the other to find the common ancestors.  So removals count the number of opportunities to “lose” a bit of DNA that occur on only one side of the tree.

Putting it all together: the fraction of DNA we share with a cousin is 2^-(2c+r+1) (siblings have c=r=0; subtract one from the exponent, i.e. halve the fraction, if the connection is via half siblings).
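As a sanity check, here’s that formula as a tiny Python function (the function name and the half-sibling flag are my own illustrative choices):

```python
# A quick sanity check of the shared-DNA formula above.
def shared_dna_fraction(c, r, half=False):
    """Fraction of identical-by-descent DNA for c'th cousins r times removed,
    ignoring chunkiness, mutations, and inbreeding."""
    exponent = 2 * c + r + 1 + (1 if half else 0)  # half=True: one extra 1/2
    return 2.0 ** -exponent

print(shared_dna_fraction(0, 0))             # siblings: 0.5
print(shared_dna_fraction(0, 0, half=True))  # half siblings: 0.25
print(shared_dna_fraction(1, 0))             # first cousins: 0.125
print(shared_dna_fraction(2, 0))             # second cousins: 0.03125
print(shared_dna_fraction(5, 0))             # 5th cousins: ~0.00049, below
                                             # the ~0.1% stranger baseline
```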

But there’s a limit: this only works if none of the other ancestors are related, but in the end we’re all related. If cousins have children, this increases the number of shared ancestors and raises the commonalities. And, of course, mutations work the other way, lowering the amount of identical bits.

So why is this interesting? Because the “we’re all related” thing is true at the 0.1% level in DNA, meaning that if you make c high enough, you’ll get an answer that’s below the baseline for humans. Since log2(0.1%) ≈ -10, if 2c+r+1 > 10 then the DNA connection is no stronger than we’d expect for random strangers.

This means that if you meet your 4th cousins (i.e. your great-grandparents were cousins) your genealogical relationship is mostly academic and barely based on “blood”!  By 5th cousins, you’re no more related than you are to the random person on the street in terms of common DNA.

Even worse, if we have hundreds of “chunks” we randomly inherit from parents, then it’s even possible (and here I’m a bit less sure of myself) that you share no commonly inherited genetic material with someone as distantly related as a 5th cousin!
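Here’s a rough way to estimate that, using the same assumed ~300 chunks as above, each surviving any given meiosis with probability 1/2, plus a Poisson approximation (all assumptions mine, not established genetics):

```python
# A toy estimate of the chance of sharing *zero* chunks with a 5th cousin.
# Assumptions: ~300 independent chunks, two common ancestors, Poisson stats.
import math

n_chunks = 300
meioses_per_side = 6   # 5th cousins: common ancestors 6 generations back
# A chunk must survive 6 meioses down each of the two lines, and there
# are two common ancestors it could have come from:
p_chunk_shared = 2 * (0.5 ** meioses_per_side) ** 2
expected = n_chunks * p_chunk_shared
print(f"expected shared chunks: {expected:.2f}")             # ~0.15
print(f"P(no shared chunks):    {math.exp(-expected):.0%}")  # ~86%
```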

Again, this calculation makes a lot of assumptions about genes from different ancestors being uncorrelated, and in particular communities that have been rather insular for a very long time must have at least a bit more kinship with each other than they do with similar communities on different continents.  But from what I’ve gathered this effect isn’t that large: the variance in genetics within a community, even an insular one, is still usually larger than the difference across communities.  That is, the average person from one place is more similar genetically to the average person from another place than to a random person from their own place.

And also, this doesn’t mean you can’t prove descent from someone more than 10 generations past via DNA—that might indeed be possible by looking at where common bits of DNA are in the chromosomes and similar sorts of correlations (I would guess).

Anyway, the bottom line is that it’s fun to do family trees and learn about our ancestors where we can, but we definitely shouldn’t get too hung up on the idea that we’re learning about the origins of our genes and our biological kinship: even setting aside the fact that old family trees are full of adopted and “illegitimate” children, the actual genetic connection dilutes away so fast that it hardly matters past great-grandparents.


Measuring Stellar Masses with a Camera using Mesolensing

I love the Research Notes of the AAS.  They are a place for very short, unrefereed articles through AAS Journals, edited (but not copyedited!) by Chris Lintott. They are a great place for the scraps of research—those little results you generate that don’t really fit into a big paper—to get formally published and read.

You might think that without peer review and with such a low bar for relevance, such a journal would have a very high acceptance rate, but actually I’ve read it’s the most selective of the entire AAS family of journals, including ApJL!  The things it publishes are genuinely useful, and this shows that there’s a need for publishing models for good ideas that are too small to be worth the full machinery of traditional publishing.  The curation by Chris also ensures that the ideas really are interesting and worthy of publication.

A while back I wrote a Research Note on how to prove the Earth moves with just a telescope and a camera. Nothing that would lead to novel results, but it has inspired some amateurs to try it out!

For my latest note, I’ve got another trick you can do with nothing but a telescope and a camera, although in this case they’ll cost billions of dollars and do something useful and novel!

Whenever I hang out with Eric Mamajek we end up talking science and coming up with cool ideas. This often ends with one of us starting an Overleaf document for a quick paper that never ends up getting written.  But the idea we had on my last trip was good enough that I was determined to see it through!

The idea goes back to Eddington’s eclipse experiment, wherein he showed that a gravitational field deflects starlight at the level predicted by General Relativity (which is twice the level one might deduce from Newtonian gravity).

To do this, he imaged the sky during a total solar eclipse, when he could make out stars near the Sun.  Comparing their positions to where they were measured at night at other times of year, he showed they were significantly out of place, meaning the Sun had bent their rays. Specifically, he found that they were farther from the Sun by about an arcsecond (in essence, the Sun’s focusing effect allows us to see slightly behind it, and so everything around it appears slightly “pushed away” from its center).

This led to a great set of headlines in the New York Times that I like to show in class.


This is actually an example of what today we confusingly call microlensing. The term captures a broad range of lensing effects, as Scott Gaudi explained to me here (click to read the whole thread).

Microlensing is most obvious when a source star passes almost directly behind a lens star—specifically within its Einstein radius, which is typically of order a few milliarcseconds. This level of alignment is very rare, but if you look at a dense field of stars, like towards the Galactic Bulge, then there are so many potential lenses and sources that alignments happen frequently enough that you can detect them with wide-angle cameras.

In close alignments like this, the image of the background source star gets distorted, magnified, and multiply imaged, resulting in it getting greatly magnified in brightness, as shown in this classic animation by Scott.  Here, the orange circle is the background source star passing behind the foreground lens, shown by the orange asterisk. The green circle is the Einstein ring of the foreground lens.  As the background star moves within the Einstein ring, we see it as two images, shown in blue.

We typically do not resolve the detail seen in the top panel; we only see the total brightness of the system.  The brightness of just the background source is plotted in the bottom panel.

But in the rare cases where we can resolve the action, note what we can see: the background star is displaced away from the lens when it gets close, just like in Eddington’s experiment. This effect is very small, just milliarcseconds, and has only been measured a few times. This is called astrometric microlensing.

Rosie Di Stefano has a nice paper on what she dubs “mesolensing”: a case where instead of a rare occurrence of lensing among many foreground objects, like in traditional microlensing surveys, you have a high rate of lensing for a single foreground object.  This occurs for very nearby objects moving against a background of high source density, like the Galactic Bulge.

The reason is that the Einstein ring radius of nearby objects is very large—for a nearby star it is of order 30 mas, or 0.03″.  Now, there is a very low chance of a background star happening to land so close to a foreground star, but foreground stars tend to move at several arcseconds per year across the sky, so the total solid angle (“area”) covered by the Einstein ring is actually a few tenths of a square arcsecond per year, which is starting to get interesting.

Things are even more interesting if you don’t require a “direct hit”, but consider background stars that get within just 1″ or so of the lens: even though it’s 30 Einstein radii away, the astrometric microlensing effect is still of order 1 mas, which is actually detectable!
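To put rough numbers on these claims, here’s a back-of-the-envelope script. The one-solar-mass lens at 10 pc, the 8 kpc source distance, and the 3″/yr proper motion are illustrative choices of mine, not values from the Note:

```python
# Back-of-the-envelope mesolensing numbers (SI constants, toy inputs).
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # kg
PC = 3.086e16            # m
RAD_TO_MAS = 206264.8e3  # radians -> milliarcseconds

def einstein_radius_mas(mass_kg, d_lens_m, d_source_m):
    """Angular Einstein radius of a point-mass lens."""
    theta_e = math.sqrt(4 * G * mass_kg / c**2 * (1/d_lens_m - 1/d_source_m))
    return theta_e * RAD_TO_MAS

theta_e = einstein_radius_mas(1.0 * M_SUN, 10 * PC, 8000 * PC)
print(f"theta_E for 1 Msun at 10 pc: {theta_e:.1f} mas")       # ~28 mas

# Solid angle swept by the Einstein ring for an assumed 3"/yr proper motion:
mu_mas_per_yr = 3000.0
swept_arcsec2 = 2 * theta_e * mu_mas_per_yr / 1e6
print(f"area swept per year: {swept_arcsec2:.2f} arcsec^2")    # a few tenths

# Far from the lens (theta >> theta_E), the astrometric shift ~ theta_E^2/theta:
sep_mas = 1000.0  # 1 arcsecond
print(f"shift at 1 arcsec: {theta_e**2 / sep_mas:.2f} mas")    # ~0.8 mas
```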

Now, most of these background objects are very faint, so this isn’t really something you can exploit. Twice, people have used the alignment of a very faint white dwarf with background stars to see this happen, and it has been done once with the faint M dwarf Proxima as well. But most main sequence stars are so much brighter than the background stars that their light will completely swamp them.

But detecting very faint objects within a couple of arcseconds of bright stars is exactly the problem coronagraphy seeks to solve with the upcoming Habitable Worlds Observatory!  This proposed future flagship mission will block out the light of nearby stars and try to image the reflected light of Earth-like planets orbiting them.  And while it’s at it, it will see the faint stars behind the nearby one at distances of a few to dozens of Einstein radii.

So, for target stars in the direction of the Galactic Bulge, HWO will detect astrometric microlensing! And it will do this “for free”: it will be looking for the planets orbiting the star, anyway!

So, who cares? Is this just a novelty? Actually, it will be very useful: measuring the astrometric microlensing will directly yield the mass of the host star. This is great, because we have almost no other way of doing this: we need to rely on models of stellar evolution, which are great but still require conversion to observables, introducing systematic uncertainties of order a few %.  Directly measuring stellar masses will allow us to avoid those systematics, and better understand each star’s history—and that of its planetary systems.
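In sketch form, the inversion looks like this; the measured shift and separation below are placeholder values, and this uses only the large-separation approximation:

```python
# Sketch of the inverse problem: lens mass from an astrometric shift.
# All input numbers are illustrative placeholders, not HWO specs.
import math

G, c = 6.674e-11, 2.998e8       # SI units
M_SUN, PC = 1.989e30, 3.086e16
MAS_TO_RAD = 1 / 206264.8e3

def lens_mass_msun(shift_mas, sep_mas, d_lens_m, d_source_m):
    """Lens mass from the far-field relation shift ~ theta_E^2 / separation."""
    theta_e_sq = (shift_mas * MAS_TO_RAD) * (sep_mas * MAS_TO_RAD)  # rad^2
    return theta_e_sq * c**2 / (4 * G * (1/d_lens_m - 1/d_source_m)) / M_SUN

# Placeholder measurement: a 0.81 mas shift of a background star 1" from
# the target, lens at 10 pc, bulge source at 8 kpc:
print(f"{lens_mass_msun(0.81, 1000.0, 10 * PC, 8000 * PC):.2f} Msun")  # ~1.0
```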

Now, if we find planets with orbital periods of a few years or less, we can also measure the host star masses using Kepler’s Third Law, but this is an independent way to do this, and it also works on stars without planets. In principle, you could even go pointing HWO at all of the stellar mass objects towards the bulge to do this measurement, making it a pure stellar astrophysics engine (precise stellar masses don’t sell flagship missions like exoplanets do, though).

The final piece of this calculation was that we needed to know the density of background sources behind likely HWO target stars. As luck would have it, my recent advisee Dr. Macy Huston had just graduated, and the final chapter of their thesis is on a piece of Galactic stellar modeling software that does exactly this calculation for microlensing! It’s called SynthPop and you’ll hear about it soon, but in the meantime they were able to calculate how many background sources we expect from an example HWO architecture around likely HWO targets.

Macy finds that the best case, 58 Oph, will likely have over 15 stars in the coronagraphic dark hole that will show astrometric microlensing, giving us a ~5% mass measurement of the star on every visit. These numbers are very rough, by the way—the precision could easily be better than this.

Anyway, this RNAAS was a lot of fun to write, and you can read all of the details in it here.

The bottom line is that HWO will be able to measure the masses of all sorts of stars towards the Galactic Bulge directly, with no model dependencies!

Enjoy!

Codes of Conduct at the PSETI Center

Why I am Responding Here

As the head of the PSETI Center I need to address a controversy and correct some factual errors circulating about the Codes of Conduct at various PSETI-related activities, one of which led to the abstract of an early career researcher (ECR), Dr. Beatriz Villarroel, being rejected from an online conference organized by and for ECRs, the Assembly of the Order of the Octopus.  This controversy has been brewing on various blog posts and social media, and recently became the subject of a lengthy email thread on the IAA SETI community mailing list.

Sexual harassment is widespread in science and academia in general, it is completely unacceptable, and, when these kinds of issues arise, our focus and priority as a community should be to protect the vulnerable members of our community.

Much of this criticism has been directed at the PSETI Center and specifically at its ECRs and former members. This discussion has also been extremely upsetting not only for these ECRs, but for researchers across the SETI community and beyond, especially those who have been victims of sexual harassment, and for minoritized researchers who need to know that the community they are in or wish to join will protect them.

For these reasons and many more, these attacks warrant a response, explanation, and defense from the PSETI Center.

Background

I don’t want to misrepresent our critics’ positions. You can read Dr. Villarroel’s version of events and their context for yourself here.

The background for this story involves Geoff Marcy, who retired from astronomy when Buzzfeed broke the story that he had violated sexual harassment policies over many years. Since then many more stories of his behavior have come to light, and the topic of whether it is appropriate to continue to work with him and include him as a co-author has come up many times. I am particularly connected with this story because Geoff was my PhD adviser and friend, and for a while I continued to have professional contact with him after the story broke. I have since ended such contact with him and apologized after discussions with my advisees and with victims of his sexual harassment gave me an understanding of why such continued association was so harmful.

This particular story begins at an online SETI conference in which Dr. Villarroel presented research she was doing in collaboration with Marcy, during which she showed his picture on a slide. This struck some attendees as gratuitous and as potentially an effort to rehabilitate Marcy’s image in the astronomical community. It also struck some as insensitive to victims of sexual harassment and assault, especially to any of Marcy’s victims that may have been in attendance.

Around this time, a group of ECRs in SETI decided to revive the old “Order of the Dolphin” from the earliest days of SETI, rechristened as the “Order of the Octopus.”  This informal group of researchers builds community in the field, and the PSETI Center is happy to have provided it some small financial and logistical support.  The Order decided to meet online during the COVID pandemic in their first “Assembly.”  As they wrote: “In designing the program for this conference, we are also striving to incorporate principles of inclusivity and interdisciplinarity, and to instill these values into the community from the ground up.“

I was not an organizer of the Assembly, but I think it is fair to say that they wrote their code of conduct in a way that would ensure that the image and presence of any well-known harasser like Marcy would not be welcome. This effectively meant that abstracts featuring Marcy as co-author would be rejected, and that participants were asked to not gratuitously bring Marcy up in their talks.

What happened: The Assembly of the Order of the Octopus and the Penn State SETI Symposia

More than one researcher who applied to attend the 2021 Assembly of the Octopus, including Dr. Villarroel, had a history of working with Marcy. To ensure there were no misunderstandings, those applicants were told in advance that they were welcome to attend the conference provided that they abided by those aspects of the code of conduct, and the language in question was highlighted.

When the organizers of the Assembly learned that Dr. Villarroel’s abstract was based on work published with Marcy as the second author, they withdrew their invitation to present that work, but made clear she was welcome to attend and even to submit an abstract for other work.

Dr. Villarroel chose not to attend the Assembly.

Similar code of conduct language appeared in two later PSETI events, the Penn State SETI Symposia in 2022 and 2023.  Dr. Villarroel did not register to attend or submit an abstract for either symposium.

What happened: The SETI.news mailing list and SETI bibliography

Another source of criticism of the PSETI Center involved me directly. At the PSETI Center we maintain three bibliographic resources for the community: a monthly mailer[1] of SETI papers (found via an ADS query we make regularly), an annual review article, and a library at ADS of all SETI papers.

Dr. Villarroel wrote a paper with Marcy as a second author which does not mention SETI directly, but obliquely via the question of whether images of Earth satellites appear in pre-Sputnik photographic plates. This paper did not appear in our monthly SETI mailer, and Dr. Villarroel contacted me directly to ask that it appear in the following month’s mailer.

I declined. As I wrote:

Hi, Beatriz. Thanks for your note.

Your paper slipped through our filter because it doesn’t mention SETI at all, or even any of the search terms we key on. Did you mean to propose an ETI explanation for those sources? At any rate, if you mean for it to be a SETI paper we can add it to the SETI bibgroup at ADS so it will show up in SETI searches there (especially if there are any followup papers regarding these sources or this method).

As for SETI.news, that is a curated resource we provide as a service to the community, and we have decided that we don’t want to use it to promote Geoff Marcy’s work. This isn’t to say that we won’t include any papers he has contributed to, but this paper has him as second author and, since I know his style well, I can tell he had a heavy hand in it.

Best,

Jason

Dr. Villarroel has taken exception to my message, saying that it implies she “wasn’t the brain behind [her] own paper.” I have also learned that Dr. Villarroel feels this implication is sexist.

My meaning here was simply that Marcy’s name on an author list wasn’t an automatic bar from us considering it—we were specifically concerned with recent work he had made substantive (and not merely nominal) contributions to. I think the first part of the offending sentence makes this clear. As someone who worked very closely with Marcy for many years (and as someone who is familiar with Dr. Villarroel’s other work) I felt that I could tell that he had more than a nominal role in the work behind the paper. I felt that this and his place on the author list justified that paper’s exclusion from the mailer.

But while it was certainly not my meaning, I do acknowledge the insidious and sexist pattern of presuming that papers led by women must not be primarily their own work, and that men on the author list—especially senior men—must have had an outsized role in it. Now that Dr. Villarroel has pointed this out, I do regret my choice of words, acknowledge the harm they’ve caused, and here apologize to Dr. Villarroel for the implication. For the record: I do believe that paper was led by Dr. Villarroel and is primarily hers.

Dr. Villarroel

While I obviously disagree with some of Dr. Villarroel’s interpretations of these events, I don’t think she has publicly misrepresented them. Others have, however, perpetuated misinformation in the matter.

Specifically, I want to make clear that Dr. Villarroel was never “banned” from any PSETI-related conference, and Dr. Villarroel is not being punished for her associations with Marcy with our codes of conduct.  The prohibitions at the PSETI symposia are targeted at harassers, and include work they substantially contribute to. Dr. Villarroel is welcome to attend these conferences in any event, and to present any research that does not involve Marcy.  She and her work have not been “cancelled”, and her work with Marcy appears in the SETI bibliography we maintain.

I also want to acknowledge the large power differential between me and Dr. Villarroel. I understand that I have some power to shape the field and her career, while she has almost none over my career. It is for this reason that I have avoided discussing her in public up to this point, or initiating any engagement with her at all.  If she were more senior I would certainly have defended our actions and pushed back on her characterizations of me and the PSETI Center sooner.

At any rate, I do not bear her any ill will and I absolutely do not condone any harassment of her. That said, I understand why people are upset that she would continue to work with Marcy, and they are entitled to express that displeasure, even in potentially harsh terms, especially in private or non-professional fora, as long as they are not “punching down,” doing anything to demean, intimidate, or humiliate her, or sabotaging her work.

The Order of the Octopus SOC

I also want to acknowledge the large power differential between many of the PSETI Center’s critics and the chairs and the organizing committees of our various conferences that have contributed to the Codes of Conduct, many of whom are ECRs. This is another reason that I am responding here: to give voice to those who have far less power than I that are being attacked.

Critiques of the PSETI Center’s actions here should therefore be directed at me: I am the center director and conference chair of both symposia, and I take full responsibility for our collective actions here.

What Sorts of Codes of Conduct are Acceptable

Many have argued our bar against harassers’ work is completely inappropriate, being both unfair to Marcy and even more unfair to his innocent co-authors. I disagree, and argue that it is in fact an appropriate way to protect vulnerable members of the community who are disproportionately harmed by sexual harassment and predation.

As an aside, I note that the PSETI Center is not alone in this position; it is also consistent with our professional norms. I would point to the AAS Code of Ethics which includes a ban from authorship in AAS Journals as a potential sanction for professional misconduct. Such a sanction is analogous to a ban from authorship on conference abstracts.  It is true that this ban also affects innocent co-authors, but a harasser should not be able to evade a ban by gaining co-authors. That is not guilt-by-association for the co-authors; it is a consequence of a targeted sanction. It is certainly not harassment of those co-authors.

I admit I find this whole episode to be somewhat confounding.  A small group of ECRs got together to hold a meeting and had a no-harasser rule, this was enforced, and now years later it’s the subject of a huge thread on the IAA SETI community mailing list, the subject of Lawrence Krauss blog posts, the basis of an award by the Heterodox Academy, and creating so much drama that I need to address it here.

I also find it ironic that many complaining about the Order of the Octopus being selective about who they decide to interact with at their own conference are ostensibly doing so to protect the principle of…freedom of academic association. To be clear: Dr. Villarroel is free to collaborate with Marcy or anyone else she chooses. This is a cornerstone of academic freedom. Are the members of the Order of the Octopus not equally free to dictate the terms of their own collaborations and the scope of their own meeting, and to select abstracts as they see fit? Freedom of association must include the freedom to not associate, or else it would be no freedom at all.

Now, I acknowledge that there are limits to this freedom: one should not discriminate on matters that have nothing to do with science, especially against minoritized people. But that’s not what’s going on here: Marcy’s behavior is worthy of sanction, and our sanctions are entirely focused on harassers like him and their research, and only to protect vulnerable members of the community.  As I wrote, Dr. Villarroel is not guilty by association, and is welcome at future PSETI symposia, provided she abides by the Code of Conduct.

As for what behavior is appropriate towards those, like Dr. Villarroel, who choose to work with Marcy and the like, I think this is nuanced.  Especially in large organizations, we should honor people’s freedom of association and in general this means those people should not lose roles or jobs for this choice alone. There should be no guilt by mere association, especially by past association—indeed, as a longtime collaborator of Geoff’s, including for years after his retirement and downfall, I am particularly sensitive to this point.

But the choice to work with Marcy will have inevitable consequences. If you are working with him, many people will rightly not want to do work with you that might involve them with him, and there are excellent reasons why one might avoid working with those who have an official record of sexual harassment violations. My students are wary of working with groups that involve Marcy, because this has led to students finding themselves on conference calls with Marcy, finding themselves on author lists with him, and getting emails from him as part of the collaboration.  For me to honor my students’ freedom to not associate with Marcy, I have discovered the hard way that I need to be very careful with anyone working with him, and that I must turn my own interactions with him down to zero.

Affirmative Defense of our Codes of Conduct

At any rate, we’ve done nothing wrong. We’ve decided where we at the PSETI Center will draw the line on notorious sexual harassers like Marcy and I am confident it is the right choice for us. Other meetings and organizations will deal with this in their own way that might be different from or very similar to ours, but either way I’m confident that the majority of astronomers are comfortable with the choice we’ve made.

There is a troubling lack of empathy for the victims of sexual harassment in these abstract discussions about academic freedom. When a notorious harasser’s face and name and work pop up in a talk, we need to remember that their victims may be in the audience. Victims of other harassers may be in the audience, too. Allowing that to happen sends a message to everyone about what we, as a community, will tolerate, and whose interests we prioritize.

And the attacks on our code of conduct and the stance we have taken continue to do harm. The ECRs that helped write and enforce these codes are reminded that no matter how badly an astronomer acts, there will always be other astronomers there to apologize for them, to ask or even demand that their victims forgive them, to accept them back into the fold, to act like nothing happened, to insist that only a criminal conviction should trigger a response, to question, resist, and critique sanctions, and to attack astronomers that would insist otherwise.

If we, as a community, claim that we won’t tolerate sexual harassment, we need to show that we mean it by enforcing real sanctions that seek to keep our astronomers feeling safe. If we can’t do that for as clear and notorious a case as Geoff Marcy, then we can’t do it at all, and we will watch our field hemorrhage talent.

I am grateful to the many astronomers and others that passed along words of support to our ECRs as this criticism has rained in. I hope in the future more astronomers, especially senior ones, will speak up publicly to defend a strong line against sexual harassment in our community and show with their actions, voices, and platforms that all astronomers can be safe in our field.


[1] I should have been more precise in my language. James Davenport is the sole owner and operator of the seti.news website and mailer. For a while I and other PSETI Center members supplied the data that populated it (we haven’t had the bandwidth for a while now, but hope to start up again soon). This is why the issue of Dr. Villarroel’s paper went through me.  You can read Jim’s position on the topic here.

Why NASA should have a do-over on the name of JWST

The name of JWST, the James Webb Space Telescope, is in the news again.  If you’re not familiar with the story, I recommend the Just Space Alliance video here, which summarizes the case against keeping the name.

As I write this, I’m told that a NASA report on James Webb’s role in the Lavender Scare and the firing of LGBT NASA employees is about to become public. I’ve been involved in this because I sit as an ally on SGMA, the committee which advises the American Astronomical Society on LGBTQ+ issues. On this committee, I took the lead on learning what NASA was doing about the issue, and I spoke with the Acting NASA Chief Historian, Brian Odom, about his research on it.

Below is how I see it.  If you think we should keep the name, please read the following with an open mind. Note, some of what appears below was drafted in collaboration with other SGMA members, as part of our recommendation to the AAS.

The name of the telescope really matters, and we need to get it right

The Hubble Space Telescope (HST) has shown that the name of NASA’s flagship observatories can become synonymous with astronomical discovery and gain deep resonance and symbolism among both astronomers and the public at large. Astronomers tout the discoveries of Hubble in interviews and public talks, they festoon their laptops and backpacks with Hubble mission patches and stickers, and some of the most talented young astronomers bear the title “Hubble Fellow.” For many members of the public, the Hubble Space Telescope may be the only scientific instrument or laboratory they can name.

Since JWST is in many ways a successor to HST, and is likely to occupy a similarly important role in astronomy and the public’s perception of the field, it is especially important that its name be appropriate, that it inspire, and that it be something everyone who works on and with it can be proud of.

Despite this, NASA gave the telescope an uninspiring name

When the name was announced, there was a distinct sense of confusion and disappointment in the community.  “Who’s that?” was the refrain.

I and many others sort of accepted it because we didn’t really think too hard about it, but it’s a huge missed opportunity. The name doesn’t inspire. When people ask why it’s called that, most astronomers shrug and say “he was the NASA administrator during the Apollo era” and move on to the next topic. It’s a name only a NASA administrator could love.

This isn’t to say that administrators don’t do important things that should be acknowledged! Administration is hard and good administration is so valuable it absolutely should be celebrated. And perhaps if his legacy were different astronomers would celebrate his name and be glad to see his name on this telescope.

But the name just has no resonance here.

Despite this, NASA named the telescope with no input from stakeholders

NASA’s international partners were not involved in the decision. Astronomers were not involved in the decision. The people who built it were not involved in the decision. Lawmakers and policymakers were not involved in the decision. Elected officials were not involved in the decision.

The name was poorly chosen, and does not reflect NASA’s (purported) values

The decision was made by one NASA administrator, to name the telescope after another NASA administrator, and this name has been stubbornly kept by a third NASA administrator.

This is bad precedent, and the current fallout is a great illustration of why. In James Webb’s NASA, gay employees were fired. Clifford Norton was arrested, interrogated, and fired.

This is not the organization that today’s NASA aspires to be (we hope!).

It’s not too late to change it

NASA changes the names of space telescopes and missions all the time. It’s very common for things to have boring names on the ground (AXAF, SIRTF) and inspiring names once they’re working (Chandra, Spitzer).  We all adjust. It’s not a big deal.

At this point, NASA’s resistance has gone from stubbornness to recalcitrance. Already, NASA employees are refusing to use the name in prominent publications. The Royal Astronomical Society says it expects authors of MNRAS not to use the name.  The American Astronomical Society has twice asked the administrator to reopen the naming process (and received no response!).  This is an error that only grows as NASA refuses to fix it.

NASA needs to think about the people using the telescope

Think for a moment about the LGBT NASA employees working on JWST today. They want to be proud of their work, proud of the telescope, proud as LGBT NASA employees.

But just to use the name of the telescope is to name a man who, undisputedly, would have had them fired. This feels perverse to me.

Right now, the premier fellowship in astronomy is the Hubble Fellowship. When Hubble finally goes, will it become the Webb Fellowship? If you advise students, how would you feel recommending an LGBT student apply for that fellowship? How would you feel when they tell you they’re uncomfortable attaching the name of someone who undisputedly would have fired them to their career, to their CV, to their job title?

This, of course, isn’t just a “gay issue”. We all have LGBT colleagues, friends, and family. Beyond that, we want astronomy, space, and NASA to be inclusive and inspiring in all ways. What precedent does this whole fiasco set for that future we seek?

The telescope deserves a better name.  Astronomy deserves to have a telescope that reflects our values. America and the world deserve a telescope that inspires. Even those who are defending Webb have to concede the current name is not doing those things.

Let’s do better. Why not?


All that said, there is a lot of interest in the specific accusations of homophobia and bigotry by Webb. I’m pretty sure that will be the focus of the NASA report that’s about to come out and of most of the ensuing discussion.

I think this is a distraction. Now, the evidence seems to indicate that, at the very least, he did not see enough humanity in LGBT people to protect them from unjust policies. But regardless, his bigotry is not part of my argument for changing the name. (That said, if there is some sort of smoking gun document revealing his personal involvement in these firings or personal animosity towards gay people, that makes the case even stronger.)

And even though they are beside my point, I find most of the defenses of Webb lacking.  Here are some common ones I see and hear:

All of the accusations against Webb (the misattributed homophobic quote, his place in the chain of command) are false.

There is a long back story to how this issue came up, of a few specific accusations that turned out to be false, and others that turned out to be very true, and so on.  You can easily find it if you Google around or search on Twitter.

The bottom line is that he had a leadership role at State during the Lavender Scare and was chief administrator at NASA when LGBT employees were fired (and worse). This is undisputed, and it is enough.

This is just a woke mob “canceling” and smearing the name of an innocent man.

This isn’t James Webb on trial. I’m not basing my argument on his being a nasty bigot, because even if he wasn’t we should still rename the telescope.

The standards for putting someone’s name on the most important scientific instrument of a generation should be very high, and there’s no shame in not having your name on it.

But what if he was, in his heart, not a bigot and actually worked behind the scenes in undocumented ways to minimize the Lavender Scare? I think, given the balance of evidence, that this is unlikely, but just to entertain the logical possibility: in that case I’m sorry his legacy is caught in the middle of this and I’m sure this is infuriating for his family and people who respected him a lot, but this is much bigger than James Webb and his legacy. Again, this is not “James Webb on trial”; it’s “what should we name the telescope?”

Wasn’t Webb just a “man of his time”? Why should we judge people in the past by standards of today?

This argument all but concedes he was a bigot, which is enough to rename the telescope. But, entertaining it:

First of all, plenty of people at the time understood that sexual orientation had no bearing on one’s ability to work at NASA. Most LGBT people understood that, for starters.

Secondly, the argument that it made them susceptible to blackmail to foreign adversaries and so it was objectively reasonable to fire them is not as strong as it looks. After all, one way to fix that problem is to make it absolutely clear to employees that if they are outed, they won’t lose their livelihood.  Every fired gay employee is a gift to potential blackmailers, handing them leverage over other closeted employees on a silver platter.

But even granting he was a man of his time, this argument completely fails.

Of course we are judging the namesake of the telescope by today’s standards.  Why would we choose any other? We are here today, with the telescope of today. Its name should reflect today’s standards! Why wouldn’t it?

Don’t you worry that people of the future will “cancel” great people from our time for moral lapses by future standards?

I don’t worry about that at all. If I end up (in)famous for something and people in the year 2500 spit after saying my name because I ate meat from slaughtered livestock, which they consider an unspeakable evil—well, that makes sense right? Why would you celebrate people who lived lives antithetical to your values?

Firing LGBT people at State and NASA was the law of the land at the time.  There’s little he could have done and he wasn’t directly involved anyway.

If we concede that he was just doing his job, then we also concede away the only good argument for naming the telescope after him.  James Webb did not design or build the Saturn V rockets, he did not calculate the trajectories of the capsules, he did not walk on the Moon. He was a (by all accounts highly effective) administrator who oversaw those things.

If he gets credit for the good things that happened on his watch obviously he should get demerits for the bad.

There’s no evidence he’s a bigot. His heart wasn’t in firing LGBT people the way it was in, for instance, integrating NASA. 

There’s a double standard at play where simply listing his (very impressive!) accomplishments at NASA is sufficient for justifying the name, but when it comes to bad things happening on his watch we need some sort of smoking gun, evidence of mens rea, to understand where his heart was on the matter.

Anyone demanding evidence of his bigotry should be ready to put forward evidence of his personal virtues on other items, not just lists of good things happening on his watch.

OK: James Webb went above and beyond to integrate NASA. He gave an impassioned speech about it.

Based on what I’ve seen, we really don’t know his views on race.  We do know that Johnson charged him with using NASA as a lever to integrate the South.  We do know he was a loyal foot soldier who understood the assignment and got it done.  It’s unclear to me what extracurricular activities he was doing to promote racial equality.

But isn’t every name problematic? Everyone in the past had something that people today will object to.

First of all,  I’m sure we can find people who didn’t have a demonstrated track record of ruining innocent people’s lives like Webb’s NASA did.

Secondly, the onus of solving the problem of what the perfect name is should not be on the people pointing out the current problem! This is a great question and one that obviously needs addressing before we name a project as important as JWST. NASA should put together a process for addressing it, which means reconsidering the name of the telescope!


Is SETI a good bet? Part III: Ambiguity and Detectability

In Part I, I laid out the claim that technosignatures must be less prevalent than biosignatures, and showed that while that certainly could be true, the opposite is actually quite plausible, and by a huge factor.

In Part II we looked at the longevity term and, again, found that even though technology has been on Earth for much less time than life has, it’s still possible, and even plausible, that its typical lifetime in the Galaxy is actually much longer than that of life.

In this part, we look at two more criteria: detectability, and ambiguity.

Detectability

How detectable are technosignatures?  Except for a few things like radio and laser transmissions, it’s not actually very clear. Most technosignature strengths have not been worked out in detail!  An ongoing project led by Sofia Sheikh is to determine Earth’s own detectability via its technosignatures.

Héctor Socas-Navarro proposed a nifty metric called the ichnoscale that compares a technosignature’s strength to that produced by Earth today. So, Earth today has, by definition, i=1 for all of its technosignatures.  How does their strength compare to our biosignatures?

If you ignore one-offs like the Arecibo Message, it’s actually not clear what our “loudest” technosignature is.  Observers on planets that see Earth transit could try to measure our atmospheric composition, and Jacob Haqq-Misra has worked out roughly how hard it would be to detect our CFCs, and Ravi Kopparapu has done something similar for NOx.  Both would be very challenging to detect…but then, so would our ozone and methane.  Which is stronger? I’m not sure.

I do know that the full SKA is supposed to be strong enough to have a shot at detecting our regular aircraft radar emissions at interstellar distances in coming decades. This means that being able to detect ichnoscale=1 technosignatures is a few decades out, and that feels similar to the time before we could detect biosignatures around an Earth analog.

The bottom line is that we don’t know whether Earth’s technosignatures are more or less detectable than its biosignatures with Earth technology from nearby stars, but it’s probably a close call, and it could easily be that technosignatures win.

Ambiguity

The ambiguity of technosignatures depends on the signature. Waste heat from Dyson Spheres is quite ambiguous: any circumstellar material should generate waste heat. A narrowband radio signal, however, can only be technological (although its origin could be ambiguous).


Dynamic spectrum of the Voyager I carrier wave—a clear example of an unambiguous technosignature

So technosignatures run the gamut. Clearly, searching for an unambiguous one is better on that score, but ambiguous ones may require less contrivance—waste heat is an inevitable consequence of energy use, but there’s no reason aliens would have to use narrowband radio transmitters. Balancing this requires thinking about the axes of merit of technosignatures.

But the same is true for biosignatures! There are examples of what an unambiguous detection would look like (microbes swimming in Europa’s subsurface ocean), but there are plenty on the other end, too, especially for remote detection: detecting oxygen or methane in an alien atmosphere is a potential biosignature, but both species can also be generated abiotically.

Even identifying something that would serve as an “agnostic” (not specific to Earth life) and unambiguous biosignature is a major challenge in astrobiology. The most probable path to success, IMO, is identifying a “constellation” of ambiguous biosignatures that together suggest strong disequilibrium chemistry maintained by metabolism (oxygen and methane together, for instance).

So as far as ambiguity goes, biosignatures and technosignatures share the same problems, and neither has a clear advantage. Both have many examples of ambiguous signatures, and both can offer examples of clean detections.

Conclusions

This last point illustrates something important: biosignature searches and technosignature searches have a lot in common. Both search for the unknown, trying to balance being open-minded about what there is to find while letting what we know about Earth life inform us. Both struggle with identifying good signatures to hunt for, how to handle ambiguity, and how to interpret null results.

But the communities don’t discuss these issues much with one another. Indeed, astrobiologists have called for and launched an ambitious project to nail down standards of life detection without acknowledging or even mentioning the significant work on the topic over in SETI. Similarly, technosignature search would benefit from this sort of rigorous exercise.

I hope our new paper will inspire better cross-pollination between the two communities, and a better balance of effort between the two methods of finding life. Since we don’t know which has a better chance of success, we should follow a mixed strategy to maximize our chances.

Our paper, written with Adam Frank, Sofia Sheikh, Manasvi Lingam, Ravi Kopparapu, and Jacob Haqq-Misra, is now published in Astrophysical Journal Letters.

Is SETI a good bet? Part II: Drake’s L for biology and technology

In Part I, I laid out part of the argument for why SETI is not worthwhile compared to searches for biosignatures. In this part, I’ll address the next big part of the argument: longevity.

Longevity

Last time we summarized an argument for looking for biosignatures instead of technosignatures as:

N(tech)/N(bio) = ft * Lt/Lb << 1

Let’s take a look at that last ratio, between the time on a planet technosignatures are detectable and the time biosignatures are detectable. Need it really be small?

The zeroth-order way to think of it is in terms of the history of the Earth: biology has been here for around 4 billion years, and biosignatures detectable from space for some significant fraction of that—at least 1 billion years, say.  But our technosignatures only just recently got “loud” enough to be detectable—decades ago, say.  That’s a factor of about 10^-8!

And that’s fair, but there are a few reasons to expect that in some cases it could be much larger, and even greater than 1!

Humanity may be a poor guide to longevity

Lots of people are pessimistic about our future on Earth, but we should be careful not to project our perception of human nature onto aliens. Technological alien life may be very different from ours, with a different evolutionary trajectory and relationship with its planets.

But even if we do use humanity as a guide, we should not assume we have a good sense of what our longevity is. Unless something happens that makes humans go extinct, there’s no reason to think our technosignatures will permanently disappear.  Even ecological disasters, nuclear war, or deadly pandemics, horrible as they would be, are unlikely to actually erase Homo sapiens from the planet completely.  Over geological or cosmic timescales, we might have many periods of tragedy and low population, but that only keeps the duty cycle less than 1; it does not shrink Lt by orders of magnitude.

And, of course, technology doesn’t just give us a way to end our species’ existence, it offers a way to save it.  We can, in principle, deflect killer asteroids, cure pandemics, and alter ourselves and our environment to survive in places and times our biology alone would not allow.

In other words, it’s not at all clear why Lt for humans could not be orders of magnitude larger than it has been in the past.

And, even if you are pessimistic about humanity’s actual tendencies to do those things well, there’s no reason to project that pessimism onto other species.

Humans aren’t Earth’s only intelligent species

Further, even if humans do go extinct soon, that would not necessarily end Earth’s technosignatures! Most or all of the traits that make humans so good at building “big” technology with detectable technosignatures exist elsewhere in the animal kingdom. Tool use, social behavior, communication and language, problem solving, generational knowledge—all of these things can be found not just among distant relatives in Chordata but over in Mollusca, too, in the squids and octopi.


Squid communicate, hunt, and live together.

That’s important for two reasons: one is that it means there is no reason another species could not arise that exhibits our level of technology, even in the sea.  Another is that since molluscs and mammals share no intelligent common ancestor, these traits aren’t accidents of humanity’s particular evolutionary pathway, but have arisen independently multiple times. This means they can arise independently again, and not just on Earth!

Also, surprisingly, we can’t even be sure something like our level of technology has not existed on Earth in the past!  Adam Frank and Gavin Schmidt have advanced the Silurian Hypothesis, that such a thing has happened—not because there is any evidence for it, but to point out the surprising fact that we have almost no evidence against it.  In fact, all evidence of Earth’s “Anthropocene” will have either disappeared or become ambiguously technological after a few million years.

In other words, we need to include other species in Lt for Earth, and we don’t really have any evidence-based reason to think that number is zero, either in Earth’s future or Earth’s past.

Technological longevity has no obvious upper limit

Lb, the lifetime of biosignatures, has a hard upper limit: the inevitable evolution of the planet’s host star, which will eventually scorch the surface and could even engulf the planet entirely.

Technology, however, has no such upper limit. In principle it could allow us to survive on a much hotter Earth, and, as we discussed in Part I, it can spread to other places where it can persist long after the biosphere that generated it.

So there is no reason that Lb must be larger than Lt, and the latter has a higher upper limit.  In other words, there could be alien species—or even future Earth species, like us—that build technology for longer than life exists on Earth.

Technology can exist without biology

Technology can outlive its creators. Obvious Earth examples include the Great Pyramids, which would serve as markers that humans were here and did amazing things long after something happened to humans.  Our interstellar probes will last a very, very long time, no matter what happens to us.  And so such technology on a grander scale from an alien species might be detectable for a very long time.

But also, technology might be able to self-perpetuate. Much has been written about the possibility of self-assembling machines and AI that would allow machines to spread and reproduce much like life. This possibility is almost certainly not limited by any fundamental engineering or physical principle, and the fact that we can even outline how it could be accomplished with something like existing technology suggests it may not be far off.  Unbound by a biosphere, why couldn’t Lt>Lb?

The bottom line on longevity

As we write in our paper:

Using Earth as a guide for our expectations for Lt/Lb is probably unreliable because we do not know Lt for Earth’s past; even if we did we should not use it to predict Lt for Earth’s future, and even if we did, we should not expect Earth to be a good guide for alien life, and even if we did, we should expect a broad distribution of longevities across alien species.

…and we should also not assume that technology must be bound to biology at all!

Next time: Ambiguity and Detectability.

Is SETI a good bet? Part I: What the Drake Equation misses

There are two major ways we can look for alien life:  look for signs of biology or look for signs of technology.

SETI includes searches for the latter—technosignatures. These might include big bright obvious information-rich beacons (in the radio or with lasers, for instance), or they might be passive signs of technology like waste heat, pollution, or leaked radio emission from radar and communications.

I have often seen the argument that this is nice to look for but must be less likely to work than searches for biosignatures. The flaws in this argument have been pointed out and analyzed for as long as SETI has been a thing (most of them were hashed out in the ’60s). But that discussion isn’t actually familiar to most astronomers and astrobiologists, and so, working with the CATS collaboration (Characterizing Atmospheric Technosignatures) led by Adam Frank, I’ve written a paper summarizing them.

In this series of posts I’ll break it down, following the argument in our new paper in Astrophysical Journal Letters.

The Role of the Drake Equation

One part of the argument goes back to the Drake Equation.


The man himself with the equation

Let’s look at why:

If we just count up the parts of the Drake Equation that lead to some kind of life to be found, we might end up with something like

N(bio) = R*fp*np*fl*Lb

where R* is the rate of star formation, multiplied by the usual fraction of stars with planets (fp), the mean number of planets per planet-bearing star that can support life (np), and the fraction of those planets on which life arises (fl).

Here, Lb is the lifetime of detectable biosignatures. N(bio) is then the number of biosignatures there are to find out there.

We can also rewrite the full Drake Equation in a similar manner for any technosignature:

N(tech) = R*fp*np*fl*ft*Lt

Here, we’ve added ft representing the fraction of planets where technology arises, and now Lt is the lifetime of detectable technosignatures.

Based on this reasoning, SETI looks like a terrible idea compared to searches for biosignatures! It’s a tiny, tiny subset of all possible ways to succeed, because (this reasoning goes):

N(tech)/N(bio) = ft * Lt/Lb << 1

Why? Because ft <= 1 by definition, and since you need to have non-technological life before you can have technological life, Lt < Lb. This would seem to justify the huge imbalance in the time and money NASA spends on astrobiology in general compared to SETI (which has received almost none until recently).

This is the abundance argument against technosignatures, and it is wrong, for many reasons! Let’s take a look at why.

Abundance

First of all, let’s think about the Solar System.  N(bio) is, as best as we can tell, exactly 1.  If there are other biosignatures in the solar system, we have not noticed them yet, so they must be very hard to detect.

And what is N(tech)?  Well, based purely on what we can detect with our equipment it’s at least 4!  Earth is loaded with technosignatures, but we also detect them from Mars all the time, and Venus and Jupiter also have them. We also have several active interplanetary and interstellar probes, and many, many more derelict objects are out there too.

This gives us our first clue about how the reasoning above fails: if technology can spread through space, then one site of biology can give rise to many sites of technology.

And this, of course, has been appreciated by SETI practitioners for decades. It’s the basis of the Fermi Paradox, which asks why, if alien life can spread so thoroughly through the Galaxy, it isn’t here right now. Drake’s equation is based on the idea that it’s easier to communicate via radio waves than to travel via spacecraft, but of course one doesn’t preclude the other, and if both are happening, then N(tech) could be much larger than the equation says.

This is not really a major failing of the equation, whose original purpose was to justify SETI.  After all, if you can conclude there is something to find in the absence of spreading, that’s a sufficient condition to go looking.  The equation is often misinterpreted as foundational, like the Schrödinger Equation, as if you can calculate useful things with it.  Instead, it’s best thought of as a heuristic, a guide, and an argument.

So, how large could N(tech) be? Well, in the limit of the Fermi Paradox reasoning, it could be upwards of 100 billion, even for a single point of abiogenesis!  We’ve written about this before, for instance here.

So, the argument isn’t that this will happen, just that N(tech) has a higher ceiling than N(bio).  This long tail out to large possibilities (both in the sense that we are ignorant of the right answer, and in terms of a distribution among all of the alien species) means that it is not just possible but plausible that SETI is much more likely to succeed than other life detection strategies.
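
To make the arithmetic concrete, here’s a toy version of the two equations in Python. Every number is a placeholder invented for illustration, and n_spread is not a term in the Drake Equation at all; it’s the spreading factor discussed above:

```python
# Toy version of the abundance argument. Every number here is an illustrative
# placeholder; only the structure comes from the equations above.

R_star, f_p, n_p, f_l = 1.0, 0.5, 2.0, 0.1  # star formation rate and the usual fractions
L_b = 1e9       # yr, lifetime of detectable biosignatures
f_t = 0.01      # fraction of life-bearing planets that develop technology
L_t = 1e10      # yr, lifetime of detectable technosignatures -- can exceed L_b
n_spread = 1e6  # sites of technology per origin of life, if technology spreads

N_bio  = R_star * f_p * n_p * f_l * L_b
N_tech = R_star * f_p * n_p * f_l * f_t * L_t * n_spread

print(f"N(bio)  ~ {N_bio:.3g}")            # ~1e8
print(f"N(tech) ~ {N_tech:.3g}")           # ~1e13
print(f"ratio = f_t * (L_t/L_b) * n_spread = {N_tech / N_bio:.3g}")
```

With spreading in the picture, the naive ft * Lt/Lb bound simply no longer applies.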

Next time: The second reason to do SETI: technosignatures may be long-lived.

The Lifetime of Spacecraft at the Solar Gravitational Lens

This is a guest post by Stephen Kerby, a graduate student at Penn State.

Imagine you are a galaxy-spanning species, and you need to transmit information from one star to another.  You can just point your radio dish at the other star, but space is big, and your transmission is weak by the time it reaches its destination.  What if you could use the gravitational lensing of a nearby star to focus your transmission into a tight beam while monitoring local probes? What if you could use this nice yellow star right here, the locals call it the Sun? What if the locals notice your transmitting spacecraft from their planet right next to the star?

Recently, there has been renewed interest among human scientists in using the solar gravitational lens (SGL) to focus light for telescopic observations (as in the FOCAL mission) or for interstellar communication (as described in Maccone 2011). A spacecraft positioned >500 AU from the Sun could collect focused light bent by the Sun’s gravitational field, dramatically increasing the magnification of a telescope or the gain of a transmitter for a point on the exact opposite side of the Sun (the antipode). The picture below shows how the SGL could be used for transmission of an interstellar signal, and the arrangement can be reversed to focus light onto a telescope.
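
As a quick sanity check of that “>500 AU” figure, the textbook weak-field deflection formula gives the minimum focal distance directly. This sketch uses only standard physical constants; nothing in it comes from the mission studies themselves:

```python
# Quick check of the ">500 AU" focal distance: a light ray grazing the solar
# limb is deflected by theta = 4GM/(b c^2), so grazing rays cross the axis at
# d = b / theta. Textbook constants only.

G  = 6.674e-11   # m^3 kg^-1 s^-2
M  = 1.989e30    # kg, solar mass
c  = 2.998e8     # m/s
b  = 6.957e8     # m, solar radius = impact parameter of a grazing ray
AU = 1.496e11    # m

theta   = 4 * G * M / (b * c**2)   # radians; ~1.75 arcsec at the limb
d_focus = b / theta

print(f"deflection at the limb: {theta * 206265:.2f} arcsec")
print(f"minimum focal distance: {d_focus / AU:.0f} AU")   # ~548 AU
```

Rays passing farther from the limb focus at even larger distances, which is why the usable region starts near 550 AU and extends outward.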

In the Astro 576: “The Search for Extraterrestrial Intelligence” graduate course at the PSU Dept. of Astronomy and Astrophysics, I participated in a collaboration with over a dozen colleagues to examine a parallel question: might an extraterrestrial intelligence (ETI) be using an SGL scheme to build an interstellar transmission network? If so, we might be able to detect the transmitting spacecraft if its transmissions intersect the Earth’s orbit (as proposed by Gillon 2014). Such a spacecraft would be visible opposite on the sky from its interstellar target and would be most visible if it is along the ecliptic plane (the same plane as Earth’s orbit).

While the collaboration focused on conducting a prototype search at the antipode of Alpha Centauri using Breakthrough Listen at the Green Bank Telescope (paper forthcoming!), I also conducted a side project to make predictions about what sort of engineering would go into such a transmission scheme.  A paper based on that project and co-authored by Dr. Wright was recently accepted for publication in the Astronomical Journal and is now available on the arXiv (http://arxiv.org/abs/2109.08657).

Initially, my project set out to tackle a broad question: it’s physically possible to use the SGL for an interstellar transmission, but is it practical from an engineering standpoint? After all, if an ETI needs to overcome myriad challenges to get the SGL transmission system online, it might be easier just to skip the mess and be more direct.  If we can quantify the challenges facing an SGL scheme, we might be able to predict which stars might be included in an ETI transmission network and whether our Sun is a likely host.

First, we focused on the difficulty of maintaining an alignment with the target star. Normally, when transmitting using a radio dish, you need to point the dish to within a few arcminutes of the target, depending on the gain (degree of focus) of the outgoing beam.  However, the impressive gain boost of the SGL means that the interstellar transmission could be only an arcsecond across, 60x narrower than even an arcminute and much more intense. A spacecraft trying to aim at a target star needs to stay aligned with that much precision.

We soon found that there are numerous dynamical perturbations on the spacecraft-Sun-target alignment.  First, the Sun is pulling the spacecraft inwards; if the craft drifts closer than about 500 AU to the Sun, it can’t transmit using the SGL.  Next, the Sun is being jostled around by its orbiting planets (shown in the GIF below); the spacecraft needs to expend propellant to counter these motions, which comes out to roughly 10x greater than the inward force. A couple of linear effects, like the proper motion of the target star, contribute small corrections as well.

This has implications for local artifact SETI searches. While the Sun has several perturbations (mostly the reflex motion from Jupiter), it is a much better host for an SGL than a star with a close binary companion or a close-in giant planet. Close binary systems like Alpha Centauri and Sirius are terrible hosts for SGL spacecraft because of the reflex motions from the other stars in the systems. If we are trying to detect an SGL interstellar transmission network, we could focus on nearby stars that are unperturbed by massive planets, like Proxima, Barnard’s Star, or Ross 154.

Next, we addressed how those challenges might be overcome.  Clearly, a spacecraft could just fire its engines and counter the perturbations to maintain the alignment with the target.  Doing a quick back-of-the-envelope calculation, we found that a modern chemical, nuclear, or electric rocket engine could maintain alignment with an interstellar target for up to a few thousand years. Table 2 from the paper shows how long different propulsion systems could resist the perturbations of the Sun’s gravity (~0.5 m/s/year acceleration) or including the reflex motions imparted on the Sun by the planets (~8 m/s/year).
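
To see where numbers like that come from, here’s a sketch of the back-of-the-envelope lifetime estimate. The two annual delta-v drains are the figures quoted above; the total delta-v budgets are hypothetical round numbers, not the values in the paper’s Table 2:

```python
# Back-of-the-envelope station-keeping lifetimes. The delta-v budgets below
# are illustrative round numbers, NOT the paper's Table 2 values.

GM_SUN = 1.327e20   # m^3/s^2, the Sun's gravitational parameter
AU     = 1.496e11   # m
YEAR   = 3.156e7    # s

def annual_dv_solar_gravity(r_au):
    """m/s of delta-v per year needed to cancel the Sun's inward pull at r_au."""
    return GM_SUN / (r_au * AU) ** 2 * YEAR

dv_gravity   = annual_dv_solar_gravity(550)  # ~0.6 m/s/yr, close to the ~0.5 quoted
dv_perturbed = 8.0                           # m/s/yr including planetary reflex motion

budgets = {"chemical": 5e3, "electric/ion": 5e4}  # total delta-v in m/s, illustrative
for engine, dv in budgets.items():
    print(f"{engine}: {dv / dv_gravity:.0f} yr (gravity only), "
          f"{dv / dv_perturbed:.0f} yr (all perturbations)")
```

The lifetime is simply the delta-v budget divided by the annual drain, which is why countering the planetary jostling (16x the drain) cuts the lifetime by the same factor.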

On a human timescale, this is a long time; Voyager 2, our longest-lived active probe, is 44 years old, and there are obviously other challenges to operating autonomously for such a long period. In artifact SETI, ten thousand years is a blink of an eye.  The universe has existed for billions of years, which means that an ETI might have activated their relay spacecraft around the Sun millions of years ago. We could only detect it actively transmitting if it has survived and maintained alignment for the whole time.

So, how could an ETI extend the longevity of their spacecraft? They could reduce the total gain of the system so that they can ignore perturbations by the planets, but that blunts the benefits of an SGL arrangement. They could use advanced rocketry like fusion engines or solar or EM sails to dramatically increase their propulsive capabilities. They could use clever navigational techniques to get efficiency in exchange for simplicity or downtime. Finally, they could just let their probes die off and fall derelict, sending along a constant stream of replacements when needed.

So, we’ve used the dynamical features of the Sun and solar system to predict a few engineering challenges that must be overcome to use the SGL for transmission or science.  Then, we used those challenges to predict what to look for during an artifact/radio SETI search at the antipode of a nearby star.  As mentioned earlier, a collaboration is analyzing observations at one such antipode.  With a few proposals flying around, it looks like it will soon be an exciting time to be a gravitational lens!

If I were an eccentric trillionaire and wanted to help detect signals from an ETI, I would fund the construction of the All-Sky-All-Time-All-Wavelengths array.  Placing millions of telescopes of all kinds around the globe and across the solar system, I could survey every single spot in the sky at all wavelengths, nonstop. Certainly, if an ETI is sending signals then we should be able to detect them with a system like that. Sadly, no amount of money in the world can make this dream a reality, so we need to narrow down our SETI investigations. We can’t look for signals all the time or at all wavelengths or at every position.

A valuable avenue of SETI research is making predictions to guide observations to those with a reasonable chance of providing valuable scientific results.  In the past, notable predictions of this type include the hypothesis of “watering hole” frequencies and focused searches on stars that can observe Earth as a transiting exoplanet. Artifact SETI, the search for signs of physical ETI technology near our own solar system, starts with educated guesses about what that technology looks like.

Of course, it’s impossible to say whether there actually is an ETI-placed spacecraft using the SGL to transmit until we’ve surveyed more antipodes. Still, our research into the challenges of operating an SGL relay is informative both for our SETI searches and for aspirational proposals to use the SGL for our own science.

 

Strategies for SETI III: Advice

In this series on the content of my recent paper on the arXiv (and accepted to Acta Astronautica) I’ve mostly just described ways to do SETI.  I conclude with some highly subjective advice on SETI for those jumping into the field.

  1. Read the literature

There are a lot of SETI papers, and very few of them have ever been cited. Going through the literature as we have for the SETI bibliography, it’s striking how many times the same ideas get discussed and debated without referencing or building on prior work on the topic.

This is partly because the field is scattered across journals and disciplines, and because there’s no curriculum (yet!). The result is a lot of wasted effort.

Fortunately, you can now keep up with the field at seti.news and search the literature at ADS using the bibgroup:SETI search term.

  2. Choose theory projects carefully

I was taught by my adviser (who got it from his adviser, George Herbig) to “stay close to the data”. I took this to mean two things: always make sure I understand the data and don’t go chasing data reduction artifacts (I like to try to see strong signals myself, by eye, in the raw data when I can, to confirm they’re real); and, in theory projects, think hard about what the data say and how they might mislead.

The most useful theory projects are the ones that help searches. A paper that calculates the observability of a particular technosignature, using parameters that let observers translate their upper limits into constraints on technologies, is staying close to the data.  One speculating on the far-future fate of aliens at the edge of the universe—well, it may be very interesting, but it’s not close to the data.

Two topics that I think are probably overrepresented in the literature are the Fermi Paradox and the Drake Equation. Now, I’m very proud of the papers I’m on about the Fermi Paradox, so I won’t say to avoid the topic, but ultimately the Fermi Paradox is not actually a problem that I think demands a solution. Such work is most useful when it leads to observational predictions, and so informs searches and the interpretation of data.

But continuing to argue about it after so much ink has been spilled, and in a situation where we have so little data to go on, creates diminishing returns. Kathryn Denning describes the “now-elaborate and extensive discourse concerning the Fermi Paradox” as being “quite literally, a substantial body of analysis about nothing, which is now evolving into metaanalysis of nothing,” continuing, “I would not suggest that these intellectual projects are without value, but one can legitimately ask what exactly that value is, and what the discussion is now really about.” And, referring to early work on the problem:

Thinking about that future [of contact with ETI] was itself an act of hope. Perhaps it still is. But I want to suggest something else here: that the best way to take that legacy forward is not to keep asking the same questions and elaborating on answers, the contours of which have long been established, and the details of which cannot be filled in until and unless a detection is confirmed. Perhaps this work is nearly done.

I think she’s right, and this goes for work on the “Great Filter” and “Hard Steps” models in SETI, too.

The Drake Equation, similarly, occupies a big chunk of the theory literature. The equation is very useful and in a way sort of defines the field of SETI, but ultimately it’s a heuristic, and its purpose is to help us think about our odds of success. But even Frank will tell you that while it’s useful to plug in numbers to get a sense of whether SETI is worthwhile (it is!), it’s not meant to be solved or made exact. It’s not a foundational equation like the Schrödinger equation from which one derives results; it’s more like a schematic map of the landscape to help orient yourself.

So while there’s no problem with using the Drake Equation to illustrate a point or frame a discussion, I think working to refine and make it better is to misunderstand its role in the field.

  3. Think about the nine axes of merit

Sofia Sheikh has a very nice paper describing how to qualitatively assess the merit of a particular technosignature.  When proposing a new technosignature, I recommend thinking about all nine axes, but one in particular: “ancillary benefits.” This gets to Dyson’s First Law of SETI Investigations: “Every search for alien civilizations should be planned to give interesting results even when no aliens are discovered.”

There are three reasons for this. The first is the funding paradox that null detections must be used to justify yet more effort. If there are ancillary benefits, then this is easier. The second is that doing other work with the data or instruments you use means you stay connected to the rest of astronomy (this also helps junior researchers get jobs and stay employed). The third is that it’s easy to get discouraged after years of null results. Having something to work on in the meantime helps keep one going.

This point should not be taken too strongly, however. Radio data of nearby stars might really have no practical application beyond a null detection, and that’s OK. Those null detections are still good science! Also, the skills one uses to do that search, and the equipment built to do it, are all transferable to interesting astronomy problems.

  4. Engage experts

Lots of SETI papers written by physicists (and others) go way outside the authors’ training. There’s a particular tendency among physicists to feel like, since we’re good at physics and physics is hard and everything is fundamentally physics, we can just jump into a field we know little about and contribute.

Engaging experts in those fields will both help us not make mistakes and broaden the field by bringing them into it so they can see how they can contribute. It’s win-win! And we should do it more.

  5. Plan for success when designing a search

One should think hard about upper limits and what result one will have when one is done searching before one starts the search. This is easier said than done, but really helps sharpen one’s work, and ensures that a useful result will come out at the end.

It also helps draw in experts! A SETI skeptic might not want to help you, say, look for structures on Mars lest they be drawn into another Face on Mars fiasco, but if they see that they’re contributing to an upper limit (that confirms their priors!) on such faces, they will be more likely to really help.

  6. Stay broad minded

We all come to the problem of SETI with very different priors for how SETI can succeed, and so will invariably encounter practitioners pursuing what we feel are very unlikely or misguided paths to success. It helps to remember that the feeling may be mutual.

In particular, we can acknowledge the value in the exercise of, say, considering ‘Oumuamua as an alien spacecraft without falling into the “aliens of the gaps” trap. That is, we should distinguish between claims that

  1. Our prior on a particular technosignature is too small, using a particular case study as an example, and
  2. A particular case study is likely to be a technosignature

The first is entirely appropriate. Before ‘Oumuamua, I did not think much about the possibility of alien spacecraft in the solar system. Now, I think I have a much better informed prior on the likelihood and form of such a thing.

The second requires extraordinary evidence because our prior on such a thing is (presumably) quite small.

  7. Stay skeptical, but not cynical

I’ll close by just quoting the end of my paper:

Not all SETI researchers believe they will have a good chance of success in their lifetimes, but such a belief surely animates much of the field. It can therefore be challenging to maintain a scientist’s proper, healthy skepticism about one’s own work, especially when coming across a particularly intriguing signal.

I suspect everyone who engages in the practice long enough will come across what looks to be a Wow! Signal and, at least briefly, dream of the success that will follow. The proper response to such a discovery is a stance of extreme skepticism: if one is not one’s own harshest critic, one may end up embarrassing oneself, and losing credibility for the field. It is at these moments that Sagan’s maxim should have its strongest force.

But one should also not let the product of such false alarms be a cynicism that leads one to focus entirely on upper limits and dismiss all candidate signals before they are thoroughly examined as just so much noise. There is a wonder that brought many of us into the field that must be nurtured and protected against the discouragement of years or decades of null results that Drake warned about. One should cherish each false alarm and “Huh? signal” as an opportunity for hope and curiosity to flourish, “till human voices wake us, and we drown.”

 

You can find the paper here.

Strategies for SETI II: Upper Limits

Last time I discussed the “kinds of SETI” I laid out in my new paper on the arXiv. This time, I’ll discuss a nifty plot I made about upper limits.

At a workshop at the Keck Institute for Space Science a couple of years ago, I put together a graphic describing how I think we should think about placing upper limits in SETI:

The idea is that if you do a search and find nothing, you need to let people know what it is that you did not find so that we can chart progress, draw conclusions, and set markers down for what the next experiment should look like. My analogy is the dark matter particle detection community, which (similarly to SETI) must solve the funding paradox of using a lack of results to justify continued funding.

The idea is that you have some parameter that marks the strength of a purported (ambiguous) technosignature (like waste heat for Dyson Spheres).  If you perform, say, a search of all-sky data along that axis, then you will end up with many potential detections. Virtually all of these are (presumably) natural sources, so what can you do?

Well, the easiest first step is to note that one of the sources is the strongest, meaning that you instantly have an upper limit: no technosignatures exist above that threshold. If you’re the first to interpret the data that way, then you’ve made progress!  We now know something we didn’t before.

Then the sleuthing kicks in.  What are those top, say, 10 sources?  Are they all well-known giant stars with dusty atmospheres and certainly not Dyson Spheres?  If so, then you’ve just lowered the upper limit.

As you work your way down, your upper limit keeps improving, and you keep learning about what’s not out there. You also learn what you need to do to weed out all of the false positives to get to more meaningful and stringent upper limits.
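
Here’s a minimal sketch of that workflow. The catalog, the strength values, and the vetting function are all hypothetical stand-ins for a real survey and a real follow-up campaign:

```python
# Minimal sketch of the candidate-vetting loop described above. The catalog
# and the vetting function are hypothetical stand-ins.

def current_upper_limit(candidates, is_confirmed_natural):
    """The upper limit is the strongest source not yet explained as natural.
    `candidates` maps source name -> technosignature strength."""
    for name, strength in sorted(candidates.items(), key=lambda kv: -kv[1]):
        if not is_confirmed_natural(name):
            return strength   # strongest unexplained source sets the limit
    return 0.0  # everything vetted: the survey's sensitivity sets the limit

catalog = {"dusty giant A": 0.9, "dusty giant B": 0.7, "unknown source": 0.4}
vetted  = {"dusty giant A", "dusty giant B"}   # sleuthing done so far

print(current_upper_limit(catalog, lambda name: name in vetted))  # -> 0.4
```

Each source you explain away moves the limit down to the next-strongest candidate, which is exactly the “work your way down” progress described above.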

This works for almost any technosignature with many confounders: structures on the moon, transiting megastructures, “salted” stars.

This formalism even works for machine learning-based anomaly detection, although in that case it might be hard to translate your upper limit into something physically meaningful, because the mapping between an anomaly score and the characteristic of the technology that would give that score might be obscure.

Next up: advice!

Strategies for SETI I: Kinds of SETI

Inspired by a discussion at the Technoclimes workshop, I started thinking about all of the different approaches to SETI, as distinct from the different technosignatures to search for. This eventually evolved into a paper where I was able to incorporate lots of odds and ends I had written and collected over the years about SETI in one place.  I think it came out well!  It’s on the arXiv here, but here are some of the highlights.

There are a few ways to think about SETI searches, and most fall onto one side of a few divides:

  1. Communication vs. Artifacts, including:
    • Small vs. large scale
    • Kinds of artifacts / carriers
    • Derelict vs. active artifacts
  2. Ambiguous vs. Dispositive Technosignatures
  3. Commensal/Archival vs. Dedicated Searches
  4. Model-based vs. Anomaly Searches
  5. Searching for “Beacons” vs. Eavesdropping
  6. Passive vs. Active Searches (i.e. METI).

Communication vs. Artifacts:

Last year we put a lot of effort into proposing a big interdisciplinary research consortium for astrobiology to NASA (an “ICAR”). The proposal was unsuccessful, but along the way we found a useful way to frame searches:

Table showing different scales and kinds of technosignatures

The idea here was to map out the kinds of “artifact” technosignatures that exist, and think about how they all relate.  The columns along the top roughly track both scale and distance: first, nearby things in the solar system; next, roughly Kardashev Type I scale things on the surfaces of or in orbit around nearby planets; then things approaching Type II, with lots of circumstellar activity; and finally fully Type II sorts of technosignatures on the right.

Vertically, we list three things to actually look for: physical structures, environmental alteration, and excess heat.  There are many other kinds of technosignatures too, of course. In particular, communication SETI is not really on this chart, but by and large I think this captures the breadth of a pretty big swath of SETI.

Artifacts are neat because we might be able to detect them even if they are no longer being maintained. Depending on the artifact, they might be detectable for a very long time after their creators are gone.

Ambiguous vs. Dispositive Technosignatures

“Dispositive” means that something settles (“disposes of”) a particular question. It’s a term from law, and I like it because it’s a useful word with no synonyms that can help distinguish among different kinds of null results (i.e. failing to find anything because you didn’t look hard enough, versus showing something does not exist because you looked more than enough).

One of the really nice things about communication SETI is that it’s probably dispositive: if you see a communicative signal, especially a narrowband one, you know it’s from technology. Then you’ve solved several problems at once, scaling the entire “Ladder of Life Detection” in one go.

Hunts for Dyson Spheres, on the other hand, are not very dispositive. Waste heat can come from dust just as well as from technology, and no matter how weirdly shaped a light curve implies an occulting object is, there always seems to be some pathological natural explanation for it.  Such searches can, at best, find good candidates for technosignatures that would then have to be validated by other means.

But, that’s also true of many searches for biosignatures! Just finding, say, oxygen isn’t enough. Hey, no one said astrobiology was going to be easy!

Commensal/Archival vs. Dedicated Searches

Some kinds of searching require dedicated hardware. The Breakthrough Listen Initiative builds large supercomputers on site at its facilities to record voltages measured at the telescopes extremely quickly and save the reduced data products. PanoSETI will perform unprecedented observations of the transient sky because of its innovative design.

Other searches are “commensal,” using some hardware simultaneously with other observers. The old SERENDIP project at Arecibo had specialized hardware that occupied a different part of the focal plane from the main instruments, and so was always on, searching whatever part of the sky it could. This kind of searching sacrifices the ability to choose one’s targets in exchange for access to powerful equipment.

Other projects can be done with no additional hardware. Searches that use archival data, for instance, piggyback on general purpose astronomy data.  This kind of searching is cheap, but you can only search for things that happen to fall within the parameters of the databases you’re looking at.

Model-based vs. Anomaly Searches

This is a neat one. There’s an idea called “generalized SETI” by George Djorgovski which roughly says that since we don’t really know what form alien technology will take, you should look for anything out of the ordinary in big public data sets.  This has the upside that you are sensitive to the unexpected, and the idea has been applied to many aspects of SETI, for instance this work by Daniel Giles.

Model-based searches look for a particular technosignature in that data by modeling its signal and filtering on that. This has the big advantage that you can say what it is you did not find, because you have parameterized your technosignature and its strength. This makes it much easier to calculate upper limits.
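
As a minimal illustration of what parameterizing buys you, here’s a sketch that fits a signal template to pure noise and turns the null result into an amplitude upper limit. The Gaussian template and all the numbers are invented for the example:

```python
import numpy as np

# Sketch of a model-based search: fit a parameterized signal template to the
# data, so a null result becomes an upper limit on the template's amplitude.

def template(width, n=256):
    """A narrow Gaussian bump, standing in for a modeled technosignature."""
    x = np.arange(n)
    return np.exp(-0.5 * ((x - n / 2) / width) ** 2)

rng  = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 256)   # pure unit-variance noise: a null result
tmpl = template(width=3.0)

amp       = np.dot(data, tmpl) / np.dot(tmpl, tmpl)   # least-squares amplitude
sigma_amp = 1.0 / np.sqrt(np.dot(tmpl, tmpl))         # its uncertainty for unit noise

print(f"best-fit amplitude: {amp:.2f} +/- {sigma_amp:.2f}")
print(f"~95% upper limit:   {amp + 2 * sigma_amp:.2f}")
```

Because the template has explicit parameters, that limit translates directly into a statement about what technology is not there.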

Putting such upper limits on anomaly-based searches can be much harder, because it can be difficult to know, especially with machine learning algorithms, exactly what it is the computer is keying on, or what it would have missed. This is a major problem worth tackling, because anomaly-based searches have enormous promise.

Searching for “Beacons” vs. Eavesdropping

In the early days of SETI, a mix of optimism and necessity led Frank Drake and others to search for “beacons”—big, loud, obvious signals designed to get our attention. Perhaps, one line of reasoning went, there was a community of species welcoming technologically young species like ours into their Galactic Society with such signals.

Such signals are easy to spot because they would be designed to be easy to spot, and so it makes sense to look for them first.  This was especially true because early radio equipment could search only a very narrow range of frequencies at once. A lot of work went into thinking about what frequencies such beacons would be at—the first SETI paper (Cocconi & Morrison 1959) guessed the 21-cm line, and many “magic frequencies” have since been proposed as the ones “they” would use to get our attention.

Indeed, there is a whole concept in game theory, the “Schelling point,” used to describe this dynamic.

Today, radio observatories can search billions of channels simultaneously, eliminating the need to guess, and our sensitivity is much better, so we could potentially detect even “leaked” emission intended only for short-range communication, for instance among planets in a distant planetary system. So the distinction is no longer quite so important, but still influences survey design and target selection.

Passive vs. Active Searches (i.e. METI)

This one gets a lot of attention!  METI is the attempt to establish contact backwards: to send a signal that gets attention in hopes that it triggers a response we wouldn’t miss. Some people get very upset about METI, worried we might catch the attention of dangerous aliens. My position is pretty nuanced, but ultimately I’m not worried about METI. Earth has many signatures of life and technology that I think are more obvious than any METI programs, so ultimately their value is performative, to get us thinking about contact.

Next time: Upper limits!

The first 2020 PSU SETI Course project: updated bibliography

The pandemic has been hard, but we have managed to get some research done at the PSETI Center for the past few months.

In Fall 2020, we had the second instance of the Penn State graduate SETI course (now on the books officially as ASTRO 576) and the students’ final projects were great! Some of the students have chosen to polish them up and submit them for publication, and the first one is now out!

Julia LaFond has extended the work of Alan Reyes from the 2018 instance of the class to flesh out and expand the SETI bibliography at ADS.  The paper, now peer-reviewed and accepted to JBIS, is on the arXiv here; it describes how we now categorize papers, and our workflow for finding new papers and providing monthly updates.

Thanks to work by Macy Huston, we are now working with James Davenport to maintain the SETI.news mailers.  Macy uses ADS and its nifty library and search features to find potential new papers and applies Julia’s criteria to add papers to the SETI bibgroup on ADS.  Then you can restrict any search you like at ADS to bibgroup:SETI and these papers will be included, like this:

Screenshot of ADS showing how to use the bibgroup:SETI keyword

Every month, Macy then adds a soupçon of editing to the new entries and sends the latest batch to James, whose algorithm produces the SETI.news mailing for our subscribers:

Screencap of the SETI.news site

We hope this is useful for the community.  Do subscribe to seti.news, use the ADS bibliography, and tell us what you think!

How a Species Can Fill the Galaxy

The Fermi Paradox is about why there are no aliens on Earth today. “Where is Everybody?” Enrico Fermi asked his lunchmates one day in 1950, pondering that it must be that alien spacecraft—at almost any speed, really—have had plenty of time to get here by now, since the Galaxy is so very old.

The story’s been told lots of times, and Bob Gray has a nice paper about the real history of the term (it’s neither Fermi’s, nor a paradox!). The term is also sometimes used to refer to things like why SETI hasn’t found anything yet, but of course that’s not what Fermi meant (after all, when Fermi asked his question, the first modern SETI program wouldn’t even start for another 9 years!)

But is it true that a species would fill the Galaxy, given the capabilities? Working with Jonathan Carroll-Nellenback, a team I was part of tried to simulate things, and found that, sure enough, even slow ships would fill the Galaxy pretty swiftly. What we didn’t do, though, was make a nice movie showing how it happens.

Well now we’ve fixed that!

In this movie, Jonathan carefully tuned the parameters so that the maximum range of ships is about 3 parsecs, which for Earth would put a couple dozen stars within range. The way exponential growth works, if the local density of stars is such that you always have plenty of targets within range, you’ll grow, but if that happens rarely, you won’t. And because stars move, you’ll always have fresh stars nearby to settle, if you wait long enough.

The whole movie spans about 1 billion years. The expansion front moves so slowly because we don’t let any settlement launch a new ship to settle a new star more frequently than once every 100,000 years.  As the front expands, the parts moving inwards will encounter higher stellar densities and the expansion wave will accelerate.  The parts moving vertically or outwards quickly run out of stars and stall.

What’s neat is that in this simulation, because the ship range is small and ships are sent out infrequently, the wave goes slowly enough that it is actually the motions of the stars that do most of the work, and you can see how they take what might have created a bubble of inhabited stars and smear it out, like jam getting mixed into oatmeal or cream getting stirred into coffee.
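
A crude way to see why: compare the fastest the front can advance by launches alone with typical stellar speeds. This is my own back-of-the-envelope comparison using the movie’s parameters; the stellar velocity below is a typical assumed value, not a simulation input:

```python
# Crude speed comparison using the movie's parameters.

PC_PER_KM_S_MYR = 1.02   # a star moving 1 km/s covers ~1 pc per Myr

ship_range_pc   = 3.0    # maximum ship range in the movie
launch_wait_myr = 0.1    # one new settlement ship per 100,000 yr

# Upper bound on the launch-driven front speed (ignores ship travel time):
front_speed = ship_range_pc / launch_wait_myr      # ~30 pc/Myr

# Typical random stellar motions of ~25 km/s (assumed):
stellar_speed = 25 * PC_PER_KM_S_MYR               # ~26 pc/Myr

print(f"launch-driven front: <~{front_speed:.0f} pc/Myr")
print(f"stellar drift:        ~{stellar_speed:.0f} pc/Myr")
# Comparable speeds: the stars' own motions can do much of the smearing.
```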

Eventually, the front reaches the middle part of the galaxy where the stars are typically closer together than in the outer parts, and then the expansion proceeds very quickly, but the outer reaches of the Galaxy never get inhabited.

Now, this is just an illustration—in truth if something like this happened the ships wouldn’t have a hard limit of 3 parsecs for their motions, and who knows how often new settlements would happen. We’re working on a new paper now that explores these things.

You can find a fuller description of the movie in our new Research Note of the AAS here.

Enjoy!

Making Astronomy Safer: An Apology and Recommitment

I write this to acknowledge the harms caused by having Geoff Marcy involved in the California Planet Legacy and other papers, and apologize for my role in that. 

Context:

Two papers by the California Planet Search (CPS) were recently accepted by AAS Journals and put on the arXiv.  Both papers leveraged decades of data collected as part of the CPS and its predecessors, all of which were once led or co-led by Geoff Marcy. As a graduate student and postdoc, I was a member of the CPS and its predecessors from 1999 until around 2007 or so. In 2015, UC Berkeley found that Geoff had repeatedly violated its sexual harassment policy, and when this and other information became public Geoff ended his employment there.

Today, the California Planet Search is led by Andrew Howard at Caltech (whom I know well from our Berkeley days). The CPS has many members across the country, including many of Andrew’s current and former advisees, and many “participating scientists” (including me) who occasionally work with CPS by contributing expertise, telescope time, and other resources for specific projects and papers. Geoff Marcy does not have any current affiliation with the CPS.

These papers were led by Lee Rosenthal (a graduate student at Caltech) and BJ Fulton. Both had extensive author lists, including both me and Geoff Marcy. All co-authors were on email chains giving comments on mature drafts of these papers. Since 2015, Geoff Marcy has also co-authored and been acknowledged in other research with astronomers, including papers I have led.

Since the two CPS papers appeared on the arXiv, there has been extensive discussion on and off social media regarding the propriety of having Geoff as a co-author on these and other papers, and the harms this causes. Andrew Howard has argued that for these two papers, the AAS Authorship Rules required Geoff to be a co-author because of his foundational role in the project. 

Some of the harms done:

For brevity, I will discuss the harm done to students, because we senior scientists have an especial obligation to protect them from harm, but these harms are also applicable to other scientists, especially other early career researchers.

There is harm to the students in the group who have had to deal with this. What should be a celebratory moment of some fantastic science results vindicating decades of work has been overshadowed by its author list.

Students have also been put in the inappropriate situation of having to choose between getting credit for their work on an important paper and avoiding association and interaction with Geoff. This does them both professional and personal harm. 

Professionally, this means that students who choose to avoid Geoff must limit their opportunities for research and credit in ways other scientists do not. Personally, it takes a large mental and social toll on students who have to deal with this issue, and this toll persists whenever a student’s work is used in a paper Geoff is a co-author on, whenever they interact with others who are working with him, and whenever they must ascertain whether a new project they might become involved in involves Geoff. 

There is also harm to students and others associated with senior researchers on these papers. Junior researchers need to be able to trust us to provide a safe environment, community, and profession. All of our associations with Geoff, including mine since 2015, erode that trust.

There is also harm to the profession at large. This incident and others like it tell all astronomers that Geoff continues to play at least some role in the field, and more generally that actions like his by any astronomer will at least to some degree be tolerated. This harm is especially acute for survivors of sexual harassment and assault.

Regarding authorship on these papers:

I was a co-author on these papers at the invitation of BJ and Lee, primarily because of my work on the underlying data set over a decade ago. I also provided comments and feedback on the drafts after agreeing to join.

Although reasonable people can differ on this point, in my opinion the ethics of authorship did not obligate the team to invite Geoff (or me) to be on the paper. And even if they did, there are alternative ways to publish these data that would have separated most of the authors of these papers from being co-authors with Geoff. 

In general, I have a nuanced position regarding who needs to be an author on a paper, and I do not think the current AAS authorship rules reflect the theory or reality of how we assign authorship in astronomy. Many factors should come into play when deciding who should be an author on a paper, including broader ethical ones. 

Regarding how we can do better:

Since 2015, my advisees do not interact with Geoff Marcy, and do not co-author papers with him. Both in the past and quite recently I have encouraged Andrew Howard to also adopt a policy like this for his advisees.

More generally:

We need to have better norms and standards for how we deal with authorship in situations like this. The AAS Code of Ethics Committee and the AAS Publications Committee should update the Code and the AAS Journals Authorship policy to better reflect the realities of how authorship is assigned and acknowledge the many factors that need to go into authorship of papers. I note that while the AAS has very limited purview over non-members, it has complete purview over its journals, just as it does its meetings, and should use this incident as a case study in shaping new rules.

We need to have meetings and research environments where astronomers do not need to worry about who is safe and who is not, and which research projects or collaborations might put them in a situation where they will be forced to make hard choices or interact with sexual harassers (or worse). We can do this better.

And to my fellow research group leaders: we need to listen to our advisees and create spaces where they are safe to tell us how to keep astronomy a place where they can feel and be safe, and where they can thrive, focusing on their science. I appreciate the trust my advisees have put in me by doing this, and I aim to reward that trust with action.

If you count yourself among the people that need to understand my role in all of this in more detail, please reach out to me directly. I will aim for transparency and honesty with you on this, I will listen to what you say, and I will do better.

Reductionism and Emergence II: Two Philosophers’ Takes on My Post

A short while back I posted my amateur position on the issues of reduction and emergence in physics.

I shared it with Chelsea Haramia, a philosopher who works on problems of ethics in astrobiology, and she and Thomas Metcalf kindly responded with a lengthy discussion. I’m really appreciative of the time they have taken to “translate” my post into the proper terminology of philosophy and give professional feedback.

So, if you’re interested in a professional take on the ideas I raised in my prior post, please read on!

Reduction, emergence, and the limits of physics

Thomas Metcalf and Chelsea Haramia

Jason Wright presents a thoughtful and interesting discussion of his views on emergence and reduction. In academic philosophy, these terms and concepts are both widely used and the subjects of vigorous debate. In this short note, we want to outline how philosophers think of reduction and emergence and in which ways these concepts can illuminate some topics Jason mentions.

Reduction vs. emergence, weak and strong

Put simply, those who believe in emergence (rather than reduction) maintain that the big (holistic) stuff is as real as the small (basic) stuff. Minds and waves of bathwater, then, are as real as electrons and molecules. It makes sense to have scientific theories that talk about tornadoes, mammals, colors, planets, phobias, and so on, instead of merely having theories that talk about the atoms and fields that compose those things.

Emergent phenomena (e.g. planets) are distinct from their basic components or substrates (e.g. atoms of silicon and iron), but there are different ways of describing this distinctness. We’ll look at two: strong and weak.

One way is to maintain that, while complex combinations of the small stuff cause and fully determine the nature of the big stuff, the big stuff is not “realized in” the small stuff: it comes from the small stuff but it’s not ultimately the same stuff as the small stuff. Some of this emergent stuff, in fact, might not even be physical at all, or might not interact with the physical world at all. This “strong emergence,” then, is normally taken to be incompatible with what philosophers call “physicalism,” i.e., the thesis that everything is ultimately physical. Strong emergence occurs when the emergent phenomenon is of a fundamentally different type of stuff than is the stuff it emerges from.

A different way to posit emergence is to take the view Jason favors. Emergent properties are still fully physical, but they’re realized at different scales and often require different academic disciplines, approaches, and analyses for their study. These disciplines study entities and phenomena that are just as real as what the quantum physicist or neuroscientist studies, but we might still need to understand the small stuff to properly identify and fully understand the big stuff. Despite this emergence’s compatibility with a fully physical world, we may call it “weak emergence.” What separates this weak emergence from reductionism is that the emergent (big) stuff is still real in itself (and useful to talk about and include in scientific theories), and crucially, can often be realized in very different sets of small stuff. For example, perhaps there could be silicon-based, rather than carbon-based animals. We would still properly call them “animals,” but “animal” wouldn’t be reducible to “carbon-based (among other things)” because there could be animals that aren’t carbon-based at all. Being an animal would emerge from (among other things) being carbon-based, but it could also emerge from (among other things) being silicon-based. In contrast, perhaps only H2O would ever really be “water.” Something that looked and acted just like water at the macroscale, but wasn’t made of H2O, wouldn’t really be water. If so, then water wouldn’t just emerge from H2O; it would reduce to H2O. (The example of “water” vs. “H2O” is ubiquitous in philosophy; you can read more here and here.)

One of the virtues of Jason’s view is that it provides a coherent avenue of response for anyone who finds that, often, those who attempt to make appeals to emergence have not actually posited anything beyond the purely physical realm. Some emergent accounts are congenial to reductive accounts, and these accounts may all manifest in a fully deterministic, measurable, physical world. The compatibility of Jason’s “weak physical emergence” and reductionism is a useful way of responding to certain claims of emergence—a way of demonstrating that many purported appeals to emergence are actually perfectly compatible with strong physicalism.

Again, these concepts and terms are commonly debated in philosophy, so for much more discussion on the topic, we invite you to visit this link.

The case for strong physical emergence

As you can see from the linked entry just above, there may be reason to quibble a bit with Jason’s (and our) definitions of both “weak emergence” and “strong emergence.” Nonetheless, when addressing issues of strong emergence, we’re happy to help ourselves to Jason’s terminology: that  “strong physical emergence” refers to a phenomenon that is truly, fundamentally real, and emerges from some set of physical causes, but is not itself realized in any set of physical objects. For example, consciousness might arise from neurons, but not be identical to any set of neurons, and it might have fundamental properties that neurons don’t have.

What might those properties be like? Well, the four most-commonly discussed are consciousness, intentionality, perspective, and unity. Consider these four pairs of premises (you can imagine how the rest of each argument would go):

The Argument from Consciousness

C1. At-least-some minds have conscious experiences.

C2. No atoms have conscious experiences.

The Argument from Intentionality

I1. At-least-some beliefs are about things.

I2. No atoms are about things.

The Argument from Perspective

P1. At-least-some experiences necessarily have first-person, subjective perspectives inherently attached to them.

P2. No sets of atoms necessarily have first-person, subjective perspectives inherently attached to them.

The Argument from Unity

U1. At-least-some minds are unified: they are not made of individual parts.

U2. All sets of atoms are disunified: they are made of individual parts.

All these arguments would then conclude that minds, or beliefs, or experiences aren’t ultimately just sets of atoms.

We don’t pretend that these arguments are all decisive; many philosophers would reject them, and most philosophers believe that the mind is ultimately physical. And there are important arguments against dualism about the mind, i.e., the thesis that the mind and the brain are two different objects, which might imply that the mind is non-physical. If we think there are two fundamental categories of stuff—say, physical and non-physical—then we have to explain how these fundamentally different things could possibly interact with each other. At least as far back as the philosopher Elisabeth of Bohemia (1618–1680), skeptics about dualistic views of reality have offered this challenge. (You can read more here.) But you can see how someone might argue, based on the alleged intrinsic properties of mental states, for the strong emergence of minds.

One more thing for now: There are lots of other arguments that the mind isn’t a physical object. We’re not going to get into such arguments much here, but you can read about them if you want.

Against strong emergence

Now we can consider Jason’s argument against strong emergence. It’s based on a good point. We have reason to believe that consciousness and intentionality at-least-weakly-emerge from neurons, since as far as we know, destruction of neurons harms or destroys consciousness and intentionality. If you cut off the current to the broadcast antenna, you lose most of the photons. But that’s all compatible with weak emergence and even with reduction. The interesting question for us is whether causation goes in the other direction. Is there any reason to believe that some extra thing—beyond our neurons and the corresponding current and neurotransmitters—has any causal influence on anything physical? Can my beliefs cause me to do things without my neurons’ causing me to do things? If so, then this would begin to look like what Jason calls “strong physical emergence.”

Well, let’s take a minute to identify an alternative view: epiphenomenalism. Strictly speaking, strong emergence of minds can occur without those minds’ having any causal influence on the physical world. Maybe minds are just extra things, floating out there, passengers along for the ride that never put their hands on the steering wheel. This mind could still be an example of strong emergence; it could still have intrinsic properties (such as, arguably, consciousness) that atoms don’t have. In correspondence, Jason gave the apt analogy of a child’s holding a disconnected video-game controller and watching the video-game feed on a screen, falsely believing that they’re the one controlling the video game. (You can read more about epiphenomenalism here.)

Maybe we think epiphenomenalism is implausible. Maybe we think, for example, that consciousness would have no reason to evolve if it didn’t have some influence on our bodies. Let’s set epiphenomenalism aside for now and go back to Jason’s argument: in essence, that we haven’t found any good candidates for strong emergence yet. We haven’t, for example, found neurons that just kind of fire for no reason at all. If we did, then maybe some strongly emergent beliefs or desires would be the cause of that firing. Similarly, we haven’t found any good evidence of a “life force” or anima that determines whether an animal is alive or dead. So that’s a good point. Maybe if we haven’t found any evidence of something, then by Occam’s Razor, we should dismiss it until we acquire such evidence. (One of us has criticized a version of Occam’s Razor in print, however.)

Potential examples of strong emergence?

Of course, Jason grants that we may have already found something that seems indeterministic in that way, i.e., that seems to result without a sufficient antecedent cause. If I measure an electron’s spin about some axis and then measure its spin about an orthogonal axis, then perhaps the second measurement’s result can’t be explained by anything intrinsic to the electron. This might even make room for something like indeterministic free will, if somehow there were some event or force that could influence the probabilities of our making certain decisions, while still leaving room for other possible decisions. This is a very interesting case and possibly one of the best routes for arguing for indeterministic free will. (Of course, this only really works if we believe in an indeterministic physical story.) If the actual free-will decisions are non-physical events, or if the tie between microscopic particles and free-will decisions is merely a law of nature (such that God, say, could have changed that relationship, by rewriting the laws of nature), then this looks like strong emergence. The free-will decisions are of a fundamentally different type of entity than are the neurological events.

Let’s go back to the question of neurons, then. Does the fact that we haven’t found any firing-for-no-reason neurons suggest that there are no “extra,” strongly emergent beliefs and experiences out there (beyond our neurons) causing our neurons to fire? Let’s grant the empirical premise: maybe we really haven’t found any such neurons. But as far as we know, no one has fully traced the entire process of stimulus-response in a way that rules out any extra influences. At the present moment, we have some fancy devices that allow us to scan brains in gross terms: we can see where blood is flowing, or where there’s lots of chemical activity, for example. But that’s a far cry from (say) getting everything reduced down to something like an observable chain of falling dominoes from external stimulus to neuronal firings to external response. But suppose we did reach that point. Even then, as noted, that wouldn’t rule out strong emergence. For one thing, as noted, the strongly emerging events might be epiphenomenal: they are caused by the physical realm, but don’t cause anything in the physical realm.

In response, one may reasonably be suspicious of a view that arguably inherently rules out the possibility that we could empirically verify its truth. But of course it’s possible for an empirically unverifiable theory to be true, and the position “We should only believe in empirically verifiable claims” is infamously potentially self-defeating. In any case, there’s substantial current debate about whether there is a detectable role for (say) quantum-mechanical decoherence in brain events. (See here for more information.)

We also want to discuss a very interesting example Jason gives: a hypothetical behavior of gravity, dark energy, or some other force. The idea, in brief, would be of a force or field (call it “Force X”) that seems to manifest, say at large scales, and in proportion or otherwise in relation to familiar, “light” matter, but can’t be explained by any of the microphysical-scale events or objects. Force X might influence the light matter around us, but we can’t find any individual particle that constitutes or mediates this force.

This might be evidence that Force X was strongly emergent. After all, Force X might seem to be related to the presence of light matter, but not composed of light matter nor of anything else we can specifically detect. If it was composed of some fundamentally different type of stuff, and not realized in the familiar particles of the Standard Model, then this would look like strong emergence. And this would, in turn, push us toward having to discuss the very deep question of what it means for something to be physical or a part of physics. (You can read more here.) If we never discover any candidate particle to be the matter or mediator of Force X, do we have to conclude that physics itself is a fundamentally incomplete description of reality? Or, perhaps by induction, are we entitled to conclude that Force X is realized in, and mediated by, physical particles that are simply undetectable to us? What if they’re apparently forever undetectable—may we really say that those particles are still part of physics, or still part of physical reality?

These are obviously difficult issues that we can’t solve here. But we mention them to give you an idea of how philosophers think about these issues and to potentially generate further discussion. And we’d like to thank Jason for a stimulating post and for the opportunity to present our thoughts here.

Semi-technical appendix: Varieties of reduction and emergence

Okay, for those of you who have followed so far and want to know, in more explicit terms, how to tell the differences between reduction, weak emergence, strong emergence, and complete independence, here we go.

First, it helps to understand the difference between “physical” and “metaphysical” possibility. Something is physically possible when it’s compatible with the laws of physics, whatever they are. For example, to accelerate to half the speed of light, or to undergo an exothermic reaction. Something is metaphysically possible when it could happen, whatever the laws of physics happen to be. For example, to accelerate to twice the speed of light, or to know the exact position and velocity of a particle. (If an omnipotent God exists, then she can create whatever is metaphysically possible, even if it’s not physically possible—after all, she can change the laws of physics.) Of course, some things aren’t even metaphysically possible. Arguably, it’s metaphysically impossible for the number eight to be prime, and metaphysically impossible for something to exist and not exist at the same time. (By the way, there are far more than two varieties of possibility; see here for a much longer discussion.)

Now that we’ve got an idea of those two varieties of possibility, we can think about a procedure for distinguishing emergence, reduction, and so on. (We don’t intend this to be 100% correct and foolproof, but instead, to give a generally useful procedure.)

Take two events, phenomena, or objects. Let’s call them “Micro” and “Macro,” since typically emergent phenomena are on a larger scale than the phenomena they’re alleged to emerge from. Now suppose we want to know whether Macro is reducible to Micro, or emerges from Micro in some way, or is independent of Micro. We can begin by asking some sets of questions in order.

  1. Is it physically and metaphysically possible for Macro to exist alone in the universe? If “yes,” then Macro is independent of (i.e. non-emergent-from and non-reducible-to) Micro. If “no,” then proceed.
  2. (a) Does Macro have inherent properties or powers that Micro doesn’t have? (b) Is Macro non-physical while Micro is physical, or is Macro otherwise a fundamentally different type of entity than Micro is? (c) Does Micro produce Macro by physical or psycho-physical law (or law of nature) but not by metaphysical necessity? If the answer to all of these is “yes,” then Macro strongly emerges from Micro. If not, then proceed.
  3. (a) Is Macro just as real as Micro? (b) Is it possible for true theories to mention Macro explicitly? (c) Could Macro be realized in many different sets of objects besides Micro? (d) Does Micro produce Macro by metaphysical necessity? If the answer to all of these is “yes,” then Macro weakly emerges from Micro. If not, then proceed.
  4. Does Macro exist? If “no,” then Macro is just a myth. If “yes,” then Macro is reducible to Micro, or we’re at some borderline case.

What about those borderline cases? There are a few possibilities left unaddressed by this procedure, in which, in steps 2–3, some but not all of the (a)–(c) or (a)–(d) criteria are satisfied. In those cases, we’re probably dealing with some borderline case between strong and weak emergence or between weak emergence and reduction. For more, check out this article.
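To see the ordering of the procedure at a glance, here is a toy Python sketch of the four steps above (the rendering and the example answers are ours alone; the real philosophical work lives in answering the questions, not in running the code):

```python
def classify(macro_could_exist_alone,   # step 1
             a2, b2, c2,                # step 2, questions (a)-(c)
             a3, b3, c3, d3,            # step 3, questions (a)-(d)
             macro_exists):             # step 4
    """Walk the four-step procedure in order for a Macro/Micro pair."""
    if macro_could_exist_alone:
        return "independent of Micro"
    if a2 and b2 and c2:
        return "strongly emergent from Micro"
    if a3 and b3 and c3 and d3:
        return "weakly emergent from Micro"
    if not macro_exists:
        return "a myth"
    return "reducible to Micro (or a borderline case)"

# Example: a water wave relative to its molecules, answering "yes" to all
# of step 3's questions and "no" everywhere else.
print(classify(False, False, False, False, True, True, True, True, True))
# -> weakly emergent from Micro
```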

 

UFOs and SETI

I posted a Twitter thread and it blew up, so I thought I’d record it for posterity here. Here’s the thread, unrolled:

I know a lot of people now want to draw parallels between SETI and UFOlogy. There are a few big differences, though:

1) SETI is based on the premise that alien tech follows the laws of physics as we know them. UFOlogy identifies alien tech from violations of those laws.

Asking me to consider UFOs as alien is asking me to believe two very unlikely things: that they are visiting and imperfectly hiding, and that it’s possible to violate conservation of momentum! This is not a parsimonious explanation for these things.

2) SETI is all about the hunt for good candidates, ones that can definitively survive intense scrutiny. Right now, we have virtually none (I’d say the Wow! signal is the best).

UFOlogy is awash in candidates. It’s starting from the opposite side of the problem.

3) SETI is based in astronomy and related fields. We astronomers have very few skills that translate into the fields needed to study UFO sightings.

It’s fine to scientifically study UFO sightings and understand our airspace, but why drag astronomers into it?

4) SETI works in a domain we don’t have a very good handle on: outer space. It could be *filled* with alien civilizations and signals, but it’s such a big haystack, it’s not hard to understand why we haven’t seen anything yet.

UFOlogy’s domain is the atmosphere, which we know *very well* because we’ve studied it for millennia. There’s not a lot of room for alien spacecraft to mostly hide from meteorologists, air traffic controllers, etc., and yet still be just barely detected the way they supposedly are.

Finally, lots of people get excited about UFOs as aliens because they infer from news stories that the government is interested in them, or is hiding what they know about them, or that military pilots or senators are very sure aliens are visiting.

This kind of tea-leaves-reading is not very persuasive to me. I already know a lot of people think UFOs are alien, and it makes sense the military would study weird aircraft and be secretive about that. Yet another article confirming that isn’t new evidence aliens exist.

Finally finally, I appreciate that studying UFOs as non-alien craft is a thing. That’s fine! I’m sure plenty of these things are real aircraft. The above is just about connecting them to aliens, and distinguishing UFOlogy from SETI.

To learn more about all of this, I recommend Sarah Scoles’ books and this article by Katie Mack.

 

Reductionism and Emergence

OK, time for more armchair philosophy!

Inspired by some Twitter posts by Adam Frank, I’ve been thinking about reductionism and emergence.  Here’s the thread that started me off:

In studying this, I’ve found that there are lots of different meanings of the terms “reductionism” and “emergence”, and a lot of the discussion seems to come from people talking past each other because they’re using different definitions. My thinking on this, I should note, is heavily influenced by Sabine Hossenfelder’s essay here.

In one sense, the terms are polar opposites. If by “reductionism” we mean the general approach to problem solving or studying something of reducing a problem to its component parts and working up from there, then its opposite is “holism” which presumes that a system’s behavior is best considered from the top down.

For instance, if I want to study how water sloshes in a bathtub, then starting from atomic physics or quantum field theory is a foolish approach. The water waves in the bathtub are described by equations of fluid flow that are insensitive to the underlying physics. For simple, low-amplitude waves, one is much better served by linearizing the equations for gravity waves, plugging in the measured properties of water, determining the modes in the bathtub, and working from there. For more complex situations you could numerically simulate the water in the tub, maybe with the full set of Navier-Stokes equations plus some corrections for surface tension and stuff. But there’s no need to go working out the van der Waals forces between water molecules or the quark interactions in their nuclei.
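To make the “plug in and compute the modes” step concrete, here is a minimal sketch (my own toy example, with made-up tub dimensions) using the standard dispersion relation for linear surface gravity waves, ω² = gk·tanh(kh):

```python
import numpy as np

g = 9.81   # gravitational acceleration (m/s^2)
L = 1.5    # tub length (m), hypothetical
h = 0.30   # water depth (m), hypothetical

# Standing waves in a closed tub of length L have wavenumbers k_n = n*pi/L,
# and linear surface gravity waves obey omega^2 = g * k * tanh(k * h).
for n in range(1, 5):
    k = n * np.pi / L
    omega = np.sqrt(g * k * np.tanh(k * h))
    print(f"mode {n}: period = {2 * np.pi / omega:.2f} s")
```

Notice that nothing about water’s molecular structure appears here: for low-amplitude sloshing, only gravity and the tub’s geometry matter (even the density cancels out of the linear problem).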

We call this an “emergent” property: the combined interactions of all of the water molecules obeying the laws of electromagnetism and quantum mechanics appear, on a sufficiently large scale, to be well described by equations that describe the bulk properties of the matter. One quality of an emergent property is that it is insensitive to the underlying physics: you can’t deduce the molecular structure of water from watching waves because there are lots of potential kinds of microphysics that could (and do!) give rise to the same macroscopic phenomena.

This kind of emergence has many levels: at the bottom we have quantum field theory, special relativity, and the Standard Model which describe how all particles interact. At the next level up we have atomic theory and quantum mechanics, which give us the basis for studying molecules. At this level things like the vacuum states of matter and the strong and weak forces don’t matter: they happen “underneath” at scales too small to matter, and we can summarize their contribution in quantities like an atom’s magnetic moment and rest mass (for instance).

From there, we get physical chemistry, but things quickly get too complicated to calculate, so we begin talking about sigma bonds and valences and electronegativity and now we’re into ordinary chemistry. At larger scales we can talk about the bulk properties of the material like its temperature, which conceals but successfully summarizes even more properties of the aggregate. Again, you can determine some things about atoms from chemistry, like the periodic table, but this can only take you so far. Ultimately if you really want to know the structure of the atom you have to study it directly; you can’t distinguish the plum pudding model of the atom from the Bohr model in a chemistry wet lab. Chemistry is thus an emergent property of atomic physics.

And so on to biology, psychology, sociology, and so on, as xkcd put it:

Purity

In one sense, using chemistry instead of quantum field theory is a “holistic” approach because it uses emergent properties instead of a reductionist approach, but it also reveals a second definition of “reductionism” which I’ll call “physical reductionism” to distinguish it: the scientific approach (axiom?) that all physical behavior arises from more fundamental laws at a smaller scale (or, if you like, at a higher energy).

Now, precisely defining reductionism in this way is the job of philosophers of science and I’m sure one can find holes in the way I’ve put it above, but I think my description defines things more or less well: emergent behavior at each layer (except, I suppose, the bottom layer, wherever that is) is ultimately the sum of all of the underlying microphysics, and not some new physics.

We often write that reductionism means we “could” calculate an emergent phenomenon in principle from a more fundamental theory, but I think that clouds the essence of physical reductionism because it unnecessarily introduces issues like predictability and computability. I’d say reductionism is better described simply as the view that there’s nothing else going on beyond lots of small-scale interactions. Also, emergence is sometimes defined in terms of “surprising” physics that shows up at large scales, but that’s way too squishy for me.

So from this perspective, there is no contradiction or tension at all between emergence and physical reductionism; indeed, as I’ve defined them the terms don’t even really make sense except with respect to each other, as Sara Walker wisely pointed out:

Now, some philosophers distinguish two kinds of emergence: weak and strong emergence. The precise definitions here seem slippery and I’m not sure I totally grasp them, so to distinguish how I’m going to (improperly?) use the terms I’ll refer to weak physical emergence and strong physical emergence.

The most useful definition of weak (physical) emergence to me as a physicist is basically the emergence that follows from reductionism. If it’s a behavior that arises from the sum of lots of smaller interactions, then that’s weak physical emergence. There is then no tension with reductionism at all because it’s consistent with reductionism by definition.

What, then, could strong physical emergence be?

Strong emergence is often invoked to describe the kind of behavior arising from complex systems that is thought to be more than “just atoms,” as Adam put it at the top, and it is fundamentally in opposition to physical reductionism.

The usual things people point to when asked for examples of strong emergence are life and consciousness.  To illustrate my point, I’ll use an old example.1

Many cultures have historically taught that animals are distinguished from inanimate objects by their anima, some sort of supernatural quality that imbues their physical bodies with motion. The details vary from culture to culture (for instance, the degree to which these overlap with life, the soul, consciousness, and free will) but the essence is that there is something else in the body beyond its corporeal form that makes it move. When an animal dies, that ineffable something leaves the body, and it stops moving. In this view, the body is just a vessel or puppet for the stuff of animate life.

This is decidedly not physically reductionist. We now know how living things generate their motion and maintain their metabolic processes biochemically. We haven’t “solved” life by any means, but we do understand the biomechanical mechanisms for how living things move.

Now, it didn’t have to be this way. We could have discovered as we got better at studying living things, for instance, that living animals and dead animals were exactly the same inside physically and biochemically, except living things could move. We might have had to conclude that some things had an extra something that we couldn’t find just by looking inside of them. In fact, some might argue we still might prove that someday, but I’m sure most biologists would say this is not going to happen.

One reason is that we can analyze the biochemistry of life in great detail, and we know that when something dies it’s for a particular reason (like, for instance, lack of blood to the brain, which makes the neurons stop firing), not because its animating force departed. Another reason we can be so confident it’s not going to happen is that it would imply “downward causation”: the electrons in the animal would have to be moving due to some force other than electromagnetism, caused by that supernatural anima. The animal’s leg moves because of its muscles, which are triggered by neurons, which are part of an enormously complex central nervous system. But at some point, if the anima2 were responsible and not just lots of individual electrons and ions doing their thing, then somewhere in that chain some electron or ion in some neuron had to be pushed by the anima, and not just by its neighbors. If not, then there would be no difference between the inanimate and the animate versions, and clearly there is!

So that would have been an example of strong physical emergence. Another candidate, often brought up, is consciousness and free will. Lots of ink has been spilled over the Hard Problem of consciousness and qualia and so on, and I’m not going to dive into it here. Ultimately, it has the same problem of downward causation: if my consciousness and free will are due to a strongly emergent phenomenon (whether a supernatural soul or something less metaphysical) then at some point the neurons in my brain are responding to that new phenomenon and not just to each other (“just atoms”) when they tell me to type these words.

But this is testable! That something firing that neuron could, in principle, be studied scientifically (for instance, by finding a neuron firing for no physically reductionist reason). If there’s more than “just atoms,” then at some point atoms need to respond to something other than “just atoms.”

There are other examples I can think of too (and ones less laden with religious implications). For instance, what if gravity has a smallest scale?  By this I mean, what if gravity only works above some threshold, when enough mass gets together in one place? The exact equation near this threshold might not be expressible in terms of the sum of the gravitational forces of individual masses—that is, Newton’s formula could be correct when m and r are above some level, but incorrect below that.

Now this would be very surprising because Newton showed that his formula held even if you summed up the individual actions of all of the underlying atoms—in other words, that it is consistent with being a weakly physically emergent phenomenon.  Also, there are theories of gravity that are strictly physically reductionist that do predict gravity will behave differently or even go away on small scales, so it’s more complex than I’ve described.

Or, more straightforwardly, perhaps the dark energy of the universe or the dark matter works as a physical force that only manifests on large scales and simply can’t be described as an underlying field or sum of interactions of smaller pieces. I think that would be another example of strong physical emergence as I’ve defined it, though I admit this might be inconsistent with how the term is used by others.

Adam has teased that one way he’s going to look at the problem of strong emergence is in terms of life as a processor of information. I’m looking forward to it, but ultimately information is a statistic or other description we assign to the arrangement of matter and energy in time and space. We have rules for how matter and energy react to each other in time and space, so ultimately any information-based description of life is, once again, physically reductionist. In order for information processing to generate strong physical emergence, there would have to be something else, some definition of information that went beyond a Shannon entropy or something, and I can’t imagine what that would be. If it’s not based on matter and energy’s distribution in time and space, then what is it based on?
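As a concrete illustration of information being a statistic assigned to an arrangement, here is a minimal sketch (my own toy example) that computes the Shannon entropy of a symbol sequence. Note that the answer depends on nothing but the arrangement of the symbols themselves:

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits per symbol: H = sum(p * log2(1/p))."""
    counts = Counter(sequence)
    n = len(sequence)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy("AAAAAAAA"))  # 0.0 bits: perfectly uniform
print(shannon_entropy("ACGTACGT"))  # 2.0 bits: four symbols, equally common
```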

One way I’ve seen people try to get around the downward causation problem is with various aspects of quantum mechanics. One avenue is by working with the concept of an “observer” (an unfortunate jargon term in physics whose conventional meaning invites us to give consciousness a privileged role in physical phenomena, leading to all sorts of popular misconceptions about quantum physics). The other is the apparently random and irreversible phenomenon of wavefunction collapse, which is the source of lots of debate about the meaning and nature of quantum mechanics. These issues are tricky and still unsettled in quantum mechanics (indeed, they are at the heart of the Measurement Problem, which has capital letters, so you know it’s important), so this could be a way in for a strongly physically emergent phenomenon to push electrons around. Maybe! That seems at least plausible to me, though others have thought about it a lot more than I have. Indeed, Sabine Hossenfelder has gone so far as to define exactly what it would mean for there to be free will without metaphysics and shown that it’s at least possible in principle.

Anyway, that’s where I am on the topic! Again, I’m not a philosopher, so I’m sure I’ve gotten a lot wrong. My purpose was to be clear about my definitions, and hopefully clarify my “physicist’s perspective”.


And now two philosophers have written up their perspective on these ideas! You can read their take here.

Because I don’t want to change things out from under them, I’m making annotations to the above instead of edits and corrections:

1 I did not have the word ready at the time I wrote the post, but this long-discredited view is called vitalism.

2 I should have written vital spark, referring to whatever the extra thing is that vitalism is about. “Anima” is a term from Jungian psychology and refers to a property of the mind.

Bilogarithmic functions

When plotting over a big range, especially when plotting power laws or things with exponential dependence, the logarithm function is your friend. The log-log plot and semi-log plot are standard tools for visualization in science.

But while the logarithm function works on the domain (0,∞), many applications in science operate on other domains, like (-∞,∞) or (0,1).  In these cases, logarithmic plotting is not always appropriate, but in some cases it can be and it can be frustrating to find a good visualization tool that captures what you want.

For instance, I’m teaching stellar structure and evolution again, and it’s tricky to make a plot that captures both the details of the stellar atmosphere, which is only 1 part in 10 billion of the mass, and the details of the core, where you want to see structure covering only, say, the last 0.1% of the mass.

Let’s say you wanted to compare, for instance, the radiative temperature gradient ∇_rad in the sun to the actual temperature gradient (this difference is related to the efficiency of convection). This quantity spans a huge range in the very thin outer layer of the star, so trying to plot its dependence as a function of mass inside the sun gives a pretty useless result:

A rather useless plot. The atmosphere is on the right, the core is on the left.

Maybe we just need to plot logarithmically?  Let’s try it:

A log version of above

That definitely helps!  We see that there is stuff going on down in the core, and apparently the gradient returns to around 1 near the surface, but we still can’t really tell what’s going on.

There are lots of solutions—you can plot in radius instead of mass, or use log(Pressure), for instance, or perhaps we could take the log of 1-m/M?  That would expand the region of interest logarithmically.  Let’s try that:

A log-log version. Note the minus sign on the x-axis which keeps the atmosphere on the right.

Now we’re talking! Suddenly we can see all of the action in the convective envelope.  But…we’ve lost resolution on the core.  Is there some way to expand both ends logarithmically?

There certainly is! We can use the logit function, which I’ve written about before as a way to expand things on the domain (0,1). This is just the sum of log(m/M) and -log(1-m/M). [nb: logit is usually defined base-e, so you’ll have to divide it by ln(10) to get logit10].

This has the lovely property that it is logarithmic in both directions, near zero and near 1.  So a value of -5 means I’m at 10⁻⁵, and a value of 5 means I’m at 1-10⁻⁵, or 0.99999.  In other words, negative values “count zeroes”, and positive values “count nines”.  0 corresponds to a half, right in the middle.  So it’s intuitive to read and understand and gives you the dynamic range you want at both ends.  It’s a great way to plot things like transmission or conductivity when you have values that hug 0 and 1, and you care about how close they get.

Let’s try it:

A log-logit plot

Perfect!  Now the core is expanded way way out so we can see what’s going on (for the experts: it’s slightly convective here because this is the ZAMS sun and we have extra CNO fusion going on until all the carbon burns out).
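If you want to build such an axis yourself, matplotlib’s ‘function’ scale (available in recent versions) accepts a forward/inverse transform pair. Here is a minimal sketch with fabricated stand-in data (not the solar model plotted above):

```python
import numpy as np
import matplotlib.pyplot as plt

def logit10(p):
    # negative values "count zeroes", positive values "count nines"
    return np.log10(p / (1 - p))

def inv_logit10(y):
    return 10**y / (1 + 10**y)

# Fabricated stand-in for m/M: values hugging both 0 and 1.
m_frac = inv_logit10(np.linspace(-10, 5, 500))
grad = np.abs(np.sin(10 * m_frac)) + 1e-3   # fake "gradient", for shape only

fig, ax = plt.subplots()
ax.plot(m_frac, grad)
ax.set_xscale('function', functions=(logit10, inv_logit10))
ax.set_xlim(1e-10, 1 - 1e-5)   # keep the limits strictly inside (0, 1)
ax.set_yscale('log')
ax.set_xlabel('m/M')
plt.show()
```

You may still want a custom tick locator to place ticks at round numbers of “zeroes” and “nines.”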

There are other situations where you have functions with power-law dependence in both the negative and positive directions, as well. This has come up for me occasionally, and I’m always tempted to do something like sign(x)log10(|x|), but it doesn’t work because it blows up near zero.

What you want is a function that is linear near zero, and asymptotes to sign(x)log10(|x|).

It turns out matplotlib has a ‘symlog’ scaling that addresses this!  I’m not totally sure how it works, but here it is. This is a linear-symlog plot of a function against itself (so, the 1:1 line):

A ‘symlog’ scaling of the 1:1 line in Python.
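For reference, invoking it is a one-liner; here is a minimal sketch of a plot like the one above (if I read the docs right, the linthresh keyword sets where the linear region ends):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1e4, 1e4, 1001)

fig, ax = plt.subplots()
ax.plot(x, x)                          # the 1:1 line
ax.set_yscale('symlog', linthresh=1)   # linear for |y| < 1, logarithmic beyond
plt.show()
```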

This is nice, but there’s a clear kink in the curve where it switches from logarithmic to linear.  It’s pretty kludgy.

But it turns out that there is an already-defined, not-at-all-kludgy function that does this smoothly!  It’s the inverse hyperbolic sine function!  Specifically, in base-10 you’d call arcsinh(x/2)/ln(10), which is also log10(x/2 + √((x/2)² + 1)). And it behaves just like you want:

The bilogarithmic scaling

BTW, it’s not just that arcsinh has a sigmoid-y kind of shape and so gets the job done; it’s actually the optimal function here.  That’s because it is the inverse of sinh, which is exactly the function that should generate a straight line in this scaling: sinh(x) is the average of exp(x) and -exp(-x), which are the functions you want on either side of zero; far from zero the other one decays away, and at zero the function is nicely linear (the second-order term is zero).
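Getting this into matplotlib is the same ‘function’-scale trick as above; here is a minimal sketch (again just the 1:1 line):

```python
import numpy as np
import matplotlib.pyplot as plt

def bilog(x):
    # ~ sign(x) * log10(|x|) far from zero, smoothly linear through zero
    return np.arcsinh(x / 2) / np.log(10)

def inv_bilog(y):
    return 2 * np.sinh(y * np.log(10))

x = np.linspace(-1e4, 1e4, 1001)

fig, ax = plt.subplots()
ax.plot(x, x)   # the 1:1 line again, now without the kink
ax.set_yscale('function', functions=(bilog, inv_bilog))
plt.show()
```

(I believe newer matplotlib versions also ship an experimental built-in ‘asinh’ scale, which would save this boilerplate.)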

[Update: Prior art from Warrick Ball, who implies this is an old trick and super useful for color scales!]

I don’t know what to call this scaling (“inverse hyperbolic sine” is too long, technical, and obscure for what it does) but I’ll suggest bilogarithmic, which seems to be a term that people sometimes use to mean “logarithmic in both directions” but has no well-established meaning.

Anyway that’s how to get functions that are logarithmic in both directions, for domains that are bounded on both, one, or no sides!

The log scaling is already in matplotlib, of course. There is also a ‘logit’ scaling, but it does not work very well for me in the above examples, so it could use some tweaking—in particular it would be good to have a base-10 version.  And ‘symlog’ could be replaced with arcsinh(x)/log(10).  This would reduce some of its functionality (right now you choose the range over which to make it linear) but I think it’s worth it to have it better behaved at the kink.

I don’t know enough Python to implement them myself, but perhaps it can be a feature request if enough people start using these!

Going to Mars is Hard

I love the idea that humanity will become an interplanetary species, and that our descendants will be interstellar. The far future of life and the universe is a fascinating topic.

And I love how close it all feels now that we are in the Space Age, with rockets routinely headed to Mars and beyond, and humans once again primed to leave low Earth orbit for the first time in almost 50 years.

Elon Musk is riding this movement with audacious plans for things like a million people on Mars by 2050. His vision of human expansion is very different from Sagan’s—he’s explicitly a colonizer of space, selling a very throwback vision of domination of the cosmos by (some) humans. He even openly mocked Sagan’s Pale Blue Dot, where Sagan pointed out that there is no place—not even Mars—that humans can migrate to yet. Musk says yes we can too migrate to Mars now—as if putting a person on Mars is the same thing as humanity migrating there!

You see, “migrate” doesn’t mean “send a few people, maybe even permanently”. It means you pack up a huge fraction of people living somewhere and go somewhere else. Very different things. DNLee rightly asked whose “version of humanity is being targeted for saving?” when Musk speaks of those sorry souls that will remain “stuck here on Earth.”

Musk is getting a lot of justified criticism for his vision, and I especially liked Shannon Stirone’s take on it (which included a quote by Carl Sagan that I had tweeted):

Shannon takes on the trope that Mars is “Plan(et) B”, a way to escape Earth, pointing out that Mars is a terrible place to live. And it’s true! The worst winter night in Antarctica is a thousand times more habitable (and a million times easier to get to) than the best day on Mars. And going there is not going to “save” anything on Earth, except incidentally via the technology we would need to live there.

Of course, the idea that humanity needs to become interstellar as a hedge against disaster makes sense, and if life on Earth has a future beyond about 500 million years, it will be in space, not on Earth. But acknowledging that is not the same thing as buying Musk’s vision for a million people on Mars in our lifetime.

After all, it’s hard to imagine any cataclysm on Earth that could possibly leave it less habitable than Mars short of something that destroyed everything on the surface—even the worst scenarios for climate change and nuclear war leave it with water, oxygen, one gee, and a biosphere.

Don’t get me wrong: there is a real, powerful vision for human migration into space, and I am excited to be alive to see it begin, but Musk is not doing the hardest parts of what needs to be done to make it happen.

“But SpaceX!” the Musk-ovites protest (and check out Shannon’s Twitter responses if you want to see how ugly the protest can get, especially when it’s a woman criticizing Musk). And it’s true that SpaceX is amazing—I’m thrilled to see a new age of rocketry dawn with reusable rockets and a new philosophy that breaks the slow and expensive rut NASA and the rest of the aerospace industry have been stuck in. Tesla is amazing too—I look forward to owning one!

But there’s actually no contradiction here between criticizing Musk’s marketing and being impressed by and even in awe of the possibilities unlocked by the work of the engineers of the companies he owns.

To illustrate my point, think about weightlifting.

Getting into shape, especially building visible muscles, is a whole industry, an even mixture of science, engineering, medicine, biology, and psychology. If you want to be more Charles Atlas than 97 pound runt, there is a whole world of knowledge and a whole economy of trainers and equipment to help you get there.

If going to Mars is getting into shape in my analogy, who is Musk? To the Musk-ovites, he is some mix between Arnold Schwarzenegger and a cast member of The Biggest Loser: he’s the expert who will get us there.

But he’s not. He’s a BowFlex salesman.

Musk is the public face of SpaceX, which gets billions of our dollars (via taxes) to launch things into space. He has brilliantly built multiple companies using (among other assets) the force (I’d say cult) of his personality.

A million people on Mars is an amazing vision—and, especially if Mars has no extant life, it will be amazing if it one day happens.

But think about getting into shape: when someone needs to be in shape, like a professional athlete or an actor for a muscle-y role, what is the most important thing they do? Is it to buy a bunch of weights?

No! Not that they won’t need weights, of course, but in a pinch a sack of potatoes will do. What they mostly need is discipline, hard work, and a coach or trainer. The BowFlex equipment might make their training easier, but it’s neither necessary nor sufficient.

But you wouldn’t know that from the ads for exercise equipment and get-big-quick schemes, which promise if you just buy this thing you too can look like the models in the ads. This tactic goes back to Charles Atlas himself:

Image of an old Charles Atlas ad including a cartoon of a 97 pound "runt" getting sand kicked in his face

Atlas promises you the body of “The World’s Most Perfectly Developed Man…without weights, springs or pulleys. Only 15 minutes a day of pleasant practice—in the privacy of your own room.”

He’s basically selling a book of exercises you can do. Of course, knowing what exercises to do is important, but that’s not the hard part—the hard part is the training itself. But you don’t sell lots of books by promising hard work, you sell them with pictures of Charles Atlas.

To Musk, we’re the 97 pound runts “stuck on Earth,” and we want to be Charles Atlas up on Mars. Musk wants us to think he can get a million of us there by 2050 because he’s selling rockets.

But rockets aren’t the hard part!

We’ve been sending rockets to Mars for decades. Yes, his rockets are really cool and cheap, but also a BowFlex setup is much fancier and better than the weights Charles Atlas used to maintain his physique. And no matter how good BowFlex is, just buying it won’t make you Charles Atlas.

The hard parts of going to Mars are understanding how humans can live so long in space, and how to build (nearly) self-sufficient habitats with limited materials. We really don’t know how to do those things. Heck, we can’t even build a self-sufficient habitat in Arizona!

You don’t build a big physique in a day, and we can’t jump to Mars all at once. Musk wants you to think the stepping stones to Mars are bigger and bigger rockets, and maybe even a Moon base. But he says that because those things need rockets, and he’s selling rockets.  And it’s not right—we’ve already built Skylab and the International Space Station. We’ve already put things on the Martian surface.  Building a bigger rocket will certainly help, but it’s not the hard part.

The actual stepping stones are fully self-sufficient artificial biospheres on Earth and a better understanding of human physiology in low-gravity environments. When we can live in a fully-enclosed habitat in Antarctica for years with limited or no resupply, I’ll believe we might be able to translate that technology to Mars.  When we understand enough about plant ecology to maintain a whole, closed ecosystem that can recycle enough oxygen for human use, I’ll believe Mars habitats have a chance. When people can spend more than a year or two in space and not suffer horrible physical degradation, I’ll believe humans might last on Mars.

And here’s the tell with Musk: he’s not solving those problems. He hasn’t bought Biosphere 2 to make it work, he’s not investing in the sorts of technologies you need to maintain human life in space. He’s taking a 1950’s sci-fi approach to the problem: send up more oxygen as needed, build bigger and better machines that will protect people, build bigger rockets to send it all to Mars. Because he’s selling rockets.

Like any good salesman Musk knows: don’t sell the steak, sell the sizzle. So when he talks, he doesn’t sell the rockets, he sells the things you could do with them, the same way a kitchen gadget salesperson sells delicious food or a perfume salesperson sells a beautiful lover.

An Avon Instinct perfume ad

But of course if those salespeople were really interested in you having those things, they’d be helping you with a lot more than food processors and aromatic oils. That’s how you can tell the difference. And Musk isn’t selling the hard part of space (the way Kennedy did for Apollo), he’s just selling the rockets.

Now, I know that by offering any criticism of Musk I’m inviting the Musk-ovites and trolls to come flame me (although I won’t get it as bad as Shannon does). So in case any of them have gotten this far, let me offer this:

It’s fine to be excited about SpaceX (I am!) and Tesla (I am!). It’s great to be excited about humanity’s future in space, and to help us get there. It’s reasonable to respect Musk for his entrepreneurship and success in business, and for his vision… just like it’s fine to admire Charles Atlas for his muscles and his marketing prowess.

But even the biggest Atlas fan can admit that his ads were making bodybuilding look easy when it’s actually hard. And even Musk fans can acknowledge that Musk’s salesmanship is just that: salesmanship, and that his vision of Mars will take a lot more than he’s offering, and will probably need to change to accommodate the realities of Mars, humans, and ethics.

But there is a big contingent of super-fans that feels it’s important to publicly defend Musk against every criticism, to insist that his sales pitch is actually a complete and perfect vision of our best future, and that anyone who disagrees is somehow anti-space, or doesn’t understand the importance of space travel, or doesn’t appreciate how revolutionary SpaceX is. Musk carefully cultivates this kind of hero worship as part of his brand, and it creates a toxic atmosphere around the whole thing, hurting the whole cause.

I’ve had arguments with some of them on Twitter, some of them making, believing, and insisting on ridiculous claims like  “every prediction he has ever made has come true.” When I point out obvious counter-examples, they are brushed aside on technicalities. To them, Musk cannot fail, he can only be failed, and his critics are the enemies of the future of humanity.

Anyone serious about getting into shape knows about the whole world of advice and gadgetry, gyms and personal trainers, supplements and cuisines that surrounds the enterprise. Manufacturers like BowFlex are part of that and have helped a lot of people get into shape, and, yes, their sales force is an important part of that.

But think about how ridiculous you’d sound if you made one BowFlex salesman the first and last word on the entire subject.

If getting to Mars is really important to you, take a lesson from bodybuilders and go learn about all the pieces of that endeavor, and help humanity realize it. And don’t believe everything a salesman tells you, no matter how much you like his gadgets.

[Edit: It’s amazing how many Musk defenders will concede that we will not have a million humans on Mars by 2050 as Musk claimed, but still want to have arbitrarily long Twitter arguments to defend him against my accusation that his claims are unrealistic salesmanship.  Like, I said SpaceX is amazing and all, but Musk is overselling it. We agree!]

NSF and NASA Funding for SETI

I wrote way back when about the amount of funding that NASA and the NSF have provided for SETI since the program at NASA was canceled in 1993. It turns out that there have been a few grants scattered over the years, but they amount to less than enough to fund two people over that time.

Since then, there have been some successful grants! In addition to the NASA Technosignatures Workshop in 2018, there have now been a few grants to external PIs.

Below is a (continuously updated) list of the ones I know about:

NSF:

  • AST-2003582: “Participant Support for the first Penn State SETI Symposium” $49,400. PI: Jason Wright, 1/1/2020–12/31/2021, extended until symposium meets
  • NSF 1950897: “REU Site: Berkeley SETI Research Center” $323,947. PI: Stephen Croft, 3/1/2020–2/28/2022
  • NSF 2244242: “REU Site: Berkeley SETI Research Center” $437,654. PI: Stephen Croft, 3/1/2023–2/28/2026

NASA:

  • NASA 80NSSC20K0622 (Exobiology): “Characterizing Atmospheric Technosignatures” $286,926. PI: Adam Frank, 12/15/2019–12/14/2022
  • NASA 80NSSC20K1109 (Exobiology): “TechnoClimes: A Workshop to Develop a Research Agenda for Non-Radio Technosignatures” $11,679. PI: Jacob Haqq-Misra
  • NASA 80NSSC21K0398 (XRP): “From Exocomets to Technosignatures: Hidden Occulters in Planetary Systems” $593,536. PI: Ann Marie Cody, 1/1/2021–12/31/2023
  • NASA 80NSSC21K0575 (XRP): “A Search for Exoplanets Around Newly Discovered Exoplanets” $362,939. PI: Jean-Luc Margot, 2/8/2021–2/7/2024