How thick is “blood”? Am I really related to my 5th cousin?

Here’s a picture of my great grandparents John Henry Hattersley and Bertha Herrmann at Niagara Falls in 1910:

[Photo: two well-dressed people posing on rocks in front of Niagara Falls]

I’m presuming this is a real photograph and not staged with a backdrop or something, but I really don’t know. I think this was taken on Luna Island on the American Side.

I don’t know much of anything about them except that Bertha is the only non-English-origin great-grandparent of mine I’m aware of; I don’t remember my grandfather talking about them. At one point I really got into tracking my family tree: I even discovered that my lineage can be traced back to Boston Colony (via John Viall through my paternal grandmother’s father Clifford Viall Thomas). But I long ago stopped imagining that I was somehow learning about myself once I got to people beyond living memory.

It’s always bothered me when people say siblings share half of their genes. Similarly, it bothers me when people who can trace their descent to someone famous (Charlemagne, Jefferson) seem to think this reflects well on their genes or something. There are several reasons this can’t be right, and a few I think about in particular:

  1. We share 98.8% of our DNA with chimpanzees—we must share much more than that with our siblings!  In fact it seems that the “average pairwise diversity” of randomly selected strangers is around 0.1%.
  2. There is some level of discretization with DNA inheritance. Obviously it can’t be at the base-pair or codon level, or else we wouldn’t be able to reliably inherit entire genes. If the “chunks” we inherit from each parent are large enough, small number statistics will push the number significantly away from 50%.
  3. Mutations slowly change genes down lineages.
  4. Combinations of genes and epigenetic factors have strong effects on traits.

Point 2 is not something I really understand yet, except that, talking to biologists, I think the number of “chunks” that get passed down on each side is ~hundreds, but is also random, making the problem quite tricky. Still, ~hundreds means that we are probably close enough to ~50% inheritance from grandparents on each side (+/- 10% or less) that we can get a rough idea of how related we really are to people on our tree in terms of shared DNA.
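Here’s a toy simulation of that claim, assuming a fixed number of independently inherited chunks (a big simplification; in reality both the number and the sizes of chunks are random):

```python
import random

def fraction_from_one_grandparent(n_chunks=400):
    # Each chunk a parent passes down comes from one of that parent's own
    # two parents; model that as an independent coin flip per chunk.
    return sum(random.random() < 0.5 for _ in range(n_chunks)) / n_chunks

samples = [fraction_from_one_grandparent() for _ in range(10_000)]
mean = sum(samples) / len(samples)
sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"mean: {mean:.3f}, scatter: {sd:.3f}")  # ~0.500 +/- 0.025 for 400 chunks
```

With ~400 chunks the scatter around 50% is only a couple of percent, so the idealized accounting below should be a decent approximation.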

So let’s take a closer look at point 1 above:

Let’s assume the amount of identical DNA we get from each ancestor is given by 2^-n, where n is the number of generations back they are (grandparents: n=2, so 25% inherited DNA, ignoring discrete “chunks” and mutations). This makes sense: except for (many) details like the X/Y chromosomes, mitochondria, and probably a bunch of other things, each ancestor a given number of generations back has an equal chance of having contributed a bit of DNA to you.

Finding the amount of shared inheritance is thus a matter of going back to the first shared ancestor and counting all of the shared ancestors at that level (which will be 1 in the case of half siblings and 2 for full siblings, except for details coming later).

So first cousins share 2 of their 4 grandparents, each of whom had a 2^-2 chance of contributing a given bit of DNA to each cousin: 2 shared ancestors times (1/4)×(1/4) per ancestor gives 1/8 shared DNA, or 12.5%.

Second cousins (the children of first cousins) share 2 of their 8 great-grandparents, so the number is 3.125%. Each generation gap gives us a factor of 1/4: a factor of 2 from the extra opportunity on each line to “lose” that bit of DNA, one on each side.

Now we get into the fun of “removed cousins”, which just counts the generation gap between cousins. You don’t usually get big numbers of “removals” among living people because it requires generations to happen much faster along one line than another—big numbers like “1st cousins 10 times removed” are usually only seen when relating people to their distant ancestors.

So my kids are my first cousins’ “first cousins once removed”, and all of their kids would be “second cousins once removed”. The rule is that if you have “c cousins r removed” (so c=2 r=1 means “second cousins once removed”) then you have to go back n=c+r+1 generations from the one and n=c+1 from the other to find the common ancestors.  So removals count the number of opportunities to “lose” a bit of DNA that occur on only one side of the tree.

Putting it all together: the fraction of DNA we share with a cousin is 2^-(2c+r+1) (siblings have c=r=0; if the connection is via half siblings there is one shared ancestor instead of two, which halves the result).
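For concreteness, here is that rule as a few lines of Python (a sketch of the idealized formula only; real inheritance is chunky and random, as discussed above):

```python
def shared_dna_fraction(c, r=0, half=False):
    """Expected fraction of DNA shared by c'th cousins r times removed,
    per the 2^-(2c+r+1) rule (siblings: c = r = 0).  Half relations have
    one shared ancestor instead of two, halving the fraction."""
    exponent = 2 * c + r + 1 + (1 if half else 0)
    return 2.0 ** -exponent

print(shared_dna_fraction(0))            # siblings: 0.5
print(shared_dna_fraction(0, half=True)) # half siblings: 0.25
print(shared_dna_fraction(1))            # first cousins: 0.125
print(shared_dna_fraction(1, r=1))       # first cousins once removed: 0.0625
print(shared_dna_fraction(2))            # second cousins: 0.03125
```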

But there’s a limit: this only works if none of the other ancestors are related, but in the end we’re all related. If cousins have children, this increases the number of shared ancestors and raises the commonalities. And, of course, mutations work the other way, lowering the amount of identical bits.

So why is this interesting? Because the “we’re all related” thing is true at the 0.1% level in DNA, meaning that if you make c high enough, you’ll get an answer that’s below the baseline for humans. Since log2(0.1%) ≈ -10, if 2c+r+1 > 10 then the DNA connection is no stronger than we’d expect for random strangers.
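Using the little function above, you can watch the idealized shared fraction sink below that baseline:

```python
# Where does the shared fraction drop below the ~0.1% stranger baseline?
for c in range(1, 7):
    f = shared_dna_fraction(c)
    print(c, f"{f:.5f}", "below baseline" if f < 0.001 else "above baseline")
# 4th cousins: 0.00195 (above); 5th cousins: 0.00049 (below)
```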

This means that if you meet your 4th cousins (i.e. your great-grandparents were cousins) your genealogical relationship is mostly academic and barely based on “blood”!  By 5th cousins, you’re no more related than you are to the random person on the street in terms of common DNA.

Even worse, if we have hundreds of “chunks” we randomly inherit from parents, then it’s even possible (and here I’m a bit less sure of myself) that you share no commonly inherited genetic material with someone as distantly related as a 5th cousin!
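To put a rough number on that possibility (a back-of-the-envelope Poisson estimate under a made-up chunk count, so take it as illustrative only):

```python
import math

def p_no_shared_chunks(n_chunks, c, r=0):
    # Treat each ancestral chunk as independently surviving down both
    # lineages with probability 2^-(2c+r+1); the number of shared chunks
    # is then roughly Poisson, so P(zero shared) = exp(-expected).
    expected = n_chunks * 2.0 ** -(2 * c + r + 1)
    return math.exp(-expected)

for n_chunks in (200, 500, 1000):  # nobody knows the right number; try a few
    print(n_chunks, f"{p_no_shared_chunks(n_chunks, c=5):.0%}")
# 200 chunks: ~91% chance of zero shared; 1000 chunks: ~61%
```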

Again, this calculation makes a lot of assumptions about genes from different ancestors being uncorrelated, and in particular communities that have been rather insular for a very long time must have at least a bit more kinship with each other than they do with similar communities on different continents.  But from what I’ve gathered this effect isn’t that large: the variance in genetics within a community, even an insular one, is still usually larger than the difference across communities.  That is, the average person from one place is more similar genetically to the average from another place than they are to a random person in their own place.

And also, this doesn’t mean you can’t prove descent from someone more than 10 generations past via DNA—that might indeed be possible by looking at where common bits of DNA are in the chromosomes and similar sorts of correlations (I would guess).

Anyway, the bottom line is that it’s fun to do family trees and learn about our ancestors where we can, but we definitely shouldn’t get too hung up on the idea that we’re learning about the origins of our genes and kinship via biology—even setting aside the fact of old family trees being full of adopted and “illegitimate” children, the actual genetics dilute out so fast it hardly matters past great-grandparents.

 

Measuring Stellar Masses with a Camera using Mesolensing

I love the Research Notes of the AAS.  They are a place for very short, unrefereed articles through AAS Journals, edited (but not copyedited!) by Chris Lintott. They are a great place for the scraps of research—those little results you generate that don’t really fit into a big paper—to get formally published and read.

You might think that without peer review and with such a low bar for relevance, such a journal would have a very high acceptance rate, but actually I’ve read it’s the most selective of the entire AAS family of journals, including ApJL!  The things it publishes are genuinely useful, which shows that there’s a need for publishing models for good ideas that are too small to be worth the full machinery of traditional publishing.  The curation by Chris also ensures that the ideas really are interesting and worthy of publication.

A while back I wrote a Research Note on how to prove the Earth moves with just a telescope and a camera. Nothing that would lead to novel results, but it has inspired some amateurs to try it out!

For my latest note, I’ve got another trick you can do with nothing but a telescope and a camera, although in this case they’ll cost billions of dollars and do something useful and novel!

Whenever I hang out with Eric Mamajek we end up talking science and coming up with cool ideas. This often ends with one of us starting an Overleaf document for a quick paper that never ends up getting written.  But the idea we had on my last trip was good enough that I was determined to see it through!

The idea goes back to Eddington’s eclipse experiment, wherein he showed that a gravitational field deflects starlight at the level predicted by General Relativity (which is twice the level one might deduce from Newtonian gravity).

To do this, he imaged the sky during a total solar eclipse, when he could make out stars near the sun.  Comparing their positions to where they were measured during the night at other times of year, he showed they were significantly out of place, meaning the Sun had bent their rays. Specifically, he found that they were farther from the Sun by about an arcsecond (in essence, the Sun’s focusing effects allows us to see slightly behind it, and so everything around it appears slightly “pushed away” from its center.)

This led to a great set of headlines in the New York Times I like to show in class:

[Image: New York Times headlines about the eclipse result]

This is an example of what today we confusingly call microlensing, a term that actually captures a broad range of lensing effects, as Scott Gaudi explained to me here (click to read the whole thread).

Microlensing is most obvious when a source star passes almost directly behind a lens star—specifically within its Einstein radius, which is typically of order a few milliarcseconds. This level of alignment is very rare, but if you look at a dense field of stars, like towards the Galactic Bulge, then there are so many potential lenses and sources that alignments happen frequently enough that you can detect them with wide-angle cameras.

In close alignments like this, the image of the background source star gets distorted, magnified, and multiply imaged, resulting in it getting greatly magnified in brightness, as shown in this classic animation by Scott.  Here, the orange circle is the background source star passing behind the foreground lens, shown by the orange asterisk. The green circle is the Einstein ring of the foreground lens.  As the background star moves within the Einstein ring, we see it as two images, shown in blue.

We typically do not resolve the detail seen in the top panel; we only see the total brightness of the system. The brightness of just the background source is plotted in the bottom panel.

But in the rare cases where we can resolve the action, note what we can see: the background star is displaced away from the lens when it gets close, just like in Eddington’s experiment. This effect is very small, just milliarcseconds, and has only been measured a few times. This is called astrometric microlensing.

Rosie Di Stefano has a nice paper on what she dubs “mesolensing”: a case where, instead of a rare rate of lensing occurrence among many foreground objects, as in traditional microlensing surveys, you have a high rate of lensing for a single foreground object.  This occurs for very nearby objects moving against a background of high source density, like the Galactic Bulge.

The reason is that the Einstein ring radius of nearby objects is very large—for a nearby star it is of order 30 mas, or 0″.03.  Now, there is a very low chance of a background star happening to land so close to a foreground star, but foreground stars tend to move at several arcseconds per year across the sky, so the total solid angle (“area”) covered by the Einstein ring is actually a few tenths of a square arcsecond per year, which is starting to get interesting.
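The arithmetic is short (using the representative numbers from the text):

```python
theta_E = 0.030  # Einstein ring radius in arcsec, typical for a very nearby star
mu = 3.0         # proper motion in arcsec/yr; nearby stars can move this fast

# The moving ring sweeps out a ribbon of width 2 * theta_E at speed mu:
swept_solid_angle = 2 * theta_E * mu
print(swept_solid_angle, "square arcsec per year")  # ~0.18: "a few tenths"
```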

Things are even more interesting if you don’t require a “direct hit”, but consider background stars that get within just 1″ or so of the lens: even though it’s 30 Einstein radii away, the astrometric microlensing effect is still of order 1 mas, which is actually detectable!
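For a point lens this is easy to estimate: the bright image of a source at separation u (in Einstein radii) appears at (u + sqrt(u^2 + 4))/2, so for u ≫ 1 the outward shift is about θ_E²/separation. A quick check of the numbers in the text:

```python
def astrometric_shift_mas(theta_E_mas, separation_mas):
    # Standard point-lens result: the major image of a source at u Einstein
    # radii appears at (u + sqrt(u^2 + 4)) / 2, i.e. pushed slightly outward.
    u = separation_mas / theta_E_mas
    return ((u + (u * u + 4) ** 0.5) / 2 - u) * theta_E_mas

# A 30 mas Einstein ring and a background star a full arcsecond away:
print(astrometric_shift_mas(30.0, 1000.0))  # ~0.9 mas -- still detectable
```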

Now, most of these background objects are very faint, so this isn’t usually something you can exploit. Twice, people have used the alignment of a very faint white dwarf with background stars to measure this effect, and it has also been done once with the faint M dwarf Proxima. But most main sequence stars are so much brighter than the background stars that their light will completely swamp them.

But detecting very faint objects within a couple of arcseconds of bright stars is exactly the problem coronagraphy seeks to solve with the upcoming Habitable Worlds Observatory!  This proposed future flagship mission will block out the light of nearby stars and try to image the reflected light of Earth-like planets orbiting them.  And while it’s at it, it will see the faint stars behind the nearby one at distances of a few to dozens of Einstein radii.

So, for target stars in the direction of the Galactic Bulge, HWO will detect astrometric microlensing! And it will do this “for free”: it will be looking for the planets orbiting the star, anyway!

So, who cares? Is this just a novelty? Actually, it will be very useful: measuring the astrometric microlensing will directly yield the mass of the host star. This is great, because we have almost no way of doing this otherwise: we need to rely on models of stellar evolution, which are great but still require conversion to observables, introducing systematic uncertainties of order a few %.  Directly measuring stellar masses will allow us to avoid those systematics, and better understand each star’s history—and that of its planetary systems.
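Here is the inversion in miniature, using the standard microlensing relation θ_E² = κ M π_rel with κ ≈ 8.144 mas per solar mass (the distances below are just illustrative):

```python
KAPPA = 8.144  # mas per solar mass: 4G / (c^2 AU) in microlensing units

def lens_mass_msun(theta_E_mas, parallax_lens_mas, parallax_source_mas):
    # theta_E^2 = KAPPA * M * pi_rel, where pi_rel is the lens-source relative
    # parallax; a measured deflection pins down theta_E, the parallaxes come
    # from astrometry (e.g. Gaia), and the only unknown left is the mass M.
    pi_rel = parallax_lens_mas - parallax_source_mas
    return theta_E_mas ** 2 / (KAPPA * pi_rel)

# A lens at 10 pc (parallax 100 mas) against a bulge star at 8 kpc (0.125 mas):
print(lens_mass_msun(28.5, 100.0, 0.125))  # ~1 solar mass
```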

Now, if we find planets with orbital periods of a few years or less, we can also measure the host star masses using Kepler’s Third Law, but this is an independent way to do this, and it also works on stars without planets. In principle, you could even go pointing HWO at all of the stellar mass objects towards the bulge to do this measurement, making it a pure stellar astrophysics engine (precise stellar masses don’t sell flagship missions like exoplanets do, though).

The final piece of this calculation was that we needed to know the background source density around likely HWO target stars. As luck would have it, my recent advisee Dr. Macy Huston had just graduated, and the final chapter of their thesis is on a piece of Galactic stellar modeling software that does exactly this calculation for microlensing! It’s called SynthPop, and you’ll hear about it soon; in the meantime, they were able to calculate how many background sources we expect around likely HWO targets for an example HWO architecture.

Macy finds that the best case, 58 Oph, will likely have over 15 stars in the coronagraphic dark hole that will show astrometric microlensing, giving us a ~5% mass measurement of the star every visit. These numbers are very rough, by the way—the precision could easily be better than this.

Anyway, this RNAAS was a lot of fun to write, and you can read all of the details in it here.

The bottom line is that HWO will be able to measure the masses of all sorts of stars towards the Galactic Bulge directly, with no model dependencies!

Enjoy!

Codes of Conduct at the PSETI Center

Why I am Responding Here

As the head of the PSETI Center I need to address a controversy and correct some factual errors circulating about the Codes of Conduct at various PSETI-related activities, one of which led to the abstract of an early career researcher (ECR), Dr. Beatriz Villarroel, being rejected from an online conference organized by and for ECRs, the Assembly of the Order of the Octopus.  This controversy has been brewing on various blog posts and social media, and recently became the subject of a lengthy email thread on the IAA SETI community mailing list.

Sexual harassment is widespread in science and academia in general, it is completely unacceptable, and, when these kinds of issues arise, our focus and priority as a community should be to protect the vulnerable members of our community.

Much of this criticism has been directed at the PSETI Center and specifically at its ECRs and former members. This discussion has also been extremely upsetting not only for these ECRs, but for researchers across the SETI community and beyond, especially those who have been victims of sexual harassment, and for minoritized researchers who need to know that the community they are in or wish to join will protect them.

For these reasons and many more, these attacks warrant a response, explanation, and defense from the PSETI Center.

Background

I don’t want to misrepresent our critics’ positions. You can read Dr. Villarroel’s version of events and their context for yourself here.

The background for this story involves Geoff Marcy, who retired from astronomy when Buzzfeed broke the story that he had violated sexual harassment policies over many years. Since then many more stories of his behavior have come to light, and the topic of whether it is appropriate to continue to work with him and include him as a co-author has come up many times. I am particularly connected with this story because Geoff was my PhD adviser and friend, and for a while I continued to have professional contact with him after the story broke. I have since ended such contact with him and apologized after discussions with my advisees and with victims of his sexual harassment gave me an understanding of why such continued association was so harmful.

This particular story begins at an online SETI conference in which Dr. Villarroel presented research she was doing in collaboration with Marcy, during which she showed his picture on a slide. This struck some attendees as gratuitous and as potentially an effort to rehabilitate Marcy’s image in the astronomical community. It also struck some as insensitive to victims of sexual harassment and assault, especially to any of Marcy’s victims that may have been in attendance.

Around this time, a group of ECRs in SETI decided to revive the old “Order of the Dolphin” from the earliest days of SETI, rechristened as the “Order of the Octopus.”  This informal group of researchers builds community in the field, and the PSETI Center is happy to have provided it some small financial and logistical support.  The Order decided to meet online during the COVID pandemic in their first “Assembly.”  As they wrote: “In designing the program for this conference, we are also striving to incorporate principles of inclusivity and interdisciplinarity, and to instill these values into the community from the ground up.”

I was not an organizer of the Assembly, but I think it is fair to say that they wrote their code of conduct in a way that would ensure that the image and presence of any well-known harasser like Marcy would not be welcome. This effectively meant that abstracts featuring Marcy as co-author would be rejected, and that participants were asked not to gratuitously bring Marcy up in their talks.

What happened: The Assembly of the Order of the Octopus and the Penn State SETI Symposia

More than one researcher who applied to attend the 2021 Assembly of the Octopus, including Dr. Villarroel, had a history of working with Marcy. To ensure there were no misunderstandings, those applicants were told in advance that they were welcome to attend the conference provided they abided by the code of conduct, and the relevant language was highlighted for them.

When the organizers of the Assembly learned that Dr. Villarroel’s abstract was based on work published with Marcy as the second author, they withdrew their invitation to present that work, but made clear she was welcome to attend and even to submit an abstract for other work.

Dr. Villarroel chose not to attend the Assembly.

Similar code of conduct language appeared in two later PSETI events, the Penn State SETI Symposia in 2022 and 2023.  Dr. Villarroel did not register to attend or submit an abstract for either symposium.

What happened: The SETI.news mailing list and SETI bibliography

Another source of criticism of the PSETI Center involved me directly. At the PSETI Center we maintain three bibliographic resources for the community: a monthly mailer[1] of SETI papers (found via an ADS query we make regularly), an annual review article, and a library at ADS of all SETI papers.

Dr. Villarroel wrote a paper with Marcy as second author which does not mention SETI directly, but only obliquely, via the question of whether images of Earth satellites appear in pre-Sputnik photographic plates. This paper did not appear in our monthly SETI mailer, and Dr. Villarroel contacted me directly to ask that it appear in the following month’s mailer.

I declined. As I wrote:

Hi, Beatriz. Thanks for your note.

Your paper slipped through our filter because it doesn’t mention SETI at all, or even any of the search terms we key on. Did you mean to propose an ETI explanation for those sources? At any rate, if you mean for it to be a SETI paper we can add it to the SETI bibgroup at ADS so it will show up in SETI searches there (especially if there are any followup papers regarding these sources or this method).

As for SETI.news, that is a curated resource we provide as a service to the community, and we have decided that we don’t want to use it to promote Geoff Marcy’s work. This isn’t to say that we won’t include any papers he has contributed to, but this paper has him as second author and, since I know his style well, I can tell he had a heavy hand in it.

Best,

Jason

Dr. Villarroel has taken exception to my message, saying that it implies she “wasn’t the brain behind [her] own paper.” I have also learned that Dr. Villarroel feels this implication is sexist.

My meaning here was simply that Marcy’s name on an author list wasn’t an automatic bar to our considering it—we were specifically concerned with recent work he had made substantive (and not merely nominal) contributions to. I think the first part of the offending sentence makes this clear. As someone who worked very closely with Marcy for many years (and as someone who is familiar with Dr. Villarroel’s other work) I felt that I could tell that he had more than a nominal role in the work behind the paper. I felt that this and his place on the author list justified that paper’s exclusion from the mailer.

But while it was certainly not my meaning, I do acknowledge the insidious and sexist pattern of presuming that papers led by women must not be primarily their own work, and that men on the author list—especially senior men—must have had an outsized role in it. Now that Dr. Villarroel has pointed this out, I do regret my choice of words, acknowledge the harm they’ve caused, and here apologize to Dr. Villarroel for the implication. For the record: I do believe that paper was led by Dr. Villarroel and is primarily hers.

Dr. Villarroel

While I obviously disagree with some of Dr. Villarroel’s interpretations of these events, I don’t think she has publicly misrepresented them. Others have, however, perpetuated misinformation in the matter.

Specifically, I want to make clear that Dr. Villarroel was never “banned” from any PSETI-related conference, and Dr. Villarroel is not being punished for her associations with Marcy with our codes of conduct.  The prohibitions at the PSETI symposia are targeted at harassers, and include work they substantially contribute to. Dr. Villarroel is welcome to attend these conferences in any event, and to present any research that does not involve Marcy.  She and her work have not been “cancelled”, and her work with Marcy appears in the SETI bibliography we maintain.

I also want to acknowledge the large power differential between me and Dr. Villarroel. I understand that I have some power to shape the field and her career, while she has almost none over my career. It is for this reason that I have avoided discussing her in public up to this point, or initiating any engagement with her at all.  If she were more senior I would certainly have defended our actions and pushed back on her characterizations of me and the PSETI Center sooner.

At any rate, I do not bear her any ill will and I absolutely do not condone any harassment of her. That said, I understand why people are upset that she would continue to work with Marcy, and they are entitled to express that displeasure, even in potentially harsh terms, especially in private or non-professional fora, as long as they are not “punching down,” doing anything to demean, intimidate, or humiliate her, or sabotaging her work.

The Order of the Octopus SOC

I also want to acknowledge the large power differential between many of the PSETI Center’s critics and the chairs and organizing committees of our various conferences who have contributed to the Codes of Conduct, many of whom are ECRs. This is another reason that I am responding here: to give voice to those who have far less power than I do and who are being attacked.

Critiques of the PSETI Center’s actions here should therefore be directed at me: I am the center director and conference chair of both symposia, and I take full responsibility for our collective actions here.

What Sorts of Codes of Conduct are Acceptable

Many have argued our bar against harassers’ work is completely inappropriate, being both unfair to Marcy and even more unfair to his innocent co-authors. I disagree, and argue that it is in fact an appropriate way to protect vulnerable members of the community who are disproportionately harmed by sexual harassment and predation.

As an aside, I note that the PSETI Center is not alone in this position; it is also consistent with our professional norms. I would point to the AAS Code of Ethics which includes a ban from authorship in AAS Journals as a potential sanction for professional misconduct. Such a sanction is analogous to a ban from authorship on conference abstracts.  It is true that this ban also affects innocent co-authors, but a harasser should not be able to evade a ban by gaining co-authors. That is not guilt-by-association for the co-authors; it is a consequence of a targeted sanction. It is certainly not harassment of those co-authors.

I admit I find this whole episode to be somewhat confounding.  A small group of ECRs got together to hold a meeting and had a no-harasser rule, this was enforced, and now years later it’s the subject of a huge thread on the IAA SETI community mailing list, the subject of Lawrence Krauss blog posts, the basis of an award by the Heterodox Academy, and creating so much drama that I need to address it here.

I also find it ironic that many complaining about the Order of the Octopus being selective about who they decide to interact with at their own conference are ostensibly doing so to protect the principle of…freedom of academic association. To be clear: Dr. Villarroel is free to collaborate with Marcy or anyone else she chooses. This is a cornerstone of academic freedom. Are the members of the Order of the Octopus not equally free to dictate the terms of their own collaborations and the scope of their own meeting, and to select abstracts as they see fit? Freedom of association must include the freedom to not associate, or else it would be no freedom at all.

Now, I acknowledge that there are limits to this freedom: one should not discriminate on matters that have nothing to do with science, especially against minoritized people. But that’s not what’s going on here: Marcy’s behavior is worthy of sanction, and our sanctions are entirely focused on harassers like him and their research, and only to protect vulnerable members of the community.  As I wrote, Dr. Villarroel is not guilty by association, and is welcome at future PSETI symposia, provided she abides by the Code of Conduct.

As for what behavior is appropriate towards those, like Dr. Villarroel, who choose to work with Marcy and the like, I think this is nuanced.  Especially in large organizations, we should honor people’s freedom of association and in general this means those people should not lose roles or jobs for this choice alone. There should be no guilt by mere association, especially by past association—indeed, as a longtime collaborator of Geoff’s, including for years after his retirement and downfall, I am particularly sensitive to this point.

But the choice to work with Marcy will have inevitable consequences. If you are working with him, many people will rightly not want to do work with you that might involve them with him, and there are excellent reasons why one might avoid working with those who have an official record of sexual harassment violations. My students are wary of working with groups that involve Marcy, because this has led to students finding themselves on conference calls with Marcy, finding themselves on author lists with him, and getting emails from him as part of the collaboration.  For me to honor my students’ freedom to not associate with Marcy, I have discovered the hard way that I need to be very careful with anyone working with him, and that I must turn my own interactions with him down to zero.

Affirmative Defense of our Codes of Conduct

At any rate, we’ve done nothing wrong. We’ve decided where we at the PSETI Center will draw the line on notorious sexual harassers like Marcy and I am confident it is the right choice for us. Other meetings and organizations will deal with this in their own way that might be different from or very similar to ours, but either way I’m confident that the majority of astronomers are comfortable with the choice we’ve made.

There is a troubling lack of empathy for the victims of sexual harassment in these abstract discussions about academic freedom. When a notorious harasser’s face and name and work pop up in a talk, we need to remember that their victims may be in the audience. Victims of other harassers may be, too. Allowing that to happen sends a message to everyone about what we, as a community, will tolerate, and whose interests we prioritize.

And the attacks on our code of conduct and the stance we have taken continue to do harm. The ECRs that helped write and enforce these codes are reminded that no matter how badly an astronomer acts, there will always be other astronomers there to apologize for them, to ask or even demand that their victims forgive them, to accept them back into the fold, to act like nothing happened, to insist that only a criminal conviction should trigger a response, to question, resist, and critique sanctions, and to attack astronomers that would insist otherwise.

If we, as a community, claim that we won’t tolerate sexual harassment, we need to show that we mean it by enforcing real sanctions that seek to keep our astronomers feeling safe. If we can’t do that for as clear and notorious a case as Geoff Marcy, then we can’t do it at all, and we will watch our field hemorrhage talent.

I am grateful to the many astronomers and others that passed along words of support to our ECRs as this criticism has rained in. I hope in the future more astronomers, especially senior ones, will speak up publicly to defend a strong line against sexual harassment in our community and show with their actions, voices, and platforms that all astronomers can be safe in our field.


[1] I should have been more precise in my language. James Davenport is the sole owner and operator of the seti.news website and mailer. For a while, I and other PSETI Center members supplied the data that populated it (we haven’t had the bandwidth for a while now, but hope to start up again soon). This is why the issue of Dr. Villarroel’s paper went through me.  You can read Jim’s position on the topic here.

Avi and Oumuamua: Setting the Record Straight

As an astrophysicist who searches for signs of alien technology beyond Earth, I’m often asked these days what I think about Avi Loeb.

Loeb, you might know, recently rose to public prominence with his claims that the first discovered interstellar comet, ‘Oumuamua, is actually a piece of an alien spacecraft passing through the Solar System.  Since then he has headlined UFO conventions, written a very popular book about his claim, and raised millions of dollars to study UFOs with his “Galileo Project” initiative. His latest venture with that money is to sweep a metal detector across the Pacific to find fragments of what he claims is another interstellar visitor that the US military detected crashing into the ocean, resulting in the headline “Why a Harvard professor thinks he may have found fragments of an alien spacecraft” in the Independent.  

Loeb has the credentials to be taken seriously.  He is a well-respected theoretical cosmologist who has made foundational contributions to our understanding of the early universe.  He served as the chair of the Harvard astronomy department, and leads the distinguished Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics.  He is well known as an outside-the-box thinker who is brave enough to be wrong often enough to occasionally be right in important and unexpected ways. He is a prolific paper writer, mentor to many students and postdoctoral researchers, and a leader in the community.  I, in particular, was strongly influenced by a lecture he gave on “diversifying one’s research portfolio” to include a lot of safe but passé research, some more risky cutting-edge work, and a small amount of outré science.  It’s important advice for any scientific field.

But his shenanigans have lately changed the astronomy community’s perception of him dramatically. His recent claims about alien spacecraft and comets and asteroids largely come across to experts as, at best, terribly naive, and often as simply erroneous (Loeb has no formal training or previous track record to speak of in planetary science, which has little in common with the plasma physics he is known for). His promotion of his claims in the media is particularly galling to the professionals who discover and study comets, who were very excited about the discovery of ‘Oumuamua but have found their careful work dismissed and ridiculed by Loeb, who is the most visible scientist discussing it in the media.

Most recently, his claims to have discovered possible fragments of an alien ship in the Pacific have been criticized by meteoriticists at a recent conference. Loeb claims the metallic spherules he found trawling the ocean floor are from the impact site of an interstellar object (dubbed CNEOS 2014-01-08), but they point out that the spherules are much more likely to have come from ordinary meteorites, or even from terrestrial volcanoes or human activities like coal-burning ships or WWII warfare in the area. And, they argue, CNEOS 2014-01-08 most likely did not come from outside the Solar System at all. (It also appears that Loeb may have violated legal and ethical norms by removing material from Papua New Guinean waters—you’re not supposed to just go into other countries and collect things without permission.)

Also frustrating is how Loeb’s book and media interviews paint him as a heroic, transformational figure in science, while career-long experts in the fields he is opining on are characterized as obstinate and short-sighted. His Galileo Project has that name because it is “daring to look through new telescopes.” In his book claiming ‘Oumuamua is an alien spacecraft, he unironically compares himself to the father of telescopic astronomy, Galileo himself. The community was aghast when he blew up at Jill Tarter, a well-respected giant in the field of SETI and one of the best known women in science in the world. (When Tarter expressed annoyance at his dismissal of others’ work in SETI, he angrily accused her of “opposing” him, and of not doing enough for SETI, as if anyone had done more! Loeb later apologized to Tarter and his colleagues, calling his actions “inappropriate”).  

It is true that there is much work to be done to normalize work on SETI and UFOs in scientific circles. Tarter herself has worked for decades to change attitudes about SETI at NASA and among astronomers generally, to get them to embrace the serious, peer-reviewed work to answer one of the biggest questions in science (as I’ve written about before). Scientifically rigorous studies of UFOs have also begun to make inroads, most notably with NASA’s recent panel advising it on the topic (Loeb was pointedly not involved; I must note that I see the UFO and SETI questions as scientifically unrelated). But Loeb’s work is unambiguously counterproductive, alienating the community working on these problems and misinforming the public about the state of the field. 

So it is against all of this background that, even when asked, I have generally stayed quiet lately when it comes to Loeb, or tried to give a balanced and nuanced perspective. I do appreciate that he is moving the scientific “Overton Window”, making SETI, which used to (unfairly) seem like an outlandish corner of science, seem practically mainstream by comparison. I appreciate the support he’s given to my work in SETI, and I generally discourage too much public or indiscriminate criticism of him lest the rest of the field suffer “splash damage.”

I have noticed, however, that Loeb’s work and behavior have been seen as so outrageous in many quarters that they essentially go unrebutted in popular fora by those who are in the best position to explain what, exactly, is wrong with them. This leaves a vacuum, where the public hears only Loeb’s persuasive and articulate voice, with no obvious public pushback from experts beyond exasperated eye-rolling that feeds right into his hero narrative.

So for the past several months, I’ve worked with Steve Desch and Sean Raymond, two planetary scientists and experts on ‘Oumuamua, to correct the record.  It has taken a lot of time: as Jonathan Swift wrote, “falsehood flies, and the truth comes limping after it.”  I read Loeb’s book on ‘Oumuamua, cover to cover, and carefully noted each of his arguments that ‘Oumuamua is anything other than a comet or asteroid. The three of us then went through and did our best to take an objective look at whether his statement of the evidence is correct, whether it really supports the alien spacecraft hypothesis, and whether it is actually consistent with ‘Oumuamua being a comet. No surprise, we find that under careful scrutiny his claims are often incorrect, and that there is little to no evidence that ‘Oumuamua is an artificial object. We’ve done our best in our rebuttal to avoid criticizing Loeb or his behavior, and to focus instead just on what we do and do not know about ‘Oumuamua. You can find our analysis here.  

There is little joy in or reward for debunking claims in science. We would all rather be finding new natural phenomena to celebrate than spending a lot of time correcting the mistakes or false claims of others published years earlier.

Because the truth is, we’re entering a new era of astronomy where we can for the first time contemplate studying samples from other solar systems, where we are seeing the first serious and comprehensive searches for signs of alien technology among the stars, and where truly new telescopes and methods are unlocking secrets of the universe that will thrill fans of science around the world, without any need for sensationalism. Now that we’ve addressed Loeb’s most outlandish claims about ‘Oumuamua, I’m excited to get back to work on it!

 

JWST Proprietary periods

NASA is reportedly moving towards ending all proprietary periods for NASA missions, including General Observer (GO) programs. This would mean that a researcher who wins JWST time in future cycles will not have any exclusive access to the data—it will be available to the world the moment it lands.

I wrote an Op-Ed for SciAm on the topic, which summarizes my position. You can read it here.  I summarize it below, but one thing I noticed is a huge split in reactions to my Op-Ed: astronomers near-unanimously agree that proprietary periods are important, while non-scientists don’t understand why the needs of astronomers should outweigh public access and moving science along as fast as possible. This seems especially clear on Reddit.


I’ve learned that the public doesn’t appreciate that a scientist who spends years developing an idea reasonably expects to get credit for it by publishing the final result. They see that as somehow selfish and bad for science. I need to keep this in mind when I bring it up in the future.

Anyway here’s the argument:


The push for ending JWST proprietary time is supposedly coming from the White House, which is promoting open access to research output and data upon publication. That’s great!

Zero EAP (exclusive access period) certainly has a significant role to play in astronomy, especially for survey data and programs that were designed with broad community input. Data generated *by* the community should be data available *to* the community. TESS and Kepler showed how powerful this can be.

But zero EAP is badly inappropriate for GO programs conceived and designed by small groups. Data generated *by* a small group should be available *to* that small group, so they can get full credit for their work. This is standard across science.

A citation to a proposal number in the acknowledgements of a scooping paper is not meaningful credit that a proposer can use in their career. I explore this more here.

Also, if GO programs have zero proprietary period there will be strong social pressures not to use those data for a while, especially if the PI is a student. But many astronomers will ignore these pressures, and others might not realize that a student is the PI of a program.

This will lead to unnecessary and difficult ethical challenges in the community. Zero EAP means we have to navigate these things through fuzzy and evolving community norms with little guidance. EAPs keep the rules clear and benefit everyone, keeping honest people honest.

NASA seems to be arguing fully open data is an equity issue, but zero EAP benefits well-resourced astronomers most. They are the ones who can afford to hire teams to hoover up archival data and quickly turn it around, scooping the PIs of the data. EAPs keep astronomy fair.

Zero EAP will be bad for the profession because it will encourage poor work-life balance as astronomers go into “crunch time” mode as soon as data land to avoid being scooped (or to scoop others). EAPs allow astronomers to work at an appropriate pace.

Zero EAPs will lead to sloppy results, as astronomers prize being first over being right. Good science takes time, and scientists should be *encouraged* to get it right, not balancing care against the risk of losing the whole project. EAPs keep astronomy rigorous.

Zero EAP is inconsistent with open access standards, which require data to be public upon publication. Forcing small teams’ data to be public before analysis would make NASA and astronomy outliers among the sciences.

Zero EAP is equivalent to requiring chemists to post their lab notebooks and raw data online as they do their experiments as a condition of winning grant money. It does not pass the smell test.

NASA has intimated that if zero EAP hurts under-resourced scientists, then the solution is to give them more resources. First of all: if NASA’s really going to address structural inequities across science, GREAT! But, can we do that first please?

Also: The relevant resource is time, and money is an imperfect substitute for this. Many researchers are not at institutions where they can “buy themselves out” or realistically hire a postdoc. It also doesn’t help students. EAPs are a narrowly tailored solution to the problem.

I encourage my colleagues to fill out the STScI poll with their opinion, and to share the poll widely in the field.  The survey closes Feb 15, 2023.

Why NASA should have a do-over on the name of JWST

The name of JWST, the James Webb Space Telescope, is in the news again.  If you’re not familiar with the story, I recommend the Just Space Alliance video here, which summarizes the case against keeping the name.

As I write this, I’m told a NASA report on James Webb’s role in the Lavender Scare and the firing of LGBT NASA employees is about to become public. I’ve been involved in this because I sit as an ally on SGMA, the committee which advises the American Astronomical Society on LGBTQ+ issues. On this committee, I took the lead on learning what NASA was doing about this, and I spoke with the Acting NASA Chief Historian, Brian Odom, about his research on it.

Below is how I see it.  If you think we should keep the name, please read the following with an open mind. Note, some of what appears below was drafted in collaboration with other SGMA members, as part of our recommendation to the AAS.

The name of the telescope really matters, and we need to get it right

The Hubble Space Telescope (HST) has shown that the name of NASA’s flagship observatories can become synonymous with astronomical discovery and gain deep resonance and symbolism among both astronomers and the public at large. Astronomers tout the discoveries of Hubble in interviews and public talks, they festoon their laptops and backpacks with Hubble mission patches and stickers, and some of the most talented young astronomers bear the title “Hubble Fellow.” For many members of the public, the Hubble Space Telescope may be the only scientific instrument or laboratory they can name.

Since JWST is in many ways a successor to HST, and is likely to occupy a similarly important role in astronomy and the public’s perception of the field, it is especially important that its name be appropriate, that it inspire, and that it be something everyone who works on and with it can be proud of.

Despite this, NASA gave the telescope an uninspiring name

When the name was announced, there was a distinct sense of confusion and disappointment in the community.  “Who’s that?” was the refrain.

I and many others sort of accepted it because we didn’t really think too hard about it, but it’s a huge missed opportunity. The name doesn’t inspire. When people ask why it’s called that, most astronomers shrug and say “he was the NASA administrator during the Apollo era” and move on to the next topic. It’s a name only a NASA administrator could love.

This isn’t to say that administrators don’t do important things that should be acknowledged! Administration is hard, and good administration is so valuable it absolutely should be celebrated. And perhaps if his legacy were different, astronomers would celebrate him and be glad to see his name on this telescope.

But the name just has no resonance here.

Despite this, NASA named the telescope with no input from stakeholders

NASA’s international partners were not involved in the decision. Astronomers were not involved in the decision. The people who built it were not involved in the decision. Lawmakers and policymakers were not involved in the decision. Elected officials were not involved in the decision.

The name was poorly chosen, and does not reflect NASA’s (purported) values

The decision was made by one NASA administrator, to name the telescope after another NASA administrator, and this name has been stubbornly kept by a third NASA administrator.

This is bad precedent, and the current fallout is a great illustration of why. In James Webb’s NASA, gay employees were fired. Clifford Norton was arrested, interrogated, and fired.

This is not the organization that today’s NASA aspires to be (we hope!).

It’s not too late to change it

NASA changes the names of space telescopes and missions all the time. It’s very common for things to have boring names on the ground (AXAF, SIRTF) and inspiring names once they’re working (Chandra, Spitzer).  We all adjust. It’s not a big deal.

At this point, NASA’s resistance has gone from stubbornness to recalcitrance. Already, NASA employees are refusing to use the name in prominent publications. The Royal Astronomical Society says it expects authors of MNRAS not to use the name.  The American Astronomical Society has twice asked the administrator to reopen the naming process (and received no response!).  This is an error that only grows as NASA refuses to fix it.

NASA needs to think about the people using the telescope

Think for a moment about the LGBT NASA employees working on JWST today. They want to be proud of their work, proud of the telescope, proud as LGBT NASA employees.

But just to use the name of the telescope is to name a man who, undisputedly, would have had them fired. This feels perverse to me.

Right now, the premier fellowship in astronomy is the Hubble Fellowship. When Hubble finally goes, will it become the Webb Fellowship? If you advise students, how would you feel recommending an LGBT student apply for that fellowship? How would you feel when they tell you they’re uncomfortable attaching the name of someone who undisputedly would have fired them to their career, to their CV, to their job title?

This, of course, isn’t just a “gay issue”. We all have LGBT colleagues, friends, and family. Beyond that, we want astronomy, space, and NASA to be inclusive and inspiring in all ways. What precedent does this whole fiasco set for that future we seek?

The telescope deserves a better name.  Astronomy deserves to have a telescope that reflects our values. America and the world deserve a telescope that inspires. Even those who are defending Webb have to concede the current name is not doing those things.

Let’s do better. Why not?


All that said, there is a lot of interest in the specific accusations of homophobia and bigotry by Webb. I’m pretty sure that will be the focus of the NASA report that’s about to come out and of most of the ensuing discussion.

I think this is a distraction. Now, the evidence seems to indicate that, at the very least, he did not see enough humanity in LGBT people to protect them from unjust policies. But regardless, his bigotry is not part of my argument for changing the name. (That said, if there is some sort of smoking gun document revealing his personal involvement in these firings or personal animosity towards gay people, that makes the case even stronger.)

And even though they are beside my point, I find most of the defenses of Webb lacking.  Here are some common ones I see and hear:

“All of the accusations against Webb (the misattributed homophobic quote, his place in the chain of command) are false.”

There is a long back story to how this issue came up, of a few specific accusations that turned out to be false, and others that turned out to be very true, and so on.  You can easily find it if you Google around or search on Twitter.

The bottom line is that he had a leadership role at State during the Lavender Scare and was chief administrator at NASA when LGBT employees were fired (and worse). This is undisputed, and it is enough.

“This is just a woke mob ‘canceling’ and smearing the name of an innocent man.”

This isn’t James Webb on trial. I’m not basing my argument on his being a nasty bigot, because even if he wasn’t we should still rename the telescope.

The standards for putting someone’s name on the most important scientific instrument of a generation should be very high, and there’s no shame in not having your name on it.

But what if he was, in his heart, not a bigot, and actually worked behind the scenes in undocumented ways to minimize the Lavender Scare? I think, given the balance of evidence, that this is unlikely, but just to entertain the logical possibility: in that case I’m sorry his legacy is caught in the middle of this, and I’m sure this is infuriating for his family and the people who respected him, but this is much bigger than James Webb and his legacy. Again, this is not “James Webb on trial”; it’s “what should we name the telescope?”

“Wasn’t Webb just a ‘man of his time’? Why should we judge people in the past by standards of today?”

This argument all but concedes he was a bigot, which is enough to rename the telescope. But, entertaining it:

First of all, plenty of people at the time understood that sexual orientation had no bearing on one’s ability to work at NASA. Most LGBT people understood that, for starters.

Secondly, the argument that it made them susceptible to blackmail by foreign adversaries, and so it was objectively reasonable to fire them, is not as strong as it looks. After all, one way to fix that problem is to make it absolutely clear to employees that if they are outed, they won’t lose their livelihood.  Every fired gay employee is a gift to potential blackmailers, handing them leverage over other closeted employees on a silver platter.

But even granting he was a man of his time, this argument completely fails.

Of course we are judging the namesake of the telescope by today’s standards.  Why would we choose any other? We are here today, with the telescope of today. Its name should reflect today’s standards! Why wouldn’t it?

“Don’t you worry that people of the future will ‘cancel’ great people from our time for moral lapses by future standards?”

I don’t worry about that at all. If I end up (in)famous for something and people in the year 2500 spit after saying my name because I ate meat from slaughtered livestock, which they consider an unspeakable evil—well, that makes sense right? Why would you celebrate people who lived lives antithetical to your values?

“Firing LGBT people at State and NASA was the law of the land at the time.  There’s little he could have done and he wasn’t directly involved anyway.”

If we concede that he was just doing his job, then we also concede away the only good argument for naming the telescope after him.  James Webb did not design or build the Saturn V rockets, he did not calculate the trajectories of the capsules, he did not walk on the Moon. He was a (by all accounts highly effective) administrator who oversaw those things.

If he gets credit for the good things that happened on his watch obviously he should get demerits for the bad.

“There’s no evidence he’s a bigot. His heart wasn’t in firing LGBT people the way it was in, for instance, integrating NASA.”

There’s a double standard at play where simply listing his (very impressive!) accomplishments at NASA is sufficient for justifying the name, but when it comes to bad things happening on his watch we need some sort of smoking gun, evidence of mens rea, to understand where his heart was on the matter.

Anyone demanding evidence of his bigotry should be ready to put forward evidence of his personal virtues on other items, not just lists of good things happening on his watch.

“OK: James Webb went above and beyond to integrate NASA. He gave an impassioned speech about it.”

Based on what I’ve seen, we really don’t know his views on race.  We do know that Johnson charged him with using NASA as a lever to integrate the South.  We do know he was a loyal foot soldier who understood the assignment and got it done.  It’s unclear to me what extracurricular activities he was doing to promote racial equality.

“But isn’t every name problematic? Everyone in the past had something that people today will object to.”

First of all,  I’m sure we can find people who didn’t have a demonstrated track record of ruining innocent people’s lives like Webb’s NASA did.

Secondly, the onus of solving the problem of what the perfect name is should not be on the people pointing out the current problem! This is a great question and one that obviously needs addressing before we name a project as important as JWST. NASA should put together a process for addressing it, which means reconsidering the name of the telescope!

 

Writing good telescope proposals

I used to chair the HET TAC (time allocation committee) at Penn State, and we didn’t have the bandwidth to give detailed feedback on proposals. But we did want to help out proposers, especially junior scientists, who were writing proposals that were not getting the time they requested (this is a SCHOOL after all!).

So as a compromise between that goal and the limitations of our resources, I drafted a generic “how to write a good telescope proposal” document, which I’m pretty proud of.

It’s not about how to do the technical parts, it’s about the actual “grantsmanship” involved. This can often feel like crass salesmanship, so part of my intent was to put the reader in the shoes of a TAC member who wants to give their proposal time so they can appreciate why this sort of thing is not just useful but actually an essential part of good science.

(On that note I recommend all astronomers who must compete for resources read Marc Kuchner’s Marketing for Scientists, which really helped me overcome my hangups about self-promotion, and appreciate the difference between honest (and necessary) marketing, and slimy, unethical salesmanship. In fact, one lesson I learned from that book is to write blog posts like this one!).

A lot of the advice I wrote works for any kind of proposal writing (fellowships, grants, jobs) so I’ve modified it to be generic, and pasted it below.

One piece that’s not in here is writing a proposal to meet the rubric criteria.  If you’re lucky enough to be proposing for time to NOIRLab or another body that publishes its rubric, then you can actually score your proposal against it yourself (or swap proposals with someone) to see where you can strengthen it.

OK, here it is. What have I missed?

Enjoy!

Writing a Great Telescope Proposal

Telescope time can be quite competitive, with oversubscription ratios of 2-10 or even higher. Because of this, a lot of meritorious science will not get the time requested.

This means that TAC members will select proposals not just based on their technical feasibility and scientific importance but on how well justified the science is, whether they are personally excited about the science, how well the proposal is written, and harder-to-define subjective criteria. Writing a winning proposal is not, then, just a matter of describing your science well, but of conveying to the committee your own sense of the excitement and importance of the work. Doing this well is an art.

To illustrate this: consider two proposals, both alike in worthiness, in the fair conference room where we lay our scene of the TAC deliberations. Forced to choose between them for the last 2 nights of time, the committee cannot help but consider these factors:

  1. Following the directions:
    One proposal has 12 point font throughout, 1 page of figures and references, and all the text stays in the box, just as the instructions required.

    The other has target tables with illegibly small entries, violates the page limits, and has text bleeding outside of the boxes. The TAC gets the sense that the proposers are trying to unfairly include more information than other proposers were allowed to include, and also that the proposers are not giving the proposal process its due attention.
  2. Justifying the request for time:
    Telescope time is precious, and the TAC needs to know how much you really need to succeed.

    One proposal has a signal-to-noise calculation rooted in the underlying science (see the sketch at the end of this section). The TAC has a good sense of where the number comes from. The contingency section has a careful description of what would happen if the proposal got, say, half the time requested.

    The other is requesting visits to ten targets with no prioritization, with exposure times calculated for SNR of 100, with no justification for that number.

    One proposal notes that the time previously awarded for the project by the TAC resulted in data that has been reduced; shows a figure illustrating how the data can be translated into compelling science; and explains why additional observations are needed in order to publish. The other proposal notes that it was awarded time previously, but does not mention whether it was reduced or not, or why they need additional time.
  3. Justifying the request for queue priority / temporal restrictions:
    One proposal has calculated the number of nights in the semester during which the observations could be made, justifying its cuts on airmass and moonlight contamination.

    The other proposal has a brief statement that they need the tight constraint because their “observations are time sensitive.”
  4. Having a compelling figure:
    One proposal synthesizes what makes the science so compelling in an easy to read figure. It has large font, is not too busy, uses multiple, redundant point/line properties to clearly illustrate a third dimension, and conveys a few key ideas. From the figure the TAC quickly understands (for instance) the strength of the signal expected by the proposed observations, the new physical parameter space explored by them, or the factor by which the number of such detections will increase if the time is awarded. The caption text explains exactly what the TAC members should understand by looking at the figure, and connects it to the proposal text.

    The other proposal has a very hard to read and interpret figure filled with extraneous information, perhaps because the figure was taken from another context with little or no modification. The colorblind TAC member cannot distinguish the points and so needs it explained to them by the other members. The TAC members spend a lot of time arguing about what it is trying to convey because the caption, while technically accurate, does not interpret the figure in the context of the underlying science.
  5. Showing a clear path to an important result:
    One proposal shows that these observations will triple the number of examples of a newly appreciated phenomenon. It connects this phenomenon to an important question in astrophysics, and illustrates how this is a result that the community will be excited to see, regardless of the outcome. The proposal explains the reasons this exciting science has not been done before, emphasizing the competitive advantage this telescope offers the TAC host institution in answering the question, so the TAC understands why this is an excellent use of the telescope.

    The other proposal is to observe a few more examples of a phenomenon that, as far as the TAC can tell, has been observed dozens of times before with other instruments. The proposal argues simply that the observations “will inform studies” of the phenomenon.
  6. Arguing with strong prose:
    One proposal is easy to read, written in the active voice with tight, forthright prose that has been proofread and polished by the co-authors. The scientific justification lays out the problem being addressed clearly, emphasizing the place of these observations in the broader scientific landscape. The TAC members finish reading it quickly, with a good sense of the nature and importance of the work. A small number of key messages are in boldface or italics, so the TAC can quickly find them when deliberating.

    The other proposal was written in a single draft a few hours before the deadline. It is written in dense, highly technical prose in the passive voice, and filled with technical hedging, irrelevant qualifications, and unnecessary verbiage. Some references are malformed, some words are misspelled, and there are many run-on sentences. The TAC members have to reread sections of it multiple times before they can quite parse what is being conveyed. The TAC members finish reviewing the proposal with a vague sense of the importance of the science and the way the observations fit in. During deliberation, there are long pauses while TAC members hunt for a key piece of information they think they remember reading.

Clearly, the first proposal will (and should!) get the time, and the second will not. This is because even though the actual merit of the science is identical for both proposals, the first proposal makes that merit easy to see, and the second does not. So make sure you are writing the first proposal, not the second one!
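To make point 2 concrete: the sort of time justification a TAC wants to see often boils down to a few lines of arithmetic. Here is a minimal, photon-limited sketch in Python with made-up numbers (a real proposal would use the observatory’s own exposure time calculator and noise model):

    # Minimal sketch of a photon-limited exposure time justification.
    # All rates and requirements here are illustrative, not from any
    # real instrument.

    def exposure_time(target_snr, source_rate, background_rate):
        """Seconds needed to reach target_snr on a source giving
        source_rate counts/s against background_rate counts/s.
        From SNR = S*t / sqrt((S + B)*t), i.e. t = SNR^2 * (S + B) / S^2."""
        return target_snr**2 * (source_rate + background_rate) / source_rate**2

    # Science requirement: detect a 2% line depth at 5 sigma, so SNR ~ 250.
    target_snr = 5 / 0.02
    t = exposure_time(target_snr, source_rate=50.0, background_rate=5.0)
    print(f"{t:.0f} s per visit")  # ~1375 s, which justifies the request

The point is that every number traces back to a science requirement, so the TAC can check the chain of reasoning for itself.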

 

The Geopolitical Implications of a SETI Detection

A couple of years ago I posted about what I felt was a misguided paper by Wisian & Traphagen about what they felt was an underappreciated danger of SETI: not that we might find something dangerous out there, but that finding something would trigger a geopolitical fight over the discovery that would endanger the personal security of the scientists involved, and their families.

Briefly, they imagine that the discovery would involve communication between Earth and a technologically advanced culture, which would be monopolized by the country responsible for the discovery.  Since this monopoly might grant that country a huge technological and military advantage, “realpolitik” analysis predicts a subsequent cascade of political, espionage, and even military struggle among nations, with radio telescopes and the scientists involved caught in the middle.  The authors recommend that radio telescopes have hardened security, like nuclear facilities, and that SETI practitioners consider personal security for themselves and their families.

In my post, I broke down what I felt were the shortcomings of the paper: that the contact scenario they envision was highly contrived, and that their recommendations were unnecessary. I also discussed how it would have been a better paper if they had consulted actual experts in SETI before publishing about how it works.

Well, taking my own advice, I asked around for experts in international law who could help me write a proper rebuttal.  Twitter to the rescue!

Gabriel Swiney at the time worked at the State Department, where he was an architect of the Artemis Accords. Exactly the sort of expert that should weigh in on this sort of thing!  Although we had never met (and still haven’t!) IRL, we got to work drafting a response.

Later, I started interacting with Chelsea Haramia, a philosopher at Spring Hill College (we discussed emergence here), who joined the effort. She helped us pick apart the realpolitik component of the W&T paper.

We summarize the narrowness of the Wisian & Traphagen analysis as requiring 9 elements:

1) the signal must be from one of the nearest stars, 2) communicative, 3) intelligible, and 4) information rich; 5) it must be strong enough to provide dense information content, but 6) weak enough that only the largest telescopes or telescope arrays can detect it; 7) a small number of exchanges must be sufficient to derive information about “new physics”; and 8) this new physics must be powerful enough to be translated into a dominating technology, but 9) it is not so “advanced” that we have no hope of quickly understanding and implementing it.

While such a scenario is salient (it and the subsequent geopolitical fallout are essentially the plot of Arrival and Contact), we take issue with each of the 9 points.

We next take aim at the use of realpolitik to analyze the situation, which holds that nations’ actions are ultimately guided by the idea that “power only respects power”. While such an analysis might motivate some state-level actions, we point out both the theoretical and empirical flaws in that analysis.

Now, Wisian & Traphagen take pains to point out that this realpolitik analysis does not have to be correct; it (and the 9-point scenario above) only has to be plausible to warrant advance planning to deal with it.  However, we point out that it’s not enough for a scenario to be plausible, it has to dominate a competition with other potential future outcomes in order to be action-guiding.  We mention other scenarios that argue for different reactions, and ask why it is the realpolitik scenario that should control our actions.

Furthermore, we argue that following Wisian & Traphagen’s advice and hardening security at SETI facilities (in addition to radically hampering radio astronomy) would be counterproductive.  In addition to being ineffective (Wisian & Traphagen seem to think that securing only a small number of large facilities would allow for a long-lived information monopoly on a signal from space), such actions could lead to the perception that some important military technology had been gleaned from contact with aliens, and thus trigger the kind of fallout Wisian & Traphagen are worried about. We argue that rather than taking such fallout as a foregone conclusion, we should avoid the scenario in the first place.

We point to international collaboration on a wide variety of sensitive topics, including the management of nuclear fusion technology under ITER, as familiar examples of alternative frameworks for avoiding international strife. We also argue that educating the relevant policymakers on the nature of SETI (including the virtual impossibility of an information monopoly and the extreme unlikelihood of it being militarily useful) in advance is a far more effective way to prevent the ills Wisian & Traphagen foresee.

Finally, we argue that a policy of open data sharing and transparency is an antidote to those ills, and that this stance is one the community currently leans towards.

In the end, we conclude a lot of work needs to be done in the realm of post-detection protocols to protect SETI researchers and ensure an eventual discovery does not do harm here on Earth.

It was great doing this sort of interdisciplinary work (humanities, social sciences, law, and physical science!). I recommend it!

After two and a half years of work (I’ve never had such a drawn out review process!) the paper has been accepted to Space Policy (where the original article appeared) but you can find it on the arXiv here.

Enjoy!

 

First Artillery Punch

When I was little a wintertime tradition was the preparation of “Artillery Punch,” which I understood to have been derived from a military tradition from my grandfather’s time in the service. It was chilled and served outside in the snow.

Recently, my mother found the recipe we used, which appears to be a stained photocopy of a stained typewritten original. Here it is:

A photograph of stained paper with a typewritten recipe for the punch

The recipe for First Artillery Punch

The text reads (with my annotations as footnotes):


Given to us1 by General Ruhlen2   Fort Banks 1960

As a memento of this occasion, herewith the recipe of the concoction you have been drinking.

This is alleged to be First Artillery Punch,3 and in view of its ingredients, which were in common use some 100 years ago, it has a certain ring of authenticity about it. It was given to my father about 50 years ago by Colonel Marshall Randol, who in turn got it from his father4 who was the Commanding Officer of the First Artillery Regiment in the Civil War. The elder Randol stated that this was a recipe which was frequently used before, during and after Civil War times by the First Artillery.

Prepare a pint of triple strength black tea and a pint of triple strength green tea and blend the two together.

Place in the punch bowl or a suitable container about 1/3 of a pound of loaf sugar5. Grate upon it the rinds of 3 lemons, then their juice, and the juice of 2 oranges.

Pour over all the boiling hot tea mixture. Stir well and put aside to cool, covering the container to prevent the escape of the aroma.

When perfectly cool, stirring slowly, add 1 quart of Jamaica Rum (not the light bodied Puerto Rican variety); then 1 quart of good sherry, and then 1 pint of good brandy. Mix the ingredients well and chill. Years ago the chilling was accomplished by surrounding the container with snow or ice.

When ready for use place a block of clear ice in the bowl and then to the mixture add a quart of champagne which greatly improves the punch and gives it life.

I understand that prior to the Civil War apple or peach brandy was used instead of champagne. The quantities as given above are suitable for small groups, such as we found on one or two company posts—about 25 people. I was also told that when entertaining other branches of the service it was necessary to dilute the punch with an equal amount of mineral water or tea, but this seems an unnecessary degradation of good punch.


1 This italicized text is handwritten. Presumably given to my grandfather Elwood “Van” Hattersley’s family when he was stationed there or attending a function there at Fort Banks, in Massachusetts.

2 Presumably Maj. Gen. George Ruhlen (1911-2003) son of Col. George Ruhlen Jr. https://corregidor.org/archive/ruhlen/mills/html/mills_03_07.htm

3 Not to be confused with Chatham Artillery Punch, a similar drink: https://en.wikipedia.org/wiki/Chatham_Artillery_Punch

4 Alanson Merwin Randol (1837–1887) https://en.wikipedia.org/wiki/Alanson_Merwin_Randol

5 Also called sugarloaf, a hard form of sugar common before the introduction of granulated sugar and sugar cubes. https://en.wikipedia.org/wiki/Sugarloaf

Is SETI a good bet? Part III: Ambiguity and Detectability

In Part I, I examined the claim that technosignatures must be less prevalent than biosignatures, and showed that while that certainly could be true, the opposite is actually quite plausible, and by a huge factor.

In Part II we looked at the longevity term and, again, found that even though technology has been on Earth for much less time than life has, it’s still possible, and even plausible, that its typical lifetime in the Galaxy is actually much longer than that of life.

In this part, we look at two more criteria: detectability, and ambiguity.

Detectability

How detectable are technosignatures?  Except for a few things like radio and laser transmissions, it’s not actually very clear. Most technosignature strengths have not been worked out in detail!  An ongoing project led by Sofia Sheikh aims to determine how detectable Earth is via its own technosignatures.

Héctor Socas-Navarro proposed a nifty metric called the ichnoscale, which compares a technosignature’s strength to that of the equivalent signature produced by Earth today. So Earth today has, by definition, i=1 for all of its technosignatures.  How does their strength compare to our biosignatures?

If you ignore one-offs like the Arecibo Message, it’s actually not clear what our “loudest” technosignature is.  Observers at stars that see Earth transit could try to measure our atmospheric composition, and Jacob Haqq-Misra has worked out roughly how hard it would be to detect our CFCs, and Ravi Kopparapu has done something similar for NOx.  Both would be very challenging to detect…but then, so would our ozone and methane.  Which is stronger? I’m not sure.

I do know that the full SKA is supposed to be sensitive enough to have a shot at detecting our regular aircraft radar emissions at interstellar distances in the coming decades. This means that being able to detect ichnoscale=1 technosignatures is a few decades out, and that feels similar to the time before we could detect biosignatures around an Earth analog.

The bottom line is that we don’t know whether Earth’s technosignatures are more or less detectable than its biosignatures with Earth technology from nearby stars, but it’s probably a close call, and it could easily be that technosignatures win.

Ambiguity

The ambiguity of technosignatures depends on the signature. Waste heat from Dyson Spheres is quite ambiguous: any circumstellar material should generate waste heat. A narrowband radio signal, however, can only be technological (although its origin could be ambiguous).

Waterfall plot of the Voyager I carrier wave

Dynamic spectrum of the Voyager I carrier wave—a clear example of an unambiguous technosignature

So technosignatures run the gamut. Clearly, searching for an unambiguous one is better on that score, but ambiguous ones may require less contrivance—waste heat is an inevitable consequence of energy use, but there’s no reason aliens would have to use narrowband radio transmitters. Balancing this requires thinking about the axes of merit of technosignatures.

But the same is true for biosignatures! There are examples of what an unambiguous detection would look like (microbes swimming in Europa’s subsurface ocean), but there are plenty on the other end, too, especially for remote detection: detecting oxygen or methane in an alien atmosphere is a potential biosignature, but both species can also be generated abiotically.

Even identifying something that would serve as an “agnostic” (not specific to Earth life) and unambiguous biosignature is a major challenge in astrobiology. The most probable path to success, IMO, is identifying a “constellation” of ambiguous biosignatures that together suggest strong disequilibrium chemistry maintained by metabolism (oxygen and methane together, for instance).

So as far as ambiguity goes, biosignatures and technosignatures share the same problems, and neither has a clear advantage. Both have many examples of ambiguous signatures, and both can offer examples of clean detections.

Conclusions

This last point illustrates something important: biosignature searches and technosignature searches have a lot in common. Both search for the unknown, trying to balance being open-minded about what there is to find while letting what we know about Earth life inform us. Both struggle with identifying good signatures to hunt for, how to handle ambiguity, and how to interpret null results.

But the two communities don’t talk about these issues much with one another. Indeed, astrobiologists have called for and launched an ambitious project to nail down standards of life detection without acknowledging or even mentioning the significant work on the topic over in SETI. Similarly, technosignature searches would benefit from this sort of rigorous exercise.

I hope our new paper will inspire better cross-pollination between the two communities, and a better balance of effort between the two methods of finding life. Since we don’t know which has a better chance of success, we should follow a mixed strategy to maximize our chances.

Our paper, written with Adam Frank, Sofia Sheikh, Manasvi Lingam, Ravi Kopparapu, and Jacob Haqq-Misra, is now published in Astrophysical Journal Letters.

Is SETI a good bet? Part II: Drake’s L for biology and technology

In Part I, I laid out part of the argument for why SETI is not worthwhile compared to searches for biosignatures. In this part, I’ll address the next big part of the argument: longevity.

Longevity

Last time we summarized an argument for looking for biosignatures instead of technosignatures as:

N(tech)/N(bio) = ft · Lt/Lb << 1

Let’s take a look at that last ratio: the time technosignatures are detectable on a planet to the time biosignatures are detectable. Need it really be small?

The zeroth-order way to think of it is in terms of the history of the Earth: biology has been here for around 4 billion years, and biosignatures detectable from space for some significant fraction of that—at least 1 billion years, say.  But our technosignatures only just recently got “loud” enough to be detectable—decades ago, say.  That’s a factor of about 10⁻⁸!

And that’s fair, but there are a few reasons to expect that in some cases it could be much larger, and even greater than 1!

Humanity may be a poor guide to longevity

Lots of people are pessimistic about our future on Earth, but we should be careful not to project our perception of human nature onto aliens. Technological alien life may be very different from ours, with a different evolutionary trajectory and relationship with its planets.

But even if we do use humanity as a guide, we should not assume we have a good sense of what our longevity is. Unless something happens that makes humans go extinct, there’s no reason to think our technosignatures will permanently disappear.  Even ecological disasters, nuclear war, or deadly pandemics, horrible as they would be, are unlikely to actually erase Homo sapiens from the planet completely.  Over geological or cosmic timescales, we might have many periods of tragedy and low population, but that only keeps the duty cycle less than 1; it does not shrink Lt by orders of magnitude.

And, of course, technology doesn’t just give us a way to end our species’ existence, it offers a way to save it.  We can, in principle, deflect killer asteroids, cure pandemics, and alter ourselves and our environment to survive in places and times our biology alone would not allow.

In other words, it’s not at all clear why Lt for humans could not be orders of magnitude larger than it has been in the past.

And, even if you are pessimistic about humanity’s actual tendencies to do those things well, there’s no reason to project that pessimism onto other species.

Humans aren’t Earth’s only intelligent species

Further, even if humans do go extinct soon, that would not necessarily end Earth’s technosignatures! Most or all of the traits that make humans so good at building “big” technology with detectable technosignatures exist elsewhere in the animal kingdom. Tool use, social behavior, communication and language, problem solving, generational knowledge—all of these things can be found not just among distant relatives in Chordata but over in Mollusca, too, in the squids and octopi.

Image of a group of squid

Squid communicate, hunt, and live together.

That’s important for two reasons: one is that it means that there is no reason another species could not arise that exhibits our level of technology, even in the sea.  Another is that since molluscs and mammals share no intelligent common ancestors, these traits aren’t accidents of humans’ particular evolutionary pathway, but have arisen independently multiple times. This means they can arise independently again, and not just on Earth!

Also, surprisingly, we can’t even be sure something like our level of technology has not existed on Earth in the past!  Adam Frank and Gavin Schmidt have advanced the Silurian Hypothesis, that such a thing has happened—not because there is any evidence for it, but to point out the surprising fact that we have almost no evidence against it.  In fact, all evidence of Earth’s “Anthropocene” will have either disappeared or become ambiguously technological after a few million years.

In other words, we need to include other species in Lt for Earth, and we don’t really have any evidence-based reason to think that number is zero, either in Earth’s future or Earth’s past.

Technological longevity has no obvious upper limit

Lb, the lifetime of biosignatures, has a hard upper limit: the inevitable evolution of the planet’s host star, which will eventually scorch the surface and could even engulf the planet entirely.

Technology, however, has no such upper limit. In principle it could allow us to survive on a much hotter Earth, and, as we discussed in Part I, it can spread to other places, where it can persist long after the biosphere that generated it is gone.

So there is no reason that Lb must be larger than Lt, and the latter has a higher upper limit.  In other words, there could be alien species—or even future Earth species, like us—that build technology for longer than life exists on Earth.

Technology can exist without biology

Technology can outlive its creators. Obvious Earth examples include the Great Pyramids, which would serve as markers that humans were here and did amazing things, long after something happened to humans.  Our interstellar probes will last a very, very long time, no matter what happens to us.  And so such technology on a grander scale from an alien species might be detectable for a very long time.

But also, technology might be able to self-perpetuate. Much has been written about the possibility of self-assembling machines and AI that would allow machines to spread and reproduce much like life. This possibility is almost certainly not limited by any fundamental engineering or physical principle, and the fact that we can even outline how it could be accomplished with something like existing technology suggests it may not be far off.  Unbound by a biosphere, why couldn’t Lt>Lb?

The bottom line on longevity

As we write in our paper:

Using Earth as a guide for our expectations for Lt/Lb is probably unreliable because we do not know Lt for Earth’s past; even if we did we should not use it to predict Lt for Earth’s future, and even if we did, we should not expect Earth to be a good guide for alien life, and even if we did, we should expect a broad distribution of longevities across alien species.

…and we should also not assume that technology must be bound to biology at all!

Next time: Ambiguity and Detectability.

Is SETI a good bet? Part I: What the Drake Equation misses

There are two major ways we can look for alien life:  look for signs of biology or look for signs of technology.

SETI includes searches for the latter—technosignatures. These might include big bright obvious information-rich beacons (in the radio or with lasers, for instance), or they might be passive signs of technology like waste heat, pollution, or leaked radio emission from radar and communications.

I have often seen the argument that this is nice to look for but must be less likely to work than searches for biosignatures. The flaws in this argument have been pointed out and analyzed for as long as SETI has been a thing (most of them were hashed out in the ’60s). But that discussion isn’t actually familiar to most astronomers and astrobiologists, and so working with the CATS collaboration led by Adam Frank (for Characterizing Atmospheric Technosignatures) I’ve written a paper summarizing them.

In this series of posts I’ll break down the argument, following our new paper in Astrophysical Journal Letters.

The Role of the Drake Equation

One part of the argument goes back to the Drake Equation.

Frank Drake in front of a whiteboard with a pen pointing at his eponymous equation

The man himself with the equation

Let’s look at why:

If we just count up the parts of the Drake Equation that lead to some kind of life to be found, we might end up with something like

N(bio) = R* · fp · np · fl · Lb

where R* is the rate of star formation, multiplied by the usual fraction of stars with planets (fp), the mean number of planets per planet-bearing star that can support life (np), and the fraction of those planets on which life arises (fl).

Here, Lb is the lifetime of detectable biosignatures. N(bio) is then the number of biosignatures there are to find out there.

We can also rewrite the full Drake Equation in a similar manner for any technosignature:

N(tech) = R* · fp · np · fl · ft · Lt

Here, we’ve added ft, the fraction of life-bearing planets where technology arises, and now Lt is the lifetime of detectable technosignatures.

Based on this reasoning, SETI looks like a terrible idea compared to searches for biosignatures! It’s a tiny, tiny subset of all possible ways to succeed, because (this reasoning goes):

N(tech)/N(bio) = ft · Lt/Lb << 1

Why? Because ft <= 1 by definition, and since you need to have non-technological life before you can have technological life, Lt<Lb. This would seem to justify the huge imbalance in time and money that NASA spends on astrobiology in general compared to SETI (which has been almost zero until recently).
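To see the force of this reasoning (and the hidden assumption we’ll get to in a moment), here’s a toy version in Python; every parameter value below is invented for illustration, not a measurement:

    # Toy illustration of the naive abundance argument against SETI.
    R_star = 1.0                 # stars formed per year in the Galaxy (order unity)
    fp, n_p, fl = 0.5, 1.0, 0.1  # planet fraction, habitable planets per star, life fraction
    Lb = 1e9                     # years biosignatures stay detectable
    ft, Lt = 0.1, 100.0          # fraction of life sites developing technology; tech lifetime (yr)

    N_bio = R_star * fp * n_p * fl * Lb
    N_tech = R_star * fp * n_p * fl * ft * Lt
    print(N_tech / N_bio)        # = ft * Lt / Lb = 1e-8: SETI looks hopeless

    # But if technology can spread, each site of biology can seed many
    # sites of technology, a factor the equation ignores (see below):
    n_sites = 1e11               # settled sites per technological species (Fermi limit)
    print(N_tech * n_sites / N_bio)  # now 1000: the inequality flips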

This is the abundance argument against technosignatures, and it is wrong, for many reasons! Let’s take a look at why.

Abundance

First of all, let’s think about the Solar System.  N(bio) is, as best as we can tell, exactly 1.  If there are other biosignatures in the solar system, we have not noticed them yet, so they must be very hard to detect.

And what is N(tech)?  Well, based purely on what we can detect with our equipment it’s at least 4!  Earth is loaded with technosignatures, but we also detect them from Mars all the time, and Venus and Jupiter also have them. We also have several active interplanetary and interstellar probes, and many many more derelict objects are out there too.

This gives us our first clue about how the reasoning above fails: if technology can spread through space, then one site of biology can give rise to many sites of technology.

And this, of course, has been appreciated by SETI practitioners for decades. It’s the basis of the Fermi Paradox, which asks why alien life hasn’t spread so thoroughly through the Galaxy that it’s already here right now. Drake’s equation is based on the idea that it’s easier to communicate via radio waves than to travel via spacecraft, but of course one doesn’t preclude the other, and if both are happening, then N(tech) could be much larger than the equation says.

This is not really a major failing of the equation, whose original purpose was to justify SETI.  After all, if you can conclude there is something to find in the absence of spreading, that’s a sufficient condition to go looking.  The equation is often misinterpreted as foundational, like the Schrödinger Equation, as if you can calculate useful things with it.  Instead, it’s best thought of as a heuristic, a guide, and an argument.

So, how large could N(tech) be? Well, in the limit of the Fermi Paradox reasoning, it could be upwards of 100 billion, even for a single point of abiogenesis!  We’ve written about this before, for instance here.

So, the argument isn’t that this will happen, just that N(tech) has a higher ceiling than N(bio).  This long tail out to large possibilities (both in the sense that we are ignorant of the right answer, and in terms of a distribution among all of the alien species) means that it is not just possible but plausible that SETI is much more likely to succeed than other life detection strategies.

Next time: The second reason to do SETI: technosignatures may be long-lived.

State College is the Cultural Capital of the US

If you like culture and road trips, I think my town of State College, PA is the best place in America to live! You can see more culture in a reasonable drive from here than from anywhere else.

Locals like to say that State College is “centrally isolated”: we aren’t near anything, but there are lots of cities with good culture all 3-4 hours away (almost like they’re avoiding us!). So when we moved here, Julia and I agreed we would not fear the road trip!

Now Central PA has plenty of charms, especially if you’re an outdoorsy type (we’re the fly fishing capital of the world, there’s Amish Country nearby, State College itself gets plenty of culture at our theaters and big arena, etc., etc.). But in all honesty, it’s probably not all that much more than a typical town or city with a large university. So that’s not what I mean. Obviously, if you want to spend a few days soaking up cultural experiences, it’s better to be in a big city.

A reasonable road trip is about 4 hours. That’s long enough to count as a road trip, but short enough that you don’t lose a whole day to travel, it’s possible to make the trip in a single shot, if you like, and there’s no reason to fly. So that’s our radius: 4 hours on the road. That gives us New York, Cleveland, Pittsburgh, Philadelphia, Baltimore, and Washington. That’s a lot!

In fact, I think State College has access to more great cultural experiences than any other place in America.

To quantify this, we need a proxy for “culture.” I’ll choose NFL and MLB stadiums, not because sports==culture, but because active major league stadia are an easy-to-count proxy for “cultural things to do”. A city with more than one stadium probably has more culture than one with only one (New York City > Buffalo). It’s imperfect, but simple.

So get this: there are fourteen such stadia within a 4-hour drive of State College*! 7 MLB, 7 NFL—that is almost a quarter of all of them! We can reach Pittsburgh and Cleveland to the West, Buffalo to the North, and all of the big Eastern Seaboard cities from Washington up except Boston.

I think these numbers are higher than anywhere else in North America (in each league and in total). I taught myself the Google Maps API to calculate this properly. My function accepts a location and a driving range, and it returns the number of stadia within that range.
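The real thing depends on my API key and the full venue list, but the gist looks something like this sketch using the googlemaps Python client (the stadium list here is truncated, and YOUR_API_KEY is a placeholder):

    # Sketch of the stadium-counting function. The venue list is truncated
    # here for illustration; a real run includes all active MLB/NFL stadia.
    import googlemaps

    STADIA = [
        "Citi Field, Queens, NY",                # MLB
        "PNC Park, Pittsburgh, PA",              # MLB
        "Lincoln Financial Field, Philadelphia, PA",  # NFL
        # ...and the rest of the ~60 active MLB and NFL venues
    ]

    def stadia_within(gmaps, origin, max_hours):
        """Count stadia reachable from origin within max_hours of driving."""
        count = 0
        # The Distance Matrix API limits destinations per request, so batch:
        for i in range(0, len(STADIA), 25):
            result = gmaps.distance_matrix([origin], STADIA[i:i + 25],
                                           mode="driving")
            for element in result["rows"][0]["elements"]:
                if (element["status"] == "OK"
                        and element["duration"]["value"] <= max_hours * 3600):
                    count += 1
        return count

    gmaps = googlemaps.Client(key="YOUR_API_KEY")
    print(stadia_within(gmaps, "State College, PA", 4.1))  # 14, per the footnote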

It’s very slow, so I can’t make a proper contour map of the US; this means I can’t prove that State College is the global maximum, but spot checking elsewhere shows nowhere else comes even close.

If you have another candidate location for a local or global maximum, let me know!

In the meantime, on behalf of Centre County I’m declaring us the cultural capital of the United States.

*Technically, Citi Field is 4h6m from my house, so my radius is 4h6m, not 4 hours even. Bellefonte and many nearby points north on I-99 have 14 stadia within less than 4 hours flat, and the local minimum time to all 14 is probably somewhere near the I-99/I-80 interchange.

The Lifetime of Spacecraft at the Solar Gravitational Lens

This is a guest post by Stephen Kerby, a graduate student at Penn State.

Imagine you are a galaxy-spanning species, and you need to transmit information from one star to another.  You can just point your radio dish at the other star, but space is big, and your transmission is weak by the time it reaches its destination.  What if you could use the gravitational lensing of a nearby star to focus your transmission into a tight beam while monitoring local probes? What if you could use this nice yellow star right here, the locals call it the Sun? What if the locals notice your transmitting spacecraft from their planet right next to the star?

Recently, there has been renewed interest among human scientists in using the solar gravitational lens (SGL) to focus light for telescopic observations (as in the FOCAL mission) or for interstellar communication (as described in Maccone 2011). A spacecraft positioned >500 AU from the Sun could collect focused light bent by the Sun’s gravitational field, dramatically increasing the magnification of a telescope or the gain of a transmitter for a point on the exact opposite side of the Sun (the antipode). The picture below shows how the SGL could be used for transmission of an interstellar signal, and the arrangement can be reversed to focus light onto a telescope.

In the Astro 576: “The Search for Extraterrestrial Intelligence” graduate course at the PSU Dept. of Astronomy and Astrophysics, I participated in a collaboration with over a dozen colleagues to examine a parallel question: might an extraterrestrial intelligence (ETI) be using an SGL scheme to build an interstellar transmission network? If so, we might be able to detect the transmitting spacecraft if its transmissions intersect the Earth’s orbit (as proposed by Gillon 2014). Such a spacecraft would be visible opposite its interstellar target on the sky, and would be most visible if it lies along the ecliptic plane (the same plane as Earth’s orbit).

While the collaboration focused on conducting a prototype search at the antipode of Alpha Centauri using Breakthrough Listen at the Green Bank Telescope (paper forthcoming!), I also conducted a side project to make predictions about what sort of engineering would go into such a transmission scheme.  A paper based on that project and co-authored by Dr. Wright was recently accepted for publication in the Astronomical Journal and is now available on the arXiv (http://arxiv.org/abs/2109.08657).

Initially, my project set out to tackle a broad question: it’s physically possible to use the SGL for an interstellar transmission, but is it productive from an engineering standpoint? After all, if an ETI needs to overcome myriad challenges to get the SGL transmission system online, it might be easier just to skip the mess and be more direct.  If we can quantify the challenges facing an SGL scheme, we might be able to predict which stars might be included in an ETI transmission network and whether our Sun is a likely host.

First, we focused on the difficulty of maintaining an alignment with the target star. Normally, when transmitting using a radio dish, you need to point the dish to within a few arcminutes of the target, depending on the gain (degree of focus) of the outgoing beam.  However, the impressive gain boost of the SGL means that the interstellar transmission could be only an arcsecond across, 60x narrower and much more intense. A spacecraft trying to aim at a target star needs to stay aligned with that much precision.

We soon found that there are numerous dynamical perturbations on the spacecraft-Sun-target alignment.  First, the Sun is pulling the spacecraft inwards; if the craft drifts closer than about 500 AU to the Sun, it can’t transmit using the SGL.  Next, the Sun is being jostled around by its orbiting planets (shown in the GIF below); the spacecraft needs to expend propellant to counter these motions, which come out to roughly 10x greater than the inward pull. A couple of linear effects like the proper motion of the target star are small corrections as well.

This has implications for local artifact SETI searches. While the Sun has several perturbations (mostly the reflex motion from Jupiter), it is a much better host for an SGL than a star with a close binary companion or a close-in giant planet. Close binary systems like Alpha Centauri and Sirius are terrible hosts for SGL spacecraft because of the reflex motions from the other stars in the systems. If we are trying to detect an SGL interstellar transmission network, we could focus on nearby stars that are unperturbed by massive planets, like Proxima, Barnard’s Star, or Ross 154.

Next, we addressed how those challenges might be overcome.  Clearly, a spacecraft could just fire its engines and counter the perturbations to maintain the alignment with the target.  Doing a quick back-of-the-envelope calculation, we found that a modern chemical, nuclear, or electric rocket engine could maintain alignment with an interstellar target for up to a few thousand years. Table 2 from the paper shows how long different propulsion systems could resist the perturbations of the sun’s gravity (~0.5 m/s/year acceleration) or including the reflex motions imparted on the Sun by the planets (~8 m/s/year).
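The arithmetic behind lifetimes like those is simple enough to sketch with the rocket equation. The numbers below are illustrative assumptions for a generic chemical engine, not values from the paper’s Table 2:

    # Station-keeping lifetime: total delta-v budget from the rocket
    # equation, divided by the annual delta-v cost of fighting the
    # perturbations. All inputs are illustrative assumptions.
    import math

    def lifetime_years(exhaust_velocity, propellant_fraction, dv_per_year):
        """exhaust_velocity in m/s; propellant_fraction of launch mass;
        dv_per_year in (m/s)/yr. Returns years until propellant runs out."""
        delta_v = exhaust_velocity * math.log(1 / (1 - propellant_fraction))
        return delta_v / dv_per_year

    # A chemical engine (v_ex ~ 4.5 km/s) on a craft that is 90% propellant:
    print(lifetime_years(4500, 0.9, 0.5))  # countering solar gravity alone: ~20,000 yr
    print(lifetime_years(4500, 0.9, 8.0))  # including planetary reflex: ~1,300 yr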

On a human timescale, this is a long time; Voyager 2, our longest-lived active probe, is 44 years old, and there are obviously other challenges to operating autonomously for such a long period. In artifact SETI, ten thousand years is a blink of an eye.  The universe has existed for billions of years, which means that an ETI might have activated their relay spacecraft around the Sun millions of years ago. We could only detect it actively transmitting if it has survived and maintained alignment for the whole time.

So, how could an ETI extend the longevity of their spacecraft? They could reduce the total gain of the system so that they can ignore perturbations by the planets, but that blunts the benefits of an SGL arrangement. They could use advanced rocketry like fusion engines or solar or EM sails to dramatically increase their propulsive capabilities. They could use clever navigational techniques, gaining efficiency at the cost of simplicity or downtime. Finally, they could just let their probes die off and fall derelict, sending along a constant stream of replacements when needed.

So, we’ve used the dynamical features of the Sun and solar system to predict a few engineering challenges that must be overcome to use the SGL for transmission or science.  Then, we used those challenges to predict what to look for during an artifact/radio SETI search at the antipode of a nearby star.  As mentioned earlier, a collaboration is analyzing observations at one such antipode.  With a few proposals flying around, it looks like it will soon be an exciting time to be a gravitational lens!

If I were an eccentric trillionaire and wanted to help detect signals from an ETI, I would fund the construction of the All-Sky-All-Time-All-Wavelengths array.  Placing millions of telescopes of all kinds around the globe and across the solar system, I could survey every single spot in the sky at all wavelengths, nonstop. Certainly, if an ETI is sending signals then we should be able to detect them with a system like that. Sadly, no amount of money in the world can make this dream a reality, so we need to narrow down our SETI investigations. We can’t look for signals all the time or at all wavelengths or at every position.

A valuable avenue of SETI research is making predictions to guide observations to those with a reasonable chance of providing valuable scientific results.  In the past, notable predictions of this type include the hypothesis of “watering hole” frequencies and focused searches on stars that can observe Earth as a transiting exoplanet. Artifact SETI, the search for signs of physical ETI technology near our own solar system, starts with educated guesses about what that technology looks like.

Of course, it’s impossible to say whether there actually is an ETI-placed spacecraft using the SGL to transmit, at least until we’ve surveyed more antipodes. Still, our research into the challenges of operating an SGL relay is informative both for our SETI searches and for aspirational proposals to use the SGL for our own science.

 

Strategies for SETI III: Advice

In this series on the content of my recent paper on the arXiv (and accepted to Acta Astronautica) I’ve mostly just described ways to do SETI.  I conclude with some highly subjective advice on SETI for those jumping into the field.

  1. Read the literature

There are a lot of SETI papers, and very few of them have ever been cited. Going through it as we have for the SETI bibliography, it’s striking how many times the same ideas get discussed and debated without referencing or building on prior work on the topic.

This is partly because the field is scattered across journals and disciplines, and because there’s no curriculum (yet!). The result is a lot of wasted effort.

Fortunately, you can now keep up with the field at seti.news and search the literature at ADS using the bibgroup:SETI search term.

  2. Choose theory projects carefully

I was taught by my adviser (who got it from his adviser, George Herbig) to “stay close to the data”. I took this to mean always making sure I understood the data and didn’t go chasing data reduction artifacts (I like to try to see strong signals myself, by eye, in the raw data when I can, to confirm they’re real), but also, in theory projects, to think hard about what the data say and how they might mislead.

The most useful theory projects are the ones that help searches. A paper that calculates the observability of a particular technosignature using parameters that let observers translate their upper limits into constraints on technologies is staying close to the data.  One speculating on the far future fate of aliens at the edge of the universe—well, it may be very interesting, but it’s not close to the data.

Two topics that I think are probably overrepresented in the literature are the Fermi Paradox and the Drake Equation. Now, I’m very proud of the papers I’m on about the Fermi Paradox, so I won’t say to avoid the topic, but ultimately the Fermi Paradox is not actually a problem that I think demands a solution. Such work is most useful when it leads to observational predictions, and so informs searches and the interpretation of data.

But continuing to argue about it after so much ink has been spilled, and in a situation where we have so little data to go on, creates diminishing returns. Kathryn Denning describes the “now-elaborate and extensive discourse concerning the Fermi Paradox” as being “quite literally, a substantial body of analysis about nothing, which is now evolving into metaanalysis of nothing,” continuing, “I would not suggest that these intellectual projects are without value, but one can legitimately ask what exactly that value is, and what the discussion is now really about.” And, referring to early work on the problem:

Thinking about that future [of contact with ETI] was itself an act of hope. Perhaps it still is. But I want to suggest something else here: that the best way to take that legacy forward is not to keep asking the same questions and elaborating on answers, the contours of which have long been established, and the details of which cannot be filled in until and unless a detection is confirmed. Perhaps this work is nearly done.

I think she’s right, and this goes for work on the “Great Filter” and “Hard Steps” models in SETI, too.

The Drake Equation, similarly, occupies a big chunk of the theory literature. The equation is very useful and in a way sort of defines the field of SETI, but ultimately it’s a heuristic and its purpose is to help us think about our odds of success. But even Frank Drake will tell you that while it’s useful to plug in numbers to get a sense of whether SETI is worthwhile (it is!), it’s not meant to be solved or made exact. It’s not a foundational equation like the Schrödinger equation from which one derives results; it’s more like a schematic map of the landscape to help orient yourself.

So while there’s no problem with using the Drake Equation to illustrate a point or frame a discussion, I think working to refine and make it better is to misunderstand its role in the field.

  3. Think about the nine axes of merit

Sofia Sheikh has a very nice paper describing how to qualitatively assess the merit of a particular technosignature.  When proposing a new technosignature, I recommend thinking about them all, but one in particular: “ancillary benefits.” This gets to Dyson’s First Law of SETI Investigations: “Every search for alien civilizations should be planned to give interesting results even when no aliens are discovered.”

There are three reasons for this. The first is the funding paradox that null detections must be used to justify yet more effort. If there are ancillary benefits, then this is easier. The second is that doing other work with the data or instruments you use means you stay connected to the rest of astronomy (this also helps junior researchers get jobs and stay employed). The third is that it’s easy to get discouraged after years of null results. Having something to work on in the meantime helps keep one going.

This point should not be taken too strongly, however. Radio data of nearby stars might really have no practical application beyond a null detection, and that’s OK. Those null detections are still good science! Also, the skills one uses to do that search, and the equipment built to do it, are all transferable to interesting astronomy problems.

  4. Engage experts

Lots of SETI papers written by physicists (and others) go way outside the authors’ training. There’s a particular tendency among physicists (and others) to feel that since we’re good at physics, and physics is hard, and everything is fundamentally physics, we can just jump into a field we know little about and contribute.

Engaging experts in those fields will both help us not make mistakes and broaden the field by bringing them into it so they can see how they can contribute. It’s win-win! And we should do it more.

  5. Plan for success when designing a search

Before starting a search, one should think hard about the upper limits and other results one will be able to report when the search is done. This is easier said than done, but it really helps sharpen one’s work, and ensures that a useful result will come out at the end.

It also helps draw in experts! A SETI skeptic might not want to help you, say, look for structures on Mars lest they be drawn into another Face on Mars fiasco, but if they see that they’re contributing to an upper limit (that confirms their priors!) on such faces, they will be more likely to really help.

  6. Stay broad minded

We all come to the problem of SETI with very different priors for how SETI can succeed, and so will invariably encounter practitioners pursuing what we feel are very unlikely or misguided paths to success. It helps to remember that the feeling may be mutual.

In particular, we can acknowledge the value in the exercise of, say, considering ‘Oumuamua as an alien spacecraft without falling into the “aliens of the gaps” trap. That is, we should distinguish between claims that:

  1. Our prior on a particular technosignature is too small, using a particular case study as an example, and
  2. A particular case study is likely to be a technosignature

The first is entirely appropriate. Before ‘Oumuamua, I did not think much about the possibility of alien spacecraft in the solar system. Now, I think I have a much better informed prior on the likelihood and form of such a thing.

The second requires extraordinary evidence because our prior on such a thing is (presumably) quite small.

  7. Stay skeptical, but not cynical

I’ll close by just quoting the end of my paper:

Not all SETI researchers believe they will have a good chance of success in their lifetimes, but such a belief surely animates much of the field. It can therefore be challenging to maintain a scientist’s proper, healthy skepticism about one’s own work, especially when coming across a particularly intriguing signal.

I suspect everyone who engages in the practice long enough will come across what looks to be a Wow! Signal and, at least briefly, dream of the success that will follow. The proper response to such a discovery is a stance of extreme skepticism: if one is not one’s own harshest critic, one may end up embarrassing oneself, and losing credibility for the field. It is at these moments that Sagan’s maxim should have its strongest force. But one should also not let the product of such false alarms be a cynicism that leads one to focus entirely on upper limits and dismiss all candidate signals before they are thoroughly examined as just so much noise. There is a wonder that brought many of us into the field that must be nurtured and protected against the discouragement of years or decades of null results that Drake warned about. One should cherish each false alarm and “Huh? signal” as an opportunity for hope and curiosity to flourish, “till human voices wake us, and we drown.”

 

You can find the paper here.

Strategies for SETI II: Upper Limits

Last time I discussed the “kinds of SETI” I laid out in my new paper on the arXiv. This time, I’ll discuss a nifty plot I made about upper limits.

At a workshop at the Keck Institute for Space Science a couple of years ago, I put together a graphic describing how I think we should think about placing upper limits in SETI:

The idea is that if you do a search and find nothing, you need to let people know what it is that you did not find so that we can chart progress, draw conclusions, and set markers down for what the next experiment should look like. My analogy is the dark matter particle detection community, which (similarly to SETI) must solve the funding paradox of using a lack of results to justify continued funding.

Suppose you have some parameter that marks the strength of a purported (ambiguous) technosignature (like waste heat for Dyson Spheres).  If you perform, say, a search of all-sky data along that axis, then you will end up with many potential detections. Virtually all of these are (presumably) natural sources, so what can you do?

Well, the easiest first step is to note that one of the sources is the strongest, meaning that you instantly have an upper limit: no technosignatures exist above that threshold. If you’re the first to interpret the data that way, then you’ve made progress!  We now know something we didn’t before.

Then the sleuthing kicks in.  What are those top, say, 10 sources?  Are they all well-known giant stars with dusty atmospheres and certainly not Dyson Spheres?  If so, then you’ve just lowered the upper limit.

As you work your way down, your upper limit keeps improving, and you keep learning about what’s not out there. You also learn what you need to do to weed out all of the false positives to get to more meaningful and stringent upper limits.
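In toy Python, that winnowing procedure looks something like this (the sources and strength values are invented for illustration):

    # Toy sketch of the upper-limit winnowing procedure described above.
    candidates = {
        "dusty giant star": 0.92,
        "young stellar object": 0.71,
        "unidentified source": 0.43,
    }

    def upper_limit(candidates, known_natural):
        """The limit is the strongest candidate not yet shown to be natural."""
        remaining = [s for name, s in candidates.items()
                     if name not in known_natural]
        return max(remaining) if remaining else None

    vetted = set()
    print(upper_limit(candidates, vetted))  # 0.92: the survey's raw limit

    vetted.add("dusty giant star")  # sleuthing: it's a known dusty giant
    print(upper_limit(candidates, vetted))  # 0.71: the limit improves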

This works for almost any technosignature with many confounders: structures on the moon, transiting megastructures, “salted” stars.

This formalism even works for machine learning-based anomaly detection, although in that case it might be hard to translate your upper limit into something physically meaningful, because the mapping between an anomaly score and the characteristic of the technology that would give that score might be obscure.

Next up: advice!

Strategies for SETI I: Kinds of SETI

Inspired by a discussion at the Technoclimes workshop, I started thinking about all of the different approaches to SETI, as distinct from the different technosignatures to search for. This eventually evolved into a paper where I was able to incorporate lots of odds and ends I had written and collected over the years about SETI in one place.  I think it came out well!  It’s on the arXiv here, but here are some of the highlights.

There are a few ways to think about SETI searches, and most fall onto one side of a few divides:

  1. Communication vs. Artifacts, including:
    • Small vs. large scale
    • Kinds of artifacts / carriers
    • Derelict vs. active artifacts
  2. Ambiguous vs. Dispositive Technosignatures
  3. Commensal/Archival vs. Dedicated Searches
  4. Model-based vs. Anomaly Searches
  5. Searching for “Beacons” vs. Eavesdropping
  6. Passive vs. Active Searches (i.e. METI).

Communication vs. Artifacts:

Last year we put a lot of effort into proposing a big interdisciplinary research consortium for astrobiology to NASA (an “ICAR”) . The proposal was unsuccessful, but along the way we found a useful way to frame searches:

Table showing different scales and kinds of technosignatures

The idea here was to map out the kinds of “artifact” technosignatures that exist, and think about how they all relate.  The columns along the top roughly track both scale and distance: first nearby things in the solar system; next, roughly Type I Kardashev scale things on the surfaces or in orbit around nearby planets; then things approaching Type II with lots of circumstellar activity; and finally Type II sorts of technosignatures on the right.

Vertically, we list three things to actually look for: physical structures, environmental alteration, and excess heat.  There are many other kinds of technosignatures too, of course; in particular, communication SETI is not really on this chart. But by and large I think this captures a pretty big swath of SETI.

Artifacts are neat because we might be able to detect them even if they are no longer being maintained. Depending on the artifact, they might be detectable for a very long time after their creators are gone.

Ambiguous vs. Dispositive Technosignatures

“Dispositive” means that something settles (“disposes of”) a particular question. It’s a term from law, and I like it because it’s a useful word with no synonyms that can help distinguish between kinds of null results (i.e. failing to find anything because you didn’t look hard enough, versus showing that something does not exist because you looked hard enough that you would have found it).

One of the really nice things about communication SETI is that it’s probably dispositive: if you see a communicative signal, especially a narrowband one, you know it’s from technology. Then you’ve solved several problems at once, scaling the entire “Ladder of Life Detection” in one go.

Hunts for Dyson Spheres, on the other hand, are not very dispositive. Waste heat can come from dust just as well as from technology, and no matter how weirdly shaped an occulting object a light curve implies, there always seems to be some pathological natural explanation for it.  Such searches can, at best, find good candidates for technosignatures that would then have to be validated by other means.

But, that’s also true of many searches for biosignatures! Just finding, say, oxygen isn’t enough. Hey, no one said astrobiology was going to be easy!

Commensal/Archival vs. Dedicated Searches

Some kinds of searching require dedicated hardware. The Breakthrough Listen Initiative builds large supercomputers on site at its facilities to record the voltages measured at the telescopes extremely quickly and save the reduced data products. PanoSETI will perform unprecedented observations of the transient sky because of its innovative design.

Other searches are “commensal,” using hardware simultaneously with other observers. The old SERENDIP project at Arecibo had specialized hardware that occupied a different part of the focal plane from the main instruments, and so was always on, searching whatever part of the sky it could. This kind of searching sacrifices the ability to choose one’s targets in exchange for access to powerful equipment.

Other projects can be done with no additional hardware. Searches that use archival data, for instance, piggyback on general purpose astronomy data.  This kind of searching is cheap, but one can only search for things that happen to fall within the parameters of the databases being searched.

Model-based vs. Anomaly Searches

This is a neat one. There’s an idea from George Djorgovski called “generalized SETI,” which roughly says that since we don’t really know what form alien technology will take, one should look for anything out of the ordinary in big public data sets.  This has the upside that you are sensitive to the unexpected, and the idea has been applied to many aspects of SETI, for instance in this work by Daniel Giles.

Model-based searches look for a particular technosignature in that data by modeling its signal and filtering on that. This has the big advantage that you can say what it is you did not find, because you have parameterized your technosignature and its strength. This makes it much easier to calculate upper limits.

Putting such upper limits on anomaly-based searches can be much harder because it can be difficult to know, especially with machine learning algorithms, exactly what the computer is keying on, or what it would have missed. This is a major problem worth tackling, because anomaly-based searches have enormous promise.
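To make the contrast concrete, here’s a small Python sketch with synthetic data; the catalog, the threshold, and the use of scikit-learn’s IsolationForest are all my own illustrative choices, not anything from the works cited above:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic catalog: each row is (color, infrared excess) for one star.
catalog = rng.normal(loc=[0.5, 0.02], scale=[0.1, 0.01], size=(1000, 2))

# Model-based: the filter *is* a physical parameter (waste-heat excess here),
# so "we found nothing above 0.06" is immediately a quantitative upper limit.
threshold = 0.06
n_candidates = int((catalog[:, 1] > threshold).sum())
print(f"{n_candidates} candidates above an excess of {threshold}")

# Anomaly-based: sensitive to any kind of weirdness, but the score below is
# unitless; translating "nothing anomalous" into a limit on, say, the
# luminosity captured by megastructures is the hard, unsolved step.
scores = IsolationForest(random_state=0).fit(catalog).score_samples(catalog)
print("most anomalous score:", scores.min())
```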

Searching for “Beacons” vs. Eavesdropping

In the early days of SETI, a mix of optimism and necessity led Frank Drake and others to search for “beacons”—big, loud, obvious signals designed to get our attention. Perhaps, one line of reasoning went, there was a community of species welcoming technologically young species like ours into their Galactic Society with such signals.

Such signals are easy to spot because they would be designed to be easy to spot, and so it makes sense to look for them first.  This was especially true because early radio equipment could search only a very narrow range of frequencies at once. A lot of work went into thinking about what frequencies such beacons would be at—the first SETI paper (Cocconi & Morrison 1959) guessed the 21-cm line, and many “magic frequencies” have since been proposed as the ones “they” would use to get our attention.

Indeed, there is a whole concept in game theory called “Schelling points” used to describe this dynamic.

Today, radio observatories can search billions of channels simultaneously, eliminating the need to guess, and our sensitivity is much better, so we could potentially detect even “leaked” emission intended only for short-range communication, for instance among planets in a distant planetary system. So the distinction is no longer quite so important, but still influences survey design and target selection.

Passive vs. Active Searches (i.e. METI)

This one gets a lot of attention!  METI is the attempt to establish contact in the other direction: to send a signal that gets attention in hopes that it triggers a response we wouldn’t miss. Some people get very upset about METI, worried we might catch the attention of dangerous aliens. My position is pretty nuanced, but ultimately I’m not worried about METI: Earth has many signatures of life and technology that I think are more obvious than any METI program, so ultimately the value of such programs is performative, getting us thinking about contact.

Next time: Upper limits!

The first 2020 PSU SETI Course project: updated bibliography

The pandemic has been hard, but we have still managed to get some research done at the PSETI Center over the past few months.

In Fall 2020, we had the second instance of the Penn State graduate SETI course (now on the books officially as ASTRO 576), and the students’ final projects were great! Some of the students have chosen to polish them up and submit them for publication, and the first one is now out!

Julia LaFond has extended the work of Alan Reyes from the 2018 instance of the class to flesh out and expand the SETI bibliography at ADS.  The paper, now peer-reviewed and accepted to JBIS, is on the arXiv here; it describes how we now categorize papers, and our workflow for finding new papers and providing monthly updates.

Thanks to work by Macy Huston, we are now working with James Davenport to maintain the SETI.news mailers.  Macy uses ADS and its nifty library and search features to find potential new papers and applies Julia’s criteria to add papers to the SETI bibgroup on ADS.  Then you can restrict any search you like at ADS to bibgroup:SETI and these papers will be included, like this:

Screenshot of ADS showing how to use the bibgroup:SETI keyword
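(If you prefer to script it, the same search can be run through the ADS API. Here’s a minimal sketch, assuming you have your own ADS API token; the endpoint and field names follow the public ADS API documentation as I understand it:)

```python
import requests

TOKEN = "your-ADS-API-token"  # generated in your ADS account settings
resp = requests.get(
    "https://api.adsabs.harvard.edu/v1/search/query",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"q": "bibgroup:SETI", "fl": "year,bibcode,title",
            "rows": 25, "sort": "date desc"},
)
# Print the 25 most recent papers in the SETI bibgroup.
for doc in resp.json()["response"]["docs"]:
    print(doc["year"], doc["bibcode"], doc["title"][0])
```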

Every month, Macy then adds a soupçon of editing to the new entries and sends the latest batch to James, whose algorithm produces the SETI.news mailing for our subscribers:

Screencap of the SETI.news site

We hope this is useful for the community.  Do subscribe to seti.news, use the ADS bibliography, and tell us what you think!

How a Species Can Fill the Galaxy

The Fermi Paradox is about why there are no aliens on Earth today. “Where is Everybody?” Enrico Fermi asked his lunchmates one day in 1950, pondering that it must be that alien spacecraft—at almost any speed, really—have had plenty of time to get here by now, since the Galaxy is so very old.

The story’s been told lots of times, and Bob Gray has a nice paper about the real history of the term (it’s neither Fermi’s, nor a paradox!). The term is also sometimes used to refer to things like why SETI hasn’t found anything yet, but of course that’s not what Fermi meant (after all, when Fermi asked his question, the first modern SETI program wouldn’t even start for another 9 years!)

But is it true that a species would fill the Galaxy, given the capabilities? Working with Jonathan Carroll-Nellenback, a team I was part of tried to simulate things, and found that, sure enough, even slow ships would fill the Galaxy pretty swiftly. What we didn’t do, though, was make a nice movie showing how it happens.

Well now we’ve fixed that!

In this movie, Jonathan carefully tuned the parameters so that the maximum range of ships is about 3 parsecs, which for Earth would put a couple dozen stars within reach. The way exponential growth works, if the local density of stars is such that you always have plenty of targets within range, you’ll grow, but if that happens only rarely, you won’t. And because stars move, you’ll always have fresh stars nearby to settle, if you wait long enough.

The whole movie spans about 1 billion years. The expansion front moves so slowly because we don’t let any settlement launch a new ship to settle a new star more frequently than once every 100,000 years.  As the front expands, the parts moving inwards will encounter higher stellar densities and the expansion wave will accelerate.  The parts moving vertically or outwards quickly run out of stars and stall.

What’s neat is that in this simulation, because the ship range is small and ships are sent out infrequently, the wave goes slowly enough that it is actually the motions of the stars that do most of the work, and you can see how they take what might have created a bubble of inhabited stars and smear it out, like jam getting mixed into oatmeal or cream getting stirred into coffee.

Eventually, the front reaches the middle part of the Galaxy, where the stars are typically closer together than in the outer parts, and then the expansion proceeds very quickly, but the outer reaches of the Galaxy never get inhabited.

Now, this is just an illustration—in truth, if something like this happened, the ships wouldn’t have a hard limit of 3 parsecs for their motions, and who knows how often new settlements would happen. We’re working on a new paper now that explores these things.
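If you want a feel for the mechanics, here is a toy, two-dimensional Python version with static stars and invented numbers. It captures the “settle the nearest unsettled star within range” dynamic, but because its stars don’t move and its density is uniform, the front simply fills the box rather than stalling and smearing out the way the movie’s does:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400
stars = rng.uniform(-20, 20, size=(N, 2))  # star positions in parsecs
SHIP_RANGE = 3.0   # pc: hard limit on how far a ship can travel
WAIT = 1e5         # yr between launches from any given settlement
# (Travel time is ignored: a 3 pc hop at ~10 km/s takes about the
# same 1e5 yr as the wait between launches.)

settled = np.zeros(N, dtype=bool)
settled[np.argmin(np.linalg.norm(stars, axis=1))] = True  # seed near center

t, step = 0.0, 0
while not settled.all() and t < 1e9:
    t += WAIT
    step += 1
    # Each settlement launches one ship toward the nearest
    # unsettled star within range (if there is one).
    for i in np.flatnonzero(settled):
        d = np.linalg.norm(stars - stars[i], axis=1)
        in_range = np.flatnonzero(~settled & (d < SHIP_RANGE))
        if in_range.size:
            settled[in_range[d[in_range].argmin()]] = True
    if step % 10 == 0:
        print(f"t = {t:.1e} yr: {settled.sum()}/{N} stars settled")

print(f"Done: {settled.sum()}/{N} stars settled after {t:.1e} yr")
```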

You can find a fuller description of the movie in our new Research Note of the AAS here.

Enjoy!

Making Astronomy Safer: An Apology and Recommitment

I write this to acknowledge the harms caused by having Geoff Marcy involved in the California Planet Legacy and other papers, and apologize for my role in that. 

Context:

Two papers by the California Planet Search (CPS) were recently accepted by AAS Journals and put on the arXiv.  Both papers leveraged decades of data collected as part of the CPS and its predecessors, all of which were once led or co-led by Geoff Marcy. As a graduate student and postdoc, I was a member of the CPS and its predecessors from 1999 until around 2007 or so. In 2015, UC Berkeley found that Geoff had repeatedly violated its sexual harassment policy, and when this and other information became public, Geoff ended his employment there.

Today, the California Planet Survey is led by Andrew Howard at Caltech (whom I know well from our Berkeley days). The CPS has many members across the country including many of Andrew’s current and former advisees, and many “participating scientists” (including me) who occasionally work with CPS by contributing expertise, telescope time, and other resources for specific projects and papers. Geoff Marcy does not have any current affiliation with the CPS.

These papers were led by Lee Rosenthal (a graduate student at Caltech) and BJ Fulton. Both had extensive author lists, including both me and Geoff Marcy. All co-authors were on email chains giving comments on mature drafts of these papers. Since 2015, Geoff Marcy has also co-authored and been acknowledged in other research with astronomers, including papers I have led.

Since the two CPS papers appeared on the arXiv, there has been extensive discussion on and off social media regarding the propriety of having Geoff as a co-author on these and other papers, and the harms this causes. Andrew Howard has argued that for these two papers, the AAS Authorship Rules required Geoff to be a co-author because of his foundational role in the project. 

Some of the harms done:

For brevity, I will discuss the harm done to students, because we senior scientists have an especial obligation to protect them from harm, but these harms are also applicable to other scientists, especially other early career researchers.

There is harm to the students in the group who have had to deal with this. What should be a celebratory moment of some fantastic science results vindicating decades of work has been overshadowed by its author list.

Students have also been put in the inappropriate situation of having to choose between getting credit for their work on an important paper and avoiding association and interaction with Geoff. This does them both professional and personal harm. 

Professionally, this means that students who choose to avoid Geoff must limit their opportunities for research and credit in ways other scientists do not. Personally, it takes a large mental and social toll on students who have to deal with this issue, and this toll persists whenever a student’s work is used in a paper Geoff is a co-author on, whenever they interact with others who are working with him, and whenever they must ascertain whether a new project they might become involved in involves Geoff. 

There is also harm to students and others associated with senior researchers on these papers. Junior researchers need to be able to trust us to provide a safe environment, community, and profession. All of our associations with Geoff, including mine since 2015, erode that trust.

There is also harm to the profession at large. This incident and others like it tell all astronomers that Geoff continues to play at least some role in the field, and more generally that actions like his by any astronomer will at least to some degree be tolerated. This harm is especially acute for survivors of sexual harassment and assault.

Regarding authorship on these papers:

I was a co-author on these papers at the invitation of BJ and Lee primarily because of my work on the underlying data set from work over a decade ago. I also provided comments and feedback on the drafts after agreeing to join.

Although reasonable people can differ on this point, in my opinion the ethics of authorship did not obligate the team to invite Geoff (or me) to be on the paper. And even if they did, there are alternative ways to publish these data that would have avoided making most of the authors of these papers co-authors with Geoff.

In general, I have a nuanced position regarding who needs to be an author on a paper, and I do not think the current AAS authorship rules reflect the theory or reality of how we assign authorship in astronomy. Many factors should come into play when deciding who should be an author on a paper, including broader ethical ones. 

Regarding how we can do better:

Since 2015, my advisees have not interacted with Geoff Marcy, and have not co-authored papers with him. Both in the past and quite recently, I have encouraged Andrew Howard to adopt a policy like this for his advisees as well.

More generally:

We need to have better norms and standards for how we deal with authorship in situations like this. The AAS Code of Ethics Committee and the AAS Publications Committee should update the Code and the AAS Journals Authorship policy to better reflect the realities of how authorship is assigned and acknowledge the many factors that need to go into authorship of papers. I note that while the AAS has very limited purview over non-members, it has complete purview over its journals, just as it does its meetings, and should use this incident as a case study in shaping new rules.

We need to have meetings and research environments where astronomers do not need to worry about who is safe and who is not, and which research projects or collaborations might put them in a situation where they will be forced to make hard choices or interact with sexual harassers (or worse). We can do this better.

And to my fellow research group leaders: we need to listen to our advisees and create spaces where they are safe to tell us how to keep astronomy a place where they can feel and be safe, and where they can thrive, focusing on their science. I appreciate the trust my advisees have put in me by doing this, and I aim to reward that trust with action.

If you count yourself among the people who need to understand my role in all of this in more detail, please reach out to me directly. I will aim for transparency and honesty with you on this, I will listen to what you say, and I will do better.