Science is filled with jargon: words that have a very specific meaning in the context of science but that might otherwise be ambiguous or mean something quite different. When an economist says that an effect is “marginal” she doesn’t mean it’s small, or not important, or relegated to the margins: she means that it is measured differentially with respect to the current value. For instance, suppose your overall income tax rate (taxes divided by income) is 25%, but because of your tax bracket if you make one more dollar your tax will go up by 33 cents. Your marginal tax rate is then 33%.
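The overall/marginal distinction can be made concrete with a toy tax schedule (the two brackets and rates below are invented purely for illustration):

```python
def tax(income):
    """Toy progressive tax: 20% on the first $50,000, 33% on the rest (invented brackets)."""
    bracket = 50_000
    low_rate, high_rate = 0.20, 0.33
    if income <= bracket:
        return income * low_rate
    return bracket * low_rate + (income - bracket) * high_rate

income = 81_250
overall_rate = tax(income) / income             # taxes divided by income
marginal_rate = tax(income + 1) - tax(income)   # extra tax on one more dollar

print(f"overall rate:  {overall_rate:.1%}")   # 25.0%
print(f"marginal rate: {marginal_rate:.0%}")  # 33%
```

The two numbers differ because the overall rate averages over both brackets, while the marginal rate sees only the top one.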
Astronomy is filled with jargon. We say that light is “extinguished” (some astronomers say “extincted”, making the more pedantic astronomers flinch), but we don’t mean it “went out”, we mean that it has been absorbed or scattered out of the line of sight to its source by intervening material. We might write that a star has an “infrared color excess of 0.5 magnitude”; in this case the jargon serves as useful shorthand for what would take many paragraphs and equations to explain to someone not trained in the field.
Jargon is good: it saves time, and it allows people to be precise and concise at the same time.
One bit of jargon that I think needs a consensus regards the common practice of finding out that a purported effect or object doesn’t exist. There is a big difference between looking for something and not finding it, and finding that something isn’t there. If you go needle hunting in a haystack and don’t find one, it’s possible that you just missed it. If you burn down the haystack and then methodically sieve and magnetically search every bit of the ashes and still don’t find it, you can say rather definitively that there is no needle.
Carl Sagan liked to express a similar idea with the phrase “absence of evidence is not evidence of absence” — not having much information about something (whether the needle is in the haystack) is a lot different from knowing that something is not true.
But we don’t really have any jargon that distinguishes inconclusive searches from searches that conclusively show there is nothing to be found. We ran into this issue with HD 149382 b when trying to show that it does not exist — we wanted to be clear that it’s not that we didn’t find the planet there, it’s that we found that the planet wasn’t there! It came up again here.
The terms I see used are “no detection”, “non detection”, and “null detection”. A quick search on astro-ph shows that “no detection” yields 338 results, “null detection” yields only 46, and “non detection” yields 667 (Google gets confused by papers detecting the molecule NO). Spot checking reveals that all three terms are used in both senses, and interchangeably.
One solution would be to propose that we reserve the term “null detection” for the rejection of a specific hypothesis: one would have detected a certain object or effect if it existed, but one did not, so it does not. This makes sense to me because you have made a positive detection: the detection of nothing.
In this scheme “non detection” is then the simple lack of a detection. I would keep the phrase “no detection,” which is not a noun, with its ordinary English meaning (that is, not jargon at all; its meaning should be clear from context).
Unfortunately, “null detection” echoes the common statistical jargon “null hypothesis” for which one finds “null evidence”. This sense is similar, but perhaps confusingly so, since it is formally impossible to prove most versions of the null hypothesis (at least in the standard statistical formulation of the term).
Also, according to Wikipedia the jargon “null result” constitutes evidence of absence, but not proof of it, so it corresponds to the “non detection” in the scheme above.
I just chatted with astrostatistician Farhan Feroz, who pointed this out, and suggested that saying something is a “non detection to X% significance” resolves the ambiguity. I agree, but I don’t always know “X”; just that it is close to 100. Also, that makes for more unwieldy paper titles.
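Farhan’s suggestion can be sketched numerically. Assuming Gaussian errors, the confidence with which a measurement excludes a purported signal of a given magnitude is just the normal CDF evaluated at the gap between them, in units of the measurement uncertainty. All the numbers below are invented for illustration:

```python
from math import erf, sqrt

def exclusion_confidence(measured, sigma, purported):
    """One-sided confidence (Gaussian errors) that a purported signal of the
    given magnitude is excluded by a measurement of `measured` +/- `sigma`."""
    z = (purported - measured) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

# Invented numbers: we measure 0.1 +/- 1.0 units where a claimed
# signal of 5 units should have appeared.
conf = exclusion_confidence(measured=0.1, sigma=1.0, purported=5.0)
print(f"non detection to {conf:.5%} significance")  # very close to 100%
```

This is exactly the situation described above: “X” comes out close to 100, but quoting it makes for an unwieldy paper title.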
So instead, let’s try something else: a “dispositive null detection” or “dispositive non detection”. This makes it clear that not only have you not detected anything, but that your lack of detection is so strong that it settles the issue (the term is common in law).
I might start using these terms this way, and footnote them to explain their precise meaning. If other researchers find them useful (or presume that I am repeating a definition, rather than coining jargon), then maybe they’ll teach them to their students, and we’ll have more (useful!) jargon.
It looks like this entry has gotten picked up in the corner of the social-media-verse nearest me (thanks John Johnson). To address a common discussion point there: a “dispositive null result” refutes a specific hypothesis of a specific magnitude.
If you are trying to rule out a whole class of models down to the limits of your experiment, it’s just an “n-sigma upper limit” (thanks Peter Plavchan). If your upper limit is far below a purported signal, it’s a dispositive null. Scott Dolim points out the classic example: the Michelson-Morley experiment that disproved the ether to high significance (further experimentation was unnecessary because the putative signal had a known magnitude). Adam Kraus approves of the modifier “dispositive” and points to a useful definition. Charley Noecker pointed out the misspelling of “Michelson-Morley” and echoed Fomalhaut b as a nuanced example.
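The upper-limit-versus-dispositive-null distinction can be put in a few lines of arithmetic (again with invented numbers): an n-sigma upper limit caps how big any signal could be, and the result becomes dispositive when that cap sits far below the magnitude of the claimed signal.

```python
measured, sigma = 0.1, 1.0        # invented measurement, consistent with zero
n = 3
upper_limit = measured + n * sigma  # n-sigma upper limit on any real signal
purported = 10.0                    # invented magnitude of the claimed signal

print(f"{n}-sigma upper limit: {upper_limit}")
verdict = "dispositive null" if purported > upper_limit else "inconclusive"
print(verdict)
```

In the Michelson-Morley case the “purported” value was the known orbital speed of the Earth, which is what let a mere upper limit settle the question.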