Once upon a time, probably in graduate school, someone told me an aphorism that went something like this:
A theorist only has to be right once to garner a reputation as a good scientist, but an observer only has to be wrong once to ruin theirs.

Alex Filippenko—spreader of the aphorism
I asked about it on Twitter and Facebook, and multiple people pointed to Alex Filippenko as the originator (which may also be where I heard it, when I TA’d for him at Berkeley). I asked Alex, and he wrote that he heard it from other grad students at Caltech, perhaps Richard Wade. I asked Richard, and he wrote “I think it was a fairly common expression around Caltech when I was a grad student, so Alex could have heard it from me. I probably heard it from other grads.”
So, I’m not sure where it comes from, but it’s a great quote!
Some on Facebook and Twitter objected to the sentiment it expresses. “I’d say it’s best forgotten. No good comes from playing it safe the whole damn time… 😉” quipped David Kipping. Brian Metzger writes, “I think one has to make an important distinction: theory that in principle is well-motivated and has a sound physical basis but just turns out to be the wrong explanation (but might still lead to progress by posing new questions), versus theory that e.g. employs bad physics or already disproven assumptions and couldn’t in principle have been correct.”
But I think it’s got a kernel of truth worth discussing.
As I’ve drifted into theory from observation, I’ve been struck by how much more comfortable theorists are with being wrong than observers are (sometimes I call this Steinn’s bad influence on me ;).
But it makes sense. Theorists are expected to work on hypotheses that might turn out to be wrong, and there is no discredit in one’s theory turning out to be wrong if it was interesting and spurred work that eventually turned up the right answer. There’s no equivalent “you’re doing a good job even if you’re wrong” allowance for observers.
I think David Kipping and Alex Teachey’s laudable and cautious approach to their exomoon candidate illustrates the divide. As an observational project, especially a high-profile one, they must be extra careful not to overstate the evidence, careful to call things “candidates” and not “discoveries”, and careful to emphasize the uncertainty inherent in the problem. Their peers, journalists, and the public will scrutinize their verbiage, and they will get blowback if it turns out not to be an exomoon and their presentation of the evidence was, in retrospect, overstated.
But a theoretical analysis of the abundance of exomoons (or exoplanets!) that turns out to be off by orders of magnitude can still get cited favorably a decade later if it included novel and important components. After all, everyone understands that theory is hard and that we build theories up piece by piece, and so we’ll get it wrong many times before we get it right. And so such work rarely includes the careful hedging that Kipping and Teachey used in their work.
Or, to give a more dramatic example: if inflation turns out to be completely wrong, the theorists who dedicated their careers to it will still be considered good theorists, but the BICEP2 team that got a subtle issue with dust wrong has an entire book about the very public and embarrassing debacle that followed their (incorrect) detection of signs of inflation in the CMB.
I’m not saying this dichotomy is unfair or inappropriate—on the contrary, I think it’s appropriate!—I’m just pointing out that the aphorism resonates because it identifies something real and tacit about the way we judge science.
I think of the recent debate over the Hubble Constant, and I wonder whether our observational capabilities are becoming entangled with theory. As distances both macro and micro move through orders of magnitude, error margins become more important, and I can’t shake the feeling that we may be more prone to errors as these magnitudes increase across both space and time.
We long ago reached territory, in both space and time, that is largely theoretical, so even observation should be questioned, especially as space and time seem to relate in strange ways at the extremes of large and small magnitudes and are completely unknown beyond certain boundaries.
I feel it is okay for even observers to be wrong, as both our instruments and our theory of space-time may not be accurate, and seem less likely to be accurate at the longest and tiniest distances, not to mention our limited reference points in time. I’m just glad that we continue to seek more significant figures in both directions, as I feel that this is more important than any single experiment or result.
A model-based hypothesis in astronomy is reasonable if it is based on mathematics and its input parameters and values reflect those observed in the known universe. The model should also be verifiable by experiment and should produce an outcome that has greater explanatory power than other related models. It should also be clear in the manuscript that its purpose is to introduce an untested mathematical hypothesis.
A review paper may catalog the past hypotheses that have been elevated to theory and the ones that were rejected by experiment or by error.
The success of string “theory” has set a precedent in natural science that there are rewards for generating untested or untestable hypotheses, including testable hypotheses that do not explain nature beyond current theory.
This is timely:
https://www.nature.com/articles/d41586-019-02397-8
Speculative ‘supergravity’ theory wins US$3-million prize
Three physicists honoured for theory that has been hugely influential — but might not be a good description of reality.