Astronomical magnitudes get a bad rap.

The Greek astronomer Hipparchus famously mapped the sky and assigned each star a “magnitude” (or size) based on its apparent brightness. The human eye is a surprisingly precise photometer (with just a little effort you can estimate brightnesses differentially to about 0.1 magnitude; I’m sure dedicated amateurs can do better). So Hipparchus could have been thorough about this, but he was actually quite general: he just lumped the stars into 6 categories: “stars of the first magnitude” (the brightest), “stars of the second magnitude,” and so on.

But while the human eye is precise it’s not linear: it’s actually closer to being a logarithmic detector. This gives it a great dynamic range but it means that what seems to be “twice as bright” is actually much, much brighter than that.

In 1856 Norman Pogson formalized this in modern scientific terms by proposing that one magnitude equal a change in brightness of the fifth root of 100, with a zero point roughly aligned with Hipparchus’s rankings so that “first magnitude stars” would have values around 1. This captured the logarithmic scale and spirit of the original system, and has frustrated astronomers ever since.
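Pogson’s rule is easy to write down. Here’s a minimal sketch in Python (the function names are mine, not any standard library’s):

```python
import math

def flux_ratio(delta_mag):
    """Brightness ratio corresponding to a magnitude difference.

    Pogson's rule: one magnitude is a factor of 100**(1/5) ~ 2.512,
    so delta_mag magnitudes is a factor of 100**(delta_mag/5)."""
    return 100 ** (delta_mag / 5)

def mag_difference(ratio):
    """Magnitude difference corresponding to a brightness ratio."""
    return 2.5 * math.log10(ratio)

print(flux_ratio(1))        # ~2.512, one magnitude
print(mag_difference(100))  # 5.0, a factor of 100 is exactly 5 magnitudes
```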

Astronomers regularly complain about this archaic system. A lot of this comes from trying to explain it in Astronomy 101 or even Astronomy 201, where our students expect a number attached to brightness to increase for brighter objects, and where we have to teach them a system literally no other discipline uses. Especially at the Astronomy 101 level, where we are loath to use logarithms, we often skip the topic altogether.

But I think astronomers don’t realize how good we have it.

First of all, the scale increases in the direction of *discovery*: there are very few objects with negative magnitudes (the Sun and Moon, sometimes Venus, a few stars in certain bands) but lots of objects up in the 20’s where the biggest telescopes are discovering new things. Big numbers = bigger achievements is much better than “we’re down to -10 now!”, in my opinion.

Secondly, the numbers have a nice span. The difference between 6 and 7 is just enough to be worth another number. This is because the fifth root of 100 is only about 8% smaller than the natural logarithmic base *e*, which is the closest thing we have to a mathematically rigorous answer to the question “how much is a lot?”
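The comparison is quick to verify numerically:

```python
import math

pogson = 100 ** (1 / 5)     # one magnitude, ~2.5119
print(pogson, math.e)       # e ~ 2.7183 is about 8% larger
print(math.e / pogson - 1)  # ~0.082
```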

But most importantly, the system is a beautiful compromise between simplicity and precision that allows for very fast mental math and approximations for *any magnitude gap.*

This is because we long ago settled on base-10 for our mathematics, and the magnitude system is naturally in base 10. Every 5 magnitudes is *exactly* a factor of 100, so 15 magnitudes is a factor of 1,000,000, and 2.5 magnitudes is a factor of exactly 10.

It doesn’t take much practice to get very fast at this. If we used, say, *e* as the base instead, that 8% difference would compound with each magnitude: exp(15) is about 3.3 times larger than 1,000,000.
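A quick check of how the base-10 bookkeeping stays exact while a base-*e* scale would drift:

```python
import math

print(100 ** (15 / 5))           # 15 magnitudes: 1,000,000 exactly
print(math.exp(15))              # ~3.27 million if e were the base
print(math.exp(15) / 100 ** 3)   # the ~8% mismatch, compounded 15 times
```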

Finally, and best of all in my opinion, because this interval is very close to a factor of *e*, we get the lovely fact that very *small* magnitude differences translate pretty well to fractional brightness differences. A change of 0.01 magnitudes is almost exactly a 1% change in flux (really about 0.92%, so the approximation itself is only 8% off). That’s *so* useful when trying to do quick mental estimates. For instance: a transiting planet with a 10 mmag depth covers 1% of the star’s disk, so it has 10% of the star’s radius (since sqrt(0.01) = 0.1). A 1 mmag transit covers 10x less of the disk, so the planet has about 3% of the star’s radius (since sqrt(0.001) ≈ 0.032). Easy!
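The same transit arithmetic, done exactly rather than mentally (a sketch; the function name is mine):

```python
import math

def radius_ratio(depth_mmag):
    """Planet/star radius ratio implied by a transit depth in mmag.

    The fractional flux drop is 1 - 10**(-depth_mag/2.5), and the
    covered area fraction is (Rp/Rs)**2, so take a square root."""
    dip = 1 - 10 ** (-depth_mmag / 1000 / 2.5)
    return math.sqrt(dip)

print(radius_ratio(10))  # ~0.096: the mental estimate of 0.1 is close
print(radius_ratio(1))   # ~0.030: about 3% of the star's radius
```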

I think of it as akin to the twelfth-root-of-two intervals on an equal-tempered instrument. No interval on such an instrument produces the mathematically perfect 3:2, 4:3, or 5:4 harmonic, but they’re all *close enough* and in exchange you can transpose music and shift keys with ease and without loss of musical fidelity. The pedants may complain, but it’s worked *great* for centuries.
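The musical analogy is just as easy to check numerically, with 2**(1/12) playing the role of the fifth root of 100:

```python
# Equal temperament: every semitone is a factor of 2**(1/12),
# so no interval hits the just ratios exactly, but all come close.
semitone = 2 ** (1 / 12)
for name, steps, just in [("fifth", 7, 3 / 2),
                          ("fourth", 5, 4 / 3),
                          ("major third", 4, 5 / 4)]:
    tempered = semitone ** steps
    print(f"{name}: {tempered:.4f} vs just {just:.4f} "
          f"({tempered / just - 1:+.2%})")
```

The equal-tempered fifth lands within about 0.1% of the perfect 3:2, which is the “close enough” the analogy is getting at.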